7 Comments
Hello, this sounds good, but I still have a significant problem. I uploaded a file with contact information for a UK distributor that contained errors, and when I ask the bot about the UK distributor’s contact information, it responds with the error from the file I uploaded to the data set. That part is expected, so to eliminate the mistake I deleted the file containing the UK distributor’s contact information from the data set, hoping the wrong contact information would no longer appear in the bot’s responses. After a few days, the bot still responds with the UK distributor’s contact information, including the error. Not knowing what to do, I uploaded a new file stating that NO UK distributor exists. Now, if I ask whether there is a UK distributor, the bot thinks forever and gets stuck, maybe because of the conflicting information? So, do you have a way to refresh whatever cache memory is involved when a page is added or deleted, and also a way for the end user to stop the bot from thinking and reset the chat for a fresh start so the example questions appear again? This problem will not allow us to deploy to the public. Hoping for a response, thanks.
Hi Mat — this problem has now been fixed. Sorry about that — now, when you delete a file, it will be removed from the chatbot’s index.
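For anyone wondering how a deleted file could keep surfacing in answers: the stale responses come from the retrieval index behind the chatbot, not from the file itself. Below is a minimal illustrative sketch (not CustomGPT’s actual internals; the `Chunk` and `SimpleIndex` names are hypothetical) showing why deleting a file must also purge its chunks from the index.

```python
# Illustrative sketch only: a toy stand-in for the retrieval index behind a
# RAG chatbot, showing why deleting a file must also purge its chunks.

from dataclasses import dataclass


@dataclass
class Chunk:
    doc_id: str   # which uploaded file this text came from
    text: str     # piece of the file used for retrieval


class SimpleIndex:
    def __init__(self) -> None:
        self.chunks: list[Chunk] = []

    def add_document(self, doc_id: str, texts: list[str]) -> None:
        self.chunks.extend(Chunk(doc_id, t) for t in texts)

    def delete_document(self, doc_id: str) -> None:
        # This purge is the important part: without it, chunks from a deleted
        # file (e.g. the wrong UK distributor contact details) stay
        # retrievable and keep feeding stale facts into the bot's answers.
        self.chunks = [c for c in self.chunks if c.doc_id != doc_id]
```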
What if we want to use CustomGPT to supercharge our marketing division? We want to upload all of our documents like brand voice, marketing strategy, previous articles, and video transcripts, and ask CustomGPT to generate new content ideas. Will that work with this tool? Or will it only get back existing information in order to avoid hallucinations?
Yes – you can do that. There are two modes:
1. “My Content” — this mode is recommended if you want to generate TRUSTED content based on your knowledgebase, without ChatGPT making up stuff. This is good for regulated industries or any brand where TRUSTED content is absolutely critical (aka: no making up facts!). In simple terms, if brand trust is critical for content creation, this mode is recommended.
2. “My Content + ChatGPT” — this mode is recommended if you want to use your knowledgebase, but still have ChatGPT be a little creative. (A rough sketch of how the two modes could differ in prompting is shown below.)
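To make that difference concrete, here is a hypothetical sketch of how the two modes could be expressed at the prompt level. The function name, mode identifiers, and prompt wording are assumptions for illustration, not the actual CustomGPT implementation.

```python
# Hypothetical sketch of two answer modes for a RAG chatbot: strictly grounded
# vs. grounded-but-creative. Names and prompt wording are illustrative only.

def build_messages(question: str, context: str, mode: str) -> list[dict]:
    if mode == "my_content":
        # Grounded-only: answer strictly from the retrieved business content.
        system = (
            "Answer ONLY from the context below. If the answer is not in the "
            "context, say you don't know. Do not use outside knowledge.\n\n"
            f"Context:\n{context}"
        )
    else:  # "my_content_plus_chatgpt"
        # Grounded + creative: prefer the context, but allow the model to add
        # general knowledge, e.g. for brainstorming new content ideas.
        system = (
            "Use the context below as your primary source, but you may add "
            "relevant general knowledge and creative suggestions.\n\n"
            f"Context:\n{context}"
        )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]
```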
Amazing, thank you! Does CustomGPT have access to the internet? Or will it only give replies based on the ChatGPT-4 training data and the data we provided?
Man, you can’t say it completely avoids hallucinations. That can’t be true; you are giving false information.
You can’t guarantee in any way that an answer is 100% correct. You can only have a very high probability, near 100%, but 100% is impossible.
You can build any wall you want, but you can’t be 100% sure the answer is correct.
GPTs are stochastic models, and they are not interpretable, which means you can’t guarantee 100% accuracy.
If you mean that hallucinations are very unlikely but not impossible, OK, on that I trust you.
Bocchese Giacomo
Deep Learning Engineer
Just to be clear: I don’t think I implied that it’s 100%. In fact, even though we have the industry’s BEST anti-hallucination system for RAG pipelines, we are clear with our customers that hallucination is like “security” or “uptime”: nobody (not even AWS) will get to 100%. The main question is: who has the BEST systems for it?
In particular, “99% anti-hallucination is as good as 100% useless”, so the big question to ask is: what anti-hallucination measures has a platform implemented to get there?
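As one concrete example of the kind of measure involved, here is an illustrative guardrail sketch (not a description of CustomGPT’s actual pipeline; `retrieve`, `generate`, and the threshold value are hypothetical): refuse to answer when retrieval does not return sufficiently relevant content, rather than letting the model improvise.

```python
# Illustrative anti-hallucination guardrail for a RAG pipeline: only answer
# when retrieval finds relevant content; otherwise say "I don't know".

def answer_with_guardrail(question, retrieve, generate, min_score=0.75):
    """retrieve(q) -> list of (score, text); generate(prompt) -> str."""
    hits = retrieve(question)
    relevant = [text for score, text in hits if score >= min_score]
    if not relevant:
        # Refusing is cheaper than a confident wrong answer.
        return "I don't know based on the provided content."
    context = "\n\n".join(relevant)
    prompt = (
        "Answer strictly from this context; if it does not contain the "
        f"answer, say you don't know.\n\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```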
PS: Since you are technical, I wrote a Medium article about this that you might like: https://medium.com/@aldendorosario/build-it-or-buy-it-deployment-options-for-retrieval-augmented-generation-rag-f6d43df8212a (I’m hoping to write a full-fledged white paper focused on hallucinations, based on lessons learnt from millions of queries from our thousands of customers).
PPS: Thanks for the technical note. YES: like security and uptime, anti-hallucination will never reach 100%.