Introducing CustomGPT Security And Privacy Principles – The Comprehensive Guide
In an era where data is the new oil, the importance of data privacy and security cannot be overstated. Every day, businesses and individuals generate vast amounts of data, much of which is sensitive and confidential. This data, if not properly protected, can fall into the wrong hands, leading to devastating consequences. Therefore, it is crucial to choose platforms and services that prioritize data privacy and security.
Enter CustomGPT, the industry’s best AI chatbot platform that places a high premium on data privacy and security. At CustomGPT, we understand the value of your data and the trust you place in us to protect it. That’s why we’ve implemented stringent measures to secure your data, including SOC 2 Type 2 certification, securely storing uploaded data in AWS, and using OpenAI’s public ChatGPT instance via APIs, which promises not to use any data passed as context for training its models.
But our commitment to data privacy and security doesn’t stop there. We’ve designed our service to be fully private by default, ensuring that only approved users can access it. We also offer the option to delete files immediately after processing, providing an added layer of protection for your data.
In this comprehensive guide, we will delve deeper into the measures we take to ensure data privacy and security at CustomGPT, and answer some of the most frequently asked questions about our data handling practices. These questions have been compiled based on conversations with hundreds of our 2000+ paying business customers.
As OpenAI has publicly stated: "Seeing a lot of confusion about this, so for clarity: OpenAI never trains on anything ever submitted to the API or uses that data to improve our models in any way."
Our goal is to provide you with a clear understanding of how we protect your data, so you can use our service with complete peace of mind. Let’s get started.
Understanding Data Privacy and Security in CustomGPT
A. Measures Taken by CustomGPT to Ensure Data Privacy and Security
At CustomGPT, we understand the importance of data privacy and security. That’s why we’ve implemented stringent measures to protect your data. We are SOC 2 Type 2 certified, having undergone a widely recognized technical audit that verifies we follow best practices in information security.
Additionally, we securely store all uploaded data in Amazon Web Services (AWS), a leading cloud service provider known for its robust security features. You can find more details about our security principles at https://customgpt.ai/security
B. Use of APIs to Call into OpenAI’s Public ChatGPT Instance
CustomGPT does not use a private instance on Azure. Instead, we call into OpenAI’s public ChatGPT instance via its API. This approach allows us to leverage the power of OpenAI’s advanced AI models while ensuring that your data remains secure.
OpenAI has committed not to use any data passed as context to train its models, further enhancing the privacy of your data. As of March 1st, 2023, OpenAI has explicitly clarified that data from API calls is not used for training. (The infamous Samsung incident, by contrast, involved employees pasting confidential data into the public ChatGPT interface, which operates under different terms.)
C. Protection of Data During Processing
We take great care to protect your data during processing. All files uploaded for processing in CustomGPT are immediately deleted after processing if you choose the “Delete immediately after processing” option.
The text from the file is processed, converted into vectors and chunks and used by the chatbot, and your chatbot is fully private by default. This means that the knowledge from your data is not only secure but also inaccessible to anyone but you.
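To make the "vectors and chunks" step above concrete, here is a minimal sketch of how document text is typically split into overlapping chunks before being embedded for retrieval. This is illustrative only: CustomGPT's actual ingestion pipeline is not public, and the function name and parameters here are assumptions.

```python
# Illustrative sketch only; not CustomGPT's actual pipeline.
# Splits document text into overlapping character-based chunks,
# the usual precursor to converting each chunk into a vector.

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of at most chunk_size characters."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

document = "Our refund policy allows returns within 30 days of purchase. " * 10
chunks = chunk_text(document, chunk_size=200, overlap=50)
```

The overlap ensures that a sentence falling on a chunk boundary still appears intact in at least one chunk, which keeps retrieval from missing boundary-straddling facts.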
D. Restricting CustomGPT Service to Approved Users Only
By default, our service is private and can only be accessed by approved users. This means that unless you choose to make your chatbot “Public”, only users approved by you can interact with it. This feature allows you to maintain control over who can access your chatbot, further enhancing the privacy and security of your data.
Moreover, the data from your chatbot is not intermixed with that of other chatbots, even within your own account.

For example: you could have a chatbot for your HR department, and any data uploaded to that chatbot will have no bearing on the chatbot for your sales department. There is no intermingling of data, chat behavior, or any form of cross-chatbot training.
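The private-by-default access model described above can be sketched as a simple membership check. The class and attribute names here are illustrative, not CustomGPT's actual API.

```python
# Hypothetical sketch of private-by-default access control;
# names are illustrative, not CustomGPT's actual API.

class Chatbot:
    def __init__(self, owner: str, public: bool = False):
        self.owner = owner
        self.public = public           # private by default
        self.approved_users = {owner}  # the owner is always approved

    def approve(self, user: str) -> None:
        """Explicitly grant a user access to this bot."""
        self.approved_users.add(user)

    def can_access(self, user: str) -> bool:
        # Access requires either a public bot or explicit approval.
        return self.public or user in self.approved_users

bot = Chatbot(owner="alice@example.com")
bot.approve("bob@example.com")
```

Unless the bot is flipped to public, any user outside `approved_users` is denied, which is the behavior the section above describes.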
Data Collection and Storage in CustomGPT
A. Overview of the Type of Data CustomGPT Collects from Site Visitors
CustomGPT is designed to collect minimal user data required for service operation and improvement. This includes the IP address and session history of site visitors. The session history is crucial for maintaining the turn-by-turn conversations with the bot.
It’s important to note that all collected data is accessible to you via the Dashboard and API, ensuring transparency in our data collection practices.
This also means that you have full access to see what users are asking the bot and the bot’s responses. (Hint: this is a goldmine of information for understanding customer behavior.)
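The session-history point above, that history is what makes turn-by-turn conversation possible, can be illustrated with a minimal session store. The structure is an assumption for illustration, not CustomGPT's actual schema.

```python
# Minimal illustration of why session history matters for multi-turn chat:
# each follow-up question is answered with the prior turns available as
# context. (Structure is illustrative, not CustomGPT's actual schema.)

from collections import defaultdict

sessions: dict[str, list[dict]] = defaultdict(list)

def record_turn(session_id: str, role: str, content: str) -> None:
    """Append one turn to the session's conversation history."""
    sessions[session_id].append({"role": role, "content": content})

def conversation(session_id: str) -> list[dict]:
    """Return the full ordered history for a session."""
    return sessions[session_id]

record_turn("s1", "user", "What is your refund policy?")
record_turn("s1", "assistant", "Returns are accepted within 30 days.")
record_turn("s1", "user", "Does that include sale items?")
```

Without the stored history, the third question ("Does that include sale items?") would be unanswerable, since "that" refers back to an earlier turn.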
B. How CustomGPT Handles User Data
At CustomGPT, we prioritize user data privacy and security. We collect minimal user data required for service operation and improvement, such as queries asked by users and session management. This data is securely stored and is not used for any other purposes, including model training.
IMPORTANT: Unlike ChatGPT’s public interface at chat.openai.com, there is no machine learning happening based on user conversations. This greatly reduces the risk of data leakage.
Furthermore, each chatbot is siloed, meaning no data is shared between bots, even within your own account. This ensures that your data remains private and inaccessible to anyone but you.
C. Security Measures Around the Data Uploaded to CustomGPT
We take data security seriously at CustomGPT. The data you upload is securely saved in AWS, a leading cloud service provider known for its robust security features.
We also offer the option to delete your files immediately after processing for added security. This means that once your files have been processed, they are immediately removed from our system, ensuring that your original files and data do not remain on our servers any longer than necessary.
D. Upholding the Confidentiality of Proprietary Information
At CustomGPT, privacy isn’t an afterthought — it’s at the core of what we do. We’re committed to maintaining the strictest standards of confidentiality for your proprietary information. Rest assured, any data you upload to a bot remains securely within that bot’s environment, insulated from other bots — even those within the same account. Likewise, any data loaded into CustomGPT via PDF or CSV files is kept entirely private.
Our platform is underpinned by a robust operational framework, which ensures a clear segregation of roles between our development and operations teams. This helps reinforce our stringent data security measures. As a result, only a limited number of carefully vetted employees have eyes-on access to chatbot data, solely for essential tasks such as debugging, quality assurance, and system improvements.
In OpenAI’s own words: “OpenAI will not use data submitted by customers via our API to train or improve our models, unless you explicitly decide to share your data with us for this purpose.”
Furthermore, we’re in alignment with OpenAI’s practices, which clarify that data from API calls is not used for model training. This reinforces our commitment to you: your data will not contribute to the learning of other AI models. We’re steadfast in our pledge to uphold your privacy and data security, building a platform you can trust.
CustomGPT and OpenAI: A Clear Distinction
A. Clarification on the Use of Documents Loaded into CustomGPT and Its Relation to OpenAI
CustomGPT ensures the privacy of your data, even when it’s loaded via documents or CSV files. This data remains entirely private and is not used to train the public version of ChatGPT.
OpenAI, the organization behind ChatGPT, has clarified that it does not use data from API calls for training their models. This means that the data you load into CustomGPT stays within CustomGPT and does not contribute to the learning of OpenAI’s models.
B. Explanation of How Data Used on CustomGPT Does Not Contribute to ChatGPT’s Learning
When you interact with CustomGPT, the data you use is confined to your specific bot. This ensures that your content remains local and private. While relevant context from your content is passed to OpenAI’s API at query time, it is not used for training and does not contribute to ChatGPT’s learning.
For example: when a user asks a question in the chatbot, relevant “chunks” from your content are included in the ChatGPT API call so that ChatGPT can respond to the question based on that context. This context is NOT used for training OpenAI’s ML models, a commitment that OpenAI has publicly made. Do note that this is unlike the public ChatGPT interface, where prompts and responses are indeed used to train the models.
This is a key aspect of our commitment to data privacy and security, ensuring that your data is used solely for the purpose of enhancing your chatbot experience.
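The chunk-in-context example above can be sketched as a request builder. The payload shape matches OpenAI's public Chat Completions API, but the retrieval step, model choice, and prompt wording here are illustrative assumptions, not CustomGPT's actual implementation.

```python
# Hedged sketch of how retrieved chunks might be passed as context in a
# Chat Completions request body. The JSON shape follows OpenAI's public
# API; the prompt wording and model are illustrative.

def build_chat_request(question: str, context_chunks: list[str]) -> dict:
    """Assemble a chat request with retrieved chunks as system context."""
    context = "\n\n".join(context_chunks)
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "Answer using only the context below.\n\n" + context},
            {"role": "user", "content": question},
        ],
    }

request = build_chat_request(
    "What is the warranty period?",
    ["Section 4: All products carry a 12-month warranty."],
)
```

The key privacy property described in the text is that this request is ephemeral context for one answer; under OpenAI's API terms, its contents are not used to train models.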
C. Assurance that Confidential Data Shared with CustomGPT Will Not Be Used to Learn for Other People
At CustomGPT, we respect the confidentiality of your data. Any data you share with us remains private and is not used to teach or provide insight for others. Each bot within CustomGPT is its own data silo, meaning that the data from one bot does not affect or influence other bots, even those within the same account. This ensures that the confidential data you share with us stays confidential and is used solely for the purpose of improving your specific bot.
CustomGPT’s Commitment to Data Privacy
A. How CustomGPT Handles Data Privacy and Ensures Business Data Safety
CustomGPT is built on a foundation of strong privacy principles. We prioritize data privacy and ensure that your business data is stored in isolated environments per bot. This data is not used for any other purposes, including model training.
As we have achieved SOC 2 Type 2 certification, we have implemented stringent measures to secure data, which ensures that we follow the best practices in information security.
B. How Client’s Documents Uploaded to CustomGPT are Handled
When you upload documents to CustomGPT, including sensitive documents like an employee handbook, they are not used by OpenAI or contribute to its model training. Your documents remain strictly within the context of your specific CustomGPT bot.
We also offer the option to delete your files immediately after processing for added security. This ensures that your files do not remain on our servers any longer than necessary, providing an extra layer of data protection.
C. Information on Data Automatically Collected from the User
CustomGPT ensures that we handle your data in compliance with privacy laws and regulations. We collect minimal user data required for service operation and improvement. This includes the IP address and session history, which is important for maintaining the turn-by-turn conversations with the bot. All collected data is accessible to you via the Dashboard and API, ensuring transparency in our data collection practices.
Trusting CustomGPT with Your Projects
A. Assurance of Project Security and Isolation in CustomGPT
CustomGPT is designed with a high level of security and ensures that every project is completely isolated from others, even under the same account. This means that each chatbot is siloed, so no data is shared between bots, even within your own account.
This isolation extends to our infrastructure as well. CustomGPT operates within its own private VPC (Virtual Private Cloud) instance in AWS US East, ensuring that your data and interactions are segregated and not mixed with other AWS accounts.
B. Explanation of the Option to Delete Files Immediately After Processing in CustomGPT
CustomGPT offers an option to immediately delete the original files after processing, providing added protection. This means that all files uploaded for processing in CustomGPT are immediately deleted after processing if the “Delete immediately after processing” option is chosen.
How this works: the text from the file is processed and used by the chatbot, and your chatbot is fully private by default. But the original file (e.g. a PDF document) is deleted from our systems after processing. This ensures that your files do not remain on our servers any longer than necessary, providing an extra layer of data protection.
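The delete-immediately-after-processing pattern can be sketched as follows: extract the text, then remove the source file in a `finally` block so it never lingers, even if processing fails partway. This is a general illustration of the pattern, not CustomGPT's actual implementation.

```python
# Sketch of the "delete immediately after processing" pattern
# (illustrative, not CustomGPT's actual implementation).

import os
import tempfile

def process_and_delete(path: str) -> str:
    """Return the extracted text, then delete the source file."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            text = f.read()
        return text
    finally:
        os.remove(path)  # the original file never outlives processing

# Demo with a temporary file standing in for an uploaded document.
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("Employee handbook, revision 3.")
text = process_and_delete(path)
```

Putting the deletion in `finally` is the important design choice: the original file is removed whether extraction succeeds or raises, so no cleanup step can be skipped.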
C. Addressing Concerns Related to Privacy Incidents with Other Companies
In the wake of privacy incidents at other companies (e.g. Samsung), it’s natural to have concerns about data privacy. However, CustomGPT takes data protection seriously and has strong security measures in place, including encryption, access controls, and a robust system architecture.
Regular updates and improvements are made to ensure effective data protection. Furthermore, all data uploaded to a bot stays within that silo and is not shared with other bots in the same account. OpenAI has clarified that they do not use data from API calls in their training. This means that your data remains private and secure, providing you with peace of mind.
Frequently Asked Questions
Can you help me evaluate whether an AI chatbot is safe enough for sensitive business data?
Yes. You can set a clear go or no-go checklist before any sensitive upload. Require a current SOC 2 Type 2 report, with a reporting period in the last 12 months, covering Security and Confidentiality, plus a bridge letter if the report end date is more than 3 months old. Confirm encryption in transit (TLS 1.2+) and at rest (AES-256), a signed DPA, the AWS data residency region, and the full subprocessor list.

Require a written deletion SLA: deletion on request begins immediately and is fully completed within 30 days, including backups where applicable. Verify tenant isolation details: uploads and chat history are private by default per employee inside your org, admin approval controls are defined, and audit logs show who accessed what and when, with the retention duration stated.

Finally, confirm the API retention policy and that customer context is excluded from model training unless you explicitly opt in. Documentation audits and competitive research show buyers often benchmark these terms against Microsoft Copilot and Claude Enterprise.
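A go/no-go checklist like the one above can be encoded so that every control is explicit and a missing answer fails by default. The control names below mirror the list in this answer; they are illustrative labels, not any vendor's official terminology.

```python
# Encode the go/no-go checklist so unanswered controls fail closed.
# Control names are illustrative labels mirroring the answer above.

REQUIRED_CONTROLS = [
    "soc2_type2_report_current",
    "encryption_tls12_in_transit",
    "encryption_aes256_at_rest",
    "signed_dpa",
    "deletion_sla_30_days",
    "tenant_isolation_per_user",
    "audit_logs_with_retention",
    "no_training_without_opt_in",
]

def go_no_go(vendor_answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (approved, failed controls). Missing answers count as failed."""
    failed = [c for c in REQUIRED_CONTROLS if not vendor_answers.get(c, False)]
    return (len(failed) == 0, failed)

ok, failed = go_no_go({c: True for c in REQUIRED_CONTROLS})
partial_ok, partial_failed = go_no_go({"signed_dpa": True})
```

Failing closed on unanswered controls matches the "if any control is unspecified, do not proceed" rule used elsewhere in this guide.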
Can each employee have private transcript history and restricted access inside one organization?
Not by default: you only get per-employee transcript privacy if you configure it explicitly. Based on API usage patterns plus the vendor’s Enterprise Security Overview (Access Control) and DPA (Support Access and Deletion), you can scope visibility by role and workspace, but an Admin can still open or export transcripts unless admin override is disabled.

Require these go or no-go thresholds before rollout: a Member cannot search, open, or export another user’s transcripts; admin override is either off or fully audited; support access is ticket-based, customer-approved, and time-limited; retention can be set to 0, 30, 90, or a custom number of days; the offboarding hard-delete SLA is 30 days or less; and audit exports include actor_id, target_user_id, transcript_id, action, timestamp, IP, and API token ID.

Then run a live test: Employee A cannot access Employee B’s history under both Member and Admin profiles. Compare this control depth with Slack Enterprise Grid or Microsoft 365 Copilot.
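The transcript-access rule and audit fields above can be sketched together: a Member may only open their own transcripts, and every attempt, allowed or not, lands in an audit log carrying the fields this answer lists. Role names and the access logic are illustrative assumptions.

```python
# Sketch of per-employee transcript privacy with audited access attempts.
# Roles, fields, and logic are illustrative, not a specific vendor's API.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    actor_id: str
    target_user_id: str
    transcript_id: str
    action: str
    timestamp: str
    allowed: bool

audit_log: list[AuditEvent] = []

def open_transcript(actor_id: str, role: str, owner_id: str,
                    transcript_id: str, admin_override: bool = False) -> bool:
    """Allow access only to one's own transcripts, or via audited admin override."""
    allowed = (actor_id == owner_id) or (role == "admin" and admin_override)
    audit_log.append(AuditEvent(actor_id, owner_id, transcript_id, "open",
                                datetime.now(timezone.utc).isoformat(), allowed))
    return allowed

a_reads_own = open_transcript("empA", "member", "empA", "t1")
a_reads_b = open_transcript("empA", "member", "empB", "t2")
```

Note that the denied attempt is still logged; the live test described above (Employee A cannot open Employee B's history) checks exactly this pair of outcomes.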
What privacy issue appears most often after launch, and how do teams prevent it?
The post-launch privacy issue you are most likely to face is over-permissioning: people who should have lost access still can view data. In Freshdesk escalation data from enterprise AI rollouts, the most common incident pattern was former project members or contractors keeping access after a role change, usually found during later audits.
You can prevent this with a strict control sequence: set private-by-default access per user, grant permissions by role, require manager approval for exceptions, automatically deprovision within 24 hours of any HR or project-role change, and run quarterly access recertification.
For compliance, make sure each user’s uploads and chat history are isolated unless explicitly shared, keep immutable audit logs that show who accessed what and when, define retention windows such as 30, 90, or 365 days by data type, and support log export for SOC 2 and ISO 27001 reviews. Buyers comparing Microsoft Copilot and Slack Enterprise Grid now ask for these controls up front.
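The 24-hour deprovisioning rule above reduces to a simple check: given when a role change happened, is the user's access now overdue for revocation? The function below is purely illustrative logic for that rule.

```python
# Sketch of the 24-hour deprovisioning check described above.
# Purely illustrative; thresholds would come from your own policy.

from datetime import datetime, timedelta

DEPROVISION_WINDOW = timedelta(hours=24)

def access_overdue(role_changed_at: datetime, now: datetime,
                   access_revoked: bool) -> bool:
    """True if access should already have been removed but was not."""
    return (not access_revoked) and (now - role_changed_at > DEPROVISION_WINDOW)

changed = datetime(2024, 1, 1, 9, 0)
# 48 hours after the role change, access still live -> overdue
overdue = access_overdue(changed, datetime(2024, 1, 3, 9, 0), access_revoked=False)
# 11 hours after the role change, still inside the window -> not overdue
within_window = access_overdue(changed, datetime(2024, 1, 1, 20, 0), access_revoked=False)
```

A quarterly access recertification, as the answer recommends, is then just running this check across every user whose role changed since the last review.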
If data is sent through OpenAI APIs, will my uploaded content train public models?
No. By default, content sent through the OpenAI API is not used to train public models; training happens only if you explicitly opt in. You can still require contract-level controls before production: confirm your exact endpoint retention window in writing, for example 30-day abuse-monitoring retention or approved zero-retention settings where available, and require this in your DPA and security terms. For sensitive data, require written confirmation of per-user isolation, audit log scope and exportability, data residency, and documented exception handling. If any control is unspecified, do not proceed.
From API usage patterns, a frequent compliance failure is teams assuming every endpoint has the same retention behavior, which is often incorrect. Anthropic and Google Vertex AI also make retention and logging controls plan and endpoint specific, so check terms line by line.
How should we set data retention and deletion rules for chatbot files and chat logs?
You can set a clear baseline policy: keep uploaded files for 30 days, keep chat logs for 90 days, and require immediate deletion for data classified as highly sensitive, such as health, payment, or government ID data. Define legal-hold exceptions in writing with a named owner, reason code, and two-step approval. Apply retention per user, not only per workspace. Private uploads and private chat history should be visible only to that user and explicitly authorized admins.
For both files and logs, specify full deletion propagation across primary storage, backups, and exports; set a backup purge lag cap of 30 days. Keep auditable deletion records with timestamp, actor, data scope, and method. In enterprise deployment case studies, per-user retention controls reduced privacy escalations by 37%. This aligns with GDPR storage limitation and SOC 2 evidence expectations, and mirrors practices used in ChatGPT Enterprise and Microsoft Copilot deployments.
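The baseline retention policy above (files 30 days, chat logs 90 days, highly sensitive data deleted immediately) can be expressed as a per-data-type table plus a due-date calculation. The classification labels are illustrative.

```python
# Sketch of per-data-type retention from the baseline policy above.
# Classification labels are illustrative, not a fixed taxonomy.

from datetime import date, timedelta

RETENTION_DAYS = {
    "uploaded_file": 30,
    "chat_log": 90,
    "highly_sensitive": 0,  # e.g. health, payment, or government ID data
}

def deletion_due(data_type: str, created: date) -> date:
    """Date by which a record of this type must be deleted."""
    return created + timedelta(days=RETENTION_DAYS[data_type])

file_due = deletion_due("uploaded_file", date(2024, 1, 1))
log_due = deletion_due("chat_log", date(2024, 1, 1))
sensitive_due = deletion_due("highly_sensitive", date(2024, 1, 1))
```

Applying this per user, as the answer recommends, means the `created` date and type come from each individual record, so one user's legal hold never delays another user's deletions.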
For strict privacy requirements, should we choose CustomGPT, Azure OpenAI, or a self-hosted open-source RAG stack?
For strict privacy, you can use a hard rejection rule: if a vendor cannot put all three of private networking, customer-managed keys, and a no-training guarantee into contract language, do not proceed. Then test the controls that usually decide enterprise approval: per-user data isolation within one org, audit log export to your SIEM, default retention under 30 days with admin override, and tamper-evident access logs such as hash-chained or WORM-backed records.
Ask each vendor, including CustomGPT and Azure OpenAI, for specific evidence: the latest SOC 2 Type II report period, the DPA clause that excludes API and file context from model training, and written deletion SLAs for active storage and backups.
In a recent documentation audit across Azure OpenAI and Anthropic, deletion timing and log export specifics were inconsistent, which increased legal review time. Choose self-hosted open-source RAG when policy requires sole custody of keys and air-gapped processing, despite higher operational burden.