
Watch the demo video at the top for a live walkthrough and explanation of this guide! (coming soon)
1. Introduction
Hi there! In this guide, we’ll show you how to create a custom agent using the CustomGPT.ai RAG APIs Python SDK implementation. Unlike our previous tutorials that used REST APIs, this guide will walk you through the process using the SDK. You’ll learn how to install the SDK, create a chatbot from a file, fetch project details to ensure your chatbot is active, start a conversation, and finally send messages. This guide is designed specifically for developers who want to leverage the power of the CustomGPT.ai API with an easier, more Pythonic interface and pair it with custom instructions.
Make sure you have read our Getting Started with CustomGPT.ai for New Developers blog to get an overview of the entire platform.
By the end, you’ll be able to programmatically inspect where your chatbot’s content comes from. For full API reference, see the CustomGPT API docs.
Notebook link – https://github.com/Poll-The-People/customgpt-cookbook/blob/main/examples/SDK_Create_bot_from_file.ipynb
2. Setting Up the Environment
Before creating your chatbot, you need to set up the SDK in your Google Colab notebook. The first step is to install the customgpt-client package.
!pip install customgpt-client
What This Does:
- Installation:
Installs the CustomGPT client SDK so you can use its classes and functions to interact with the API directly through Python.
- SDK Benefits:
This SDK simplifies the way you interact with the API by handling many of the lower-level details for you.
Now that the SDK is installed, let’s move on to creating your custom chatbot using a sitemap and a file.
3. Creating Your Custom Chatbot Using a Sitemap
First, we’ll start with a basic example that uses a sitemap to create your chatbot (or “project,” in earlier terminology). Before making any SDK calls, set your API key to authenticate them.
from customgpt_client import CustomGPT
CustomGPT.api_key = "YOUR_API_KEY"
Get API keys
To get your API key, there are two ways:
Method 1 – Via Agent
- Agent > All Agents.
- Select your agent, go to Deploy, click on the API Key section, and create an API key.
Method 2 – Via the Profile section
- Go to your profile (top right corner of your screen).
- Click on My Profile.
- You will see a screen like the one in the screenshot below. Click “Create API key”, give it a name, and copy the key.
Please save this secret key somewhere safe and accessible, especially if you’ll be using CustomGPT.ai integrations. For security reasons, you won’t be able to view it again through your CustomGPT.ai account. If you lose this secret key, you’ll need to generate a new one.
What This Does:
- Setting API Key:
Configures the SDK with your API key (replace “YOUR_API_KEY” with your actual API token) so that every request is authenticated.
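Hardcoding the key in a notebook cell is risky, since notebooks are easy to share or commit by accident. As a minimal sketch, you could read the key from an environment variable instead; the variable name `CUSTOMGPT_API_KEY` and the `load_api_key` helper are our own naming choices, not part of the SDK.

```python
import os

def load_api_key(env_var: str = "CUSTOMGPT_API_KEY") -> str:
    """Read the CustomGPT API key from an environment variable.

    Fails loudly instead of silently sending unauthenticated requests.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set the {env_var} environment variable before running this notebook."
        )
    return key

# Then, instead of pasting the key into the cell:
# CustomGPT.api_key = load_api_key()
```

In Google Colab you can set the variable via the Secrets panel or `os.environ` in a cell you never share.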
Next, we create a project (agent) using a file as an input source rather than creating one via sitemap.
4. Creating a Project from a File
In this example, we’ll create a project using a file upload. We use Google Colab’s file upload feature to select the file, then pass the file content to the SDK.
from google.colab import files
from customgpt_client.types import File
# Give a name to your project
project_name = 'Example ChatBot using File'
uploaded_file = files.upload()
file_content = next(iter(uploaded_file.values()))
print(file_content)
create_project = CustomGPT.Project.create(
    project_name=project_name,
    file=File(payload=file_content, file_name='Yes.doc')
)
print(create_project)
What This Does:
- File Upload:
Uses the files.upload() function from Google Colab to let you select a file from your local system.
- Reading File Content:
Retrieves the file content from the uploaded files dictionary.
- Project Creation:
Calls the CustomGPT.Project.create method with the project name and a File object. The File object takes two parameters:
- payload: The binary content of your file.
- file_name: The name of the file (here, ‘Yes.doc’).
- Output:
Prints the response, which contains details about the newly created project/agent.
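Outside Colab there is no `files.upload()` widget, but the same two fields can be read from a local path. As a small sketch under that assumption (the `file_payload` helper and the example path are hypothetical, not SDK names):

```python
from pathlib import Path

def file_payload(path: str) -> tuple[bytes, str]:
    """Read a local document as (binary content, file name),
    matching the payload and file_name fields the File object expects."""
    p = Path(path)
    return p.read_bytes(), p.name

# payload, name = file_payload("docs/Yes.doc")
# create_project = CustomGPT.Project.create(
#     project_name=project_name,
#     file=File(payload=payload, file_name=name),
# )
```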
After creating the project, let’s verify that our chatbot is active by fetching its details
5. Fetching Project Details
Before sending any messages, it’s good practice to check whether the chatbot (agent) is active. Do this by retrieving the project details and checking the is_chat_active flag before retrieving any conversation messages.
# Check status of the project if chat bot is active
data = create_project.parsed.data
# Get project id from response for created project
project_id = data.id
# Check if chat bot is active using `is_chat_active` flag in project detail response
# GET project details
get_project = CustomGPT.Project.get(project_id=project_id)
project_data = get_project.parsed
is_chat_active = project_data.data.is_chat_active
print(is_chat_active)
# One can poll this GET project details API to check if chat bot is active before starting a conversation
What This Does:
- Extracting Project ID:
Parses the response from the project creation to get the unique project ID.
- Fetching Details:
Calls CustomGPT.Project.get with the project ID to retrieve detailed information about your project.
- Checking Activation Status:
Extracts the is_chat_active flag to see if your chatbot is active. You can poll this endpoint to continuously check the activation status before initiating a conversation.
- Output:
Prints the status so you know if the chatbot is ready to receive messages.
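The comment in the snippet suggests polling until the agent is active. One way to sketch that is a generic helper that repeatedly calls a status check until it succeeds or times out; `wait_until_active` is a hypothetical name of ours, and the status check is injected so you can wrap the real `CustomGPT.Project.get` call however your code is structured.

```python
import time

def wait_until_active(is_active, timeout_s: float = 120.0, interval_s: float = 5.0) -> bool:
    """Poll a zero-argument status check until it returns True or we time out.

    `is_active` would wrap the GET project details call, e.g.
    lambda: CustomGPT.Project.get(project_id=project_id).parsed.data.is_chat_active
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_active():
            return True  # agent reported active
        time.sleep(interval_s)  # back off before the next poll
    return False  # gave up after timeout_s seconds
```

Returning a boolean (rather than raising) leaves the caller free to decide whether a timeout is fatal.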
Now that we’ve verified your project is active, it’s time to start a conversation with your chatbot.
6. Creating a Conversation
Before you send any messages, create a conversation within your project. This conversation acts as a session for interacting with your chatbot.
# Create a conversation to send a message to
project_conversation = CustomGPT.Conversation.create(
    project_id=project_id,
    name="My First Conversation"
)
project_data = project_conversation.parsed
print(project_data)
What This Does:
- Conversation Creation:
Uses CustomGPT.Conversation.create to start a new conversation within your project, assigning it a name (“My First Conversation”).
- Session Management:
The conversation response includes a session ID that is crucial for tracking the chat history.
- Output:
Prints the details of the newly created conversation.
With the conversation successfully created, we are ready to send a message to your chatbot.
7. Sending a Message to Your Chatbot
Now you’ll send a message to your chatbot and receive its response. In this example, the message is sent with streaming enabled, so you can see real-time updates.
session_id = project_data.data.session_id
# Define the prompt and custom persona for the chatbot
prompt = "Who do you work for and what are you called and who is vanka in 10 words."
custom_persona = (
    "You are a custom chatbot assistant called *Story Teller*, a friendly story teller "
    "who works for Test and answers questions based on the given context. "
    "Be as helpful as possible. Always prioritize the customer. Escalate complex issues. "
    "Stay on topic. Use appropriate language. Acknowledge limitations."
)
stream_response = CustomGPT.Conversation.send(
    project_id=project_id,
    session_id=session_id,
    prompt=prompt,
    custom_persona=custom_persona,
    stream=True
)
for event in stream_response.events():
    print(event.data)
What This Does:
- Retrieve Session ID:
Gets the session ID from the conversation details to ensure the message is part of the correct session.
- Set Prompt & Custom Persona:
- The prompt is the question you want to ask the chatbot.
- The custom persona adds context to the chatbot’s responses (here, it’s set as a friendly “Story Teller”).
- Send Message with Streaming:
- Uses CustomGPT.Conversation.send to send the message with the streaming flag set to True.
- This enables you to see real-time events as the chatbot processes the message.
- Output:
- Iterates over the streamed events and prints the data from each event.
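If you want the full response as a single string rather than printed chunks, you can accumulate the events as they arrive. This is a sketch under the assumption (from the loop above) that each streamed event exposes a `.data` attribute holding a text chunk; the `collect_stream` helper is our own, and the real payload shape may differ between SDK versions.

```python
def collect_stream(events) -> str:
    """Concatenate the data fields of streamed events into one response string."""
    chunks = []
    for event in events:
        if event.data:  # skip empty keep-alive chunks
            chunks.append(event.data)
    return "".join(chunks)

# full_text = collect_stream(stream_response.events())
```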
This completes the process of sending a message to your custom chatbot using the CustomGPT SDK. With the SDK, interacting with the chatbot becomes more streamlined and Pythonic.
8. Conclusion
Congratulations, developer! In this guide, you learned how to:
- Install and set up the CustomGPT Python SDK.
- Create a custom agent (project) from a file using Google Colab’s file upload.
- Fetch project details to verify that your chatbot is active.
- Create a conversation within your project.
- Send a message to your chatbot with streaming enabled, incorporating a custom persona.
By transitioning to the CustomGPT Python SDK, you simplify many interactions with the API, making it easier to build, manage, and scale your chatbot solutions. If you have any questions or need further assistance, feel free to consult the official CustomGPT SDK documentation or join our developer community.
Happy coding, and enjoy building your custom chatbot with the CustomGPT SDK!
Frequently Asked Questions
What does the Python SDK save compared with building a file-based RAG agent from scratch?
It saves you from handling much of the low-level API plumbing yourself. The documented flow is to install the SDK, set an API key, create a project from a file, fetch project details until the agent is active, start a conversation, and then send messages. That gives you a faster path from raw file to working agent because the common API interactions are already packaged in a Python-friendly workflow.
How do I create a custom agent from a local file in Python?
Barry Barresi described his workflow this way: “Powered by my custom-built Theory of Change AIM GPT agent on the CustomGPT.ai platform. Rapidly Develop a Credible Theory of Change with AI-Augmented Collaboration.” In Python, the basic sequence is straightforward: install the client package, set your API key, upload or select the local file, create the project from that file, check project details until the agent is active, then open a conversation and send your first message.
How do I know when a file-based agent is ready to answer questions?
Wait until the project details show the agent is active before sending the first user message. The guide places the project-details check between project creation and conversation start, so the safest pattern is: create the project, poll its status, and only then begin chatting.
Why use file upload in the SDK instead of manually pasting content?
Stephanie Warlick recommends a workflow where you “dump all your knowledge to automate proposals, customer inquiries and the knowledge base that exists in your head so your team can execute without you.” For a Python integration, file upload is usually the better choice because it creates a repeatable ingestion step and supports programmatic inspection of where the agent’s content comes from. Manual pasting can work for a quick experiment, but file-based creation fits automation much better.
Can I preprocess files in my own app or database and still use the Python SDK?
Yes. If your team already stores documents in another system, you can keep preprocessing, access control, or approval logic there and then use the Python SDK to create the agent from the final file or other supported source. The platform supports multiple input formats, and it also outperformed OpenAI in a RAG accuracy benchmark, which makes this split workflow reasonable when you want to keep your existing content pipeline.
How should I handle API keys and data security in a Python integration?
Store the API key in an environment variable or secret manager, not in notebook cells, screenshots, or committed source files. The guide notes that you cannot view the secret again after creation, so save it securely and rotate it if it leaks. For security context, the service is SOC 2 Type 2 certified, GDPR compliant, and states that uploaded data is not used for model training.
Can I use the same agent in my own app after creating it in Python?
Bill French, Technology Strategist, said, “They’ve officially cracked the sub-second barrier, a breakthrough that fundamentally changes the user experience from merely ‘interactive’ to ‘instantaneous’.” After you create the agent in Python, you can call it from your own application through the API rather than exposing the API key to end users. Documented deployment options include embed widget, live chat, search bar, API, and MCP server, so Python can be the build workflow even when the final experience runs somewhere else.
Related Resources
If you’re building beyond file-based setup, these guides cover the next CustomGPT.ai workflows to know.
- Create Agents And Messages — Learn how to create a CustomGPT.ai agent and send messages through the API as part of a complete conversation flow.
- Citation Details Guide — This walkthrough shows how to retrieve citation details from a CustomGPT.ai agent using the RAG API for more transparent responses.
- Update And Delete Conversations — See how to modify or remove conversations in CustomGPT.ai through the API to keep chat data organized and current.

Priyansh is a Developer Relations Advocate who loves technology, writes about it, and creates deeply researched content about it.