Retrieve Messages for a Conversation with CustomGPT RAG API – A Step-by-Step Guide

Watch the demo video at the top for a live walkthrough of this guide! (coming soon)

1. Introduction

Hi there! In this guide, we’ll walk through how to retrieve messages for a conversation using the CustomGPT RAG API in a Google Colab notebook. We’ll cover everything from setting up your environment and creating an agent, to starting a conversation and retrieving messages—both with and without streaming. By the end, you’ll understand how to manage and track your conversation messages easily.

Make sure you have read our Getting Started with CustomGPT.ai for New Developers blog to get an overview of the entire platform.

Get the cookbook link here – https://github.com/Poll-The-People/customgpt-cookbook/blob/main/examples/Retrieve_messages_for_a_conversation.ipynb

2. Setting Up the Environment

Before diving into the code, let’s get our workspace ready in Google Colab. We start by defining the RAG API endpoint, setting up your RAG API token, and importing the necessary libraries.

# setup RAG API URL and API Token
api_endpoint = 'https://app.customgpt.ai/api/v1/'
api_token = 'ADD_YOUR_API_TOKEN_HERE'

headers = {
    'Content-type': 'application/json',
    'Authorization': 'Bearer ' + api_token
}

# imports
import requests
import json

What this does:

  • RAG API Endpoint & Token: Sets the base URL for all RAG API calls and uses your RAG API token for authentication (remember to replace ‘ADD_YOUR_API_TOKEN_HERE’ with your actual token).
  • Headers: Prepares the HTTP headers needed for the RAG API calls, including content type and authorization.
  • Imports: Loads the requests library to handle HTTP requests and json for managing JSON data.

Now that our environment is set up, let’s make sure you have everything in place to follow along.

3. Prerequisites

Before you start, ensure you have:

  • CustomGPT.ai Account: Sign up and log in at CustomGPT.ai.
  • RAG API Key: Generate your RAG API token from your account dashboard.
  • Basic Python Knowledge: Familiarity with Python and REST APIs will help.
  • Google Colab: We’re using Google Colab—no local setup required!

Get the RAG API keys

To get your RAG API key, there are two ways:

Method 1 – Via Agent

  1. Go to Agent > All Agents.
  2. Select your agent, go to Deploy, click the API Key section, and create an API key.

Method 2 – Via Profile section

  1. Go to your profile (top right corner of the screen).
  2. Click on My Profile.
  3. Click “Create API Key”, give it a name, and copy the key.

Please save this secret key somewhere safe and accessible. For security reasons, you won’t be able to view it again through your CustomGPT.ai account. If you lose this secret key, you’ll need to generate a new one.
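To keep the key out of your notebook entirely, one option is to load it from an environment variable instead of pasting it into the code. This is a minimal sketch; the variable name CUSTOMGPT_API_TOKEN is our own choice, not something the platform requires:

```python
import os

# Load the RAG API token from an environment variable instead of hardcoding it.
# Set CUSTOMGPT_API_TOKEN in your shell (or via Colab secrets) before running;
# the placeholder fallback keeps the cell runnable but will fail real API calls.
api_token = os.environ.get("CUSTOMGPT_API_TOKEN", "ADD_YOUR_API_TOKEN_HERE")

headers = {
    "Content-type": "application/json",
    "Authorization": "Bearer " + api_token,
}
```

This way the notebook can be shared or committed without leaking the key.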

With these prerequisites checked, let’s move on to creating your agent.

4. Creating an Agent (Project)

Even though the code refers to “project,” we now call these agents. In this step, you’ll create an agent using a sitemap as its content source.

# Give a name to your project
project_name = 'Example ChatBot using Sitemap'
sitemap_path = 'https://adorosario.github.io/small-sitemap.xml'

payload = json.dumps({
    "project_name": project_name,
    "sitemap_path": sitemap_path
})

url = api_endpoint + 'projects'
create_project = requests.request('POST', url, headers=headers, data=payload)
print(create_project.text)

Key Points:

  • Agent Details: Sets the agent’s name and specifies a sitemap URL as its content source.
  • Payload Creation: Converts the agent details into a JSON string.
  • POST Request: Sends a POST request to create your agent on the CustomGPT platform.
  • Output: Prints the response containing your agent’s details, including its unique ID.

Great! Now that your agent is created, let’s set up a conversation within it.

5. Creating a Conversation for the Agent

Next, we start a conversation within your agent. This conversation is essential to maintain chat history and to retrieve messages later.

data = json.loads(create_project.text)["data"]
project_id = data["id"]

# Name the conversation
name = 'Test Conversation'

payload = json.dumps({
    "name": name
})

url = api_endpoint + 'projects' + f"/{project_id}" + '/conversations'
create_conversation = requests.request('POST', url, headers=headers, data=payload)
print(create_conversation.text)

Key Points:

  • Extract Agent ID: Parses the agent creation response to get its unique ID.
  • Set Conversation Name: Prepares the payload with the conversation name.
  • Create Conversation: Sends a POST request to start a new conversation within the agent.
  • Output: Prints the conversation details, including the session_id used to track the conversation.

With your conversation created, let’s move on to sending messages to it.

6. Sending a Message with Streaming Response

Now we’ll send a message to the conversation and get a streaming response. Streaming responses allow you to receive data in real time.

First, we need to install and import the SSE Client for handling streaming events:

# for streaming response import SSE Client
!pip install sseclient-py
from sseclient import SSEClient

Then, we send the message:

# Create a message to the above conversation
conversation_data = json.loads(create_conversation.text)["data"]

# session_id is important to maintain chat history
session_id = conversation_data["session_id"]

# pass in your question to prompt
prompt = "Who is Tom"

# set stream to 1 to get a streaming response
stream = 1

url = api_endpoint + 'projects/' + str(project_id) + '/conversations/' + str(session_id) + '/messages'

payload = json.dumps({
    "prompt": prompt,
    "stream": stream
})

headers["Accept"] = "text/event-stream"
stream_response = requests.post(url, stream=True, headers=headers, data=payload)

client = SSEClient(stream_response)
for event in client.events():
    print(event.data)

Key Points:

  • Extract Session ID: Retrieves the session_id from the conversation response to maintain context.
  • Prompt and Streaming Flag: Sets your question (prompt) and enables streaming by setting stream to 1.
  • SSE Client: Uses the SSEClient to handle server-sent events from the streaming response.
  • Output: The code prints events as they are received in real time.
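Printing each event shows the raw chunks; to reassemble them into one answer you can accumulate the chunks as they arrive. This sketch works on mocked payloads, and assumes each event carries a JSON body with a "message" field holding a text chunk; compare against what print(event.data) shows for you and adapt the field name if needed:

```python
import json

def collect_stream_text(event_payloads):
    """Assemble the full answer from streamed event payloads.

    Assumes each payload is a JSON string with a "message" chunk, e.g.
    {"status": "progress", "message": "Hello"} -- the exact field names
    may differ in the live API, so verify against your own output.
    """
    parts = []
    for raw in event_payloads:
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # skip keep-alive or non-JSON events
        chunk = data.get("message")
        if chunk:
            parts.append(chunk)
    return "".join(parts)

# Mocked payloads, not real API output:
sample = [
    '{"status": "progress", "message": "Tom is "}',
    '{"status": "progress", "message": "a character."}',
    '{"status": "finish"}',
]
print(collect_stream_text(sample))  # -> Tom is a character.
```

In the live loop you would call collect_stream_text on the list of event.data strings, or append chunks inside the for loop instead of printing them.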

Now that we’ve seen how to handle streaming responses, let’s send a message without streaming.

7. Sending a Message without Streaming

For non-streaming responses, the process is similar except we set the stream flag to 0.

# Create a message to the above conversation
conversation_data = json.loads(create_conversation.text)["data"]

# session_id is important to maintain chat history
session_id = conversation_data["session_id"]

# pass in your question to prompt
prompt = "Who is Tom"

# set stream to 0 to get a non-streaming response
stream = 0

# remove the streaming Accept header set in the previous step
headers.pop("Accept", None)

url = api_endpoint + 'projects/' + str(project_id) + '/conversations/' + str(session_id) + '/messages'

payload = json.dumps({
    "prompt": prompt,
    "stream": stream
})

non_stream_response = requests.post(url, stream=False, headers=headers, data=payload)
print(non_stream_response.text)

Key Points:

  • Stream Flag Set to 0: Disables streaming, so you receive the entire response at once.
  • Output: Prints the response with details such as the response message, timestamp, and citations.
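Rather than printing the raw text, you can parse it and pull out the fields you need. The shape below is a hypothetical example modeled on the printed response, with the reply nested under "data"; check the field names against your own output before relying on them:

```python
import json

# Mocked response body (NOT real API output) -- the live API wraps the
# reply under "data"; verify the exact keys from print(non_stream_response.text).
sample_body = json.dumps({
    "status": "success",
    "data": {
        "id": 1,
        "user_query": "Who is Tom",
        "openai_response": "Tom is ...",
    },
})

message = json.loads(sample_body)["data"]
print(message["user_query"])      # the question you sent
print(message["openai_response"]) # the agent's answer
```

With the live call you would pass non_stream_response.text to json.loads instead of the mocked body.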

With your messages sent, it’s time to retrieve all messages from the conversation.

8. Retrieving All Messages from the Conversation

Finally, we retrieve all messages from the conversation. This is useful for reviewing the entire conversation history.

url = api_endpoint + 'projects/' + str(project_id) + '/conversations/' + str(session_id) + '/messages'

project_messages = requests.request('GET', url, headers=headers)
print(project_messages.text)

Key Points:

  • Build the URL: Constructs the URL using the agent’s project_id and the conversation’s session_id.
  • GET Request: Sends a GET request to fetch all messages in the conversation.
  • Output: Prints the conversation history in JSON format, including details like the user query, response, and citations.
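To turn that JSON dump into a readable transcript, you can loop over the message list. The nesting below is an assumption based on typical paginated responses (messages under data -> messages -> data); inspect print(project_messages.text) and adjust the path to match what you actually receive:

```python
import json

# Mocked history body (NOT real API output) -- the nesting here is a guess;
# confirm the structure from your own printed response before using it.
sample_history = json.dumps({
    "data": {
        "messages": {
            "data": [
                {"id": 1, "user_query": "Who is Tom",
                 "openai_response": "Tom is ..."},
                {"id": 2, "user_query": "Who is Tom",
                 "openai_response": "Tom is ..."},
            ]
        }
    }
})

messages = json.loads(sample_history)["data"]["messages"]["data"]
for m in messages:
    print("Q:", m["user_query"])
    print("A:", m["openai_response"])
```

With the live call, replace sample_history with project_messages.text.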

If you encounter any issues during these steps, check out our troubleshooting section below.

9. Troubleshooting Common Issues

Here are some common problems you might face and how to resolve them:

  • Invalid API Token: Double-check that you’ve replaced 'ADD_YOUR_API_TOKEN_HERE' with your actual RAG API key.
  • JSON Parsing Errors: Print the raw response to verify its structure if you face errors while parsing JSON.
  • Connection Problems: Ensure your internet connection is stable and that the RAG API endpoint URL is correct.
  • Streaming Issues: If streaming responses don’t work, ensure the SSEClient is installed correctly and that your headers include "Accept": "text/event-stream".
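The first two checks above can be wrapped into a small helper that triages a response before you try to parse it. This is a sketch of our own, not part of the CustomGPT SDK; it only looks at the status code and whether the body is valid JSON:

```python
import json

def triage_response(status_code, body_text):
    """Minimal triage for a RAG API response (status code + raw body).

    401 usually means a bad or missing token, 404 a wrong project_id or
    session_id in the URL; for anything else, inspect the raw body before
    calling json.loads on it.
    """
    if status_code == 401:
        return "Invalid API token -- regenerate your key"
    if status_code == 404:
        return "Not found -- check project_id / session_id in the URL"
    try:
        json.loads(body_text)
    except json.JSONDecodeError:
        return "Non-JSON body -- print the raw response to inspect it"
    return "OK"

print(triage_response(401, ""))  # -> Invalid API token -- regenerate your key
```

For a live call you would pass resp.status_code and resp.text, e.g. triage_response(create_project.status_code, create_project.text).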

If these troubleshooting tips don’t help, consult the official CustomGPT documentation or ask for help in the community forum.

Conclusion

Great job! In this guide, you learned how to:

  • Set up your Google Colab environment with the necessary RAG API endpoint, token, and libraries.
  • Create an agent (project) and start a conversation within it.
  • Send messages to the conversation, both with streaming and non-streaming responses.
  • Retrieve all messages from a conversation to review the complete chat history.

By following these steps, you can easily manage and retrieve conversation messages using the CustomGPT RAG API. If you have any questions or need further assistance, feel free to check out the CustomGPT documentation or join our community for support.

Happy coding, and enjoy building and managing your chatbot conversations with CustomGPT!
