
How do I pass user context into my chatbot?

To pass user context into a chatbot, decide what information you really need, store it in your backend, and send only minimal identifiers or attributes on each request. With CustomGPT.ai, use custom_context and Logged-In User Awareness to personalize safely.

Scope:
Last updated – November 2025. Applies globally; align user-context handling with local privacy laws like GDPR in the EU and CCPA/CPRA in California, emphasizing data minimization and pseudonymization.

Identify what user context your chatbot really needs

Before you send anything to a model, get clear on what “user context” actually means for your use case and how sensitive it is.

Typical user context includes:

  • Identity: user ID, email hash, account ID
  • Profile: name, language, role, subscription/plan
  • Behavior: current page, last action, last ticket, cart contents
  • Preferences: notification settings, content interests

Keep principles:

  1. Minimize: only include attributes that change the answer (e.g., plan or language), not everything you know about the user.
  2. Classify: treat PII (name, email, phone) differently from anonymous IDs.
  3. Hash and pseudonymize where possible so you send stable identifiers rather than raw PII.
  4. Separate storage and compute: keep the rich profile in your own datastore; send only safe, relevant slices to the LLM.
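
For instance, a minimal Node.js sketch of that hashing step might derive a stable pseudonymous ID with a server-side secret (the USER_ID_SALT variable and user_ prefix here are illustrative, not a required convention):

import crypto from "node:crypto";

// Server-side secret; never expose it to the client.
const SALT = process.env.USER_ID_SALT;

// Derive a stable pseudonymous identifier from an internal user ID.
function pseudonymize(userId) {
  const digest = crypto
    .createHmac("sha256", SALT)
    .update(String(userId))
    .digest("hex");
  return `user_${digest.slice(0, 12)}`; // stable per user, not reversible
}

Because the HMAC uses a secret only your backend knows, the same user always maps to the same identifier, but the model provider cannot reverse it back to the raw ID.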

This framing makes later architecture choices much easier and keeps you within privacy and security best practices.

Pass user context directly in the model request

For many apps, the simplest pattern is: “look up user context → inject into the LLM call.”

Core pattern

  1. Look up the user by ID in your backend.
  2. Build a short, structured description of relevant facts (e.g., “User is a Pro subscriber, prefers Spanish, has open ticket #1234”).
  3. Add this to the system or assistant prompt at call time.
  4. Optionally, send a stable user identifier (safety_identifier, hashed user ID, or session ID) to the model provider so abuse monitoring and safety systems can track behavior without seeing PII.

Example (OpenAI-style)

You might include both:

  • A text snippet in the system message:
    “The end user is a Pro plan customer named Ana who prefers Spanish. Prioritize answers in Spanish.”
  • A stable identifier in the safety_identifier field, such as user_3fa9e4c.
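
A hedged sketch of that call with the openai Node SDK (the model name is arbitrary, and safety_identifier follows the naming above; older API versions exposed the equivalent user field, so check your provider's reference):

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// userContext is the short summary from your backend lookup,
// e.g. "Pro plan customer named Ana who prefers Spanish."
async function answer(userContext, userMessage, hashedUserId) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: `You are a support assistant. ${userContext} Prioritize answers in the user's preferred language.`,
      },
      { role: "user", content: userMessage },
    ],
    // Stable pseudonymous ID for abuse monitoring; no raw PII.
    safety_identifier: hashedUserId,
  });
  return response.choices[0].message.content;
}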

Best practices

  • Keep context under tight length limits to control cost and latency.
  • Avoid sending full history or raw database records; summarize.
  • Use hashed IDs instead of email/username when possible.

Store user context in a server-side session or database

If you want the bot to “remember” across turns or sessions, you need state outside the model.

Typical architecture

  1. State model
    • User state: profile, preferences, long-term facts (e.g., UserProfile class).
    • Conversation state: last questions, pending information, timestamps.
  2. Storage choice
    • Dev/staging: in-memory cache.
    • Production: Redis, SQL/NoSQL DB, or vendor-specific state store.
  3. Request flow per message (sketched in code below)
    • Receive message.
    • Load user + conversation state from storage.
    • Build prompt including the latest user context.
    • Send the request to the LLM.
    • Update and re-save state for the next turn.
  4. Lifetime & cleanup
    • Decide how long to keep the conversation state (e.g., 1–7 days).
    • Periodically prune or anonymize old records.

The Microsoft Bot Framework state-management docs and samples exemplify this approach: the bot keeps a UserProfile and ConversationData in a storage-backed UserState and ConversationState, then reloads and updates them each turn.
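
Here is a minimal sketch of that per-message flow, assuming an in-memory Map as the dev-only store and a hypothetical callLlm helper; swap in Redis or a database for production:

// Dev-only state store; replace with Redis/SQL in production.
const store = new Map();

async function handleMessage(userId, conversationId, text) {
  // 1. Load user + conversation state (or initialize defaults).
  const user = store.get(`user:${userId}`) ?? { language: "en", plan: "free" };
  const convo = store.get(`convo:${conversationId}`) ?? { turns: [] };

  // 2. Build the prompt with the latest user context.
  const context = `Plan: ${user.plan}, Language: ${user.language}`;
  const reply = await callLlm(context, convo.turns, text); // your LLM call

  // 3. Update and re-save state for the next turn.
  convo.turns.push({ role: "user", text }, { role: "assistant", text: reply });
  convo.updatedAt = Date.now(); // used later when pruning old records
  store.set(`convo:${conversationId}`, convo);

  return reply;
}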

Use identity tokens or user IDs to reference context on each request

Instead of sending all user context to the model every time, you can send a pointer and resolve context in your own systems.

How it works

  1. Your app authenticates the user (SSO, OAuth, email login, etc.).
  2. The identity provider issues a token (e.g., JWT) containing claims like sub, roles, tenant, and custom attributes.
  3. Your backend validates the token and extracts a stable user ID.
  4. For each chatbot call, you:
    • Use the ID to fetch user profile/permissions from your database or CIAM system.
    • Build a safe context summary.
    • Optionally, pass a derived ID (user_<hash>) to the LLM as the model-level identifier.
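
A sketch of steps 3 and 4 using the widely used jsonwebtoken library; the secret, claim names, and loadProfile helper are assumptions for illustration:

import jwt from "jsonwebtoken";
import crypto from "node:crypto";

async function buildChatContext(token) {
  // Validate the token and extract a stable user ID from the `sub` claim.
  const claims = jwt.verify(token, process.env.JWT_SECRET);
  const userId = claims.sub;

  // Fetch profile/permissions from your own systems (hypothetical helper).
  const profile = await loadProfile(userId);

  // Safe context summary for the prompt; no raw PII.
  const contextSummary = `Role: ${profile.role}, Plan: ${profile.plan}, Locale: ${profile.locale}`;

  // Derived model-level identifier (same pseudonymization idea as earlier).
  const derivedId = "user_" + crypto
    .createHmac("sha256", process.env.USER_ID_SALT)
    .update(userId)
    .digest("hex")
    .slice(0, 12);

  return { contextSummary, derivedId };
}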

Why this is powerful

  • You can enforce fine-grained authorization based on roles, groups, or attributes (e.g., tenant, department).
  • You avoid leaking internal IDs or PII directly to the model provider.
  • You can change user permissions centrally without modifying chatbot prompts.

This is similar to how platforms like Descope push roles and attributes into JWTs and session data, which your app then uses for access control and contextual responses.

Use platform-specific user attributes and session parameters

If you’re using a managed conversational platform, it often ships with its own concept of “session parameters” or “user attributes.”

Dialogflow CX-style pattern (illustrative)

  1. The user says something; your agent matches an intent.
  2. Dialogflow CX extracts parameters (like car_brand, plan_type) and stores them in the session.
  3. On each subsequent turn, those parameter values are available to:
    • Conditions and routing
    • Response templates
    • Webhooks and external APIs

This gives you “free” context persistence without building your own storage layer, as long as your use case fits the platform’s model.
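
As an illustration, a Dialogflow CX webhook writes values back into the session by returning them under sessionInfo.parameters; this Express-style handler is a minimal sketch, with the plan_type parameter name invented for the example:

import express from "express";

const app = express();
app.use(express.json());

// Illustrative Dialogflow CX webhook: reads a session parameter and
// writes one back so later turns can route on it.
app.post("/webhook", (req, res) => {
  const params = req.body.sessionInfo?.parameters ?? {};

  res.json({
    fulfillmentResponse: {
      messages: [{ text: { text: [`Plan noted: ${params.plan_type ?? "unknown"}`] } }],
    },
    // Anything set here persists in the session for subsequent turns.
    sessionInfo: { parameters: { plan_type: params.plan_type ?? "free" } },
  });
});

app.listen(3000);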

General tips

  • Use platform parameters/attributes for short-lived context (recent answers, selected items, language).
  • Keep long-term profiles (billing data, history) in your own systems and join them via custom code.
  • Treat platform attributes as a cache of context, not the source of truth.

How to do it with CustomGPT.ai

With CustomGPT.ai, user context flows through two key features: Custom Context and Logged-In User Awareness, plus your own application logic around them.

Decide what to pass as custom_context

Examples from the docs include:

  • User’s name or segment (“Pro customer in LATAM”)
  • Subscription status (“trial vs enterprise”)
  • Order or ticket status
  • Other profile information that helps responses be more relevant

Because custom_context is a short text field (max 500 characters) and isn’t stored by the agent, keep it focused and concise.
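
Since anything past that limit would be cut off, it can help to build the string defensively; a tiny illustrative helper:

// Keep custom_context focused and within the 500-character limit.
function buildCustomContext(profile) {
  const context = `Plan: ${profile.plan}, Segment: ${profile.segment}, Last ticket: ${profile.lastTicketId}`;
  return context.slice(0, 500);
}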

Enable Custom Context for your agent

In the CustomGPT.ai dashboard:

  1. Open All Agents and select your agent.
  2. Click Deploy Agent.
  3. Choose your deployment method (Embed, Live Chat, Website Copilot, social integration, etc.; Search Generative Experience is excluded).
  4. Click the Settings (gear) icon.
  5. Toggle Custom Context on.
  6. Save settings.

Custom Context works across all deployment types and uses the custom_context parameter as part of the prompt on each request.

Pass user context from your website or app

In your embed code, add the custom_context attribute:

<div id="customgpt_chat"></div>
<script
  src="https://cdn.customgpt.ai/js/embed.js"
  defer
  div_id="customgpt_chat"
  p_id="123456"
  p_key="abcdefgh"
  custom_context="Pro customer on annual plan, prefers Spanish">
</script>

Replace p_id, p_key, and the custom_context string with your own values.

Behind the scenes:

  • The agent receives this text as part of the prompt.
  • It uses it to tailor answers.
  • The data is only used for that request; if you want continuity, you must send it on every prompt.

For API-based chatbots (for example, custom frontends or integrations), you can also send custom_context on each message request; this is supported in the RAG API and integrations starter kits.
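
As a rough sketch only (the endpoint path and body fields below are assumptions for illustration; confirm them against the RAG API reference):

// Illustrative only: verify the exact endpoint and field names in the
// CustomGPT.ai RAG API docs before relying on this.
async function sendMessage({ projectId, sessionId, apiKey, userMessage, customContext }) {
  const res = await fetch(
    `https://app.customgpt.ai/api/v1/projects/${projectId}/conversations/${sessionId}/messages`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        prompt: userMessage,
        custom_context: customContext, // assumed field name, per the docs above
      }),
    }
  );
  return res.json();
}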

Use Logged-In User Awareness for the user’s name

CustomGPT.ai can automatically pass the logged-in user’s name to your agent.

Steps:

  1. Go to your agent and click Personalize.
  2. Open the AI Intelligence tab.
  3. Find Logged-In User Awareness.
  4. Toggle it Enabled for authenticated experiences, or Disabled for fully anonymous agents.
  5. Save changes.

When enabled:

  • CustomGPT.ai securely sends the logged-in user’s name to the agent for each session.
  • The agent can reference the user by name in its responses.
  • If you want the agent to always use the name, add that instruction to its Persona configuration (for example, “Address the user by their first name when responding”).

Combine your own auth with CustomGPT.ai features

A common pattern:

  1. Your app authenticates the user (SSO or login) and determines their role/plan.
  2. Your backend builds a short text context, e.g.,
    “User is an Enterprise admin, Spanish locale, active subscription, last order #4821.”
  3. You inject that context into:
    • The custom_context attribute in your embed snippet, or
    • The custom_context field in your API request to CustomGPT.ai.

  Logged-In User Awareness takes care of the user’s name automatically.

This way, CustomGPT.ai remains stateless per request while your own systems own the authoritative user context.

Example — personalized support chatbot for logged-in users

Let’s put it together with a SaaS support chatbot scenario.

Context you care about

For each logged-in user, you want the bot to know:

  • Name
  • Role (admin vs member)
  • Plan (Free/Pro/Enterprise)
  • Primary language
  • Most recent ticket or order

Your product backend already knows this, based on your identity provider and internal database.

Request flow without CustomGPT.ai (generic)

  1. User opens the in-app chat.
  2. Frontend sends the user’s session token to your backend.
  3. Backend validates the token, then loads the user profile + latest ticket from your DB.
  4. Backend calls your LLM provider with:
    • A system message including key context.
    • The user’s message.
    • A hashed user ID in the safety identifier field.
  5. Backend returns the model’s answer to the UI.

Same scenario with CustomGPT.ai

  1. In your app
    • User logs in; your app knows their name, role, plan, language, etc.
  2. In your CustomGPT.ai agent
    • Enable Custom Context for your deployment.
    • Enable Logged-In User Awareness so the agent can greet them by name.

  3. In your embed or front-end integration
    • Your frontend builds something like:
      const contextualString = `User: ${name}, Role: ${role}, Plan: ${plan}, Locale: ${lang}, Last ticket: ${lastTicketId}`;
    • It then injects that string into your embed snippet or into the custom_context field for API requests.
  4. At runtime
    • CustomGPT.ai receives the conversation plus custom_context every time the user sends a message.
    • Logged-In User Awareness adds the user’s name automatically.
    • The agent uses both to give answers like:
      “Hi Ana, as an Enterprise admin you can change workspace permissions from Settings → Teams …”
  5. State and compliance
    • Your backend retains the full profile and history.
    • CustomGPT.ai sees only the limited context string you send per request plus the logged-in name.

Conclusion

Treating a loyal enterprise client the same as a first-time visitor creates unnecessary friction and erodes trust. CustomGPT.ai eliminates this disconnect by using custom_context and Logged-In User Awareness to safely inject critical details—such as account roles or active subscriptions—directly into the interaction. This approach allows you to deliver hyper-relevant answers while keeping sensitive PII securely managed on your own servers. Enable personalized context injection today to turn generic responses into intelligent, identity-aware conversations.

FAQs

How should I pass user context into a chatbot safely?

Pass user context safely by deciding which attributes genuinely change the answer, then keeping the full profile in your own backend. For each request, send only minimal, relevant details—such as plan, language, or recent activity—ideally using hashed IDs or pseudonymous identifiers instead of raw PII, and avoid dumping full histories or records into the model.

How does CustomGPT.ai handle user context in chatbots?

CustomGPT.ai uses a short custom_context field and Logged-In User Awareness so you can personalize replies without exposing full profiles. Your app keeps the rich user data, then injects a concise context string and optional logged-in name on every request, letting the agent tailor responses while remaining stateless and privacy-aware.
