
Give Your Trae AI Agents a Custom Brain with CustomGPT.ai’s hosted MCP (Model Context Protocol) server

Written by: Priyansh Khodiyar

[Image: Trae + CustomGPT.ai hosted MCP server]

Using Trae is like having a team of little AI agents on your desktop, ready to help. But what if you could give those agents a superpower: perfect memory of your specific information?

Refer to our Trae + CustomGPT.ai’s Hosted MCP server docs for more info.

By connecting Trae to your CustomGPT.ai knowledge base, your agents go from general helpers to specialized experts trained on your data.

What’s this MCP thing again?

MCP (Model Context Protocol) is like a universal adapter. It lets your AI agents in Trae plug directly into a data source, like your project on CustomGPT.ai. It’s the bridge that lets your agents access your private knowledge securely.

Why use CustomGPT.ai for your agent’s brain?

  • Super accurate: Our system is independently benchmarked as #1 for finding correct answers in documents. Your agents will be more reliable.
  • Zero hassle: We manage the servers, security, and all the techy stuff. You just build your agents.
  • No extra cost: This is part of your existing CustomGPT.ai plan.

Get the full technical rundown on our Hosted MCP launch blog post.

How Your Agent Gets its Information

Trae uses the same kind of local helper as Claude Desktop to connect to your data.

You, chatting with an agent in Trae
                  │
                  ▼
┌─────────────────────────────────┐
│  Your PC runs a little helper   │
│     (called `supergateway`)     │
└─────────────────────────────────┘
                  │ (Sends question securely)
                  ▼
┌─────────────────────────────────┐
│      CustomGPT.ai Server        │
│ (Searches your private files)   │
└─────────────────────────────────┘
                  │ (Sends answer back)
                  ▼
┌─────────────────────────────────┐
│   Your PC's helper gets the     │
│             answer              │
└─────────────────────────────────┘
                  │
                  ▼
Your Trae agent gives you the smart answer!

What you’ll need first:

  • A CustomGPT.ai account (the free trial is fine).
  • A project on CustomGPT.ai with your files uploaded.
  • The Trae application.
  • Node.js (version 18 or newer) installed on your computer.
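As a quick sanity check for the Node.js requirement, a small script like the following can confirm your installed version meets the v18 minimum. This is an illustrative sketch; the `node_major_version` helper is our own, not part of Node.js or Trae tooling:

```python
import re
import shutil
import subprocess

def node_major_version(version_string: str) -> int:
    """Parse a Node version string like 'v18.19.0' into its major number."""
    m = re.match(r"v?(\d+)", version_string.strip())
    if not m:
        raise ValueError(f"unrecognized version string: {version_string!r}")
    return int(m.group(1))

def check_node(minimum: int = 18) -> bool:
    """Return True if a Node.js binary of at least `minimum` is on PATH."""
    if shutil.which("node") is None:
        return False  # node is not installed or not on PATH
    out = subprocess.run(["node", "--version"], capture_output=True, text=True)
    return node_major_version(out.stdout) >= minimum
```

Running `check_node()` before the setup steps below saves a confusing failure later, since `npx` ships with Node and won't exist without it.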

Let’s Hook It Up: Step-by-Step

  1. Get your secret token: Log in to CustomGPT.ai, open your project, and head to Deploy ➞ MCP Server (Beta). Hit Generate MCP Token.
  2. Open Trae’s agent settings: In the Trae app, press Ctrl+U to open the Agents panel.
  3. Go to manual config: Click the gear icon (AI Management) ➜ MCP ➜ Configure Manually.
  4. Paste the code: Paste this block into the window. Just remember to put your own <PROJECT_ID> and <TOKEN> in there.
{
  "mcpServers": {
    "customgpt-mcp-server": {
      "command": "npx",
      "args": [
        "-y",
        "supergateway",
        "--sse",
        "https://mcp.customgpt.ai/projects/<PROJECT_ID>/sse",
        "--header",
        "Authorization: Bearer <TOKEN>"
      ]
    }
  }
}
  5. Confirm and restart: Click Confirm and then restart the Trae app.
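A common slip in step 4 is leaving the `<PROJECT_ID>` or `<TOKEN>` placeholders in place. This minimal sketch (our own helper, not part of Trae) parses the config and flags any placeholder left behind:

```python
import json

def find_unfilled_placeholders(config_text: str) -> list[str]:
    """Return any template placeholders still present in the MCP config."""
    config = json.loads(config_text)  # also verifies the JSON parses
    flat = json.dumps(config)
    return [p for p in ("<PROJECT_ID>", "<TOKEN>") if p in flat]

# Example: the unedited template still contains both placeholders.
template = """
{
  "mcpServers": {
    "customgpt-mcp-server": {
      "command": "npx",
      "args": ["-y", "supergateway", "--sse",
               "https://mcp.customgpt.ai/projects/<PROJECT_ID>/sse",
               "--header", "Authorization: Bearer <TOKEN>"]
    }
  }
}
"""
print(find_unfilled_placeholders(template))  # → ['<PROJECT_ID>', '<TOKEN>']
```

An empty list means the JSON is valid and both values were filled in; anything else tells you exactly what to fix before restarting Trae.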

Now you can pick your new customgpt-mcp-server from the Agents list and start chatting. Your agent is now an expert on your stuff!


Your Turn to Build!

So there you have it. No more generic answers! Your favorite AI tool is now your personal expert, with a direct line to the documents and data you care about most. It’s a true game-changer for getting things done.

Why not give it a try? Head over to CustomGPT.ai to start a free trial and create your own private AI brain.

We can’t wait to see what you build with it!

P.S. If you are into the MCP world, you will definitely love this MCP AMA we did with the Pinecone folks and other MCP experts (Santiago).

Find it here:

Further Reading

For those interested in exploring MCP in greater depth, check out the following resources:

Frequently Asked Questions

Do I need to run my own MCP server to connect Trae to my private knowledge base?

No. In the standard Trae setup, you do not run your own MCP server. Your team runs a lightweight local bridge with your project ID and MCP token, while the MCP endpoint is hosted by CustomGPT.ai.

The practical split is simple: the provider hosts the endpoint; you manage only the local bridge process and your credentials unless you choose a separate customer-hosted deployment. Choose the default hosted setup if you want the fastest connection and do not want to manage backend infrastructure. Choose a customer-hosted MCP endpoint only if your security, networking, or compliance rules require your team to host and control that endpoint, such as inside your own VPC.

That distinction exists because MCP supports both local and remote server transports in Anthropic’s spec. The same hosted-versus-owned decision comes up in Claude Desktop and Cursor. For scale, Lehigh University uses AI search across 400M+ words of newspaper archives without needing customers to stand up their own MCP infrastructure.

Why is Trae still giving generic answers after I connect MCP?

Trae gives generic answers after MCP connection when it falls back to its base model because the MCP tool was not selected, authenticated, or refreshed. A green connection only shows the server is reachable, not that retrieval is active.

Check the active agent first, then the project ID, token, and tool refresh. A 401 in the MCP log usually means the token is invalid or expired. A 403 usually means the token lacks access to that project. A project-not-found error usually means the project ID is wrong. If no tool call appears at all, Trae is likely still using a non-MCP agent or a cached tool list. In the MCP spec, clients discover tools through tools/list, so stale manifests can persist until reconnect or restart, including in Cursor and Claude Desktop. Fully quit and relaunch Trae. Then ask something only your CustomGPT.ai knowledge base can answer and confirm the MCP log shows a tool call.

Should I use a hosted MCP server or self-host one for Trae?

Use Trae’s hosted MCP server by default. Choose the hosted option unless your security review requires internal audit logs, customer-managed credentials, private network routing, or regional data handling controls; in those cases, ask Trae support to confirm whether a customer-hosted MCP connection is supported for your workspace.

As of March 2026, the Trae docs’ “MCP Setup” page describes only a hosted endpoint, not a customer-hosted connection flow. Anthropic’s Model Context Protocol documentation allows MCP servers over local stdio and HTTP transports, but protocol support does not mean Trae currently exposes customer-hosted MCP connections in the product. Hosted is therefore the lower-risk path for faster rollout, simpler support, and fewer auth, uptime, patching, and secret-rotation tasks. By comparison, Claude Desktop and Cursor publicly document broader MCP connection patterns. This recommendation is specific to Trae and should not be assumed for CustomGPT.ai or other platforms.

Can Trae answer from my company documents instead of generic model knowledge?

Yes. Trae can answer from your company documents when it is connected to an approved knowledge base. CustomGPT.ai can provide that knowledge base, and if your setup also uses MCP, Trae retrieves from that approved source at answer time; the hosted versus customer-managed MCP pattern depends on your deployment.

It can cite the exact passage, respect document permissions, and refuse to answer when evidence is weak. A common policy is to require one passage above a confidence threshold such as 0.8, or two matching passages, otherwise Trae returns “I don’t know” instead of guessing. For best results, upload text-based files with clear headings, run OCR on scanned PDFs, and split large manuals into topic-focused documents. Teams evaluating Glean or Microsoft Copilot usually care less about generic AI fluency than whether every answer is auditable and traced to cited source text. At MIT, a source-grounded deployment across 90+ languages reported zero hallucinations because responses were tied to cited source text.
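The evidence policy described above, one passage at or above a 0.8 confidence threshold or two agreeing passages, can be sketched as a simple gate. This is an illustrative policy function, not an actual CustomGPT.ai or Trae API, and it simplifies "matching" to a passage count:

```python
def should_answer(passages: list[tuple[str, float]],
                  threshold: float = 0.8,
                  min_agreeing: int = 2) -> bool:
    """Answer only when retrieval evidence is strong; otherwise the agent
    should return "I don't know" instead of guessing.

    passages: (text, confidence) pairs returned by retrieval.
    """
    # One high-confidence passage is sufficient evidence on its own.
    if any(score >= threshold for _, score in passages):
        return True
    # Otherwise require several passages (agreement check simplified here).
    return len(passages) >= min_agreeing
```

Under this gate, a single 0.85-confidence passage passes, two 0.6-confidence passages pass, and a lone 0.5-confidence passage triggers the "I don't know" path.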

Can Trae use intranet or private wiki content with this MCP setup?

No. MCP does not give Trae live browsing access to your intranet or private wiki. Trae can only answer from content that has first been added and indexed in the connected knowledge project; MCP lets it query that indexed content, not your network directly.

Usable inputs typically include PDF, DOCX, HTML, plain text, individual URLs, and website crawls, including authenticated site ingestion when a connector supports login. If your wiki can be exported as HTML or documents, or exposed as crawlable pages, it can work. If not, it will not be usable through this setup. For example, a team can export Confluence or SharePoint pages as HTML or DOCX, add them to CustomGPT.ai, and then let Trae answer over that indexed copy. Lehigh University queries 400M+ indexed words, which shows this pattern can scale to large knowledge collections. Like Glean or Elastic, this is indexed enterprise search, not a live intranet session.

Is data accessed through Trae and MCP private?

Based on the reviewed setup materials, data access appears limited to the connected knowledge project and protected in transit over HTTPS, but those materials do not establish a full privacy policy. The configuration shown includes a project-specific HTTPS endpoint and bearer-token authentication, which indicates scoped access to that project.

For CustomGPT.ai, that supports project-scoped retrieval, not blanket privacy assurance. The setup screen alone does not confirm retention period, whether prompts or retrieved files are used for model training, who can access logs, SOC 2 scope, GDPR terms, or data-storage region. One technical point from RFC 6750: a bearer token works for whoever possesses it, so secure storage, rotation, and revocation matter as much as HTTPS. Before using Trae and MCP for sensitive data, request written confirmation on retention, training exclusion, log access controls, subprocessors, and regional storage. OpenAI Enterprise and Microsoft Copilot Studio publish these details more explicitly.
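Given the RFC 6750 point that a bearer token works for anyone who holds it, one small hygiene step is keeping the token out of files and shell history, e.g. reading it from an environment variable when building the header. A minimal sketch; the variable name `CUSTOMGPT_MCP_TOKEN` is our own convention, not a name CustomGPT.ai or Trae defines:

```python
import os

def auth_header(env_var: str = "CUSTOMGPT_MCP_TOKEN") -> str:
    """Build the Authorization header without hardcoding the secret."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"{env_var} is not set; export the MCP token first")
    return f"Authorization: Bearer {token}"
```

The same idea applies to the Trae config: wherever your setup allows it, inject the token at launch time rather than committing it, and rotate it on a schedule since HTTPS protects it in transit but not at rest.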
