Port of Context
Ship AI agents that actually work by converting MCPs and tools into typed, sandboxed code.
Context limits break agents mid-task. Port of Context keeps them flowing.
An open-source framework that connects AI agents to tools and services through code, with type-checked execution in secure Deno sandboxes.
Run Locally
No cloud dependency. Your code, your infrastructure.
Bring Any LLM
Claude, GPT, Gemini, or your own models.
Deploy Anywhere
Docker, AWS, GCP, Azure, or on-premise.
Try it locally →
Install and run in under 60 seconds.
One-click deployment of pctx servers. Skip the setup, start building agents immediately.
Join Cloud Waitlist →
Everything you need to know about Code Mode execution with Port of Context
What is Code Mode?
Code Mode is an approach to AI tool execution where MCP servers are presented as code APIs rather than direct tool calls. Instead of loading every tool definition upfront and passing large datasets through the model's context, Code Mode discovers tools on demand, processes data inside the sandbox, runs tool calls in parallel, and dramatically reduces token usage.
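To make the contrast concrete, here is a rough sketch of what a Code Mode program can look like. The import path and wrapper names (./generated/mcp-apis.ts, github.searchIssues, slack.postMessage) are illustrative assumptions, not pctx's actual generated API:

```typescript
// Hypothetical typed wrappers generated from two MCP servers; names are illustrative only.
import { github, slack } from "./generated/mcp-apis.ts";

// The agent writes this program once; the sandbox executes it end to end.
const issues = await github.searchIssues({ query: "is:open label:bug", limit: 100 });

// Large intermediate data stays in the sandbox instead of flowing through the model's context.
const staleCutoff = Date.now() - 30 * 24 * 60 * 60 * 1000;
const stale = issues.filter((issue) => Date.parse(issue.updatedAt) < staleCutoff);

await slack.postMessage({
  channel: "#triage",
  text: `${stale.length} of ${issues.length} open bugs have had no activity for 30+ days.`,
});

// Only this short summary string is returned to the model.
console.log(`Posted triage summary covering ${stale.length} stale issues.`);
```

With direct tool calls, each of those steps would round-trip through the model and the full issue list would land in context; here only the final summary does.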
When should I use pctx instead of traditional tool calling?
Use pctx when you need better performance or lower costs, or when you're working with complex multi-tool workflows. pctx reduces token usage by 98.7% (from 150K to 2K tokens), runs tool calls in parallel instead of sequentially, and provides sandbox security. It's especially valuable in production environments where token costs and context limits are concerns.
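As an illustration of the parallel-execution point, the sketch below fans out three independent lookups with Promise.all and aggregates the results inside the sandbox. The wrapper modules and method names are hypothetical, not pctx's generated API:

```typescript
// Hypothetical typed wrappers; the names below are illustrative only.
import { analytics, billing, crm } from "./generated/mcp-apis.ts";

// Sequential tool calling would round-trip through the model three times.
// In Code Mode the three independent lookups run concurrently in one sandbox execution.
const [account, invoices, usage] = await Promise.all([
  crm.getAccount({ id: "acct_123" }),
  billing.listInvoices({ accountId: "acct_123", status: "unpaid" }),
  analytics.getUsage({ accountId: "acct_123", days: 30 }),
]);

// Aggregate in the sandbox; only this compact object reaches the model's context.
console.log({
  name: account.name,
  unpaidTotal: invoices.reduce((sum, invoice) => sum + invoice.amount, 0),
  activeDays: usage.filter((day) => day.events > 0).length,
});
```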
Can I use pctx with my own LLM, tools, or services?
Absolutely. pctx is designed to work with any LLM (Claude, GPT-4, Gemini, local models, etc.) and integrates with all existing MCP servers. You can connect internal tools, third-party APIs, or custom services. The framework is LLM-agnostic and follows the standard Model Context Protocol, ensuring broad compatibility.
Does pctx work with existing MCP servers?
Yes. pctx works seamlessly with any MCP server, whether it's from the official MCP registry, community-built, or your own custom implementation. Simply configure your MCP servers in pctx's config file, and they'll be available as code APIs. You can use existing servers for GitHub, Slack, databases, or build custom ones for your internal tools. The framework automatically handles type generation and sandbox execution for any MCP server you connect.
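As a rough sketch only: many MCP clients register servers as a map of command-and-args entries, and a pctx configuration could plausibly look similar. The file name, keys, and server entries below follow that common convention and are assumptions, not pctx's documented schema; check the project's docs for the real format:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data/projects"]
    },
    "internal-billing": {
      "command": "node",
      "args": ["./servers/billing-mcp.js"]
    }
  }
}
```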
How do I migrate from traditional MCP tool calling or direct API integrations?
Migration is straightforward. For traditional MCP: install pctx, configure your existing MCP servers, and update prompts to generate code instead of tool calls. For direct APIs: wrap them as MCP servers (many popular services already have MCP implementations) or create simple adapters. Most teams complete migration in a few hours and see immediate performance benefits.
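For the direct-API case, a minimal adapter can be written with the official MCP TypeScript SDK. This is a sketch under assumptions: the internal endpoint (https://internal.example.com/orders) and the tool name are placeholders, and error handling is omitted:

```typescript
// Minimal MCP adapter wrapping an internal HTTP API (the endpoint is a placeholder).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "orders-adapter", version: "0.1.0" });

// Expose one internal endpoint as an MCP tool; pctx can then surface it as a typed code API.
server.tool(
  "get_order",
  { orderId: z.string() },
  async ({ orderId }) => {
    const res = await fetch(
      `https://internal.example.com/orders/${encodeURIComponent(orderId)}`,
    );
    return { content: [{ type: "text", text: await res.text() }] };
  },
);

// Serve over stdio so any MCP client (including pctx) can launch it as a subprocess.
await server.connect(new StdioServerTransport());
```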
How does pctx relate to the Model Context Protocol (MCP)?
pctx is built on top of MCP and enhances it with Code Mode execution. While MCP defines how tools communicate with AI systems, pctx optimizes that communication by presenting MCP servers as code APIs, reducing token usage by 98.7%, and enabling sandbox execution. Think of pctx as the production-ready Code Mode framework for MCP.
Ready to build AI agents that use tools efficiently?