
Announcing pctx: A Local & Open Source Code Mode Framework

Launching the first open-source framework that turns any MCP server into a Code Mode environment. Built for AI engineers who need efficient tool use across their MCP servers.

Patrick Kelly
Code Mode Engineer

Code Mode fundamentally changes how AI agents interact with external tools. Instead of serializing tool definitions into context windows and waiting for sequential tool calls, Code Mode presents MCP servers as deterministic APIs that agents can orchestrate through generated code.
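
To make that concrete, here is a minimal sketch of what "orchestrating through generated code" looks like. The tool functions are hypothetical stand-ins in the style of the examples later in this post, assumed to be provided to the sandbox by the Code Mode environment:

// Minimal sketch (hypothetical tool functions): rather than emitting one
// JSON tool call per turn and waiting for each result, the agent writes a
// short program that composes results directly.
const user = await github_get_user("octocat");
const issues = await github_list_issues({ repo: "acme/billing", state: "open" });
const mine = issues.filter((issue) => issue.author === user.id);
console.log(`Found ${mine.length} open issues authored by ${user.login}`);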

Port of Context released pctx, the first open-source framework that provides unified Code Mode execution across any upstream MCP server. This release solves the core architectural challenge: how do you transform arbitrary MCP servers into a cohesive, type-safe execution environment that agents can reason about programmatically?

The Context Window Problem

Traditional MCP implementations suffer from a fundamental constraint: every tool must be serialized into the agent's context window before execution.

Consider a realistic workflow:

  • 50 available tools across GitHub, Stripe, and internal APIs
  • Each tool: ~200 tokens for definition + examples
  • Context cost: 10,000 tokens before any actual work
  • Sequential execution: request → wait → parse → request → wait
  • No parallelization, no composition, no state management

For a complex request requiring 15 tool calls:

  • Traditional MCP: ~150,000 tokens | 47 seconds | $0.45
  • Code Mode: ~2,000 tokens | 8 seconds | $0.006
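
Where those figures come from, roughly: the traditional path re-sends the full ~10,000-token tool context on each of the 15 round-trips, while Code Mode sends only the compact signatures plus one generated program. The dollar amounts are consistent with a price of about $3 per million input tokens, which is implied by the figures above rather than a quoted rate. A quick back-of-the-envelope check:

// Rough sanity check of the figures above (TypeScript sketch).
const toolContextTokens = 10_000;                  // 50 tools x ~200 tokens each
const traditionalTokens = 15 * toolContextTokens;  // context resent per call ≈ 150,000
const codeModeTokens = 2_000;                      // signatures + one generated program

const pricePerMillionTokens = 3;                   // implied by the $0.45 / $0.006 figures
const traditionalCost = (traditionalTokens / 1e6) * pricePerMillionTokens; // ≈ $0.45
const codeModeCost = (codeModeTokens / 1e6) * pricePerMillionTokens;       // ≈ $0.006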

Architecture: Unified MCP Proxy

pctx acts as a unified proxy layer between agents and upstream MCP servers. Instead of exposing raw tool definitions, pctx generates a type-safe API surface that agents can call through generated code.
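
In practice that means the agent is handed something like a single generated module of typed functions rather than several separate tool catalogs. The snippet below is an illustrative sketch of that surface, not pctx's literal output:

// Illustrative sketch of the unified API surface an agent might see: one
// generated module of typed functions instead of N separate MCP servers.
// Result types are simplified placeholders.
type GithubIssue = { id: string; number: number; title: string };
type StripeInvoice = { id: string; status: string; amount_due: number };

declare function github_create_issue(params: {
  repo: string;
  title: string;
  body?: string;
}): Promise<GithubIssue>;

declare function github_list_issues(params: {
  repo: string;
  label?: string;
  state?: "open" | "closed";
}): Promise<GithubIssue[]>;

declare function stripe_list_invoices(params: {
  customer: string;
  status?: "open" | "paid" | "void";
}): Promise<StripeInvoice[]>;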

System Design

Code Mode request flow: the agent submits a single block of generated code to pctx, which runs it against the upstream MCP servers and returns the result in one round-trip.

Compare this to traditional tool calling, which would require 5 sequential round-trips with full context overhead at each step.

Implementation Deep Dive

1. Dynamic Type Generation

When pctx connects to an upstream MCP server, it introspects the tool definitions and generates TypeScript type signatures at runtime:

// MCP tool definition received from the upstream server
{
  "name": "github_create_issue",
  "description": "Create a GitHub issue",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string" },
      "title": { "type": "string" },
      "body": { "type": "string" }
    },
    "required": ["repo", "title"]
  }
}

// pctx generates this API surface for agents
async function github_create_issue(params: {
  repo: string;
  title: string;
  body?: string;
}): Promise<GithubIssue>

Agents receive only the generated function signatures in their context—not the full JSON schemas. This reduces the per-tool context from ~200 tokens to ~15 tokens.
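
For intuition, here is a minimal sketch of that introspection step, assuming a simplified subset of JSON Schema (flat object schemas with primitive-typed properties); a real generator has to handle nested schemas, enums, arrays, and descriptions as well:

// Hypothetical sketch: turn an MCP tool's inputSchema into a TypeScript
// function signature string.
interface McpTool {
  name: string;
  inputSchema: {
    properties: Record<string, { type: string }>;
    required?: string[];
  };
}

function toSignature(tool: McpTool): string {
  const required = new Set(tool.inputSchema.required ?? []);
  const fields = Object.entries(tool.inputSchema.properties)
    .map(([key, prop]) => `${key}${required.has(key) ? "" : "?"}: ${prop.type}`)
    .join("; ");
  return `async function ${tool.name}(params: { ${fields} }): Promise<unknown>`;
}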

2. Sandbox Execution Model

The execution environment uses a custom Deno runtime with strict isolation:

// Generated code is validated before execution by an embedded TypeScript compiler
execute(llmGeneratedCode, {
  tools: ["github_create_issue", "stripe_send_payment"],
});
// The agent gets TypeScript compiler errors if tool types are used incorrectly
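
One way to picture the isolation, as an assumption about the mechanism rather than pctx's exact implementation, is running the generated program in a Deno subprocess whose network permissions are restricted to the hosts the selected tools declare:

// Hypothetical sketch: execute generated code in a Deno subprocess with an
// explicit network allowlist. The flags are standard Deno CLI permissions.
const allowedHosts = ["api.github.com", "api.stripe.com"]; // from tool declarations

const command = new Deno.Command("deno", {
  args: [
    "run",
    "--no-prompt",                             // never fall back to interactive prompts
    `--allow-net=${allowedHosts.join(",")}`,   // network access only to declared hosts
    "generated-program.ts",
  ],
});

const { code, stdout } = await command.output();
console.log(new TextDecoder().decode(stdout), `exit code: ${code}`);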

Real-World Example: Customer Support Automation

Here's how an agent uses pctx to handle a complex customer support workflow:

Agent prompt: "User @sarah_dev reported billing issues. Investigate and resolve."

Generated Code:

// 1. Parallel data gathering
const [githubUser, stripeCustomer, recentIssues] = await Promise.all([
  github_get_user("sarah_dev"),
  stripe_search_customers({ email: "sarah@example.com" }),
  github_list_issues({ repo: "acme/billing", label: "bug", state: "open" })
]);

// 2. Cross-reference and analyze
const userIssues = recentIssues.filter(issue =>
  issue.author === githubUser.id
);

if (userIssues.length > 0) {
  console.log(`Found ${userIssues.length} related issues`);

  // 3. Check billing status
  const invoices = await stripe_list_invoices({
    customer: stripeCustomer.id,
    status: "open"
  });

  if (invoices.length > 0) {
    // 4. Automated resolution
    await stripe_void_invoice(invoices[0].id);
    await github_create_comment({
      issue: userIssues[0].number,
      body: "Invoice voided. Billing issue resolved."
    });
    console.log("Issue resolved automatically");
  }
}

Execution: 3.2 seconds, single round-trip, 156 tokens

Traditional MCP equivalent: 8 tool calls, over a minute, thousands of tokens

Security Model

pctx enforces defense-in-depth security:

Key security guarantees for local development

  1. No arbitrary code execution: Only validated tool calls allowed
  2. Network isolation: Tools define all network access explicitly
  3. Audit trail: Every tool call logged with input/output/duration
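
As an illustration of the third guarantee, an audit-log entry might carry at least the fields below; this is a hypothetical shape, not pctx's actual log format:

// Hypothetical audit-log entry: one record per tool call.
interface ToolCallAuditEntry {
  timestamp: string;   // ISO 8601 time the call started
  server: string;      // upstream MCP server, e.g. "github"
  tool: string;        // tool name, e.g. "github_create_issue"
  input: unknown;      // arguments passed to the tool
  output: unknown;     // result returned (or error details)
  durationMs: number;  // wall-clock duration of the call
  ok: boolean;         // whether the call succeeded
}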

Installation & Quickstart

# Install pctx
npm install -g @portofcontext/pctx

# Connect to existing MCP servers
pctx mcp add github --config ~/.config/mcp/github.json
pctx mcp add stripe --config ~/.config/mcp/stripe.json

# Start the unified server
pctx mcp start

# Your agent now sees a single Code Mode API
# instead of multiple MCP servers

About Port of Context: Port of Context unlocks production agentic AI by managing secure, token-efficient connections to data and APIs.
