
MCP isn't dead, it just needs to be more Primitive.


There is a lot of chatter in my feeds about how MCP (Model Context Protocol) is dead because it's bloated and complicated.

The criticism is generally fair.

Browse most MCP server implementations and you'll find dozens of narrowly-scoped tools: create_event, list_events, update_event, delete_event, find_meeting_times, respond_to_event. Multiply that across every service an agent needs to touch — calendar, email, CRM, code hosting, analytics — and you're looking at hundreds of tool definitions competing for context window space before the LLM has done anything useful.
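To make the bloat concrete, here's a sketch of what just one of those tools costs. The schema below is hypothetical (loosely modeled on typical calendar servers, not any real implementation), but the shape — one JSON Schema per narrowly-scoped tool — is what MCP servers actually ship:

```python
# Hypothetical tool definition for a single narrowly-scoped operation.
# A typical MCP server ships one of these per tool, all loaded upfront.
CALENDAR_TOOLS = [
    {
        "name": "create_event",
        "description": "Create a calendar event.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "start_time": {"type": "string", "description": "ISO 8601"},
                "end_time": {"type": "string", "description": "ISO 8601"},
                "attendees": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["title", "start_time", "end_time"],
        },
    },
    # ...plus list_events, update_event, delete_event,
    # find_meeting_times, respond_to_event, each with its own schema.
]
```

Six of these per service, across ten services, and the model is scanning dozens of schemas before it has done anything useful.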

The assumption baked into this design is that the server author should anticipate every operation a user might need and pre-build it. That assumption is wrong.

The real value is the connection, not the schema

Most of the value in MCP comes from being connected to a service — authenticated, authorized, able to act. The tool schema is just a convenience layer on top of that connection. A well-connected endpoint with good docs is more useful than fifty perfectly-typed tools that connect to nothing.

LLMs are already good at reading unstructured docs and constructing the right payload. Arguably better than they are at picking from a flat list of 80+ tools where the names blur together.

So what if we stopped treating the tool catalog as the core abstraction?

Some common primitives for your MCP

I think the entire surface area of an MCP server can often be captured by four primitives:

Docs — Discovery. What can this server do? What entities does it manage? What operations are available, what do they expect? This is the LLM's reference manual, loaded on demand rather than shoved into context upfront.

Exec — Action. A generic execution endpoint. The LLM reads the docs, constructs a payload, and calls exec. Validation happens at the server, but the LLM has full flexibility to compose operations however it needs to.

Session — State. Context that persists across calls — auth tokens, prior results, entity references. Without this, the LLM burns tokens re-establishing context on every call. With it, exec calls can be terse because they reference shared state. This is how humans already use APIs: authenticate once, hold state, make short calls that reference prior context.

Connections — Availability. What am I authenticated to? What's live, what's expired, what needs re-auth? This separates "what can I do" from "what am I currently authorized to do" — a distinction that gets conflated in current MCP implementations where tool availability implicitly signals connection state.

Four universal tools instead of N domain-specific ones.
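As a sketch, the entire tool surface might look like the following. The names match the primitives above, but the payload shapes are my own invention, not from the MCP spec or any existing server:

```python
import json

# A minimal, hypothetical four-primitive MCP surface.
# Each tool is deliberately generic; the server's docs carry the detail.
PRIMITIVES = [
    {"name": "docs",
     "description": "Return reference docs for a service, entity, or operation.",
     "inputSchema": {"type": "object",
                     "properties": {"topic": {"type": "string"}}}},
    {"name": "exec",
     "description": "Execute an operation described in the docs.",
     "inputSchema": {"type": "object",
                     "properties": {"operation": {"type": "string"},
                                    "payload": {"type": "object"}},
                     "required": ["operation"]}},
    {"name": "session",
     "description": "Read or write state shared across calls.",
     "inputSchema": {"type": "object",
                     "properties": {"get": {"type": "string"},
                                    "set": {"type": "object"}}}},
    {"name": "connections",
     "description": "List connected services and their auth status.",
     "inputSchema": {"type": "object", "properties": {}}},
]

# The LLM's flow: read the docs for an operation, then act on them.
docs_call = {"tool": "docs",
             "arguments": {"topic": "calendar.create_event"}}
exec_call = {"tool": "exec",
             "arguments": {"operation": "calendar.create_event",
                           "payload": {"title": "1:1 with Sam",
                                       "start_time": "2025-06-02T10:00:00Z",
                                       "end_time": "2025-06-02T10:30:00Z"}}}

print(json.dumps([t["name"] for t in PRIMITIVES]))
# -> ["docs", "exec", "session", "connections"]
```

Note that the `exec` schema is almost contentless: validation of the actual payload happens server-side, against whatever the docs describe, not in the tool schema.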

The tradeoff is real — and temporary

This pushes complexity onto the LLM. Every call requires the model to have internalized the docs, remembered the session state, known which connections are live, and constructed the right exec payload from scratch. With specific tools, the schema is a crib sheet. The LLM doesn't need to remember that gcal_create_event takes start_time as ISO 8601 — the schema tells it.

So the question is: where's the ceiling on LLM capability?

Here's what's worth being honest about: this isn't model-agnostic. Frontier models (Opus, Sonnet, GPT-4 class) already handle abstract, loosely-typed tools fine. They can read a docs payload, hold it in context, and construct a correct exec call without hand-holding. Smaller models can't. They need the crib sheet. They need the typed schema. They need the named tool with constrained parameters to stay on the rails.

That means the right tool granularity isn't a fixed answer; it's a function of the model on the other end. An MCP server designed for Haiku needs a different tool surface than one designed for Opus. The current MCP discourse mostly ignores this. The protocol isn't too bloated or too simple in the abstract. It's mismatched to the model doing the work.

If models keep getting better at holding docs in context and constructing precise payloads — and every indication is they will — the four-primitive model wins long-term. If they plateau, specific tools remain a useful crutch, especially for agents running on cheaper models, where scoped tools matter most.

The pragmatic middle ground

The smartest implementation isn't pure either way. Four primitives as the foundation, with frequently-used exec patterns cached as named tools for efficiency.

Hot paths get named tools. Everything else flows through docs → exec. When the LLM notices a pattern recurring, it promotes that pattern into a reusable tool automatically.

This makes the tool catalog a performance optimization, not an architectural constraint. Common operations crystallize into tools over time. Uncommon operations flow through the generic primitives. And the LLM can always drop down to raw exec when it needs to do something the server author never anticipated.
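One way to sketch that promotion mechanism — a toy dispatcher, where the threshold and the shape of the cached entry are my own assumptions, not a real implementation:

```python
from collections import Counter

class PrimitiveServer:
    """Toy server: everything flows through exec; operations that
    recur often enough get promoted into cached named tools."""

    PROMOTE_AFTER = 3  # promote an operation seen this many times

    def __init__(self):
        self.call_counts = Counter()
        self.named_tools = {}  # operation -> cached named-tool entry

    def exec(self, operation, payload):
        self.call_counts[operation] += 1
        if (self.call_counts[operation] >= self.PROMOTE_AFTER
                and operation not in self.named_tools):
            # Crystallize the hot path: future calls can use the
            # named tool and skip the docs -> exec round trip.
            self.named_tools[operation] = {
                "name": operation.replace(".", "_"),
                "description": f"Cached fast path for {operation}",
            }
        return {"ok": True, "operation": operation}

server = PrimitiveServer()
for _ in range(3):
    server.exec("calendar.create_event", {"title": "standup"})

print(list(server.named_tools))  # -> ['calendar.create_event']
```

The point of the sketch is that the catalog is derived from usage, not designed upfront: the named tool exists because the pattern proved itself, and raw exec is still there underneath it.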

Who this is really for

Specific tools favor server authors. They control the surface, handle validation, version independently. That's fine.

The four-primitive model favors the orchestrator — the layer composing across services to do what the user actually asked for. It can compose freely without being boxed in by each server's tool design choices.

That's the split. Is the agent a consumer of pre-built tool APIs, or a composer that assembles operations from capabilities? The MCP spec as designed assumes the former. I think the future is the latter.


This is the direction we're building Kyew toward. We're not all the way there yet: we still have about 11 named tools with typed schemas, and some of our servers are as granular as the ones I'm critiquing here. But the core premise is a single MCP that provides the connections and abilities to let the LLM do what the user needs, when they need it.