SOLARISE
DEV

Model Context Protocol in 60 Seconds

  • What is MCP? Model Context Protocol (MCP) is a standardized way for AI models (like Claude or ChatGPT) to interact with external software and tools, acting as a universal language between them.
  • Problem Solved: It allows AIs to go beyond just chatting and actually *do* things – like update your website, manage files, or control smart devices – without custom integrations for each AI/tool pair.
  • Home Automation Example: Imagine telling your AI, "Dim the living room lights and play some jazz," and it actually happens because MCP connects the AI to your smart home system.
  • Website Example: A business owner could ask their AI, "Show me the latest product updates on my WordPress or Laravel site," and MCP would enable the AI to fetch and display that data.
  • Current Maturity: It's bleeding-edge technology*. Promising, but still evolving, with active development and some limitations (especially for remote server connections without workarounds).

(*Update: Microsoft is now integrating MCP directly into Windows, so chances are it's here to stay!)

(*Update 2: As of 22nd May, OpenAI has also updated its Responses API with MCP support - so assume you're good to go with this!)

Okay, so let's discuss LLMs and MCP integration! – that's Model Context Protocol – and how it links in with web development, specifically with platforms like WordPress, Laravel, Craft CMS, and even your home server or desktop.

If you're even slightly interested in AI like I am, you'll likely have heard this term - "Model Context Protocol" on socials and in the tech news. Perhaps in the context of big companies automating systems, or tech enthusiasts upgrading their home network.

The idea of AIs getting even more agency and power to do stuff outside the confines of their context windows is, well, a tiny bit scary, to put it mildly.

And if you're firmly in the "AI sucks" camp then this likely just sounds like more bad news piled on top of an already insurmountable heap. But it's here, it's happening, and all we can do is hope it turns out well for us.

Mysteries and Machines

Many people don't even know Model Context Protocol exists yet (I barely knew anything about it before researching this post). But in a few years, it could be quietly running under the hood of everything, much like how HTTP shaped the web or how SFTP still exists as a way of getting files to and from web servers. MCP is the new kid on the block, letting AIs like ChatGPT or Claude do more than just chat and giving them arms and legs to reach out into the world. A little creepy!

What does this mean for developers, customers, business owners? Here I'll go through what MCP actually is, why it matters, how it works across different platforms, and where this could all be heading next. What the heck indeed.


Meet MCP: The Who, What, and Why

One of the first things to clarify is what Model Context Protocol is and what it isn't. It's easy to get lost in the jargon. So, what exactly is MCP? Is it just a protocol, or is it something else?

When you talk about MCP, you're basically just talking about a set of rules.

Think of it like this: MCP is a guide for how to define tools, a guide for how clients should expose those tools, and a guide for how LLMs can call them. It tells developers how to describe tools (name, inputs, outputs, etc.) using a schema, primarily JSON Schema (other specs like OpenAPI or GraphQL SDL are typically wrapped by helper servers). It lays out how the LLM should send requests and expect responses, and it allows many different types of tools – local scripts, HTTP APIs, etc. – to be treated the same way.

This abstraction is what makes MCP so elegant. It's AI-talk for not reinventing the wheel.

One language that AIs can use to communicate with the world.
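
To make the schema idea concrete, here's roughly what a single tool description looks like. The tool name and fields below are invented for illustration, but the name/description/inputSchema shape is what MCP servers advertise:

```json
{
  "name": "getSalesForMonth",
  "description": "Return total sales figures for a given month",
  "inputSchema": {
    "type": "object",
    "properties": {
      "month": { "type": "string" },
      "year": { "type": "integer" }
    },
    "required": ["month", "year"]
  }
}
```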

The Roles

  • Host (or Runtime): The brain. This is the application where the LLM resides and which brokers the calls to tools. Claude Desktop, for example, is a Host.
  • Client: The connector. A component inside the Host that keeps a 1:1 connection to a Server and speaks MCP on its behalf. (Colloquially, apps like Claude Desktop get called "MCP clients" because they bundle Host and Client together.)
  • Server: The toolbox. This is the application (like your WordPress site with an MCP plugin, a local script, or a Zapier connection) that exposes the actual tools the LLM can use.

Understanding these roles helps clarify how the different pieces of the MCP puzzle fit together.

{
  "tool_name": "getSalesForMonth",
  "arguments": {
    "month": "May",
    "year": 2025
  }
}
// LLM sends something like the above to a Model Context Protocol Server...
// ...and might get back:
{
  "total_sales": 12345.67,
  "currency": "USD",
  "top_product": "Widget Deluxe"
}

A simplified conceptual example of an LLM requesting to use a tool and the kind of response it might receive.

Key Characteristics of MCP:

  • Tool Descriptions: Uses a schema (primarily JSON Schema) for each tool, detailing its name, parameters (inputs), and expected outputs.
  • Discovery Format: Provides a way for an LLM to ask, "What tools are available here?" allowing for dynamic tool discovery. For example, Claude Desktop starts and introspects the MCP servers you list in its configuration, reading the tool manifest they emit. It fetches tool metadata – name, parameters, descriptions – from your server.
  • Execution Flow:
    • The LLM emits a tool call request (via its native tool-calling mechanism).
    • The Host hands this to the relevant MCP Client, which forwards it to the MCP Server.
    • The Server runs the specified tool with the provided arguments and returns the result.
    • The LLM then uses that result to formulate its reply to the user.
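
That discovery-then-execution flow can be sketched in a few lines. This is a toy, not the official SDK: the tool registry, handler, and request shapes are simplified for illustration:

```python
import json

# Toy registry of tools this "server" exposes.
SERVER_TOOLS = {
    "getSalesForMonth": {
        "description": "Return total sales for a given month",
        "handler": lambda args: {"total_sales": 12345.67, "currency": "USD"},
    },
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style request to the right tool."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        # Discovery: describe every available tool.
        result = [{"name": n, "description": t["description"]}
                  for n, t in SERVER_TOOLS.items()]
    elif req["method"] == "tools/call":
        # Execution: run the named tool with the given arguments.
        tool = SERVER_TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"].get("arguments", {}))
    else:
        result = {"error": "unknown method"}
    return json.dumps({"id": req.get("id"), "result": result})

print(handle_request('{"id": 1, "method": "tools/list"}'))
print(handle_request('{"id": 2, "method": "tools/call", '
                     '"params": {"name": "getSalesForMonth", '
                     '"arguments": {"month": "May", "year": 2025}}}'))
```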

One way to think about it is that instead of a 1-to-1 relationship between AIs and services (so you'd have countless LLMs, each with n connections to services, leading to an n*n scenario), you have AIs connecting to the Model Context Protocol layer, which connects in turn to the services. So an n+n situation. Far more manageable!

MCP, OpenAI's Tool Calling, and the Ecosystem

It's important to note that while the concepts are similar, different players have their own implementations. MCP was coined and formalized by Anthropic. Claude Desktop is currently a prominent client implementation of Model Context Protocol.

OpenAI’s tool-calling capabilities (the part of the Chat Completions / Assistants API that returns `tool_calls`) play a similar role to an MCP Host's function but it isn’t MCP-compatible at the wire level—think of it as a parallel dialect. It makes GPT behave like it can use tools. Think: “I typed a message into ChatGPT, and instead of just words back, it knows it needs to call a tool — and here’s what it sent.” So, if MCP becomes a widely adopted standard, then anyone can write a backend that various LLMs (those supporting MCP) can call tools from—including your own Laravel app, WordPress admin, or even your Minecraft server. However, OpenAI doesn’t strictly use MCP; it uses its own schema, which is very similar but not identical or directly interoperable.
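
For a concrete feel of the dialect difference, here's the shape OpenAI's Chat Completions API returns when GPT decides to call a tool (the id and function name are examples). Note that `arguments` arrives as a JSON-encoded string, not a nested object as in MCP:

```json
{
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_sales_for_month",
        "arguments": "{\"month\": \"May\", \"year\": 2025}"
      }
    }
  ]
}
```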

"Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools." - Anthropic, creators of MCP

Scenario: The Widget Co. - A WordPress Site

The real "aha!" moment with MCP often comes when you realize you can wire LLMs into everyday platforms. Let's focus on one "hero" scenario: a WordPress site for "Widget Co." to see how Model Context Protocol can empower a business.

Conversational Data Access

Widget Co. uses a WordPress site to manage information about their widgets, business locations, staff updates, and production documents. They love widgets there. It's all they think about. The CEO wants to query this information naturally, because of course he does. Bosses don't have time for code and suchlike.

"I would love to be able to talk to my website; that sounds brilliant! I want to query who's been updating widgets recently, who's been the most productive, what's the latest widget that we've produced, and so on. Make it happen." - Boss Man

(Note: The data could be in an internal system, but for our example, WordPress is the source of truth. We're web developers, after all!)


Making it Happen: The MCP Workflow for WordPress

"I sit down at my computer, I open up ChatGPT (or Claude Desktop), and I start typing, 'How many widgets have we updated this month?' How does the AI know what I mean? How does it know to call something on my website?"

The Workflow Unpacked:
  1. Model Your Data in WordPress:

    A developer first structures the data. This might involve Custom Post Types for "Widgets," ACF fields for staff, locations, and update details, and Custom Taxonomies for categories.

  2. Expose Data via an MCP Server Plugin:

    The developer installs or builds an MCP server plugin (currently, these are early community plugins requiring manual setup, like `mcp-wp/mcp-server`). This plugin wraps WordPress's REST API. They then define "tools" – mini JSON APIs described by a schema – like `list_widget_updates` or `get_productivity_stats`.

  3. Let the AI Call the Tool:

    The AI (running inside an MCP Host, accessed via a Client like Claude Desktop) receives Boss Man's query. It doesn't know the answer itself, but its training and the available tool descriptions allow it to identify `list_widget_updates` as relevant. It constructs a structured request (MCP is built on JSON-RPC 2.0):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "list_widget_updates",
        "arguments": { "date_range": "2025-05-01 to 2025-05-22" }
      }
    }

    This request is routed to the MCP server on the WordPress site. The server runs the query and returns the data.

  4. Secure and Approve (If Necessary):

    For read-only actions, this might be direct. For actions that change data, a human-in-the-loop approval step is crucial. The MCP server should explicitly request user approval for each tool invocation that modifies data (Claude Desktop, for instance, can pop a grant dialog by default).

  5. Results Returned to the AI:

    The AI receives the raw data and summarizes it for Boss Man: "You've had 87 widget updates this month. The most productive staff member was Maria at the Glasgow location." Boss Man is happy, his desire for widget-related information satisfied. For now.
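
Under the hood, the server side of step 3 is just: parse the arguments, run a query, shape the result. Here's a sketch in Python (a real WordPress plugin would do this in PHP against WP_Query or the REST API; the stubbed data and field names are invented):

```python
from datetime import date

# Stubbed data source standing in for WordPress custom post types / WP_Query.
WIDGET_UPDATES = [
    {"widget": "Widget Deluxe", "staff": "Maria", "location": "Glasgow",
     "updated": date(2025, 5, 14)},
    {"widget": "Widget Mini", "staff": "Tom", "location": "Leeds",
     "updated": date(2025, 4, 2)},
]

def list_widget_updates(arguments: dict) -> dict:
    """Tool handler: return widget updates within the requested date range."""
    start_s, end_s = arguments["date_range"].split(" to ")
    start, end = date.fromisoformat(start_s), date.fromisoformat(end_s)
    rows = [u for u in WIDGET_UPDATES if start <= u["updated"] <= end]
    return {"count": len(rows), "updates": [u["widget"] for u in rows]}

result = list_widget_updates({"date_range": "2025-05-01 to 2025-05-22"})
```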

Understanding Tools, APIs, and Security in WordPress

The "tool" is defined on the AI side (e.g., in a Custom GPT or understood by Claude via MCP discovery). You specify the endpoint (your MCP server on WordPress), method, and schema. The AI learns to associate user intent with these tools. Your WordPress REST API endpoints are the actual executors.

Crucial: Security and Control

All tools must be sandboxed. No raw database access or arbitrary file system operations unless explicitly and carefully coded. You might expose `read_widget_status` but not `delete_all_users`. Businesses need assurance: the AI isn't roaming free. Every action should be permissioned, scoped, and logged. The developer defines the API, limiting what the AI can do.

You could allow staff to use AI to update widget statuses, but this requires robust role-based access control (RBAC) and potentially OAuth integration within your MCP server plugin.

The Human-in-the-Loop: Essential for Trust

We're likely years away from fully trusting AIs with significant, unreviewed decisions on live business data. For actions like updating records or emailing customers, a human approval step is vital. You could implement a queue in the WordPress admin dashboard showing AI-initiated changes, awaiting acceptance or rejection. Keep the AI on a leash, especially for write operations!
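
That approval queue can be sketched as a simple pending-actions list. This is purely illustrative; a real implementation would persist actions to the database and surface them in the WordPress admin:

```python
from dataclasses import dataclass, field

@dataclass
class PendingAction:
    tool: str
    arguments: dict
    approved: bool = False

@dataclass
class ApprovalQueue:
    """Holds AI-initiated write actions until a human accepts or rejects them."""
    pending: list = field(default_factory=list)

    def propose(self, tool: str, arguments: dict) -> PendingAction:
        """Record a write the AI wants to make; nothing runs yet."""
        action = PendingAction(tool, arguments)
        self.pending.append(action)
        return action

    def approve(self, action: PendingAction) -> None:
        """Only after a human clicks 'approve' would the tool actually execute."""
        action.approved = True

queue = ApprovalQueue()
action = queue.propose("update_widget_status", {"id": 42, "status": "shipped"})
queue.approve(action)
```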

Intermission: Stepping Beyond WordPress

While WordPress offers a great entry point, the principles of MCP extend to more powerful and flexible platforms.

Let's see how Laravel can take MCP integration to the next level.

Power-User Stacks: Laravel & Craft CMS


Laravel: Granular Control for Sophisticated MCP Servers

For more complex needs, Laravel provides a robust foundation. It offers greater flexibility, testability, and control over your MCP server implementation. Think Sanctum/Passport for authentication (treating the LLM like a scoped API user), sophisticated gates and policies for authorization, job queues for asynchronous tasks, and powerful Eloquent ORM for data interactions.

You could define dedicated MCP routes (e.g., in `routes/mcp.php`), use resource controllers for tools, and implement rate limiting and detailed logging for AI interactions.

Future-Proofing with the Repository Pattern

A key advantage in Laravel is the ability to use design patterns like the Repository Pattern. You can wrap your MCP tool logic in an interface (e.g., `ToolExecutorInterface`). Your current implementation might use an MCP-specific schema, but you could later swap this out for OpenAI's tool-calling format or a future protocol without rewriting your core business logic.

Key Takeaway for Laravel: "Using the repository pattern, you can safely bet on MCP today—while leaving the back door open to swap in whatever protocol wins tomorrow."
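
Here's a minimal sketch of that idea (shown in Python for brevity; in Laravel you'd define an interface and swap implementations via container bindings, and the class and method names here are invented):

```python
from abc import ABC, abstractmethod

class ToolExecutorInterface(ABC):
    """Business logic depends only on this interface, not on any one protocol."""

    @abstractmethod
    def execute(self, tool: str, arguments: dict) -> dict:
        ...

class McpToolExecutor(ToolExecutorInterface):
    def execute(self, tool: str, arguments: dict) -> dict:
        # Speak MCP's tools/call shape here.
        return {"protocol": "mcp", "tool": tool}

class OpenAiToolExecutor(ToolExecutorInterface):
    def execute(self, tool: str, arguments: dict) -> dict:
        # Speak OpenAI's tool_calls shape here instead.
        return {"protocol": "openai-tools", "tool": tool}

def run_report(executor: ToolExecutorInterface) -> dict:
    """Identical business logic, whichever protocol 'wins tomorrow'."""
    return executor.execute("get_productivity_stats", {"month": "May"})

print(run_report(McpToolExecutor()))
```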

Back on our WordPress site, similar approval queues and security considerations would apply if we were allowing write access.

Bidirectional Communication and Automation

The conversation doesn't have to be one-way. What about the website sending updates *to* the LLM? This is where things get really interesting, especially when you bring in automation tools like Zapier.

For example, a new order on your WordPress site could trigger a webhook. This webhook could send data to an LLM (perhaps via a custom tool or an intermediary service). The LLM processes this, and then, using a Zapier connection (if the LLM platform supports it as a tool), updates a Google Sheet or sends a Slack notification.

In Laravel, queued jobs could monitor for new products and then use a tool to inform an LLM, which in turn might use its own tools (like a Zapier integration) to update external spreadsheets or dashboards. This creates a more dynamic, event-driven interaction.

Intermission: Bringing AI Home

Websites are just one part of the equation. MCP's potential truly shines when we consider local applications.

Let's explore how MCP can transform your desktop and home automation experience.

The AI-Personalised Home & Desktop: Local Power Unleashed

The line between AIs and the clients we use to access them is blurring. From desktop agents to IDE integrations, the ways we interact are multiplying, which can be overwhelming.

Deconstructing the Client-Server Dance in an MCP World

Traditionally, a client (like a browser) had to be programmed to talk to a specific server API. With MCP, the LLM acts as an intelligent intermediary. You provide a natural language query to a client (like Claude Desktop); the LLM (as part of the Host) interprets this and figures out which MCP tool on which MCP Server to call. A text prompt can, in many cases, replace a complex front-end application.

Your Desktop and Home Server: The AI Butler is In

Imagine querying files on your system ("Find PDF invoices from April"), summarizing spreadsheets, or running local scripts ("Run `disk_cleanup.py` every Friday")—all by talking to an AI.

With Claude Desktop, you can run an MCP client that talks to local tools. It starts and introspects an MCP server running on your machine (e.g., `mcp-desktop-automation` or custom scripts). You define tools locally, and Claude (the LLM in the cloud, via the Host in Claude Desktop) uses them. No public IPs or complex port forwarding needed for these local interactions.
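
For reference, Claude Desktop reads the servers to launch from its claude_desktop_config.json file. An entry looks roughly like this (the server name and path are examples):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    }
  }
}
```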

This contrasts with using, say, ChatGPT's web interface with Custom GPTs for local tools. That typically requires your local tools to be exposed publicly via a server or a tunneling service like ngrok, and you manually define the tool schemas in the GPT builder. For true local-first power, a dedicated MCP client like Claude Desktop is currently more streamlined.

"With Claude Desktop, your local tools are easily accessible. For web-based AIs to use local tools, those tools often need a bridge to the internet."

Home Automation: A Conversation with Your House

The LLM can become your butler, DJ, and lighting designer. "Turn the kitchen lights red," "Play some uplifting music," or "What's the temperature in the fridge?"

"I Just Want to Talk to My House" – How it Works:

You've got your Nest, Ring, smart lights, Spotify... how do you connect them?

  1. Local MCP Client: Your interface (e.g., Claude Desktop).
  2. Desktop MCP Server: A local server (e.g., `mcp-desktop-automation` or custom script) defining tools like `get_fridge_temp()`. Each tool runs a script or hits a local device API.
  3. Link Devices: Your MCP server's tools use device APIs or platforms like Home Assistant, IFTTT, or Node-RED.
  4. Prompt & Action: You ask Claude, it calls the right local tool via your MCP server, and your house responds!
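
Steps 2 and 3 boil down to mapping tool names onto device calls. A toy sketch (the device layer here is a stub standing in for Home Assistant or a real device API; the tool and field names are invented):

```python
# Toy device layer: a stub standing in for Home Assistant or real device APIs.
DEVICES = {"fridge": {"temp_c": 3.5}, "kitchen_lights": {"colour": "white"}}

def get_fridge_temp(arguments: dict) -> dict:
    """Tool: read the fridge temperature sensor."""
    return {"temp_c": DEVICES["fridge"]["temp_c"]}

def set_light_colour(arguments: dict) -> dict:
    """Tool: change a named light's colour."""
    DEVICES[arguments["light"]]["colour"] = arguments["colour"]
    return {"ok": True}

HOME_TOOLS = {"get_fridge_temp": get_fridge_temp,
              "set_light_colour": set_light_colour}

def run_tool(name: str, arguments: dict) -> dict:
    """What the local MCP server does when Claude calls a tool."""
    return HOME_TOOLS[name](arguments)

run_tool("set_light_colour", {"light": "kitchen_lights", "colour": "red"})
```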

Security: Runs locally. For remote access, consider VPNs and token auth per tool. Every server should explicitly request user approval for tool invocations (Claude Desktop can show a grant dialog).

"We used to script smart homes with logic blocks—now we just talk to them, and the LLM figures out the rest."

Minecraft Moment: Just Because We Can!

Enough home maintenance and business logic! If you run a Minecraft server (perhaps on a home Linux box, like I do with my son), you can expose MCP tools to manage it: start/stop server, whitelist players, change world settings ("Claude, new creative world, no rain!").

Your Minecraft server becomes a Model Context Protocol Server, with tools running RCON commands or server scripts. Claude Desktop acts as your admin console. It's a fantastic way to teach how computers talk to each other.
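
In sketch form, the tools are thin wrappers around RCON commands. Here the RCON transport is stubbed out (a real server would open an authenticated RCON socket, or shell out to server scripts):

```python
def fake_rcon_send(command: str) -> str:
    """Stub standing in for a real RCON connection to the Minecraft server."""
    return f"Ran: {command}"

MC_TOOLS = {
    "whitelist_player": lambda args: fake_rcon_send(f"whitelist add {args['player']}"),
    "set_weather": lambda args: fake_rcon_send(f"weather {args['weather']}"),
}

def run_mc_tool(name: str, arguments: dict) -> str:
    """Dispatch an AI tool call to the matching server command."""
    return MC_TOOLS[name](arguments)

print(run_mc_tool("set_weather", {"weather": "clear"}))  # "Claude, no rain!"
```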

You could even imagine tools to get in-game information or trigger events. The possibilities are playful and educational.


Reality Check: Limits, Roadmap, and the XKCD Prophecy

MCP is exciting, but it's early days. Let's address the current landscape and potential pitfalls.

Limitations Right Now (Heads Up!)

  • Remote Server Connections: Officially, MCP Hosts like the one in Claude Desktop primarily use STDIO (standard input/output) for local tool communication. Direct, streamed HTTP transport for remote MCP Servers (like a WordPress site on a different machine) is often flagged as "in active development" in core specs. This means remote servers currently work best if you:
    1. Tunnel them locally (e.g., via SSH, ngrok).
    2. Write your own custom Host/Client pair that handles remote HTTP.
  • Spec Churn: MCP is young. While Anthropic has formalized it, there's no overarching standards body like for HTTP. The spec could evolve.
  • No OpenAI Parity (Yet): OpenAI's tool calling is similar in concept but not wire-compatible with MCP. You can't just point an MCP client at OpenAI's API or vice-versa without an adapter.
  • Ecosystem Maturity: WordPress plugins for MCP, for example, are "early community plugins" – experimental and requiring manual setup. They don't yet offer comprehensive, out-of-the-box functionality.

Is MCP the Endgame… or Just the First Draft?

Will MCP be *the* standard, or one of many? We risk the "JavaScript framework problem": 14 competing standards, then someone creates a 15th to unify them... (Cue the classic XKCD comic on standards).

"If MCP is HTTP, let’s hope we don’t end up with 14 slightly different versions of it, each needing its own plugin to make ‘turn on my lights’ work."

Can You Rely on MCP Today?

If you build an MCP tool for WordPress, will it be obsolete in six months? Short answer: Yes, to a point—but build with adaptability in mind.

  • What you CAN rely on: Model Context Protocol concepts are sound. It's often just JSON over HTTP/STDIO. Anthropic's Claude Desktop uses it.
  • What you CAN'T assume: Universal adoption or spec stability.

How to Build Safely:

  • Treat MCP tools like modular API endpoints.
  • Use abstractions (like Laravel's Repository Pattern).
  • Consider middleware adapters if targeting multiple AI platforms.
  • Build with graceful failure if no MCP client is present.

"MCP is stable enough to explore—but young enough that you should build like you might have to migrate."

What About the AI "Going Rogue"?

This is where the imagination can, well, run a little wild. We've all seen the movies. An AI, given the keys to the kingdom, decides to lock us all out (or worse!). When we talk about Actionable AI and MCP allowing LLMs to do things, it's natural to wonder if we're inching closer to that kind of scenario. Could MCP be the conduit for an AI to gain too much power?

Let's ground this a bit. MCP itself doesn't grant the AI sentience or independent ambition. The LLM isn't waking up and deciding to cause chaos via your WordPress MCP server. It's responding to prompts and using the specific tools you, the developer, have exposed. If there’s no "delete_database" tool available through MCP, the LLM can't just conjure one up.

The more pertinent concerns, when we strip away the sci-fi gloss, are a bit more down-to-earth but still incredibly important:

  • Overly Powerful or Poorly Designed Tools: If a developer, for instance, creates an MCP tool that can execute arbitrary server commands with high privileges, the potential for misuse (whether instructed by a naive user, a malicious actor, or via a cleverly crafted prompt injection) is significant. The "power" isn't inherent in the AI deciding to rebel, but in the capabilities of the tools it's given.
  • Unforeseen Cascading Effects: Imagine an AI tasked with managing inventory across multiple warehouses via MCP. A perfectly logical (from the AI's perspective) series of actions based on its training and a specific goal could lead to an entirely unforeseen and disruptive outcome in the real world if the initial parameters or the tools' interactions aren't perfectly understood.
  • Human Misuse: A powerful LLM connected to a wide array of real-world systems via MCP could become an incredibly effective tool for someone with malicious intent. The AI isn't "rogue"; it's being directed.
  • Security Flaws Amplified: A vulnerability in your MCP server, or a successful prompt injection attack that tricks the LLM, becomes much more critical when the AI can take direct actions.

So, while the narrative of a spontaneously rebellious AI is compelling, the immediate challenge with MCP is more about robust engineering, meticulous security, the principle of least privilege for exposed tools, and ensuring human oversight where actions have significant consequences. It’s less about the AI wanting to "go rogue" and more about humans ensuring they haven't inadvertently left the entire system vulnerable or designed tools that are too much of a "free rein" for any single point of failure or instruction.

Ultimately, the AI is working within the "fences" you build. The critical task is to design those fences well.

The Evolving Ecosystem: Modularity, Teamwork, and Intelligent Oversight

As MCP and similar protocols mature, we'll see new patterns emerge.

Modularity & Reusability

The AI models are black boxes, but the MCP Servers and Clients are human-designed. This means huge scope for modular, reusable toolkits (e.g., for WordPress content, file operations, team chat integration). Imagine an `npm` for MCP tools!

Multi-User Workflows

When teams use a shared LLM interface, the MCP server can become a team historian. "What did the UK team update yesterday?" The LLM, via MCP tools, could provide summaries of collaborative work.

Logging, Reporting & Self-Diagnosis

With interconnected systems, clear logging is vital. An MCP server could expose tools to query its own status or explain failures. "Why didn't that email send?" This enables self-diagnosing systems.

"You're not just building tools—you're building interfaces for questions you haven't thought to ask yet."

From Widget Reports to Virtual Worlds: This Isn’t Simplification – It’s Evolution

Model Context Protocol won’t simplify tech. If anything, it adds layers and complexity by encouraging more connectivity. It's not making things simpler; it's giving us more options. And options sound good, but not always when they come from a thousand different sources.

"MCP and LLMs offer new doors—but also a hallway full of them. And you still need to decide which ones to open."

But these are layers that can speak our language. You might start wanting to talk to your WordPress site, then wonder if your fridge, lights, and Minecraft server can join the conversation. With MCP, they can.

The LLM isn't just a data processor; it's a tool for understanding how we interact with data and what that means in our daily lives, even when we step away from the computer.

The Future: Interface as Imagination

(This part is more speculative, but exciting!) Imagine wearing earbuds, chatting to your home AI. Or a VR headset where you virtually explore and interact with your connected world – seeing lights respond, orders filter through your website, all visualized in real-time. This isn't just interface design; it’s experience architecture.

AI doesn’t simplify the world; it multiplies possibilities. But with tools like MCP, we can shape those possibilities into something human, meaningful, and perhaps even fun, giving us tools to visualize our increasingly complex world.

The journey into MCP and LLM integration is just beginning. It's a landscape ripe with potential for innovation, for creating more intuitive and powerful ways to interact with the technology that surrounds us.

What are your thoughts? How do you see MCP shaping the future of web development and system interaction?

Robin Metcalfe

Robin is a freelance web strategist and developer based in Edinburgh, with over 15 years of experience helping businesses build effective and engaging online platforms using technologies like Laravel and WordPress.

Get in Touch