Large language models are good at reasoning over text. They are not good at navigating the messy reality of APIs.
Ask a model to “call this REST endpoint,” and you’re implicitly asking it to:
- Understand undocumented conventions.
- Guess authentication flows.
- Parse (often lacking) documentation.
- Handle inconsistent error formats.
- Discover available capabilities without a formal contract.
That works occasionally. It does not scale.
As soon as you want a model to do something real, like scan a network, query a system, or trigger a workflow, you need more than text generation. You need a structured way for the model to discover what it can do and how to do it safely.
That is where the Model Context Protocol (MCP) comes in.
MCP is not another web framework and it is not a replacement for REST. It is a protocol designed specifically for models. It gives them a predictable way to:
- Discover available tools.
- Understand their input schema.
- Invoke them in a structured format.
- Receive consistent responses.
This post walks through the reasoning behind MCP, how it works at a technical level, and a concrete example: wrapping nmap in an MCP server so a model can perform controlled network scans through a well-defined interface.
The example code can be found here: https://github.com/GustafNilstadius/nmap-mcp-wrapper
The Problem: Models Don’t Speak REST
When humans interact with systems, the flow is straightforward:
User → UI → Backend → Database
Or between systems:
System → REST → System
That works because we design clients and servers with each other in mind. We know how authentication works. We read the documentation. We understand the endpoints.
But when a model needs to interact with a system, things break down.
A language model can generate HTTP requests, but:
- REST is an architectural style, not a strict protocol.
- Authentication schemes differ across services.
- There is no built-in service discovery.
- Documentation is unstructured, follows multiple competing standards, and is written for humans.
- Error formats vary widely.
In short: REST works well for developers. It is not optimized for autonomous model interaction.
What Is MCP?
MCP stands for Model Context Protocol. It is a lightweight protocol for structured communication between models and external systems.
Instead of expecting a model to reverse-engineer a REST API, MCP gives it a predictable contract.
Technically, MCP is built on top of JSON-RPC.
Why JSON-RPC?
JSON-RPC provides:
- A standardized request/response structure.
- Explicit method and params fields.
- Structured error handling.
- Transport agnosticism (HTTP, WebSockets, message queues, etc.).
An MCP server exposes tools via JSON-RPC methods. A model can:
- Discover available tools.
- Inspect their input schema.
- Invoke them with properly structured arguments.
- Handle results in a consistent format.
This moves the integration problem from “figure out this API” to “call a method with this schema.”
That difference is subtle but important.
MCP Tool Discovery in Practice
With MCP, the first step is usually discovery.
A client sends:
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/list",
"params": {}
}
The server responds with a list of tools and their JSON Schema definitions:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"tools": [
{
"name": "weather.getForecast",
"description": "Get a weather forecast by city name.",
"inputSchema": {
"type": "object",
"properties": {
"city": { "type": "string" },
"days": { "type": "integer", "minimum": 1, "maximum": 10 }
},
"required": ["city"]
}
}
]
}
}
This is structured, machine-readable, and unambiguous.
A model does not need to parse documentation. It can reason directly over the schema.
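To make "reason directly over the schema" concrete, here is a minimal sketch of how a client could check arguments against a discovered inputSchema before invoking a tool. A real client would use a full JSON Schema validator; this illustrative helper (validate_args is not part of any MCP library) only covers required fields, basic types, and integer bounds.

```python
def validate_args(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    type_map = {"string": str, "integer": int, "object": dict}

    # Required fields must be present.
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")

    # Supplied fields must be known, correctly typed, and within bounds.
    for name, value in args.items():
        prop = schema.get("properties", {}).get(name)
        if prop is None:
            errors.append(f"unexpected field: {name}")
            continue
        expected = type_map.get(prop.get("type"))
        if expected is not None and not isinstance(value, expected):
            errors.append(f"{name}: expected {prop['type']}")
            continue
        if prop.get("type") == "integer":
            if "minimum" in prop and value < prop["minimum"]:
                errors.append(f"{name}: below minimum {prop['minimum']}")
            if "maximum" in prop and value > prop["maximum"]:
                errors.append(f"{name}: above maximum {prop['maximum']}")
    return errors

# The forecast schema from the discovery response above.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "days": {"type": "integer", "minimum": 1, "maximum": 10},
    },
    "required": ["city"],
}

print(validate_args(schema, {"city": "Tokyo", "days": 3}))  # []
print(validate_args(schema, {"days": 30}))
```

The same check works for any tool the server advertises, because every tool carries its own schema.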
From Question to Tool Invocation
Consider the prompt:
“What is the weather in Tokyo for the next 3 days?”
The model can:
- Identify that the task is about weather.
- Extract parameters: city = "Tokyo", days = 3.
- Match them against the discovered weather.getForecast schema.
- Construct a valid JSON-RPC call.
The invocation is not guesswork. It is schema-driven.
That is the core idea behind MCP: give models structured affordances instead of free-form endpoints.
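For the Tokyo prompt, the resulting invocation uses MCP's tools/call method, with the tool name and its arguments carried in params:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "weather.getForecast",
    "arguments": { "city": "Tokyo", "days": 3 }
  }
}
```

Every field here is either part of JSON-RPC itself or taken from the discovery response, which is what makes the call checkable before it is ever sent.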
Using MCP with Your Model
Integrating an MCP server into a model is straightforward. You configure the model runtime with the MCP endpoint:
{
"mcpServers": {
"my-nmap-server": {
"url": "http://localhost:8080/mcp"
}
}
}
From there, the model can:
- Discover tools exposed by that server.
- Call them as needed.
- Combine their outputs with reasoning.
A Concrete Example: Wrapping nmap
To make this practical, I built an MCP server that wraps nmap:
Repository: https://github.com/GustafNilstadius/nmap-mcp-wrapper
The goal was simple: allow a model to perform network scans through a structured, well-defined interface rather than by generating shell commands.
Architecture
The wrapper is implemented using Vert.x and delegates to the nmap binary via system calls.
High-level flow:
- The MCP server exposes scan-related tools.
- The model discovers those tools via tools/list.
- The model invokes a scan tool with validated parameters.
- The server executes nmap with controlled arguments.
- The result is returned in structured form.
Instead of letting a model hallucinate command-line flags, we:
- Define a schema for allowed inputs.
- Validate them server-side.
- Map them to specific nmap invocations.
This provides guardrails while still enabling flexible usage.
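The mapping step can be sketched as follows. This is illustrative, not the wrapper's actual schema: the profile names are hypothetical, though the flags (-sn, -F, -sV, --top-ports) are real nmap options.

```python
import shlex

# Hypothetical mapping from a validated "profile" parameter to a fixed
# nmap argument list. The model never supplies raw flags; it picks a
# profile name, and the server owns the translation.
SCAN_PROFILES = {
    "ping": ["-sn"],                           # host discovery only
    "fast": ["-F"],                            # fast scan of common ports
    "service": ["-sV", "--top-ports", "100"],  # service/version detection
}

def build_nmap_argv(target: str, profile: str) -> list[str]:
    """Translate validated parameters into an exec argv list (no shell)."""
    if profile not in SCAN_PROFILES:
        raise ValueError(f"unknown scan profile: {profile}")
    return ["nmap", *SCAN_PROFILES[profile], target]

print(shlex.join(build_nmap_argv("192.0.2.10", "ping")))  # nmap -sn 192.0.2.10
```

Because the argv list is passed directly to the process (never through a shell), there is no string to inject into; an invalid profile fails closed before anything executes.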
Why Wrap a CLI Tool?
There are several advantages to wrapping command-line tools with MCP:
- The CLI is already battle-tested.
- You avoid reimplementing complex logic.
- You can restrict available flags and parameters.
- You can normalize output into a structured response.
For something like nmap, which has dozens of flags and modes, this is especially useful. The MCP layer becomes a controlled interface over a powerful underlying binary.
Security Considerations
Exposing system tools to models is not trivial.
When wrapping something like nmap, you need to:
- Whitelist allowed arguments.
- Prevent arbitrary shell injection.
- Validate hostnames and target formats.
- Consider rate limiting and auditing.
- Control network access scope.
MCP does not solve security by itself. It provides structure. You still need to design the server carefully.
In the nmap wrapper, the key design principle is that the model never constructs raw shell commands. It only supplies structured parameters that are validated and translated into safe system calls.
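The target-validation part of that principle might look like the sketch below, assuming targets are limited to IPv4 addresses and plain hostnames; a real deployment would additionally enforce an allowed network range.

```python
import ipaddress
import re

# Illustrative target validation: accept IPv4 addresses and simple
# hostnames, reject everything else. Shell metacharacters never reach a
# shell anyway (argv lists are used), but rejecting them early keeps the
# attack surface small.
HOSTNAME_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(\.[A-Za-z0-9-]{1,63})*$")

def is_valid_target(target: str) -> bool:
    try:
        ipaddress.IPv4Address(target)
        return True
    except ValueError:
        pass
    return bool(HOSTNAME_RE.match(target))

print(is_valid_target("192.0.2.10"))       # True
print(is_valid_target("scanme.nmap.org"))  # True
print(is_valid_target("host; rm -rf /"))   # False
```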
Why MCP Matters
The broader point is not about nmap.
It is about standardization.
If every tool exposes a different REST API with different conventions, models need custom logic for each integration. That does not scale.
MCP offers:
- Standard discovery.
- Standard invocation.
- Standard error handling.
- Schema-driven interaction.
That makes it possible to build a composable ecosystem of tools for models, rather than a collection of one-off integrations.
Ideas for Your Own MCP Server
If you want to experiment, you do not need to start with something complex.
Some ideas:
- A wrapper around a common CLI tool.
- A random number generator.
- A current-time server.
- A Base64 or AES encoder/decoder.
- A simple key-value store.
The pattern is always the same:
- Define a tool.
- Provide a JSON Schema for its inputs.
- Implement the underlying logic.
- Return structured results.
The rest is just JSON-RPC.
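To make "just JSON-RPC" concrete, here is a minimal, transport-agnostic dispatch function serving a single hypothetical random-number tool. It handles only tools/list and tools/call; a real server would sit behind HTTP or stdio and implement the full MCP handshake.

```python
import json
import random

# One illustrative tool definition with its JSON Schema.
TOOLS = {
    "random.int": {
        "description": "Return a random integer in [low, high].",
        "inputSchema": {
            "type": "object",
            "properties": {
                "low": {"type": "integer"},
                "high": {"type": "integer"},
            },
            "required": ["low", "high"],
        },
    }
}

def handle(request_json: str) -> str:
    """Dispatch one JSON-RPC request and return the JSON-RPC response."""
    req = json.loads(request_json)
    rid = req.get("id")
    if req.get("method") == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif req.get("method") == "tools/call" and req["params"]["name"] in TOOLS:
        args = req["params"]["arguments"]
        value = random.randint(args["low"], args["high"])
        # MCP tool results carry a list of content items.
        result = {"content": [{"type": "text", "text": str(value)}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": rid,
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": rid, "result": result})

print(handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}'))
```

Swap the tool table and the body of the call branch, and the same skeleton serves any of the ideas above.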
Final Thoughts
MCP is not trying to replace REST. It is solving a different problem.
REST is great for developers integrating services.
MCP is designed for models integrating capabilities.
That distinction becomes increasingly important as we build systems where models are not just generating text, but actively interacting with infrastructure.
Wrapping nmap is a small example, but it illustrates the larger idea: give models structured tools, not vague endpoints.
