Large language models are good at reasoning over text. They are not good at navigating the messy reality of APIs.

Ask a model to “call this REST endpoint,” and you’re implicitly asking it to:

  • Understand undocumented conventions.
  • Guess authentication flows.
  • Parse (often lacking) documentation.
  • Handle inconsistent error formats.
  • Discover available capabilities without a formal contract.

That works occasionally. It does not scale.

As soon as you want a model to do something real, like scanning a network, querying a system, or triggering a workflow, you need more than text generation. You need a structured way for the model to discover what it can do and how to do it safely.

That is where the Model Context Protocol (MCP) comes in.

MCP is not another web framework and it is not a replacement for REST. It is a protocol designed specifically for models. It gives them a predictable way to:

  • Discover available tools.
  • Understand their input schema.
  • Invoke them in a structured format.
  • Receive consistent responses.

This post walks through the reasoning behind MCP, how it works at a technical level, and a concrete example: wrapping nmap in an MCP server so a model can perform controlled network scans through a well-defined interface. The example code can be found here: https://github.com/GustafNilstadius/nmap-mcp-wrapper

The Problem: Models Don’t Speak REST

When humans interact with systems, the flow is straightforward:

User → UI → Backend → Database

Or between systems:

System → REST → System

That works because we design clients and servers with each other in mind. We know how authentication works. We read the documentation. We understand the endpoints.

But when a model needs to interact with a system, things break down.

A language model can generate HTTP requests, but:

  • REST is an architectural style, not a strict protocol.
  • Authentication schemes differ across services.
  • There is no built-in service discovery.
  • Documentation is unstructured, follows multiple competing standards, and is written for humans.
  • Error formats vary widely.

In short: REST works well for developers. It is not optimized for autonomous model interaction.

What Is MCP?

MCP stands for Model Context Protocol. It is a lightweight protocol for structured communication between models and external systems.

Instead of expecting a model to reverse-engineer a REST API, MCP gives it a predictable contract.

Technically, MCP is built on top of JSON-RPC.

Why JSON-RPC?

JSON-RPC provides:

  • A standardized request/response structure.
  • Explicit method and params fields.
  • Structured error handling.
  • Transport agnosticism (HTTP, WebSockets, message queues, etc.).
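
The structured error handling is worth a concrete look, since it is what makes failures predictable for a model. A JSON-RPC 2.0 error response always has the same shape, with a numeric code and a message (the code -32602 below is the spec's standard "invalid params" code):

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params"
  }
}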

An MCP server exposes tools via JSON-RPC methods. A model can:

  1. Discover available tools.
  2. Inspect their input schema.
  3. Invoke them with properly structured arguments.
  4. Handle results in a consistent format.

This moves the integration problem from “figure out this API” to “call a method with this schema.”

That difference is subtle but important.

MCP Tool Discovery in Practice

With MCP, the first step is usually discovery.

A client sends:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}

The server responds with a list of tools and their JSON Schema definitions:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "weather.getForecast",
        "description": "Get a weather forecast by city name.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "city": { "type": "string" },
            "days": { "type": "integer", "minimum": 1, "maximum": 10 }
          },
          "required": ["city"]
        }
      }
    ]
  }
}

This is structured, machine-readable, and unambiguous.

A model does not need to parse documentation. It can reason directly over the schema.
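To make the client side concrete, here is a minimal Python sketch that parses the tools/list response shown above and indexes the tools by name, so a runtime can look up a schema directly (the weather.getForecast tool is the same illustrative example as above):

```python
import json

# The tools/list response shown above, as a JSON string (one tool)
response_json = """
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "weather.getForecast",
        "description": "Get a weather forecast by city name.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "city": {"type": "string"},
            "days": {"type": "integer", "minimum": 1, "maximum": 10}
          },
          "required": ["city"]
        }
      }
    ]
  }
}
"""

# Index the discovered tools by name so schemas can be looked up directly
tools = {t["name"]: t for t in json.loads(response_json)["result"]["tools"]}
forecast = tools["weather.getForecast"]
print(forecast["inputSchema"]["required"])  # ['city']
```

No documentation parsing is involved: the schema is data, and the model (or its runtime) operates on it directly.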

From Question to Tool Invocation

Consider the prompt:

“What is the weather in Tokyo for the next 3 days?”

The model can:

  • Identify that the task is about weather.
  • Extract parameters:

    • city: “Tokyo”
    • days: 3
  • Match them against the discovered weather.getForecast schema.
  • Construct a valid JSON-RPC call.

The invocation is not guesswork. It is schema-driven.

That is the core idea behind MCP: give models structured affordances instead of free-form endpoints.
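As a sketch of that schema-driven step, the following Python snippet checks extracted arguments against the discovered schema's required fields and builds a tools/call request (build_call is an illustrative helper; the params shape of a tool name plus an arguments object follows the MCP tool-call convention):

```python
import json

# The inputSchema discovered for the illustrative weather.getForecast tool
input_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "days": {"type": "integer", "minimum": 1, "maximum": 10},
    },
    "required": ["city"],
}

def build_call(tool_name, arguments, schema, request_id=2):
    """Validate required fields against the schema, then build a tools/call request."""
    for field in schema["required"]:
        if field not in arguments:
            raise ValueError(f"missing required field: {field}")
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Parameters extracted from the prompt: city "Tokyo", days 3
request = build_call("weather.getForecast", {"city": "Tokyo", "days": 3}, input_schema)
print(json.dumps(request, indent=2))
```

A full implementation would also validate types and bounds (days between 1 and 10), but the point stands: the call is constructed from the schema, not guessed.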

Using MCP with Your Model

Integrating an MCP server into a model runtime is straightforward. You configure the runtime with the MCP endpoint:

{
  "mcpServers": {
    "my-nmap-server": {
      "url": "http://localhost:8080/mcp"
    }
  }
}

From there, the model can:

  • Discover tools exposed by that server.
  • Call them as needed.
  • Combine their outputs with reasoning.

A Concrete Example: Wrapping nmap

To make this practical, I built an MCP server that wraps nmap:

Repository: https://github.com/GustafNilstadius/nmap-mcp-wrapper

The goal was simple: allow a model to perform network scans through a structured, well-defined interface rather than by generating shell commands.

Architecture

The wrapper is implemented using Vert.x and delegates to the nmap binary via system calls.

High-level flow:

  1. The MCP server exposes scan-related tools.
  2. The model discovers those tools via tools/list.
  3. The model invokes a scan tool with validated parameters.
  4. The server executes nmap with controlled arguments.
  5. The result is returned in structured form.

Instead of letting a model hallucinate command-line flags, we:

  • Define a schema for allowed inputs.
  • Validate them server-side.
  • Map them to specific nmap invocations.

This provides guardrails while still enabling flexible usage.
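The actual wrapper is implemented in Java on Vert.x; as a language-agnostic illustration, the validate-and-map step might look like the Python sketch below. The scan type names and flag mapping are hypothetical, but the principle is the one the wrapper uses: a whitelist of scan types, strict target validation, and an argv list that never passes through a shell.

```python
import ipaddress
import re

# Hypothetical whitelist: each allowed scan type maps to fixed nmap flags.
# Nothing outside this table ever reaches the command line.
SCAN_TYPES = {
    "ping": ["-sn"],
    "tcp_connect": ["-sT"],
}

# Conservative hostname pattern: letters, digits, dots, hyphens only
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9.-]{0,253}[A-Za-z0-9])?$")

def valid_target(target: str) -> bool:
    """Accept an IP address or a plain hostname; reject anything shell-like."""
    try:
        ipaddress.ip_address(target)
        return True
    except ValueError:
        return bool(HOSTNAME_RE.match(target))

def build_nmap_argv(scan_type: str, target: str) -> list:
    if scan_type not in SCAN_TYPES:
        raise ValueError(f"unsupported scan type: {scan_type}")
    if not valid_target(target):
        raise ValueError(f"invalid target: {target}")
    # argv list form: arguments are never interpreted by a shell,
    # so injection via the target string is structurally impossible
    return ["nmap", *SCAN_TYPES[scan_type], target]

argv = build_nmap_argv("ping", "192.168.1.0")
# The server would then run it, e.g. subprocess.run(argv, capture_output=True)
print(argv)  # ['nmap', '-sn', '192.168.1.0']
```

An injection attempt such as "example.com; rm -rf /" fails validation before any process is spawned.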

Why Wrap a CLI Tool?

There are several advantages to wrapping command-line tools with MCP:

  • The CLI is already battle-tested.
  • You avoid reimplementing complex logic.
  • You can restrict available flags and parameters.
  • You can normalize output into a structured response.

For something like nmap, which has dozens of flags and modes, this is especially useful. The MCP layer becomes a controlled interface over a powerful underlying binary.

Security Considerations

Exposing system tools to models is not trivial.

When wrapping something like nmap, you need to:

  • Whitelist allowed arguments.
  • Prevent arbitrary shell injection.
  • Validate hostnames and target formats.
  • Consider rate limiting and auditing.
  • Control network access scope.

MCP does not solve security by itself. It provides structure. You still need to design the server carefully.

In the nmap wrapper, the key design principle is that the model never constructs raw shell commands. It only supplies structured parameters that are validated and translated into safe system calls.

Why MCP Matters

The broader point is not about nmap.

It is about standardization.

If every tool exposes a different REST API with different conventions, models need custom logic for each integration. That does not scale.

MCP offers:

  • Standard discovery.
  • Standard invocation.
  • Standard error handling.
  • Schema-driven interaction.

That makes it possible to build a composable ecosystem of tools for models, rather than a collection of one-off integrations.

Ideas for Your Own MCP Server

If you want to experiment, you do not need to start with something complex.

Some ideas:

  • Wrap a common CLI tool.
  • Random number generator.
  • Current time server.
  • Base64 or AES encoder/decoder.
  • Simple key-value store.

The pattern is always the same:

  1. Define a tool.
  2. Provide a JSON Schema for its inputs.
  3. Implement the underlying logic.
  4. Return structured results.

The rest is just JSON-RPC.
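That whole pattern fits in a short sketch. The following Python dispatcher handles tools/list and tools/call for a single illustrative base64 tool; the request/response shapes follow the examples earlier in the post, and everything else (tool name, handler structure) is just one way to arrange it:

```python
import base64
import json

# One illustrative tool: base64-encode a UTF-8 string
TOOLS = [
    {
        "name": "base64.encode",
        "description": "Base64-encode a UTF-8 string.",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    }
]

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC request to the matching tool handler."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        encoded = base64.b64encode(args["text"].encode()).decode()
        result = {"content": [{"type": "text", "text": encoded}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

resp = handle({"jsonrpc": "2.0", "id": 7, "method": "tools/call",
               "params": {"name": "base64.encode", "arguments": {"text": "hello"}}})
print(json.dumps(resp))
```

Wire this dispatcher to any transport (HTTP, stdio, WebSockets) and you have a working tool server.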

Final Thoughts

MCP is not trying to replace REST. It is solving a different problem.

REST is great for developers integrating services.

MCP is designed for models integrating capabilities.

That distinction becomes increasingly important as we build systems where models are not just generating text, but actively interacting with infrastructure.

Wrapping nmap is a small example, but it illustrates the larger idea: give models structured tools, not vague endpoints.

Gustaf Nilstadius

AI General & Tech lead at Redpill Linpro

Gustaf started at Redpill Linpro in 2020 after spending multiple years abroad working in Silicon Valley. Gustaf specializes in microservices and the Vert.x framework, with a keen interest in AI and LLMs.
