Software Engineer's Notes

December 2025

Understanding Model Context Protocol (MCP) and the Role of MCP Servers

The rapid evolution of AI tools—especially large language models (LLMs)—has brought a new challenge: how do we give AI controlled, secure, real-time access to tools, data, and applications?
This is exactly where the Model Context Protocol (MCP) comes into play.

In this blog post, we’ll explore what MCP is, what an MCP Server is, its history, how it works, why it matters, and how you can integrate it into your existing software development process.

[Figure: MCP architecture]

What Is Model Context Protocol?

Model Context Protocol (MCP) is an open standard designed to allow large language models to interact safely and meaningfully with external tools, data sources, and software systems.

Traditionally, LLMs worked with static prompts and limited context. MCP changes that by allowing models to:

  • Request information
  • Execute predefined operations
  • Access external data
  • Write files
  • Retrieve structured context
  • Extend their abilities through secure, modular “servers”

In short, MCP provides a unified interface between AI models and real software environments.

What Is a Model Context Protocol Server?

An MCP server is a standalone component that exposes capabilities, resources, and operations to an AI model through MCP.

Think of an MCP server as a plugin container, or a bridge between your application and the AI.

An MCP Server can provide:

  • File system access
  • Database queries
  • API calls
  • Internal business logic
  • Real-time system data
  • Custom actions (deploy, run tests, generate code, etc.)

MCP servers work with any MCP-compatible LLM client (such as Claude Desktop, or ChatGPT clients with MCP support), and they are configured with strict permissions for safety.

History of Model Context Protocol

Early Challenges with LLM Tooling

Before MCP, LLM tools were fragmented:

  • Every vendor used different APIs
  • Extensions were tightly coupled to the model platform
  • There was no standard for secure tool execution
  • Maintaining custom integrations was expensive

As developers started using LLMs for automation, code generation, and data workflows, the need for a secure, standardized protocol became clear.

Birth of MCP (2024)

MCP was introduced by Anthropic in late 2024 as an open standard, unifying:

  • Function calling
  • Extended tool access
  • Notebook-like interaction
  • File system operations
  • Secure context and sandboxing

The idea was to create a vendor-neutral protocol, similar to how REST standardized web communication.

Open Adoption and Community Growth (2024–2025)

By 2025, MCP gained widespread support:

  • Major vendors, including OpenAI, added MCP support to their AI clients
  • Developers started creating custom MCP servers
  • Tooling ecosystems expanded (e.g., filesystem servers, database servers, API servers)
  • Companies adopted MCP to give AI controlled internal access

MCP became a foundational building block for AI-driven software engineering workflows.

How Does MCP Work?

MCP works through a client–server architecture with clearly defined contracts.

1. The MCP Client

This is usually an AI model environment such as:

  • ChatGPT
  • VS Code AI extensions
  • IDE plugins
  • Custom LLM applications

The client knows how to communicate using MCP.

2. The MCP Server

Your MCP server exposes:

  • Resources → things the AI can reference
  • Tools / Actions → things the AI can do
  • Prompts / Templates → predefined workflows

Each server has permissions and runs in isolation for safety.
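To make this concrete, here is a minimal sketch of the kind of registry and dispatch logic an MCP server maintains. It is written in plain Python without an SDK; the `query_db` tool and its schema are invented for illustration, though `tools/list` and `tools/call` are the method names the MCP specification uses for tool discovery and invocation.

```python
# A toy tool registry: what an MCP server might expose to the client.
# The "query_db" tool and its schema are made-up examples.
TOOLS = {
    "query_db": {
        "description": "Run a read-only SQL query",
        "inputSchema": {"type": "object", "properties": {"sql": {"type": "string"}}},
    },
}

def handle(method: str, params: dict) -> dict:
    """Dispatch an incoming MCP-style request to the right handler."""
    if method == "tools/list":
        # Advertise the tools this server exposes.
        return {"tools": [{"name": name, **meta} for name, meta in TOOLS.items()]}
    if method == "tools/call" and params.get("name") == "query_db":
        # A real server would run the query here, under strict permissions.
        return {"content": [{"type": "text", "text": "3 rows"}]}
    raise ValueError(f"Unknown method: {method}")
```

A real server built with an MCP SDK would look different in the details, but the shape is the same: a declared capability list, plus a dispatcher that executes only what was declared.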

3. The Protocol Layer

Communication uses JSON-RPC 2.0 over a standard transport (typically stdio for local servers, or HTTP for remote ones).

The client asks:

“What tools do you expose?”

The server responds with:

“Here are resources, actions, and context you can use.”

Then the AI can call these tools securely.
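The exchange above can be sketched as JSON-RPC 2.0 messages. The `tools/list` method is defined by the MCP specification; the tool shown in the response is a made-up example. Over stdio, each message travels as a single line of JSON.

```python
import json

# Client asks: "What tools do you expose?"
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Server answers with its capability list ("run_tests" is illustrative).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {"name": "run_tests", "description": "Run the project's test suite"}
        ]
    },
}

# Each message is serialized to one line of JSON on the transport.
print(json.dumps(request))
print(json.dumps(response))
```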

4. Execution

When the AI executes an action (e.g., database query), the server performs the task on behalf of the model and returns structured results.


Why Do We Need MCP?

– Standardization

No more custom plugin APIs for each model. MCP is universal.

– Security

Strict capability control → AI only accesses what you explicitly expose.

– Extensibility

You can build your own MCP servers to extend AI.

– Real-time Interaction

Models can work with live:

  • data
  • files
  • APIs
  • business systems

– Sandbox Isolation

Servers run independently, protecting your core environment.

– Developer Efficiency

You can quickly create new AI-powered automations.

Benefits of Using MCP Servers

  • Reusable infrastructure — write once, use with any MCP-supported LLM.
  • Modularity — split responsibilities into multiple servers.
  • Portability — works across tools, IDEs, editor plugins, and AI platforms.
  • Lower maintenance — maintain one integration instead of many.
  • Improved automation — AI can interact with real systems (CI/CD, databases, cloud services).
  • Better developer workflows — AI gains accurate, contextual knowledge of your project.

How to Integrate MCP Into Your Software Development Process

1. Identify AI-Ready Tasks

Good examples:

  • Code refactoring
  • Automated documentation
  • Database querying
  • CI/CD deployment helpers
  • Environment setup scripts
  • File generation
  • API validation

2. Build a Custom MCP Server

Using frameworks like:

  • Node.js MCP Server Kits
  • Python MCP Server Kits
  • Custom implementations with JSON-RPC

Define what tools you want the model to access.

3. Expose Resources Safely

Examples:

  • Read-only project files
  • Specific database tables
  • Internal API endpoints
  • Configuration values

Always choose minimum required permissions.
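A minimal sketch of that principle for file resources: resolve every requested path and refuse anything that escapes an allowlisted root, so the model can never read outside what you exposed. The function name and API are illustrative; a real server should also enforce read-only modes and size limits.

```python
from pathlib import Path

def read_resource(path: str, allowed_root: str) -> str:
    """Read a file only if it resolves inside the allowed root directory."""
    root = Path(allowed_root).resolve()
    target = (root / path).resolve()  # resolve() collapses any "../" tricks
    if root not in target.parents and target != root:
        raise PermissionError(f"{path} escapes the allowed root")
    return target.read_text()
```

With this guard, a request for `../etc/passwd` fails with `PermissionError` instead of leaking data, while files under the exposed root are served normally.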

4. Connect Your MCP Server to the Client

In ChatGPT or your LLM client:

  • Add local MCP servers
  • Add network MCP servers
  • Configure environment variables
  • Set up permissions

5. Use AI in Your Development Workflow

AI can now:

  • Generate code with correct system context
  • Run transformations
  • Trigger tasks
  • Help debug with real system data
  • Automate repetitive developer chores

6. Monitor and Validate

Use logging, audit trails, and usage controls to ensure safety.

Conclusion

Model Context Protocol (MCP) is becoming a cornerstone of modern AI-integrated software development. MCP Servers give LLMs controlled access to powerful tools, bridging the gap between natural language intelligence and real-world software systems.

By adopting MCP in your development process, you can unlock:

  • Higher productivity
  • Better automation
  • Safer AI integrations
  • Faster development cycles

Unit Testing: The What, Why, and How (with Practical Examples)

What is a Unit Test?

A unit test verifies the smallest testable part of your software—usually a single function, method, or class—in isolation. Its goal is to prove that, for a given input, the unit produces the expected output and handles edge cases correctly.

Key characteristics

  • Small & fast: millisecond execution, in-memory.
  • Isolated: no real network, disk, or database calls.
  • Repeatable & deterministic: same input → same result.
  • Self-documenting: communicates intended behavior.

A Brief History (How We Got Here)

  • 1960s–1980s: Early testing practices emerged with procedural languages, but were largely ad-hoc and manual.
  • 1990s: Object-oriented programming popularized more modular designs. Kent Beck introduced SUnit for Smalltalk; the “xUnit” family was born.
  • Late 1990s–2000s: JUnit (Java) and NUnit (.NET) pushed unit testing mainstream. Test-Driven Development (TDD) formalized “Red → Green → Refactor.”
  • 2010s–today: Rich ecosystems (pytest, Jest, JUnit 5, RSpec, Go’s testing pkg). CI/CD and DevOps turned unit tests into a daily, automated safety net.

How Unit Tests Work (The Mechanics)

Arrange → Act → Assert (AAA)

  1. Arrange: set up inputs, collaborators (often fakes/mocks).
  2. Act: call the method under test.
  3. Assert: verify outputs, state changes, or interactions.
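The three steps map directly onto the body of a test. A tiny pytest-style example (the `apply_discount` function is invented for illustration):

```python
# Hypothetical unit under test
def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

def test_apply_discount_vip_rate():
    # Arrange: set up inputs
    price, rate = 200.0, 0.10
    # Act: call the method under test
    result = apply_discount(price, rate)
    # Assert: verify the output
    assert result == 180.0
```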

Test Doubles (isolate the unit)

  • Dummy: unused placeholders to satisfy signatures.
  • Stub: returns fixed data (no behavior verification).
  • Fake: lightweight implementation (e.g., in-memory repo).
  • Mock: verifies interactions (e.g., method X called once).
  • Spy: records calls for later assertions.
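The stub/mock distinction is easy to see with Python's `unittest.mock` (the `discount_for` and `send` collaborators are invented for illustration):

```python
from unittest.mock import Mock

# Stub: supplies fixed data; we only care about the value it returns.
rates = Mock()
rates.discount_for.return_value = 0.10
assert 200.0 * (1 - rates.discount_for("VIP")) == 180.0

# Mock: we verify the interaction itself, not a return value.
mailer = Mock()

def notify(mailer, user):
    mailer.send(user, "Your order shipped")  # behavior under test

notify(mailer, "ada")
mailer.send.assert_called_once_with("ada", "Your order shipped")
```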

Good Test Qualities (FIRST)

  • Fast, Isolated, Repeatable, Self-Validating, Timely.

Naming & Structure

  • Name: methodName_condition_expectedResult
  • One assertion concept per test (clarity > cleverness).
  • Avoid coupling to implementation details (test behavior).

When Should We Write Unit Tests?

  • New code: ideally before or while coding (TDD).
  • Bug fixes: add a unit test that reproduces the bug first.
  • Refactors: guard existing behavior before changing code.
  • Critical modules: domain logic, calculations, validation.

What not to unit test

  • Auto-generated code, trivial getters/setters, framework wiring (unless it encodes business logic).

Advantages (Why Unit Test?)

  • Confidence & speed: safer refactors, fewer regressions.
  • Executable documentation: shows intended behavior.
  • Design feedback: forces smaller, decoupled units.
  • Lower cost of defects: catch issues early and cheaply.
  • Developer velocity: faster iteration with guardrails.

Practical Examples

Java (JUnit 5 + Mockito)

// src/test/java/com/example/PriceServiceTest.java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;
import static org.mockito.Mockito.*;

class PriceServiceTest {
    @Test
    void applyDiscount_whenVIP_shouldReduceBy10Percent() {
        DiscountPolicy policy = mock(DiscountPolicy.class);
        when(policy.discountFor("VIP")).thenReturn(0.10);

        PriceService service = new PriceService(policy);
        double result = service.applyDiscount(200.0, "VIP");

        assertEquals(180.0, result, 0.0001);
        verify(policy, times(1)).discountFor("VIP");
    }
}

// Production code (for context)
class PriceService {
    private final DiscountPolicy policy;
    PriceService(DiscountPolicy policy) { this.policy = policy; }
    double applyDiscount(double price, String tier) {
        return price * (1 - policy.discountFor(tier));
    }
}
interface DiscountPolicy { double discountFor(String tier); }

Python (pytest)

# app/discount.py
def apply_discount(price: float, tier: str, policy) -> float:
    return price * (1 - policy.discount_for(tier))

# tests/test_discount.py
class FakePolicy:
    def discount_for(self, tier):
        return {"VIP": 0.10, "STD": 0.0}.get(tier, 0.0)

def test_apply_discount_vip():
    from app.discount import apply_discount
    result = apply_discount(200.0, "VIP", FakePolicy())
    assert result == 180.0

In-Memory Fakes Beat Slow Dependencies

// In-memory repository for fast unit tests (UserRepo interface shown for context)
import java.util.*;

interface UserRepo { void save(User u); Optional<User> find(String id); }

class InMemoryUserRepo implements UserRepo {
    private final Map<String, User> store = new HashMap<>();
    public void save(User u){ store.put(u.id(), u); }
    public Optional<User> find(String id){ return Optional.ofNullable(store.get(id)); }
}

Integrating Unit Tests into Your Current Process

1) Organize Your Project

/src
  /main
    /java (or /python, /ts, etc.)
  /test
    /java ...

  • Mirror package/module structure under /test.
  • Name tests after the unit: PriceServiceTest, test_discount.py, etc.

2) Make Tests First-Class in CI

GitHub Actions (Java example)

name: build-and-test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with: { distribution: temurin, java-version: '21' }
      - run: ./gradlew test --no-daemon

GitHub Actions (Python example)

name: pytest
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.12' }
      - run: pip install -r requirements.txt
      - run: pytest -q

3) Define “Done” with Tests

  • Pull requests must include unit tests for new/changed logic.
  • Code review checklist: readability, edge cases, negative paths.
  • Coverage gate (sensible threshold; don’t chase 100%).
    Example (Gradle + JaCoCo):
jacocoTestCoverageVerification {
    violationRules {
        rule { limit { counter = 'INSTRUCTION'; minimum = 0.75 } }
    }
}
test.finalizedBy jacocoTestReport, jacocoTestCoverageVerification

4) Keep Tests Fast and Reliable

  • Avoid real I/O; prefer fakes/mocks.
  • Keep each test < 100ms; whole suite in seconds.
  • Eliminate flakiness (random time, real threads, sleeps).
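A common fix for time-based flakiness is to inject a clock instead of calling `time.time()` directly, so tests advance time explicitly rather than sleeping. The `Session` class below is a made-up example:

```python
import time

class Session:
    """Expires after max_age seconds; the clock is injectable for tests."""
    def __init__(self, max_age: float, clock=time.time):
        self._clock = clock
        self._started = clock()
        self._max_age = max_age

    def expired(self) -> bool:
        return self._clock() - self._started > self._max_age

# In tests, a fake clock makes expiry deterministic: no sleeps, no flakiness.
now = [1000.0]
session = Session(max_age=60, clock=lambda: now[0])
assert not session.expired()
now[0] += 61
assert session.expired()
```

Production code keeps the real clock as the default; only tests pass the fake.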

5) Use the Test Pyramid Wisely

  • Unit (broad base): thousands, fast, isolated.
  • Integration (middle): fewer, verify boundaries.
  • UI/E2E (tip): very few, critical user flows only.

A Simple TDD Loop You Can Adopt Tomorrow

  1. Red: write a failing unit test that expresses the requirement.
  2. Green: implement the minimum to pass.
  3. Refactor: clean design safely, keeping tests green.
  4. Repeat; keep commits small and frequent.

Common Pitfalls (and Fixes)

  • Mock-heavy tests that break on refactor → mock only at boundaries; prefer fakes for domain logic.
  • Testing private methods → test through public behavior; refactor if testing is too hard.
  • Slow suites → remove I/O, shrink fixtures, parallelize.
  • Over-asserting → one behavioral concern per test.

Rollout Plan (4 Weeks)

  • Week 1: Set up test frameworks, sample tests, CI pipeline, coverage reporting.
  • Week 2: Add tests for critical modules & recent bug fixes. Create a PR template requiring tests.
  • Week 3: Refactor hot spots guided by tests. Introduce an in-memory fake layer.
  • Week 4: Add coverage gates, stabilize the suite, document conventions in CONTRIBUTING.md.

Team Conventions

  • Folder structure mirrors production code.
  • Names: ClassNameTest or test_function_behavior.
  • AAA layout, one behavior per test.
  • No network/disk/DB in unit tests.
  • PRs must include tests for changed logic.

Final Thoughts

Unit tests pay dividends by accelerating safe change. Start small, keep them fast and focused, and wire them into your daily workflow (pre-commit, CI, PR reviews). Over time, they become living documentation and your best shield against regressions.
