
Integration testing verifies that multiple parts of your system work correctly together—modules, services, databases, queues, third-party APIs, configuration, and infrastructure glue. Where unit tests validate small pieces in isolation, integration tests catch issues at the seams: misconfigured ports, serialization mismatches, transaction boundaries, auth headers, timeouts, and more.
What Is an Integration Test?
An integration test exercises a feature path across two or more components:
- A web controller + service + database
- Two microservices communicating over HTTP/REST or messaging
- Your code + a real (or realistic) external system such as PostgreSQL, Redis, Kafka, S3, Stripe, or a mock double that replicates its behavior
It aims to answer: “Given realistic runtime conditions, do the collaborating parts interoperate as intended?”
How Integration Tests Work (Step-by-Step)
1. Assemble the slice
   Decide which components to include (e.g., API layer + persistence) and what to substitute (e.g., real DB in a container vs. an in-memory alternative).
2. Provision dependencies
   Spin up databases, message brokers, or third-party doubles. Popular approaches:
   - Ephemeral containers (e.g., Testcontainers for DBs, brokers, object stores)
   - Local emulators (e.g., LocalStack for AWS)
   - HTTP stubs (e.g., WireMock, MockServer) to simulate third-party APIs
3. Seed test data & configuration
   Apply migrations, insert fixtures, set secrets/env vars, and configure network endpoints.
4. Execute realistic scenarios
   Drive the system via its public interface (HTTP calls, messages on a topic/queue, method entry points that span layers).
5. Assert outcomes
   Verify HTTP status/body, DB state changes, published messages, idempotency, retries, metrics/log signatures, and side effects.
6. Teardown & isolate
   Clean up containers, reset stubs, and ensure tests are independent and order-agnostic.
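The lifecycle above can be sketched in miniature without any containers. This is a hedged illustration using Python's stdlib `sqlite3` as a stand-in dependency; the function names (`provision_db`, `create_item`, `fetch_items`) are hypothetical, not from any framework:

```python
import sqlite3

def provision_db():
    # Steps 1-3: assemble the slice, provision a dependency (here an
    # in-process SQLite database), and apply "migrations".
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (grp TEXT, name TEXT, count INTEGER)")
    return conn

def create_item(conn, grp, name, count):
    # The public interface of the slice under test.
    conn.execute("INSERT INTO items VALUES (?, ?, ?)", (grp, name, count))
    conn.commit()

def fetch_items(conn, grp):
    return conn.execute(
        "SELECT name, count FROM items WHERE grp = ?", (grp,)
    ).fetchall()

def test_create_and_fetch():
    conn = provision_db()                        # provision + seed
    try:
        create_item(conn, "tools", "wrench", 5)  # step 4: execute scenario
        rows = fetch_items(conn, "tools")        # step 5: assert outcomes
        assert rows == [("wrench", 5)]
    finally:
        conn.close()                             # step 6: teardown & isolate

test_create_and_fetch()
```

In a real suite the `provision_db` step would start a containerized database instead, but the shape of the test stays the same.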
Key Components of Integration Testing
- System under test (SUT) boundary: Define exactly which modules/services are “in” vs. “out.”
- Realistic dependencies: Databases, caches, queues, object stores, identity providers.
- Test doubles where necessary:
- Stubs for fixed responses (e.g., pricing API)
- Mocks for interaction verification (e.g., “was /charge called with X?”)
- Environment management: Containers, docker-compose, or cloud emulators; test-only configs.
- Data management: Migrations + fixtures; factories/builders for readable setup.
- Observability hooks: Logs, metrics, tracing assertions (useful for debugging flaky flows).
- Repeatable orchestration: Scripts/Gradle/Maven/npm to run locally and in CI the same way.
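The stub-vs-mock distinction above can be shown in a few lines with Python's stdlib `unittest.mock` (the client objects here are hypothetical placeholders, not a real pricing or payments SDK):

```python
from unittest.mock import Mock

# Stub: a canned response; the test only cares about the returned value.
pricing_stub = Mock()
pricing_stub.get_price.return_value = 12.5
assert pricing_stub.get_price("wrench") == 12.5

# Mock: the test verifies *how* the dependency was called.
charger = Mock()
charger.charge("cust_1", 2000)
charger.charge.assert_called_once_with("cust_1", 2000)
```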
Benefits
- Catches integration bugs early: Contract mismatches, auth failures, connection strings, TLS issues.
- Confidence in deploys: Reduced incidents due to configuration drift.
- Documentation by example: Tests serve as living examples of real flows.
- Fewer flaky end-to-end tests: Solid integration coverage means you need fewer slow, brittle E2E UI tests.
When (and How) to Use Integration Tests
Use integration tests when:
- A unit test can’t surface real defects (e.g., SQL migrations, ORM behavior, transaction semantics).
- Two or more services/modules must agree on contracts (schemas, headers, error codes).
- You rely on infra features (indexes, isolation levels, topic partitions, S3 consistency).
How to apply effectively:
- Target critical paths first: sign-up, login, payments, ordering, data ingestion.
- Prefer ephemeral, production-like dependencies: containers over mocks for DBs/brokers.
- Keep scope tight: Test one coherent flow per test; avoid sprawling “kitchen-sink” cases.
- Make it fast enough: Parallelize tests, reuse containers per test class/suite.
- Run in CI for each PR: Same commands locally and in the pipeline to avoid “works on my machine.”
Integration vs. Unit vs. End-to-End (Quick Table)
| Aspect | Unit Test | Integration Test | End-to-End (E2E) |
|---|---|---|---|
| Scope | Single class/function | Multiple components/services | Full system incl. UI |
| Dependencies | All mocked | Realistic (DB, broker) or stubs | All real |
| Speed | Milliseconds | Seconds | Seconds–Minutes |
| Flakiness | Low | Medium (manageable) | Higher |
| Purpose | Logic correctness | Interoperation correctness | User journey correctness |
Tooling & Patterns (Common Stacks)
- Containers & Infra: Testcontainers, docker-compose, LocalStack, Kind (K8s)
- HTTP Stubs: WireMock, MockServer
- Contract Testing: Pact (consumer-driven contracts)
- DB Migrations/Fixtures: Flyway, Liquibase; SQL scripts; FactoryBoy/FactoryBot-style data builders
- CI: GitHub Actions, GitLab CI, Jenkins with service containers
Real-World Examples (Detailed)
1) Service + Database (Java / Spring Boot + PostgreSQL)
Goal: Verify repository mappings, transactions, and API behavior.
```groovy
// build.gradle (snippet)
testImplementation("org.testcontainers:junit-jupiter:1.20.1")
testImplementation("org.testcontainers:postgresql:1.20.1")
testImplementation("org.springframework.boot:spring-boot-starter-test")
```

```java
// Example JUnit 5 test
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.*;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.*;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.springframework.test.web.servlet.MockMvc;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@AutoConfigureMockMvc
@Testcontainers
class ItemApiIT {

    @Container
    static PostgreSQLContainer<?> pg = new PostgreSQLContainer<>("postgres:16");

    @DynamicPropertySource
    static void dbProps(DynamicPropertyRegistry r) {
        r.add("spring.datasource.url", pg::getJdbcUrl);
        r.add("spring.datasource.username", pg::getUsername);
        r.add("spring.datasource.password", pg::getPassword);
    }

    @Autowired MockMvc mvc;
    @Autowired ItemRepository repo;

    @Test
    void createAndFetchItem() throws Exception {
        mvc.perform(post("/items")
                .contentType(MediaType.APPLICATION_JSON)
                .content("{\"group\":\"tools\",\"name\":\"wrench\",\"count\":5,\"cost\":12.5}"))
            .andExpect(status().isCreated());

        mvc.perform(get("/items?group=tools"))
            .andExpect(status().isOk())
            .andExpect(jsonPath("$[0].name").value("wrench"));

        assertEquals(1, repo.count());
    }
}
```
What this proves: Spring wiring, JSON (de)serialization, transactionality, schema/mappings, and HTTP contract all work together against a real Postgres.
2) Outbound HTTP to a Third-Party API (WireMock)
```java
import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.github.tomakehurst.wiremock.junit5.WireMockTest;
import org.junit.jupiter.api.Test;

@WireMockTest(httpPort = 8089)
class PaymentClientIT {

    @Test
    void chargesCustomer() {
        // Stub a Stripe-like API
        stubFor(post(urlEqualTo("/v1/charges"))
            .withRequestBody(containing("\"amount\": 2000"))
            .willReturn(aResponse()
                .withStatus(200)
                .withBody("{\"id\":\"ch_123\",\"status\":\"succeeded\"}")));

        PaymentClient client = new PaymentClient("http://localhost:8089", "test_key");
        ChargeResult result = client.charge("cust_1", 2000);

        assertEquals("succeeded", result.status());
        verify(postRequestedFor(urlEqualTo("/v1/charges")));
    }
}
```
What this proves: Your serialization, auth headers, timeouts/retries, and error handling match the third-party contract.
3) Messaging Flow (Kafka)
- Start a Kafka container; publish a test message to the input topic.
- Assert your consumer processes it and publishes to the output topic or persists to the DB.
- Validate at-least-once handling and idempotency by sending duplicates.
Signals covered: Consumer group config, serialization (Avro/JSON/Protobuf), offsets, partitions, dead-letter behavior.
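The idempotency check in step three can be sketched in-process, without a broker. This is an illustrative sketch only: the message shape and the names `process_message`/`processed_ids` are invented here, and in a real test the duplicates would arrive from a Kafka container:

```python
# Broker-free sketch of idempotent, at-least-once message handling.
processed_ids = set()
output = []

def process_message(msg):
    # Deduplicate on a message key so redelivery has no extra effect.
    if msg["id"] in processed_ids:
        return  # duplicate delivery under at-least-once semantics
    processed_ids.add(msg["id"])
    output.append(msg["payload"])

# Simulate at-least-once delivery: the broker redelivers message m1.
for msg in [{"id": "m1", "payload": "order-created"},
            {"id": "m1", "payload": "order-created"},  # duplicate
            {"id": "m2", "payload": "order-paid"}]:
    process_message(msg)

assert output == ["order-created", "order-paid"]  # duplicate had no effect
```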
4) Python / Django API + Postgres (pytest + Testcontainers)
```python
# pyproject.toml deps: pytest, pytest-django, testcontainers[postgresql], requests
import requests

def test_create_and_get_item(live_server, postgres_container):
    # Set DATABASE_URL from the container, run migrations, then:
    r = requests.post(
        f"{live_server.url}/items",
        json={"group": "tools", "name": "wrench", "count": 5, "cost": 12.5},
    )
    assert r.status_code == 201

    r2 = requests.get(f"{live_server.url}/items?group=tools")
    assert r2.status_code == 200 and r2.json()[0]["name"] == "wrench"
```
Design Tips & Best Practices
- Define the “slice” explicitly (avoid accidental E2E tests).
- One scenario per test; keep them readable and deterministic.
- Prefer real infra where cheap (real DB > in-memory); use stubs for costly/unreliable externals.
- Make tests parallel-safe: unique schema names, randomized ports, isolated fixtures.
- Stabilize flakiness: time controls (freeze time), retry assertions for eventually consistent flows, awaitility patterns.
- Contracts first: validate schemas and error shapes; consider consumer-driven contracts to prevent breaking changes.
- Observability: assert on logs/metrics/traces for non-functional guarantees (retries, circuit-breakers).
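The "retry assertions" advice for eventually consistent flows amounts to polling until a condition holds. A minimal awaitility-style helper might look like this; the name `wait_until` and its signature are our own, not from a specific library:

```python
import threading
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll a zero-argument condition until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise AssertionError(f"condition not met within {timeout}s")

# Usage: assert on a result that only appears after some async work lands.
results = []
threading.Timer(0.05, lambda: results.append("processed")).start()
wait_until(lambda: "processed" in results)
```

Capping the timeout keeps a genuinely broken flow from hanging the suite, while the short poll interval keeps the happy path fast.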
Common Pitfalls (and Fixes)
- Slow suites → Parallelize, reuse containers per class, trim scope, share fixtures.
- Brittle external dependencies → Stub third-party APIs; only run “full-real” tests in nightly builds.
- Data leakage across tests → Wrap in transactions or reset DB/containers between tests.
- Environment drift → Pin container versions, manage migrations in tests, keep CI parity.
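The "data leakage" fix boils down to giving every test a clean starting state. A sketch of the reset-between-tests pattern, again using stdlib `sqlite3` as a stand-in database (the fixture shape is illustrative, not tied to a particular runner):

```python
import sqlite3

def fresh_db():
    # Reset strategy: every test gets a brand-new database, so nothing leaks.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (name TEXT)")
    return conn

def test_a():
    conn = fresh_db()
    conn.execute("INSERT INTO items VALUES ('wrench')")
    assert conn.execute("SELECT COUNT(*) FROM items").fetchone()[0] == 1
    conn.close()

def test_b():
    # Independent of test_a: starts from a clean schema regardless of order.
    conn = fresh_db()
    assert conn.execute("SELECT COUNT(*) FROM items").fetchone()[0] == 0
    conn.close()

test_a()
test_b()  # order-agnostic: running test_b() first passes too
```

With a shared containerized database, the same idea is usually implemented by wrapping each test in a rolled-back transaction or truncating tables between tests.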
Minimal “Getting Started” Checklist
- Choose your test runner (JUnit/pytest/jest) and container strategy (Testcontainers/compose).
- Add migrations + seed data.
- Wrap external APIs with clients that are easy to stub.
- Write 3–5 critical path tests (create/read/update; publish/consume; happy + failure paths).
- Wire into CI; make it part of the pull-request checks.
Conclusion
Integration tests give you real confidence that your system’s moving parts truly work together. Start with critical flows, run against realistic dependencies, keep scenarios focused, and automate them in CI. You’ll ship faster with fewer surprises—and your end-to-end suite can stay lean and purposeful.







