Usage Patterns
A pattern is a reusable solution to a recurring problem in building API simulations with Counterfact. Each pattern below describes the context in which it applies, the problem it addresses, the solution, and its consequences.
Most projects start with Explore a New API or Executable Spec to get a running server from an OpenAPI spec with no code. From there, Mock APIs with Dummy Data and AI-Assisted Implementation are the natural next steps for adding realistic responses — the former by hand, the latter with an AI agent doing the heavy lifting.
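As a sketch of what "adding realistic responses by hand" looks like, here is a generated route file edited under Mock APIs with Dummy Data. The route path, the pet fields, and the generated type import are illustrative assumptions based on a hypothetical spec with a `GET /pets/{petId}` operation:

```typescript
// routes/pets/{petId}.ts -- generated by Counterfact from the spec, then edited by hand.
// The import path below is an assumption; Counterfact generates matching types alongside routes.
import type { HTTP_GET } from "../../types/paths/pets/{petId}.types.js";

export const GET: HTTP_GET = ($) => {
  // The generated default returns random data that conforms to the schema:
  // return $.response[200].random();

  // Replaced with stable dummy data for demos and assertions
  // (assuming path parameters are exposed on $.path):
  return $.response[200].json({
    id: Number($.path.petId),
    name: "Fido",
    status: "available",
  });
};
```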
As the mock grows, Scenario Scripts let you automate repetitive REPL interactions, such as seeding data on startup or building reusable request sequences, while Federated Context Files and Test the Context, Not the Handlers keep the stateful logic organized and reliable. Live Server Inspection with the REPL is Counterfact’s most distinctive feature: it lets you seed data, send requests, and toggle behavior in real time without restarting the server. Simulate Failures and Edge Cases and Simulate Realistic Latency extend any mock to cover the error paths and performance characteristics that real services exhibit.
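A minimal sketch of how these patterns combine: a root context file holds shared state plus a failure toggle and a latency knob, and a handler consults them. The file layout follows Counterfact’s `_.context.ts` convention; the flag names, the 503 response (which the spec would need to declare for the typed builder to accept it), and the exact shape of the `$` helper are assumptions:

```typescript
// routes/_.context.ts -- shared, REPL-visible state for everything under routes/
export class Context {
  // Flip from the REPL or a scenario script to force the next request to fail
  failNextRequest = false;

  // Artificial delay, in milliseconds, applied by handlers that opt in
  latencyMs = 0;

  pets = new Map<number, { id: number; name: string }>();
}

// routes/pets.ts -- a handler that honors both toggles (shown in the same
// block for brevity; a real project would import the Context type)
export const GET = async ($: any) => {
  const context = $.context as Context;

  if (context.latencyMs > 0) {
    await new Promise((resolve) => setTimeout(resolve, context.latencyMs));
  }

  if (context.failNextRequest) {
    context.failNextRequest = false; // one-shot failure
    return $.response[503].json({ error: "service unavailable" });
  }

  return $.response[200].json([...context.pets.values()]);
};
```

Assuming the REPL exposes this object as `context`, setting `context.failNextRequest = true` there triggers the error path on the very next request, with no server restart; a scenario script can seed `context.pets` the same way on startup.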
When your project involves multiple versions or multiple specs, Multiple API Versions shows how to serve them from a shared set of route files, using $.minVersion() to branch on version without duplicating handlers. For teams that want the mock to remain a reliable, long-lived artifact, Reference Implementation and Automated Integration Tests make it a first-class part of the codebase that can run in CI. Finally, Agentic Sandbox and Hybrid Proxy address two common integration strategies: isolating an AI agent from the real service, and blending mock and live traffic. Custom Middleware covers cross-cutting concerns such as authentication and logging without touching individual handlers.
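The version branching might look like the sketch below. The exact semantics of $.minVersion() are assumed here (that it returns true when the requested version is at least the one given), as are the route and the two response shapes:

```typescript
// routes/widgets.ts -- one handler shared by two versions of the API (sketch)
export const GET = ($: any) => {
  // Assumed semantics: true when the request targets API version 2.0 or later
  if ($.minVersion("2.0")) {
    // v2 responds with a paginated envelope
    return $.response[200].json({ items: [], nextCursor: null });
  }

  // v1 responds with a bare array
  return $.response[200].json([]);
};
```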
All patterns
| Pattern | When to use it |
|---|---|
| Explore a New API | You have a spec but no running backend or production access |
| Executable Spec | You want immediate feedback on how spec changes affect the running server during API design |
| Mock APIs with Dummy Data | You need realistic-looking responses to build a UI, run a demo, or write assertions |
| Scenario Scripts | You want to automate REPL interactions, seed data on startup, or build reusable state configurations |
| AI-Assisted Implementation | You want an AI agent to replace random responses with working handler logic |
| Federated Context Files | You want each domain to own its state, with explicit cross-domain dependencies |
| Test the Context, Not the Handlers | You want to keep shared stateful logic reliable as the mock grows |
| Live Server Inspection with the REPL | You want to seed data, send requests, and toggle behavior without restarting the server |
| Simulate Failures and Edge Cases | You need reproducible, on-demand error conditions for development or testing |
| Simulate Realistic Latency | You want to test how clients and UIs behave under realistic response times |
| Reference Implementation | You want a working, executable implementation that expresses intended API behavior in code |
| Multiple API Versions | You maintain multiple versions of an API and want shared handlers that adapt by version |
| Agentic Sandbox | You are building an AI coding agent and want to avoid rate limits and costs during development |
| Hybrid Proxy | Some endpoints exist in the real backend; others need to be mocked |
| Automated Integration Tests | You want to run real HTTP tests against the mock in a CI-friendly test suite (see the sketch after this table) |
| Custom Middleware | You want authentication, headers, or logging applied uniformly across a group of routes |
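To make one row from the table concrete: the Automated Integration Tests pattern can be as small as a single Node test file that sends real HTTP requests to the running mock. The base URL (overridable here via an environment variable) and the /pets route are assumptions:

```typescript
// pets.test.ts -- real HTTP requests against the mock (sketch; run with `node --test`)
import assert from "node:assert/strict";
import { test } from "node:test";

// Assumed base URL; point MOCK_URL at wherever the mock runs in CI.
const BASE_URL = process.env.MOCK_URL ?? "http://localhost:3100";

test("GET /pets responds with a JSON array", async () => {
  const response = await fetch(`${BASE_URL}/pets`);
  assert.equal(response.status, 200);
  assert.ok(Array.isArray(await response.json()));
});
```

Because the suite only needs Node’s built-in test runner and fetch, it runs anywhere the mock server does, which is what makes it CI-friendly.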