Sunday, 02 November 2025
One of my constant obsessions is sample data. It's an often annoying aspect of developing systems: you have the catch-22 of needing data to properly test functionality while you're still building the system that would let you create said data. Combined with my current love of 'AI assisted' coding of random ideas, this led to the creation of a little LLM-enabled test data generator, along with a NuGet package containing middleware that can generate simulated API responses using an LLM:
You can find the GitHub repository for the project here; it's all public domain etc...
As with a lot of "AI assisted" coding ideas, it started with a thought about simulating output using LLMs. I was working on another project (LucidForums, a hilariously dysfunctional self-populating LLM-based forum experiment), and LLMs are really good (if a little slow) at generating sample data, so what if I could use them to simulate any API?
This is what I came up with. I'll add more detail on the thinking as I add more functionality. It's really neat, really works, and is faster than I'd feared. However, I have an A4000 with 16GB; you could select smaller edge models, but the quality would likely vary massively.
Future additions will likely include caching in case you want faster perf.
Here's the readme for the package (fetched automatically).
A comprehensive, production-ready ASP.NET Core mocking platform for generating realistic mock API responses using multiple LLM backends. Add intelligent mock endpoints to any project with just 2 lines of code!
Version 2.1.0 - Enhanced reliability, comprehensive validation, and streamlined configuration.
Focus: Enhanced reliability, comprehensive testing, and improved developer experience. Fully backward compatible with v2.0.
Complete validation coverage with 70+ ready-to-run test cases:
Improved instruction following for large array generation:
Clean, reference-based configuration:
All 25+ management endpoints fully documented:
Note: v2.0.0 was skipped to refine these critical areas before stable release.
See RELEASE_NOTES.md for complete details and full version history
This package provides six independent features - use any combination you need:
AddLLMockApi() + MapLLMockApi("/api/mock") = instant mock API
/api/mock/graphql with standard GraphQL queries, data and errors fields

For detailed guides with architecture diagrams, use cases, and implementation details:
Backend API Reference - Complete management endpoint documentation
Multiple LLM Backends Guide - Multiple provider support
API Contexts Guide - NEW!
gRPC Support Guide - NEW in v1.7.0!
dotnet add package mostlylucid.mockllmapi
ollama pull llama3
This package was developed and tested with llama3 (8B parameters), which provides excellent results for all features. However, it works with any Ollama-compatible model:
| Model | Size | Speed | Quality | Context | Best For |
|---|---|---|---|---|---|
| gemma3:4b | 4B | Fast | Good | 4K | KILLER for lower-end machines! |
| llama3 (default) | 8B | Medium | Very Good | 8K | General use, production |
| mistral-nemo | 12B | Slower | Excellent | 128K | High quality, massive datasets |
| mistral:7b | 7B | Medium | Very Good | 8K | Alternative to llama3 |
| phi3 | 3.8B | Fast | Good | 4K | Quick prototyping |
| tinyllama | 1.1B | Very Fast | Basic | 2K | Ultra resource-constrained |
Gemma 3 is KILLER for lower-end machines - fast, lightweight, excellent quality:
ollama pull gemma3:4b
{
"MockLlmApi": {
"ModelName": "gemma3:4b",
"Temperature": 1.2,
"MaxInputTokens": 4096
}
}
Why it's great:
For production-like testing with complex schemas:
ollama pull mistral-nemo
{
"MockLlmApi": {
"ModelName": "mistral-nemo",
"Temperature": 1.2,
"MaxInputTokens": 8000
}
}
Why it's great:
For gemma3:4b (Recommended for development):
{
"ModelName": "gemma3:4b",
"Temperature": 1.2,
"MaxContextWindow": 4096 // Set to model's context window size
}
For llama3 or mistral:7b (Production):
{
"ModelName": "llama3", // or "mistral:7b"
"Temperature": 1.2,
"MaxContextWindow": 8192 // Set to model's context window size
}
For mistral-nemo (High-quality production):
{
"ModelName": "mistral-nemo",
"Temperature": 1.2,
"MaxContextWindow": 32768, // Or 128000 if configured in Ollama
"TimeoutSeconds": 120 // Longer timeout for large contexts
}
Note: Mistral-nemo requires Ollama context configuration for 128K contexts.
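One way to do that is with a custom Ollama Modelfile (illustrative; the variant name mistral-nemo-128k is just an example, then point ModelName at it):
# Modelfile - derive a mistral-nemo variant with a 128K context window
FROM mistral-nemo
PARAMETER num_ctx 131072
# Build the variant
ollama create mistral-nemo-128k -f Modelfile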
Where to find MaxContextWindow:
# Check model info
ollama show {model-name}
# Look for "context_length" or "num_ctx" parameter
# Example output: "context_length": 8192
For smaller models (phi3, tinyllama):
{
"ModelName": "tinyllama",
"Temperature": 0.7 // Lower temperature for stability
}
Why Temperature Matters:
# RECOMMENDED for development (fast, lightweight)
ollama pull gemma3:4b
# Production options
ollama pull llama3 # Best balance
ollama pull mistral-nemo # Highest quality (requires more RAM)
# Alternative options
ollama pull mistral:7b
ollama pull phi3
Important Limitations:
Smaller models (tinyllama, phi3) work but may need more retries; consider raising MaxRetryAttempts to 5 or more.

Program.cs:
using mostlylucid.mockllmapi;
var builder = WebApplication.CreateBuilder(args);
// Add LLMock API services (all protocols: REST, GraphQL, SSE)
builder.Services.AddLLMockApi(builder.Configuration);
var app = builder.Build();
// Map mock endpoints at /api/mock (includes REST, GraphQL, SSE)
app.MapLLMockApi("/api/mock");
app.Run();
appsettings.json:
{
"mostlylucid.mockllmapi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "llama3",
"Temperature": 1.2
}
}
That's it! Now all requests to /api/mock/** return intelligent mock data.
📘 Complete Configuration Reference: See Configuration Reference Guide for all options
📄 Full Example: See appsettings.Full.json - demonstrates every configuration option
{
"mostlylucid.mockllmapi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "llama3",
"Temperature": 1.2,
"TimeoutSeconds": 30,
"EnableVerboseLogging": false,
"CustomPromptTemplate": null,
// Token Management (NEW in v1.5.0)
"MaxInputTokens": 4096, // Adjust based on model (2048-8192)
// Resilience Policies (enabled by default)
"EnableRetryPolicy": true,
"MaxRetryAttempts": 3,
"RetryBaseDelaySeconds": 1.0,
"EnableCircuitBreaker": true,
"CircuitBreakerFailureThreshold": 5,
"CircuitBreakerDurationSeconds": 30
}
}
Model-Specific Token Limits: See LLMApi/appsettings.json for configuration examples for different models (Llama 3, TinyLlama, Mistral, etc.). Each model has different context window sizes - adjust MaxInputTokens accordingly.
API Contexts: For detailed information about using contexts to maintain consistency across requests, see the API Contexts Guide.
New in v1.2.0: Built-in Polly resilience policies protect your application from LLM service failures!
The package includes two resilience patterns enabled by default:
Exponential Backoff Retry
Circuit Breaker
Configuration:
{
"mostlylucid.mockllmapi": {
// Enable/disable retry policy
"EnableRetryPolicy": true,
"MaxRetryAttempts": 3,
"RetryBaseDelaySeconds": 1.0, // Actual delays: 1s, 2s, 4s (exponential)
// Enable/disable circuit breaker
"EnableCircuitBreaker": true,
"CircuitBreakerFailureThreshold": 5, // Open after 5 consecutive failures
"CircuitBreakerDurationSeconds": 30 // Stay open for 30 seconds
}
}
Logging:
The resilience policies log all retry attempts and circuit breaker state changes:
[Warning] LLM request failed (attempt 2/4). Retrying in 2000ms. Error: Connection refused
[Error] Circuit breaker OPENED after 5 consecutive failures. All LLM requests will be rejected for 30 seconds
[Information] Circuit breaker CLOSED. LLM requests will be attempted normally
When to Adjust:
Increase MaxRetryAttempts or RetryBaseDelaySeconds if your LLM service needs more time to recover
Adjust CircuitBreakerDurationSeconds to change how long requests are rejected after the breaker opens
Adjust CircuitBreakerFailureThreshold to change how many consecutive failures trip the breaker
Set EnableRetryPolicy and EnableCircuitBreaker to false to disable the policies entirely

Programmatic configuration (as an alternative to appsettings.json):
builder.Services.AddLLMockApi(options =>
{
options.BaseUrl = "http://localhost:11434/v1/";
options.ModelName = "mixtral";
options.Temperature = 1.5;
options.TimeoutSeconds = 60;
});
// Default: /api/mock/** and /api/mock/stream/**
app.MapLLMockApi("/api/mock");
// Custom pattern
app.MapLLMockApi("/demo");
// Creates: /demo/** and /demo/stream/**
// Without streaming
app.MapLLMockApi("/api/mock", includeStreaming: false);
curl http://localhost:5000/api/mock/users?limit=5
Returns realistic user data generated by the LLM.
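For illustration, the response might look something like this (structure and values vary per request, since the LLM invents them):
{
  "users": [
    { "id": 1, "name": "Alice Johnson", "email": "alice@example.com" },
    { "id": 2, "name": "Bob Smith", "email": "bob@example.com" },
    { "id": 3, "name": "Carol Reyes", "email": "carol@example.com" }
  ]
}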
Use contexts to maintain consistency across multiple related requests:
# Step 1: Create a user
curl "http://localhost:5000/api/mock/users?context=checkout-flow"
# Step 2: Create order for that user (LLM references user from context)
curl "http://localhost:5000/api/mock/orders?context=checkout-flow"
# Step 3: Add payment (LLM references both user and order)
curl "http://localhost:5000/api/mock/payments?context=checkout-flow"
Each request in the same context sees the previous requests, ensuring consistent IDs, names, and data relationships. Perfect for multi-step workflows! See the API Contexts Guide for complete examples.
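For illustration, the second request might reuse identifiers generated by the first (values invented here; the point is the shared IDs):
# Step 1 response (users)
{ "id": "usr_481", "name": "Alice Johnson", "email": "alice@example.com" }
# Step 2 response (orders) references the same user
{ "orderId": "ord_1027", "customerId": "usr_481", "total": 42.50, "status": "pending" }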
curl -X POST http://localhost:5000/api/mock/orders \
-H "X-Response-Shape: {\"orderId\":\"string\",\"total\":0.0,\"items\":[{\"sku\":\"string\",\"qty\":0}]}" \
-H "Content-Type: application/json" \
-d '{"customerId":"cus_123"}'
LLM generates data matching your exact shape specification.
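For example, the shaped request above might come back as (illustrative values):
{
  "orderId": "ORD-20394",
  "total": 129.97,
  "items": [
    { "sku": "SKU-1001", "qty": 2 },
    { "sku": "SKU-2040", "qty": 1 }
  ]
}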
SSE streaming is part of the REST API - just enable it when mapping endpoints:
// SSE streaming is automatically available at /api/mock/stream/**
app.MapLLMockApi("/api/mock", includeStreaming: true);
Usage:
curl -N http://localhost:5000/api/mock/stream/products?category=electronics \
-H "Accept: text/event-stream"
Returns Server-Sent Events as JSON is generated token-by-token:
data: {"chunk":"{","done":false}
data: {"chunk":"\"id\"","done":false}
data: {"chunk":":","done":false}
data: {"chunk":"123","done":false}
...
data: {"content":"{\"id\":123,\"name\":\"Product\"}","done":true,"schema":"{...}"}
JavaScript Example:
const eventSource = new EventSource('/api/mock/stream/users?limit=5');
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.done) {
console.log('Complete:', data.content);
eventSource.close();
} else {
console.log('Chunk:', data.chunk);
}
};
With Shape Control:
curl -N "http://localhost:5000/api/mock/stream/orders?shape=%7B%22id%22%3A0%2C%22items%22%3A%5B%5D%7D"
The streaming endpoint supports all the same features as regular endpoints:
New in v1.2.0: Native GraphQL support with query-driven mock data generation!
LLMock API includes built-in GraphQL endpoint support. Unlike REST endpoints where you specify shapes separately, GraphQL queries naturally define the exact structure they expect - the query IS the shape.
The GraphQL endpoint is automatically available when you map the LLMock API:
app.MapLLMockApi("/api/mock", includeGraphQL: true); // GraphQL enabled by default
This creates a GraphQL endpoint at /api/mock/graphql.
Simple Query:
curl -X POST http://localhost:5000/api/mock/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ users { id name email role } }"}'
Response:
{
"data": {
"users": [
{ "id": 1, "name": "Alice Johnson", "email": "alice@example.com", "role": "admin" },
{ "id": 2, "name": "Bob Smith", "email": "bob@example.com", "role": "user" }
]
}
}
curl -X POST http://localhost:5000/api/mock/graphql \
-H "Content-Type: application/json" \
-d '{
"query": "query GetUser($userId: ID!) { user(id: $userId) { id name email } }",
"variables": { "userId": "12345" },
"operationName": "GetUser"
}'
GraphQL's power shines with nested data:
{
company {
name
employees {
id
firstName
lastName
department {
name
location
}
projects {
id
title
status
milestones {
title
dueDate
completed
}
}
}
}
}
The LLM generates realistic data matching your exact query structure - including all nested relationships.
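For illustration, a (trimmed) response to the query above might look like this:
{
  "data": {
    "company": {
      "name": "Acme Corp",
      "employees": [
        {
          "id": 1,
          "firstName": "Jane",
          "lastName": "Doe",
          "department": { "name": "Engineering", "location": "Berlin" },
          "projects": [
            {
              "id": 101,
              "title": "Platform Rewrite",
              "status": "in_progress",
              "milestones": [
                { "title": "API freeze", "dueDate": "2025-03-01", "completed": false }
              ]
            }
          ]
        }
      ]
    }
  }
}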
async function fetchGraphQL(query, variables = {}) {
const response = await fetch('/api/mock/graphql', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ query, variables })
});
const result = await response.json();
if (result.errors) {
console.error('GraphQL errors:', result.errors);
}
return result.data;
}
// Usage
const data = await fetchGraphQL(`
query GetProducts($category: String) {
products(category: $category) {
id
name
price
inStock
reviews {
rating
comment
}
}
}
`, { category: 'electronics' });
GraphQL errors are returned in standard format:
{
"data": null,
"errors": [
{
"message": "Invalid GraphQL request format",
"extensions": {
"code": "INTERNAL_SERVER_ERROR"
}
}
]
}
{ "data": {...} }Use the included LLMApi.http file which contains 5 ready-to-use GraphQL examples:
See the GraphQL examples in LLMApi.http for complete working examples.
GraphQL responses can become large with deeply nested queries. To prevent JSON truncation errors, configure the GraphQLMaxTokens option:
{
"MockLlmApi": {
"GraphQLMaxTokens": 300 // Recommended: 200-300 for reliability
}
}
Token Limit Guidelines:
| Model | Recommended Max Tokens | Notes |
|---|---|---|
| llama3 | 300-500 | Best balance of speed and complexity |
| mistral:7b | 300-500 | Handles nested structures well |
| phi3 | 200-300 | Keep queries simple |
| tinyllama | 150-200 | Use shallow queries only |
Why Lower Is Better:
For Complex Nested Queries:
Example configuration for complex queries:
{
"MockLlmApi": {
"ModelName": "llama3", // Larger model
"GraphQLMaxTokens": 800, // Higher limit for nested data
"Temperature": 1.2
}
}
LLMock API includes optional SignalR support for continuous, real-time mock data generation. This is perfect for:
SignalR works independently - you don't need the REST API endpoints to use SignalR streaming.
1. Minimal SignalR-only setup:
using mostlylucid.mockllmapi;
var builder = WebApplication.CreateBuilder(args);
// Add SignalR services (no REST API needed!)
builder.Services.AddLLMockSignalR(builder.Configuration);
var app = builder.Build();
app.UseRouting();
// Map SignalR hub and management endpoints
app.MapLLMockSignalR("/hub/mock", "/api/mock");
app.Run();
Optional: Add REST API too
If you also want the REST API endpoints, add these lines:
// Add core LLMock API services (optional)
builder.Services.AddLLMockApi(builder.Configuration);
// Map REST API endpoints (optional)
app.MapLLMockApi("/api/mock", includeStreaming: true);
2. Configure in appsettings.json:
{
"MockLlmApi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "llama3",
"Temperature": 1.2,
"SignalRPushIntervalMs": 5000,
"HubContexts": [
{
"Name": "weather",
"Description": "Weather data with temperature, condition, humidity, and wind speed"
},
{
"Name": "stocks",
"Description": "Stock market data with symbol, current price, change percentage, and trading volume"
}
]
}
}
3. Connect from client:
// Using @microsoft/signalr
const connection = new signalR.HubConnectionBuilder()
.withUrl("/hub/mock")
.withAutomaticReconnect()
.build();
// Subscribe to a context
connection.on("DataUpdate", (message) => {
console.log(`${message.context}:`, message.data);
// message.data contains generated JSON matching the shape
// message.timestamp is unix timestamp in ms
});
await connection.start();
await connection.invoke("SubscribeToContext", "weather");
Each hub context simulates a complete API request and generates data continuously:
{
"Name": "orders", // Context name (SignalR group identifier)
"Description": "Order data..." // Plain English description (LLM generates JSON from this)
// Optional:
// "IsActive": true, // Start in active/stopped state (default: true)
// "Shape": "{...}", // Explicit JSON shape or JSON Schema
// "IsJsonSchema": false // Auto-detected if not specified
}
Recommended: Use Plain English Descriptions
Let the LLM automatically generate appropriate JSON structures:
{
"Name": "sensors",
"Description": "IoT sensor data with device ID, temperature, humidity, battery level, and last reading timestamp"
}
The LLM automatically generates an appropriate JSON schema from your description - no manual Shape required!
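For example, the sensors description above might yield a generated shape along these lines (illustrative; the actual schema depends on the model):
{
  "deviceId": "string",
  "temperature": 0.0,
  "humidity": 0.0,
  "batteryLevel": 0,
  "lastReadingTimestamp": "2025-01-01T00:00:00Z"
}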
Create and manage SignalR contexts at runtime using the management API:
POST /api/mock/contexts
Content-Type: application/json
{
"name": "crypto",
"description": "Cryptocurrency prices with symbol, USD price, 24h change percentage, and market cap"
}
Response:
{
"message": "Context 'crypto' registered successfully",
"context": {
"name": "crypto",
"description": "Cryptocurrency prices...",
"method": "GET",
"path": "/crypto",
"shape": "{...generated JSON schema...}",
"isJsonSchema": true
}
}
GET /api/mock/contexts
Response:
{
"contexts": [
{
"name": "weather",
"description": "Realistic weather data with temperature, conditions, humidity, and wind speed for a single location",
"method": "GET",
"path": "/weather/current",
"shape": "{...}"
},
{
"name": "crypto",
"description": "Cryptocurrency prices...",
"shape": "{...}"
}
],
"count": 2
}
Note: The list endpoint merges contexts configured in appsettings.json with any dynamically created contexts at runtime. Descriptions from appsettings are included even if those contexts have not yet been dynamically registered.
GET /api/mock/contexts/weather
Response:
{
"name": "weather",
"method": "GET",
"path": "/weather/current",
"shape": "{\"temperature\":0,\"condition\":\"string\"}",
"isJsonSchema": false
}
DELETE /api/mock/contexts/crypto
Response:
{
"message": "Context 'crypto' deleted successfully"
}
POST /api/mock/contexts/crypto/start
Response:
{
"message": "Context 'crypto' started successfully"
}
Starts generating data for a stopped context without affecting connected clients.
POST /api/mock/contexts/crypto/stop
Response:
{
"message": "Context 'crypto' stopped successfully"
}
Stops generating new data but keeps the context registered. Clients remain connected but receive no updates until started again.
<!DOCTYPE html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/npm/@microsoft/signalr@8.0.0/dist/browser/signalr.min.js"></script>
</head>
<body>
<h1>Live Weather Data</h1>
<div id="weather-data"></div>
<script>
const connection = new signalR.HubConnectionBuilder()
.withUrl("/hub/mock")
.withAutomaticReconnect()
.build();
connection.on("DataUpdate", (message) => {
if (message.context === "weather") {
const weatherDiv = document.getElementById("weather-data");
weatherDiv.innerHTML = `
<h2>Current Weather</h2>
<p>Temperature: ${message.data.temperature}°F</p>
<p>Condition: ${message.data.condition}</p>
<p>Humidity: ${message.data.humidity}%</p>
<p>Updated: ${new Date(message.timestamp).toLocaleTimeString()}</p>
`;
}
});
connection.start()
.then(() => {
console.log("Connected to SignalR hub");
return connection.invoke("SubscribeToContext", "weather");
})
.then(() => {
console.log("Subscribed to weather context");
})
.catch(err => console.error(err));
</script>
</body>
</html>
async function createDynamicContext() {
// Create the context
const response = await fetch("/api/mock/contexts", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
name: "stocks",
description: "Stock market data with ticker symbol, current price, daily change percentage, and trading volume"
})
});
const result = await response.json();
console.log("Context created:", result.context);
// Subscribe to receive data
await connection.invoke("SubscribeToContext", "stocks");
console.log("Now receiving live stock data!");
}
The MockLlmHub supports the following methods:
SubscribeToContext(string context)
Joins the context group and starts receiving DataUpdate events with generated data
UnsubscribeFromContext(string context)
Leaves the context group and stops receiving updates
Events received by client:
DataUpdate - Contains generated mock data
{
context: "weather", // Context name
method: "GET", // Simulated HTTP method
path: "/weather/current", // Simulated path
timestamp: 1699564820000, // Unix timestamp (ms)
data: { // Generated JSON matching the shape
temperature: 72,
condition: "Sunny",
humidity: 45,
windSpeed: 8
}
}
Subscribed - Confirmation of subscription
{
context: "weather",
message: "Subscribed to weather"
}
Unsubscribed - Confirmation of unsubscription
{
context: "weather",
message: "Unsubscribed from weather"
}
{
"MockLlmApi": {
"SignalRPushIntervalMs": 5000, // Interval between data pushes (ms)
"HubContexts": [...] // Array of pre-configured contexts
}
}
Hub contexts support both simple JSON shapes and full JSON Schema:
Simple Shape:
{
"Name": "users",
"Shape": "{\"id\":0,\"name\":\"string\",\"email\":\"string\"}"
}
JSON Schema:
{
"Name": "products",
"Shape": "{\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"number\"},\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"number\"}},\"required\":[\"id\",\"name\",\"price\"]}",
"IsJsonSchema": true
}
The system auto-detects JSON Schema by looking for $schema, type, or properties fields.
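A minimal sketch of that kind of detection (not the library's actual code) could look like this:
using System;
using System.Text.Json;

// Treat a shape string as JSON Schema if it declares $schema, type, or
// properties at the root - mirroring the detection described above.
static bool LooksLikeJsonSchema(string shape)
{
    using var doc = JsonDocument.Parse(shape);
    if (doc.RootElement.ValueKind != JsonValueKind.Object) return false;
    return doc.RootElement.TryGetProperty("$schema", out _)
        || doc.RootElement.TryGetProperty("type", out _)
        || doc.RootElement.TryGetProperty("properties", out _);
}

Console.WriteLine(LooksLikeJsonSchema("{\"id\":0,\"name\":\"string\"}"));                   // False - simple shape
Console.WriteLine(LooksLikeJsonSchema("{\"type\":\"object\",\"properties\":{\"id\":{}}}")); // True - JSON Schema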
graph TD
Client[SignalR Client] -->|Subscribe| Hub[MockLlmHub]
Hub -->|Join Group| Group[SignalR Group]
BG[Background Service] -->|Generate Data| LLM[Ollama LLM]
LLM -->|JSON Response| BG
BG -->|Push Data| Group
Group -->|DataUpdate Event| Client
API[Management API] -->|CRUD| Manager[DynamicHubContextManager]
Manager -->|Register/Unregister| BG
Components:
1. Dashboard Prototyping
// Subscribe to multiple data sources
await connection.invoke("SubscribeToContext", "sales");
await connection.invoke("SubscribeToContext", "traffic");
await connection.invoke("SubscribeToContext", "alerts");
// Now receiving live updates for all three!
2. IoT Simulation
{
"Name": "sensors",
"Description": "IoT temperature sensors with device ID, current temperature, battery percentage, and signal strength",
"Path": "/iot/sensors"
}
3. Financial Data
{
"Name": "trading",
"Description": "Real-time stock trades with timestamp, symbol, price, volume, and buyer/seller IDs",
"Path": "/trading/live"
}
4. Gaming Leaderboard
{
"Name": "leaderboard",
"Description": "Gaming leaderboard with player name, score, rank, level, and country",
"Path": "/game/leaderboard"
}
New in v1.1.0: Full lifecycle control over SignalR contexts with real-time status tracking!
Each context has the following properties:
Contexts can be in two states:
Active contexts push new data at the configured SignalRPushIntervalMs (default: 5 seconds)
Key Features:
Response Caching for SignalR (New in v1.1.0): Intelligent caching reduces LLM load and improves consistency!
Generated responses are cached per context, with the number of cached variants capped by MaxCachePerKey. This significantly reduces LLM load for high-frequency contexts, especially with multiple clients.
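As a rough sketch, the cache size option might sit alongside the other SignalR settings (the placement and example value are assumptions; MaxCachePerKey is the option named above):
{
  "MockLlmApi": {
    "SignalRPushIntervalMs": 5000,
    "MaxCachePerKey": 10 // Assumed example value; caps cached variants per context
  }
}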
Example Workflow:
# Create a context
POST /api/mock/contexts
{ "name": "metrics", "description": "Server metrics" }
# Stop data generation (clients remain connected)
POST /api/mock/contexts/metrics/stop
# Resume data generation
POST /api/mock/contexts/metrics/start
# Remove context entirely
DELETE /api/mock/contexts/metrics
New Feature: Automatically generate mock endpoints from OpenAPI/Swagger specifications! Point to any OpenAPI 3.0/Swagger 2.0 spec (URL or file) and the library will create mock endpoints for all defined operations.
dotnet add package mostlylucid.mockllmapi
Program.cs:
using mostlylucid.mockllmapi;
var builder = WebApplication.CreateBuilder(args);
// Add OpenAPI mock services
builder.Services.AddLLMockOpenApi(builder.Configuration);
var app = builder.Build();
app.UseRouting();
// Map OpenAPI-based mock endpoints
app.MapLLMockOpenApi();
app.Run();
appsettings.json:
{
"MockLlmApi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "llama3",
"Temperature": 1.2,
"OpenApiSpecs": [
{
"Name": "petstore",
"Source": "https://petstore3.swagger.io/api/v3/openapi.json",
"BasePath": "/petstore",
"EnableStreaming": false
},
{
"Name": "myapi",
"Source": "./specs/my-api.yaml",
"BasePath": "/api/v1"
}
]
}
}
That's it! All endpoints from your OpenAPI spec are now available as intelligent LLM-powered mocks.
Path parameters such as /users/{id} are handled automatically.

Each OpenAPI spec supports these configuration options:
| Property | Type | Description |
|---|---|---|
| Name | string | Unique identifier for this spec (required) |
| Source | string | URL or file path to OpenAPI spec (required) |
| BasePath | string | Override base path (default: uses spec's servers[0].url) |
| EnableStreaming | bool | Add /stream suffix for SSE streaming (default: false) |
| IncludeTags | string[] | Only generate endpoints with these tags |
| ExcludeTags | string[] | Skip endpoints with these tags |
| IncludePaths | string[] | Only generate these paths (supports wildcards like /users/*) |
| ExcludePaths | string[] | Skip these paths (supports wildcards) |
Filter by tags:
{
"Name": "petstore",
"Source": "https://petstore3.swagger.io/api/v3/openapi.json",
"IncludeTags": ["pet", "store"]
}
Filter by paths:
{
"Name": "api",
"Source": "./specs/api.yaml",
"IncludePaths": ["/users/*", "/products/*"],
"ExcludePaths": ["/admin/*"]
}
Enable streaming:
{
"Name": "api",
"Source": "./specs/api.yaml",
"BasePath": "/api",
"EnableStreaming": true
}
Given this OpenAPI spec configuration:
{
"Name": "petstore",
"Source": "./specs/petstore.json",
"BasePath": "/petstore"
}
And a spec defining GET /pet/{petId} that returns a Pet object, you can test:
# Get a pet by ID
curl http://localhost:5116/petstore/pet/123
# Response (generated by LLM based on Pet schema):
{
"id": 123,
"name": "Fluffy",
"category": {
"id": 1,
"name": "Cats"
},
"photoUrls": ["https://example.com/fluffy.jpg"],
"tags": [
{"id": 1, "name": "cute"},
{"id": 2, "name": "playful"}
],
"status": "available"
}
OpenAPI mocks work independently of REST/GraphQL/SignalR:
// Just OpenAPI mocks
builder.Services.AddLLMockOpenApi(builder.Configuration);
app.MapLLMockOpenApi();
// Or combine with other features
builder.Services.AddLLMockRest(builder.Configuration);
builder.Services.AddLLMockOpenApi(builder.Configuration);
app.MapLLMockRest("/api/mock");
app.MapLLMockOpenApi();
Path parameters are resolved automatically for patterns like /users/{id}, /posts/{postId}/comments/{commentId}, etc.

The included management.http file contains comprehensive examples for all management endpoints:
Spec Management:
Endpoint Testing:
Advanced Workflows:
Example from management.http:
### Load Petstore API
POST http://localhost:5116/api/openapi/specs
Content-Type: application/json
{
"name": "petstore",
"source": "https://petstore3.swagger.io/api/v3/openapi.json",
"basePath": "/petstore"
}
### Test an endpoint
POST http://localhost:5116/api/openapi/test
Content-Type: application/json
{
"specName": "petstore",
"path": "/pet/123",
"method": "GET"
}
See the complete management.http file for 20+ ready-to-use examples.
The package includes complete interactive demo pages featuring full context management:
SignalR Demo (/) — Real-Time Data Streaming with Management UI
New in v1.1.0: Enhanced 3-column layout with full context lifecycle management!
Features:
Quick-Start Examples: One-click buttons for 5 pre-configured scenarios:
Perfect for: Dashboards, live monitoring, IoT simulations, real-time feeds, prototyping
SSE Streaming Demo (/Streaming) — Progressive JSON Generation
New in v1.1.0: Quick-start example buttons for instant streaming!
Features:
Quick-Start Examples: One-click buttons for 4 streaming scenarios:
Perfect for: Observing LLM generation, debugging shapes, understanding streaming behavior, testing SSE
OpenAPI Demo (/OpenApi) — Dynamic Spec Loading & Testing
New Feature: Interactive OpenAPI specification management with real-time updates!
Features:
Quick-Start Examples:
Perfect for: API prototyping, frontend development, contract testing, spec validation, demos
Run the demos:
cd LLMApi
dotnet run
Navigate to:
http://localhost:5116 - SignalR real-time data streaming with management UI
http://localhost:5116/Streaming - SSE progressive generation
http://localhost:5116/OpenApi - OpenAPI spec manager with dynamic loading

All demos include:
You can optionally have the middleware echo back the JSON shape/schema that was used to generate the mock response.
Configuration:
Examples:
{
"mostlylucid.mockllmapi": {
"IncludeShapeInResponse": true
}
}
curl "http://localhost:5000/api/mock/users?shape=%7B%22id%22%3A0%2C%22name%22%3A%22string%22%7D&includeSchema=true"
Response includes header:
X-Response-Schema: {"id":0,"name":"string"}
...
data: {"content":"{full json}","done":true,"schema":{"id":0,"name":"string"}}
Notes:
Use cases:
Override the default prompts with your own:
{
"mostlylucid.mockllmapi": {
"CustomPromptTemplate": "Generate mock data for {method} {path}. Body: {body}. Use seed: {randomSeed}"
}
}
Available placeholders:
{method} - HTTP method (GET, POST, etc.)
{path} - Full request path with query string
{body} - Request body
{randomSeed} - Generated random seed (GUID)
{timestamp} - Unix timestamp
{shape} - Shape specification (if provided)

Test your client's error handling with comprehensive error simulation capabilities.
Four ways to configure errors (in precedence order):
IMPORTANT: Query parameter values MUST be URL-encoded. Spaces become %20, & becomes %26, : becomes %3A, etc.
# Properly encoded (spaces as %20)
curl "http://localhost:5000/api/mock/users?error=404&errorMessage=Not%20found&errorDetails=User%20does%20not%20exist"
# More complex example with special characters
# Decoded: "Invalid input: email & phone required"
curl "http://localhost:5000/api/mock/users?error=400&errorMessage=Invalid%20input%3A%20email%20%26%20phone%20required"
curl -H "X-Error-Code: 401" \
-H "X-Error-Message: Unauthorized" \
-H "X-Error-Details: Token expired" \
http://localhost:5000/api/mock/users
Shape-based errors ($error property):
# Simple: just status code
curl "http://localhost:5000/api/mock/users?shape=%7B%22%24error%22%3A404%7D"
# Complex: with message and details
curl "http://localhost:5000/api/mock/users?shape=%7B%22%24error%22%3A%7B%22code%22%3A422%2C%22message%22%3A%22Validation%20failed%22%2C%22details%22%3A%22Email%20invalid%22%7D%7D"
Body-based errors (error property):
curl -X POST http://localhost:5000/api/mock/users \
-H "Content-Type: application/json" \
-d '{
"error": {
"code": 409,
"message": "Conflict",
"details": "User already exists"
}
}'
Error Response Formats:
Regular/Streaming endpoints:
{
"error": {
"code": 404,
"message": "Not Found",
"details": "Optional additional context"
}
}
GraphQL endpoint:
{
"data": null,
"errors": [
{
"message": "Not Found",
"extensions": {
"code": 404,
"details": "Optional additional context"
}
}
]
}
SignalR Error Simulation:
Configure errors in SignalR contexts for testing real-time error handling:
{
"HubContexts": [
{
"Name": "errors",
"Description": "Error simulation stream",
"ErrorConfig": {
"Code": 500,
"Message": "Server error",
"Details": "Database connection lost"
}
}
]
}
Or dynamically via the management API:
curl -X POST http://localhost:5000/api/management/contexts \
-H "Content-Type: application/json" \
-d '{
"name": "errors",
"description": "Test errors",
"error": 503,
"errorMessage": "Service unavailable",
"errorDetails": "Maintenance in progress"
}'
Supported HTTP Status Codes:
The package includes default messages for common HTTP status codes:
Custom messages and details override the defaults.
Use Cases:
See LLMApi/LLMApi.http for comprehensive examples of all error simulation methods.
Mount multiple mock APIs with different configurations:
// Development data with high randomness
builder.Services.AddLLMockApi("Dev", options =>
{
options.Temperature = 1.5;
options.ModelName = "llama3";
});
// Stable test data
builder.Services.AddLLMockApi("Test", options =>
{
options.Temperature = 0.3;
options.ModelName = "llama3";
});
app.MapLLMockApi("/api/dev");
app.MapLLMockApi("/api/test");
Three ways to control response structure:
Header: X-Response-Shape: {"field":"type"}
Query parameter: ?shape=%7B%22field%22%3A%22type%22%7D (URL-encoded JSON)
Body field: {"shape": {...}, "actualData": ...}

You can instruct the middleware to pre-generate and cache multiple response variants for a specific request/shape by adding a special field inside the shape object: "$cache": N.
Examples
Header shape: X-Response-Shape: {"$cache":3,"orderId":"string","status":"string","items":[{"sku":"string","qty":0}]}
Body shape: { "shape": { "$cache": 5, "invoiceId": "string", "customer": { "id": "string", "name": "string" }, "items": [ { "sku": "string", "qty": 0, "price": 0.0 } ], "total": 0.0 } }
Query param (URL-encoded): ?shape=%7B%22%24cache%22%3A2%2C%22users%22%3A%5B%7B%22id%22%3A0%2C%22name%22%3A%22string%22%7D%5D%7D
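A quick way to see this in action (an illustrative shell loop; the exact reuse behaviour depends on your configuration):
# With "$cache":3 in the shape, repeated identical requests should be served
# from the pre-generated variants rather than fresh LLM calls
for i in 1 2 3; do
  curl -s -H 'X-Response-Shape: {"$cache":3,"orderId":"string","status":"string"}' \
       "http://localhost:5000/api/mock/orders"
  echo
done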
Configuration
Notes
Use the included LLMApi.http file with:
The project includes comprehensive unit tests:
# Run all tests
dotnet test
# Run with detailed output
dotnet test --verbosity detailed
Test Coverage:
graph LR
Client[Client] -->|HTTP Request| API[LLMApi<br/>Minimal API]
API -->|Chat Completion| Ollama[Ollama API<br/>localhost:11434]
Ollama -->|Inference| Model[llama3 Model]
Model -->|Response| Ollama
Ollama -->|JSON/Stream| API
API -->|JSON/SSE| Client
API -.->|uses| Helper[AutoApiHelper]
style API fill:#4CAF50
style Helper fill:#2196F3
style Model fill:#FF9800
sequenceDiagram
participant C as Client
participant A as LLMApi
participant H as AutoApiHelper
participant O as Ollama
participant M as llama3
C->>A: GET/POST/PUT/DELETE /api/auto/**
A->>H: Extract context (method, path, body, shape)
H->>H: Generate random seed + timestamp
H->>H: Build prompt with randomness
H-->>A: Prompt + temperature=1.2
A->>O: POST /v1/chat/completions
O->>M: Run inference
M-->>O: Generated JSON
O-->>A: Response
A-->>C: JSON Response
flowchart TD
Start[Request Arrives] --> CheckQuery{Shape in<br/>Query Param?}
CheckQuery -->|Yes| UseQuery[Use Query Shape]
CheckQuery -->|No| CheckHeader{Shape in<br/>Header?}
CheckHeader -->|Yes| UseHeader[Use Header Shape]
CheckHeader -->|No| CheckBody{Shape in<br/>Body Field?}
CheckBody -->|Yes| UseBody[Use Body Shape]
CheckBody -->|No| NoShape[No Shape Constraint]
UseQuery --> BuildPrompt[Build Prompt]
UseHeader --> BuildPrompt
UseBody --> BuildPrompt
NoShape --> BuildPrompt
BuildPrompt --> AddRandom[Add Random Seed<br/>+ Timestamp]
AddRandom --> SendLLM[Send to LLM]
style UseQuery fill:#4CAF50
style UseHeader fill:#4CAF50
style UseBody fill:#4CAF50
style NoShape fill:#FFC107
Projects:
mostlylucid.mockllmapi: NuGet package library
LLMApi: Demo application
LLMApi.Tests: xUnit test suite (196 tests)

cd mostlylucid.mockllmapi
dotnet pack -c Release
Package will be in bin/Release/mostlylucid.mockllmapi.{version}.nupkg
This is a sample project demonstrating LLM-powered mock APIs. Feel free to fork and customize!
This is free and unencumbered software released into the public domain. See LICENSE for details or visit unlicense.org.
© 2025 Scott Galloway — Unlicense — All content and source code on this site is free to use, copy, modify, and sell.