One of my constant obsessions is sample data. It's an often annoying aspect of developing systems: you hit the catch-22 of needing data to properly test functionality while still building the system that would let you create said data. Combined with my current love of 'AI assisted' coding of random ideas, this led to the creation of a little LLM-enabled test data generator, along with a NuGet package providing middleware that can generate simulated API responses using an LLM:
You can find the project on GitHub here; it's all public domain etc...
As with a lot of "AI Assisted" coding ideas, it started with a thought about simulating output using LLMs. I was working on another project (LucidForums, a hilariously dysfunctional self-populating LLM-based forum experiment), and LLMs are really good (if a little slow) at generating sample data, so what if I could use them to simulate any API?
This is what I came up with. I'll add more detail on the thinking as I add more functionality. It's really neat, really works, and is faster than I'd feared. However, I have an A4000 with 16GB of VRAM; you could select smaller edge models, but the quality would likely vary massively.
Future additions will likely include caching in case you want faster perf.
Here's the readme for the package (fetched automatically).
What it does: A production-ready ASP.NET Core mocking platform for generating realistic mock API responses using LLMs.
Why you'd use it: Add intelligent mock endpoints to any project with just 2 lines of code—no databases, no hardcoded fixtures, no maintenance.
Companion Package: mostlylucid.mockllmapi.Testing - Testing utilities with fluent HttpClient integration
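To give a feel for those "2 lines of code" up front, here they are in miniature (this is just a preview of the Quick Start shown in full later in the README):

```csharp
// Preview of the Quick Start below: register the services, then map the endpoints.
builder.Services.AddLLMockApi(builder.Configuration);   // add LLMock API services
app.MapLLMockApi("/api/mock");                           // all of /api/mock/** now returns mock data
```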
This README is comprehensive by design. Choose your path:
Latest: AutoShape (Shape Memory) - Automatic JSON structure consistency across endpoint calls
v2.5.0: Security improvements, AOT/trimming support by removing serialization dependencies
v2.3.0: Full content type support - form bodies, file uploads, comprehensive testing (405 tests)
v2.2.0: Pluggable tools for API integration, pre-configured REST APIs, automatic context memory expiration
v2.1.0: Rate limiting, batching, n-completions with multiple execution strategies
Focus: Security improvements and AOT/trimming support by removing serialization dependencies. Fully backward compatible with v2.4.0.
1. AOT/Trimming Support - Removed all reflection-based serialization dependencies, including System.Text.Json reflection usage.
2. Enhanced Security - Comprehensive security improvements:
3. Performance Optimizations - Improved runtime performance:
4. .NET 10 Compatibility - Full support for .NET 10 preview builds:
5. Testing Improvements - Enhanced test coverage:
See TEST_SUMMARY.md and IMPLEMENTATION_SUMMARY.md for complete details
Focus: Complete content type support for all common HTTP request formats. Fully backward compatible with v2.2.0.
1. Form Body Support - Full support for application/x-www-form-urlencoded content type. Single and multiple values, automatic JSON conversion, manual construction for .NET 10 compatibility. Perfect for testing HTML forms and traditional web apps.
2. File Upload Support - Full support for multipart/form-data including file uploads. Memory-safe streaming (8KB buffer), metadata extraction (filename, size, content type), mixed form fields and files. File content is discarded after streaming to avoid memory bloat.
3. Arbitrary Path Lengths - Support for deep path nesting via {**path} catch-all routing. Tested with 9-segment deep paths and complex query strings. No practical limit on path depth (up to ASP.NET Core defaults).
4. Comprehensive Test Suite - 228 tests (37 new) covering all features. Form body parsing (12 tests), JSON handling (25 tests), integration tests for full HTTP workflows. 100% pass rate, zero regressions.
5. .NET 10 Compatibility - Manual JSON construction throughout to avoid reflection-based serialization issues. All features tested and working on .NET 10 preview builds.
See TEST_SUMMARY.md and IMPLEMENTATION_SUMMARY.md for complete details
Focus: Pluggable tools for API integration, pre-configured REST APIs, and intelligent context memory. Fully backward compatible with v2.1.0.
1. Pluggable Tools & Actions System - Call external REST APIs or chain mock endpoints to create realistic workflows and decision trees. MCP-compatible architecture ready for LLM-driven tool selection. Full docs →
2. Pre-Configured REST APIs - Define complete API configurations once, call by name. Shape or OpenAPI spec reference, shared context management, tool integration. See appsettings.Full.json for 8 complete examples.
3. Dynamic Context Memory Management - Contexts now expire after 15 minutes of inactivity (configurable 5-1440 minutes). No memory leaks, automatic cleanup, smart touch on access.
4. Intelligent Shared Data Extraction - Automatically extracts ALL fields from responses at any nesting level. Nested objects, array tracking, first item data, custom fields—all tracked automatically.
5. Enhanced Documentation - New comprehensive guides for Tools & Actions, Rate Limiting, and API Contexts.
One engine, multiple protocols, shared infrastructure. All features share the same generation, context, and control systems—giving you consistent behavior across REST, GraphQL, SSE, SignalR, OpenAPI, and gRPC.
graph LR
Client[Client] -->|HTTP Request| API[LLMApi<br/>Minimal API]
API -->|Chat Completion| Ollama[Ollama API<br/>localhost:11434]
Ollama -->|Inference| Model[LLM Model]
Model -->|Response| Ollama
Ollama -->|JSON/Stream| API
API -->|JSON/SSE| Client
API -.->|uses| Helper[AutoApiHelper]
style API fill:#4CAF50
style Helper fill:#2196F3
style Model fill:#FF9800
Key Components:
See detailed architecture diagrams below for request flow and shape control.
This package provides six independent features - use any combination you need (see Modular Examples for protocol-specific setups):
AddLLMockApi() + MapLLMockApi("/api/mock") = instant mock API
JSON bodies (application/json) - Standard JSON request bodies
Form bodies (application/x-www-form-urlencoded) - HTML form submissions
File uploads (multipart/form-data) - File uploads with metadata extraction
Arbitrary path depth (/api/mock/v1/api/products/electronics/computers/laptops/gaming/...)
GraphQL at /api/mock/graphql with standard GraphQL queries (responses with data and errors fields)
mostlylucid.mockllmapi.Testing for easy HttpClient integration in tests - See Testing Section

What this is:
What this is NOT:
When to use it:
When NOT to use it:
For detailed guides with architecture diagrams, use cases, and implementation details:
Docker Deployment Guide - Complete Docker setup and deployment
Backend API Reference - Complete management endpoint documentation
Multiple LLM Backends Guide - Multiple provider support
API Contexts Guide - NEW!
Rate Limiting & Batching Guide - NEW in v2.1.0!
Pluggable Tools & Actions Guide - NEW in v2.2.0!
gRPC Support Guide - NEW in v1.7.0!
Test Summary - NEW in v2.3.0!
Implementation Summary - NEW in v2.3.0!
Main Package:
dotnet add package mostlylucid.mockllmapi
Testing Utilities (Optional):
dotnet add package mostlylucid.mockllmapi.Testing
Provides fluent API for easy HttpClient configuration in tests. See Testing Section for details.
The fastest way to get started - no .NET or Ollama installation required!
# Clone the repository
git clone https://github.com/scottgal/LLMApi.git
cd LLMApi
# Start everything with Docker Compose (includes Ollama + llm-model)
docker compose up -d
# Wait for model download (first run only, ~4.7GB)
docker compose logs -f ollama
# Test the API
curl "http://localhost:5116/api/mock/users?shape={\"id\":0,\"name\":\"\",\"email\":\"\"}"
That's it! The API is running at http://localhost:5116 with Ollama backend.
See the Complete Docker Guide for:
If not using Docker:
ollama pull ministral-3:3b
This package was developed and tested with ministral-3:3b (3B parameters), which provides excellent results for all features with very fast performance. However, it works with any Ollama-compatible model:
| Model | Size | Speed | Quality | Context | Best For |
|---|---|---|---|---|---|
| ministral-3:3b (default) | 3B | V.Fast | Excellent | 256K | KILLER for JSON! Fast, accurate, huge context |
| gemma3:4b | 4B | Fast | Good | 4K | Alternative for lower-end machines |
| llama3 | 8B | Medium | Very Good | 8K | General use, production |
| mistral-nemo | 12B | Slower | Excellent | 128K | High quality, massive datasets |
| mistral:7b | 7B | Medium | Very Good | 8K | Alternative to llm-model |
| phi3 | 3.8B | Fast | Good | 4K | Quick prototyping |
| tinyllama | 1.1B | Very Fast | Basic | 2K | Ultra resource-constrained |
ministral-3:3b is KILLER for JSON generation - ultra-fast, highly accurate, large context:
ollama pull ministral-3:3b
{
"MockLlmApi": {
"ModelName": "ministral-3:3b",
"Temperature": 1.2,
"MaxInputTokens": 8192
}
}
Why it's great:
For production-like testing with complex schemas:
ollama pull mistral-nemo
{
"MockLlmApi": {
"ModelName": "mistral-nemo",
"Temperature": 1.2,
"MaxInputTokens": 8000
}
}
Why it's great:
For ministral-3:3b (Recommended - default):
{
"ModelName": "ministral-3:3b",
"Temperature": 1.2,
"MaxContextWindow": 262144 // 256K context window
}
For gemma3:4b or llama3:
{
"ModelName": "llm-model", // or "mistral:7b"
"Temperature": 1.2,
"MaxContextWindow": 8192 // Set to model's context window size
}
For mistral-nemo (High-quality production):
{
"ModelName": "mistral-nemo",
"Temperature": 1.2,
"MaxContextWindow": 32768, // Or 128000 if configured in Ollama
"TimeoutSeconds": 120 // Longer timeout for large contexts
}
Note: Mistral-nemo requires Ollama context configuration for 128K contexts.
Where to find MaxContextWindow:
# Check model info
ollama show {model-name}
# Look for "context_length" or "num_ctx" parameter
# Example output: "context_length": 8192
For smaller models (phi3, tinyllama):
{
"ModelName": "tinyllama",
"Temperature": 0.7 // Lower temperature for stability
}
Why Temperature Matters:
# RECOMMENDED for development (fastest, most accurate JSON)
ollama pull ministral-3:3b
# Alternative options
ollama pull gemma3:4b # Good for low-end machines
ollama pull llama3 # General purpose, good balance
ollama pull mistral-nemo # Highest quality (requires more RAM)
# Alternative options
ollama pull mistral:7b
ollama pull phi3
Important Limitations:
Smaller models (tinyllama, phi3) work, but you may need to increase MaxRetryAttempts to 5 or more for reliable JSON.

Program.cs:
using mostlylucid.mockllmapi;
var builder = WebApplication.CreateBuilder(args);
// Add LLMock API services (all protocols: REST, GraphQL, SSE)
builder.Services.AddLLMockApi(builder.Configuration);
var app = builder.Build();
// Map mock endpoints at /api/mock (includes REST, GraphQL, SSE)
app.MapLLMockApi("/api/mock");
app.Run();
appsettings.json:
{
"mostlylucid.mockllmapi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "ministral-3:3b",
"Temperature": 1.2
}
}
That's it! Now all requests to /api/mock/** return intelligent mock data.
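As a quick sanity check from .NET, here is a minimal client-side sketch; the port (5116, from the demo app) and the shape are assumptions you should adjust to your own setup:

```csharp
using System.Net.Http;

// Minimal sketch: call the mock endpoint from any .NET client.
// Assumes the app above is listening on http://localhost:5116 - adjust as needed.
using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5116") };

var shape = """{"id":0,"name":"","email":""}""";
var json = await client.GetStringAsync($"/api/mock/users?shape={Uri.EscapeDataString(shape)}");
Console.WriteLine(json); // LLM-generated users matching the requested shape
```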
Automatically maintain consistent JSON structures across requests to the same endpoint.
AutoShape remembers the JSON structure from the first response to an endpoint and automatically applies it to all subsequent requests to that same endpoint (with different IDs). This ensures:
# First request - generates free-form response
GET /api/mock/users/123
Response: {"id": 123, "name": "Alice", "email": "alice@example.com", "role": "admin"}
# Second request - automatically uses same structure
GET /api/mock/users/456
Response: {"id": 456, "name": "Bob", "email": "bob@example.com", "role": "user"}
# All subsequent requests use the same schema!
GET /api/mock/users/789
Response: {"id": 789, "name": "Carol", "email": "carol@example.com", "role": "manager"}
Enabled by default in appsettings.json:
{
"MockLlmApi": {
"EnableAutoShape": true, // Default: true
"ShapeExpirationMinutes": 15 // Default: 15
}
}
Disable for specific request:
GET /api/mock/users/special?autoshape=false
Renew a bad shape:
# If first response was incomplete, renew it:
GET /api/mock/users/1?renewshape=true
# New response replaces old shape template
All these requests share the same shape (normalized to /api/mock/users/{id}):
GET /api/mock/users/123 # Numeric ID
GET /api/mock/users/abc-456 # Alphanumeric ID
GET /api/mock/users/550e8400-e29b... # UUID
Use ?renewshape=true at any time to force a fresh shape for the endpoint.

Perfect for:
Not needed when:
Shapes not being applied?
Check that EnableAutoShape: true is set in config.
Need to change a shape?
GET /api/mock/users/1?renewshape=true
Clear all shapes programmatically:
// Inject AutoShapeManager
autoShapeManager.ClearAllShapes();
For complete documentation, see CLAUDE.md AutoShape Section.
Test Coverage: 39 tests covering all autoshape functionality ✅
📘 Complete Configuration Reference: See Configuration Reference Guide for all options 📄 Full Example: See appsettings.Full.json - demonstrates every configuration option
{
"mostlylucid.mockllmapi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "ministral-3:3b",
"Temperature": 1.2,
"TimeoutSeconds": 30,
"EnableVerboseLogging": false,
"CustomPromptTemplate": null,
// Token Management (NEW in v1.5.0)
"MaxInputTokens": 8192, // Ministral has 256K context
// Resilience Policies (enabled by default)
"EnableRetryPolicy": true,
"MaxRetryAttempts": 3,
"RetryBaseDelaySeconds": 1.0,
"EnableCircuitBreaker": true,
"CircuitBreakerFailureThreshold": 5,
"CircuitBreakerDurationSeconds": 30
}
}
Model-Specific Token Limits: See LLMApi/appsettings.json for configuration examples for different models (Llama 3, TinyLlama, Mistral, etc.). Each model has different context window sizes - adjust MaxInputTokens accordingly.
API Contexts: For detailed information about using contexts to maintain consistency across requests, see the API Contexts Guide.
New in v1.2.0: Built-in Polly resilience policies protect your application from LLM service failures!
The package includes two resilience patterns enabled by default:
Exponential Backoff Retry
Circuit Breaker
Configuration:
{
"mostlylucid.mockllmapi": {
// Enable/disable retry policy
"EnableRetryPolicy": true,
"MaxRetryAttempts": 3,
"RetryBaseDelaySeconds": 1.0, // Actual delays: 1s, 2s, 4s (exponential)
// Enable/disable circuit breaker
"EnableCircuitBreaker": true,
"CircuitBreakerFailureThreshold": 5, // Open after 5 consecutive failures
"CircuitBreakerDurationSeconds": 30 // Stay open for 30 seconds
}
}
Logging:
The resilience policies log all retry attempts and circuit breaker state changes:
[Warning] LLM request failed (attempt 2/4). Retrying in 2000ms. Error: Connection refused
[Error] Circuit breaker OPENED after 5 consecutive failures. All LLM requests will be rejected for 30 seconds
[Information] Circuit breaker CLOSED. LLM requests will be attempted normally
When to Adjust:
Slow or flaky LLM backend: increase MaxRetryAttempts or RetryBaseDelaySeconds
Control how long the circuit stays open: adjust CircuitBreakerDurationSeconds
Control how many failures open the circuit: adjust CircuitBreakerFailureThreshold
Disable resilience entirely: set EnableRetryPolicy and EnableCircuitBreaker to false

builder.Services.AddLLMockApi(options =>
{
options.BaseUrl = "http://localhost:11434/v1/";
options.ModelName = "mixtral";
options.Temperature = 1.5;
options.TimeoutSeconds = 60;
});
// Default: /api/mock/** and /api/mock/stream/**
app.MapLLMockApi("/api/mock");
// Custom pattern
app.MapLLMockApi("/demo");
// Creates: /demo/** and /demo/stream/**
// Without streaming
app.MapLLMockApi("/api/mock", includeStreaming: false);
curl http://localhost:5000/api/mock/users?limit=5
Returns realistic user data generated by the LLM.
HTML Form Submission:
curl -X POST http://localhost:5000/api/mock/users/register \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "username=john_doe&email=john@example.com&age=30"
The form data is automatically converted to JSON and passed to the LLM for realistic response generation.
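If you are driving the same endpoint from .NET rather than curl, a minimal sketch looks like this (host/port and path are taken from the example above and are assumptions for your environment):

```csharp
// Minimal sketch: posting a form body from .NET.
// The middleware converts the form fields to JSON before prompting the LLM.
using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

var form = new FormUrlEncodedContent(new Dictionary<string, string>
{
    ["username"] = "john_doe",
    ["email"] = "john@example.com",
    ["age"] = "30"
});

var response = await client.PostAsync("/api/mock/users/register", form);
Console.WriteLine(await response.Content.ReadAsStringAsync());
```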
Form with Multiple Values:
curl -X POST http://localhost:5000/api/mock/posts \
-H "Content-Type: application/x-www-form-urlencoded" \
-d "title=My Post&tags=tech&tags=programming&tags=llm"
Multiple values for the same field name are converted to arrays automatically.
Single File Upload:
curl -X POST http://localhost:5000/api/mock/photos/upload \
-F "title=My Photo" \
-F "description=Beautiful sunset" \
-F "image=@photo.jpg"
Multiple Files with Form Data:
curl -X POST http://localhost:5000/api/mock/documents/bulk \
-F "title=Multiple uploads" \
-F "file1=@document1.pdf" \
-F "file2=@document2.pdf"
How File Uploads Work:
Example Response:
{
"message": "Files uploaded successfully",
"uploads": [
{
"fieldName": "image",
"fileName": "photo.jpg",
"contentType": "image/jpeg",
"size": 524288,
"processed": true
}
]
}
Complex Nested Paths:
curl "http://localhost:5000/api/mock/v1/api/products/electronics/computers/laptops/gaming/high-end/2024/details?brand=Dell&model=XPS15"
The LLM incorporates all path segments and query parameters into realistic response generation. No practical limit on path depth.
Use contexts to maintain consistency across multiple related requests:
# Step 1: Create a user
curl "http://localhost:5000/api/mock/users?context=checkout-flow"
# Step 2: Create order for that user (LLM references user from context)
curl "http://localhost:5000/api/mock/orders?context=checkout-flow"
# Step 3: Add payment (LLM references both user and order)
curl "http://localhost:5000/api/mock/payments?context=checkout-flow"
Each request in the same context sees the previous requests, ensuring consistent IDs, names, and data relationships. Perfect for multi-step workflows! See the API Contexts Guide for complete examples.
curl -X POST http://localhost:5000/api/mock/orders \
-H "X-Response-Shape: {\"orderId\":\"string\",\"total\":0.0,\"items\":[{\"sku\":\"string\",\"qty\":0}]}" \
-H "Content-Type: application/json" \
-d '{"customerId":"cus_123"}'
LLM generates data matching your exact shape specification.
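The same shape header can be set from .NET; a minimal sketch with the URL, body, and shape copied from the curl example above (the host/port is an assumption):

```csharp
using System.Text;

// Minimal sketch: constraining the response structure via the X-Response-Shape header.
using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

var request = new HttpRequestMessage(HttpMethod.Post, "/api/mock/orders")
{
    Content = new StringContent("""{"customerId":"cus_123"}""", Encoding.UTF8, "application/json")
};
request.Headers.Add("X-Response-Shape",
    """{"orderId":"string","total":0.0,"items":[{"sku":"string","qty":0}]}""");

var response = await client.SendAsync(request);
Console.WriteLine(await response.Content.ReadAsStringAsync()); // matches the requested shape
```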
SSE streaming is part of the REST API - just enable it when mapping endpoints:
// SSE streaming is automatically available at /api/mock/stream/**
app.MapLLMockApi("/api/mock", includeStreaming: true);
Usage:
curl -N http://localhost:5000/api/mock/stream/products?category=electronics \
-H "Accept: text/event-stream"
Returns Server-Sent Events as JSON is generated token-by-token:
data: {"chunk":"{","done":false}
data: {"chunk":"\"id\"","done":false}
data: {"chunk":":","done":false}
data: {"chunk":"123","done":false}
...
data: {"content":"{\"id\":123,\"name\":\"Product\"}","done":true,"schema":"{...}"}
JavaScript Example:
const eventSource = new EventSource('/api/mock/stream/users?limit=5');
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.done) {
console.log('Complete:', data.content);
eventSource.close();
} else {
console.log('Chunk:', data.chunk);
}
};
With Shape Control:
curl -N "http://localhost:5000/api/mock/stream/orders?shape=%7B%22id%22%3A0%2C%22items%22%3A%5B%5D%7D"
The streaming endpoint supports all the same features as regular endpoints:
New in v1.2.0: Native GraphQL support with query-driven mock data generation!
LLMock API includes built-in GraphQL endpoint support. Unlike REST endpoints where you specify shapes separately, GraphQL queries naturally define the exact structure they expect - the query IS the shape.
The GraphQL endpoint is automatically available when you map the LLMock API:
app.MapLLMockApi("/api/mock", includeGraphQL: true); // GraphQL enabled by default
This creates a GraphQL endpoint at /api/mock/graphql.
Simple Query:
curl -X POST http://localhost:5000/api/mock/graphql \
-H "Content-Type: application/json" \
-d '{"query": "{ users { id name email role } }"}'
Response:
{
"data": {
"users": [
{ "id": 1, "name": "Alice Johnson", "email": "alice@example.com", "role": "admin" },
{ "id": 2, "name": "Bob Smith", "email": "bob@example.com", "role": "user" }
]
}
}
curl -X POST http://localhost:5000/api/mock/graphql \
-H "Content-Type: application/json" \
-d '{
"query": "query GetUser($userId: ID!) { user(id: $userId) { id name email } }",
"variables": { "userId": "12345" },
"operationName": "GetUser"
}'
GraphQL's power shines with nested data:
{
company {
name
employees {
id
firstName
lastName
department {
name
location
}
projects {
id
title
status
milestones {
title
dueDate
completed
}
}
}
}
}
The LLM generates realistic data matching your exact query structure - including all nested relationships.
async function fetchGraphQL(query, variables = {}) {
const response = await fetch('/api/mock/graphql', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ query, variables })
});
const result = await response.json();
if (result.errors) {
console.error('GraphQL errors:', result.errors);
}
return result.data;
}
// Usage
const data = await fetchGraphQL(`
query GetProducts($category: String) {
products(category: $category) {
id
name
price
inStock
reviews {
rating
comment
}
}
}
`, { category: 'electronics' });
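From .NET, the same query is just a JSON POST; a minimal sketch using System.Net.Http.Json (host/port assumed from the earlier examples):

```csharp
using System.Net.Http.Json;

// Minimal sketch: posting a GraphQL query with variables from .NET.
using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

var response = await client.PostAsJsonAsync("/api/mock/graphql", new
{
    query = "query GetProducts($category: String) { products(category: $category) { id name price } }",
    variables = new { category = "electronics" }
});

Console.WriteLine(await response.Content.ReadAsStringAsync()); // standard { "data": ... } envelope
```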
GraphQL errors are returned in standard format:
{
"data": null,
"errors": [
{
"message": "Invalid GraphQL request format",
"extensions": {
"code": "INTERNAL_SERVER_ERROR"
}
}
]
}
{ "data": {...} }Use the included LLMApi.http file which contains 5 ready-to-use GraphQL examples:
See the GraphQL examples in LLMApi.http for complete working examples.
GraphQL responses can become large with deeply nested queries. To prevent JSON truncation errors, configure the GraphQLMaxTokens option:
{
"MockLlmApi": {
"GraphQLMaxTokens": 300 // Recommended: 200-300 for reliability
}
}
Token Limit Guidelines:
| Model | Recommended Max Tokens | Notes |
|---|---|---|
| llm-model | 300-500 | Best balance of speed and complexity |
| mistral:7b | 300-500 | Handles nested structures well |
| phi3 | 200-300 | Keep queries simple |
| tinyllama | 150-200 | Use shallow queries only |
Why Lower Is Better:
For Complex Nested Queries:
Example configuration for complex queries:
{
"MockLlmApi": {
"ModelName": "llm-model", // Larger model
"GraphQLMaxTokens": 800, // Higher limit for nested data
"Temperature": 1.2
}
}
LLMock API includes optional SignalR support for continuous, real-time mock data generation. This is perfect for:
SignalR works independently - you don't need the REST API endpoints to use SignalR streaming.
1. Minimal SignalR-only setup:
using mostlylucid.mockllmapi;
var builder = WebApplication.CreateBuilder(args);
// Add SignalR services (no REST API needed!)
builder.Services.AddLLMockSignalR(builder.Configuration);
var app = builder.Build();
app.UseRouting();
// Map SignalR hub and management endpoints
app.MapLLMockSignalR("/hub/mock", "/api/mock");
app.Run();
Optional: Add REST API too
If you also want the REST API endpoints, add these lines:
// Add core LLMock API services (optional)
builder.Services.AddLLMockApi(builder.Configuration);
// Map REST API endpoints (optional)
app.MapLLMockApi("/api/mock", includeStreaming: true);
2. Configure in appsettings.json:
{
"MockLlmApi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "ministral-3:3b",
"Temperature": 1.2,
"SignalRPushIntervalMs": 5000,
"HubContexts": [
{
"Name": "weather",
"Description": "Weather data with temperature, condition, humidity, and wind speed"
},
{
"Name": "stocks",
"Description": "Stock market data with symbol, current price, change percentage, and trading volume"
}
]
}
}
3. Connect from client:
// Using @microsoft/signalr
const connection = new signalR.HubConnectionBuilder()
.withUrl("/hub/mock")
.withAutomaticReconnect()
.build();
// Subscribe to a context
connection.on("DataUpdate", (message) => {
console.log(`${message.context}:`, message.data);
// message.data contains generated JSON matching the shape
// message.timestamp is unix timestamp in ms
});
await connection.start();
await connection.invoke("SubscribeToContext", "weather");
Each hub context simulates a complete API request and generates data continuously:
{
"Name": "orders", // Context name (SignalR group identifier)
"Description": "Order data..." // Plain English description (LLM generates JSON from this)
// Optional:
// "IsActive": true, // Start in active/stopped state (default: true)
// "Shape": "{...}", // Explicit JSON shape or JSON Schema
// "IsJsonSchema": false // Auto-detected if not specified
}
Recommended: Use Plain English Descriptions
Let the LLM automatically generate appropriate JSON structures:
{
"Name": "sensors",
"Description": "IoT sensor data with device ID, temperature, humidity, battery level, and last reading timestamp"
}
The LLM automatically generates an appropriate JSON schema from your description - no manual Shape required!
Create and manage SignalR contexts at runtime using the management API:
POST /api/mock/contexts
Content-Type: application/json
{
"name": "crypto",
"description": "Cryptocurrency prices with symbol, USD price, 24h change percentage, and market cap"
}
Response:
{
"message": "Context 'crypto' registered successfully",
"context": {
"name": "crypto",
"description": "Cryptocurrency prices...",
"method": "GET",
"path": "/crypto",
"shape": "{...generated JSON schema...}",
"isJsonSchema": true
}
}
GET /api/mock/contexts
Response:
{
"contexts": [
{
"name": "weather",
"description": "Realistic weather data with temperature, conditions, humidity, and wind speed for a single location",
"method": "GET",
"path": "/weather/current",
"shape": "{...}"
},
{
"name": "crypto",
"description": "Cryptocurrency prices...",
"shape": "{...}"
}
],
"count": 2
}
Note: The list endpoint merges contexts configured in appsettings.json with any dynamically created contexts at runtime. Descriptions from appsettings are included even if those contexts have not yet been dynamically registered.
GET /api/mock/contexts/weather
Response:
{
"name": "weather",
"method": "GET",
"path": "/weather/current",
"shape": "{\"temperature\":0,\"condition\":\"string\"}",
"isJsonSchema": false
}
DELETE /api/mock/contexts/crypto
Response:
{
"message": "Context 'crypto' deleted successfully"
}
POST /api/mock/contexts/crypto/start
Response:
{
"message": "Context 'crypto' started successfully"
}
Starts generating data for a stopped context without affecting connected clients.
POST /api/mock/contexts/crypto/stop
Response:
{
"message": "Context 'crypto' stopped successfully"
}
Stops generating new data but keeps the context registered. Clients remain connected but receive no updates until started again.
<!DOCTYPE html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/npm/@microsoft/signalr@8.0.0/dist/browser/signalr.min.js"></script>
</head>
<body>
<h1>Live Weather Data</h1>
<div id="weather-data"></div>
<script>
const connection = new signalR.HubConnectionBuilder()
.withUrl("/hub/mock")
.withAutomaticReconnect()
.build();
connection.on("DataUpdate", (message) => {
if (message.context === "weather") {
const weatherDiv = document.getElementById("weather-data");
weatherDiv.innerHTML = `
<h2>Current Weather</h2>
<p>Temperature: ${message.data.temperature}°F</p>
<p>Condition: ${message.data.condition}</p>
<p>Humidity: ${message.data.humidity}%</p>
<p>Updated: ${new Date(message.timestamp).toLocaleTimeString()}</p>
`;
}
});
connection.start()
.then(() => {
console.log("Connected to SignalR hub");
return connection.invoke("SubscribeToContext", "weather");
})
.then(() => {
console.log("Subscribed to weather context");
})
.catch(err => console.error(err));
</script>
</body>
</html>
async function createDynamicContext() {
// Create the context
const response = await fetch("/api/mock/contexts", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
name: "stocks",
description: "Stock market data with ticker symbol, current price, daily change percentage, and trading volume"
})
});
const result = await response.json();
console.log("Context created:", result.context);
// Subscribe to receive data
await connection.invoke("SubscribeToContext", "stocks");
console.log("Now receiving live stock data!");
}
The MockLlmHub supports the following methods:
SubscribeToContext(string context)
Joins the named context group and starts receiving DataUpdate events with generated data
UnsubscribeFromContext(string context)
Events received by client:
DataUpdate - Contains generated mock data
{
context: "weather", // Context name
method: "GET", // Simulated HTTP method
path: "/weather/current", // Simulated path
timestamp: 1699564820000, // Unix timestamp (ms)
data: { // Generated JSON matching the shape
temperature: 72,
condition: "Sunny",
humidity: 45,
windSpeed: 8
}
}
Subscribed - Confirmation of subscription
{
context: "weather",
message: "Subscribed to weather"
}
Unsubscribed - Confirmation of unsubscription
{
context: "weather",
message: "Unsubscribed from weather"
}
{
"MockLlmApi": {
"SignalRPushIntervalMs": 5000, // Interval between data pushes (ms)
"HubContexts": [...] // Array of pre-configured contexts
}
}
Hub contexts support both simple JSON shapes and full JSON Schema:
Simple Shape:
{
"Name": "users",
"Shape": "{\"id\":0,\"name\":\"string\",\"email\":\"string\"}"
}
JSON Schema:
{
"Name": "products",
"Shape": "{\"type\":\"object\",\"properties\":{\"id\":{\"type\":\"number\"},\"name\":{\"type\":\"string\"},\"price\":{\"type\":\"number\"}},\"required\":[\"id\",\"name\",\"price\"]}",
"IsJsonSchema": true
}
The system auto-detects JSON Schema by looking for $schema, type, or properties fields.
graph TD
Client[SignalR Client] -->|Subscribe| Hub[MockLlmHub]
Hub -->|Join Group| Group[SignalR Group]
BG[Background Service] -->|Generate Data| LLM[Ollama LLM]
LLM -->|JSON Response| BG
BG -->|Push Data| Group
Group -->|DataUpdate Event| Client
API[Management API] -->|CRUD| Manager[DynamicHubContextManager]
Manager -->|Register/Unregister| BG
Components:
1. Dashboard Prototyping
// Subscribe to multiple data sources
await connection.invoke("SubscribeToContext", "sales");
await connection.invoke("SubscribeToContext", "traffic");
await connection.invoke("SubscribeToContext", "alerts");
// Now receiving live updates for all three!
2. IoT Simulation
{
"Name": "sensors",
"Description": "IoT temperature sensors with device ID, current temperature, battery percentage, and signal strength",
"Path": "/iot/sensors"
}
3. Financial Data
{
"Name": "trading",
"Description": "Real-time stock trades with timestamp, symbol, price, volume, and buyer/seller IDs",
"Path": "/trading/live"
}
4. Gaming Leaderboard
{
"Name": "leaderboard",
"Description": "Gaming leaderboard with player name, score, rank, level, and country",
"Path": "/game/leaderboard"
}
New in v1.1.0: Full lifecycle control over SignalR contexts with real-time status tracking!
Each context has the following properties:
Contexts can be in two states:
Active - data is pushed every SignalRPushIntervalMs (default: 5 seconds)
Stopped - the context stays registered but no data is generated
Key Features:
Response Caching for SignalR: New in v1.1.0: Intelligent caching reduces LLM load and improves consistency!
Responses are cached per context (bounded by MaxCachePerKey). This significantly reduces LLM load for high-frequency contexts, especially with multiple clients.
Example Workflow:
# Create a context
POST /api/mock/contexts
{ "name": "metrics", "description": "Server metrics" }
# Stop data generation (clients remain connected)
POST /api/mock/contexts/metrics/stop
# Resume data generation
POST /api/mock/contexts/metrics/start
# Remove context entirely
DELETE /api/mock/contexts/metrics
New Feature: Automatically generate mock endpoints from OpenAPI/Swagger specifications! Point to any OpenAPI 3.0/Swagger 2.0 spec (URL or file) and the library will create mock endpoints for all defined operations.
dotnet add package mostlylucid.mockllmapi
Program.cs:
using mostlylucid.mockllmapi;
var builder = WebApplication.CreateBuilder(args);
// Add OpenAPI mock services
builder.Services.AddLLMockOpenApi(builder.Configuration);
var app = builder.Build();
app.UseRouting();
// Map OpenAPI-based mock endpoints
app.MapLLMockOpenApi();
app.Run();
appsettings.json:
{
"MockLlmApi": {
"BaseUrl": "http://localhost:11434/v1/",
"ModelName": "ministral-3:3b",
"Temperature": 1.2,
"OpenApiSpecs": [
{
"Name": "petstore",
"Source": "https://petstore3.swagger.io/api/v3/openapi.json",
"BasePath": "/petstore",
"EnableStreaming": false
},
{
"Name": "myapi",
"Source": "./specs/my-api.yaml",
"BasePath": "/api/v1"
}
]
}
}
That's it! All endpoints from your OpenAPI spec are now available as intelligent LLM-powered mocks.
Path parameters such as /users/{id} are handled automatically.
Each OpenAPI spec supports these configuration options:
| Property | Type | Description |
|---|---|---|
| Name | string | Unique identifier for this spec (required) |
| Source | string | URL or file path to OpenAPI spec (required) |
| BasePath | string | Override base path (default: uses spec's servers[0].url) |
| EnableStreaming | bool | Add /stream suffix for SSE streaming (default: false) |
| IncludeTags | string[] | Only generate endpoints with these tags |
| ExcludeTags | string[] | Skip endpoints with these tags |
| IncludePaths | string[] | Only generate these paths (supports wildcards like /users/*) |
| ExcludePaths | string[] | Skip these paths (supports wildcards) |
Filter by tags:
{
"Name": "petstore",
"Source": "https://petstore3.swagger.io/api/v3/openapi.json",
"IncludeTags": ["pet", "store"]
}
Filter by paths:
{
"Name": "api",
"Source": "./specs/api.yaml",
"IncludePaths": ["/users/*", "/products/*"],
"ExcludePaths": ["/admin/*"]
}
Enable streaming:
{
"Name": "api",
"Source": "./specs/api.yaml",
"BasePath": "/api",
"EnableStreaming": true
}
Given this OpenAPI spec configuration:
{
"Name": "petstore",
"Source": "./specs/petstore.json",
"BasePath": "/petstore"
}
And a spec defining GET /pet/{petId} that returns a Pet object, you can test:
# Get a pet by ID
curl http://localhost:5116/petstore/pet/123
# Response (generated by LLM based on Pet schema):
{
"id": 123,
"name": "Fluffy",
"category": {
"id": 1,
"name": "Cats"
},
"photoUrls": ["https://example.com/fluffy.jpg"],
"tags": [
{"id": 1, "name": "cute"},
{"id": 2, "name": "playful"}
],
"status": "available"
}
OpenAPI mocks work independently of REST/GraphQL/SignalR:
// Just OpenAPI mocks
builder.Services.AddLLMockOpenApi(builder.Configuration);
app.MapLLMockOpenApi();
// Or combine with other features
builder.Services.AddLLMockRest(builder.Configuration);
builder.Services.AddLLMockOpenApi(builder.Configuration);
app.MapLLMockRest("/api/mock");
app.MapLLMockOpenApi();
Path parameters are supported at any depth: /users/{id}, /posts/{postId}/comments/{commentId}, etc.
The included management.http file contains comprehensive examples for all management endpoints:
Spec Management:
Endpoint Testing:
Advanced Workflows:
Example from management.http:
### Load Petstore API
POST http://localhost:5116/api/openapi/specs
Content-Type: application/json
{
"name": "petstore",
"source": "https://petstore3.swagger.io/api/v3/openapi.json",
"basePath": "/petstore"
}
### Test an endpoint
POST http://localhost:5116/api/openapi/test
Content-Type: application/json
{
"specName": "petstore",
"path": "/pet/123",
"method": "GET"
}
See the complete management.http file for 20+ ready-to-use examples.
The package includes complete demo applications with interactive interfaces featuring full context management:
Desktop Client (LLMockApiClient/) — WPF Application (In Development)
⚠️ DEVELOPMENT STATUS: The Windows desktop client is currently under active development. While many features are functional, some functionality may be incomplete or subject to change. Use for testing and development purposes.
A comprehensive WPF desktop application for interacting with the LLMock API:
Features:
Documentation: LLMockApiClient README
Perfect for: Desktop testing workflows, visual API exploration, development and debugging
SignalR Demo (/) — Real-Time Data Streaming with Management UI
New in v1.1.0: Enhanced 3-column layout with full context lifecycle management!
Features:
Quick-Start Examples: One-click buttons for 5 pre-configured scenarios:
Perfect for: Dashboards, live monitoring, IoT simulations, real-time feeds, prototyping
SSE Streaming Demo (/Streaming) — Progressive JSON Generation
New in v1.1.0: Quick-start example buttons for instant streaming!
Features:
Quick-Start Examples: One-click buttons for 4 streaming scenarios:
Perfect for: Observing LLM generation, debugging shapes, understanding streaming behavior, testing SSE
OpenAPI Demo (/OpenApi) — Dynamic Spec Loading & Testing
New Feature: Interactive OpenAPI specification management with real-time updates!
Features:
Quick-Start Examples:
Perfect for: API prototyping, frontend development, contract testing, spec validation, demos
Run the demos:
cd LLMApi
dotnet run
Navigate to:
http://localhost:5116 - SignalR real-time data streaming with management UI
http://localhost:5116/Streaming - SSE progressive generation
http://localhost:5116/OpenApi - OpenAPI spec manager with dynamic loading

All demos include:
You can optionally have the middleware echo back the JSON shape/schema that was used to generate the mock response.
Configuration:
Examples:
{
"mostlylucid.mockllmapi": {
"IncludeShapeInResponse": true
}
}
curl "http://localhost:5000/api/mock/users?shape=%7B%22id%22%3A0%2C%22name%22%3A%22string%22%7D&includeSchema=true"
Response includes header:
X-Response-Schema: {"id":0,"name":"string"}
...
data: {"content":"{full json}","done":true,"schema":{"id":0,"name":"string"}}
Notes:
Use cases:
Override the default prompts with your own:
{
"mostlylucid.mockllmapi": {
"CustomPromptTemplate": "Generate mock data for {method} {path}. Body: {body}. Use seed: {randomSeed}"
}
}
Available placeholders:
{method} - HTTP method (GET, POST, etc.)
{path} - Full request path with query string
{body} - Request body
{randomSeed} - Generated random seed (GUID)
{timestamp} - Unix timestamp
{shape} - Shape specification (if provided)

Test your client's error handling with comprehensive error simulation capabilities.
Four ways to configure errors (in precedence order):
IMPORTANT: Query parameter values MUST be URL-encoded. Spaces become %20, & becomes %26, : becomes %3A, etc.
# Properly encoded (spaces as %20)
curl "http://localhost:5000/api/mock/users?error=404&errorMessage=Not%20found&errorDetails=User%20does%20not%20exist"
# More complex example with special characters
# Decoded: "Invalid input: email & phone required"
curl "http://localhost:5000/api/mock/users?error=400&errorMessage=Invalid%20input%3A%20email%20%26%20phone%20required"
curl -H "X-Error-Code: 401" \
-H "X-Error-Message: Unauthorized" \
-H "X-Error-Details: Token expired" \
http://localhost:5000/api/mock/users
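The header-based variant maps cleanly onto .NET test code; a minimal sketch with the same headers as the curl call above (host/port is an assumption):

```csharp
// Minimal sketch: asking the mock API to return a simulated 401 via headers.
using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

var request = new HttpRequestMessage(HttpMethod.Get, "/api/mock/users");
request.Headers.Add("X-Error-Code", "401");
request.Headers.Add("X-Error-Message", "Unauthorized");
request.Headers.Add("X-Error-Details", "Token expired");

var response = await client.SendAsync(request);
Console.WriteLine((int)response.StatusCode);                   // reflects the simulated error
Console.WriteLine(await response.Content.ReadAsStringAsync()); // body follows the error format shown below
```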
Shape with an $error property:
# Simple: just status code
curl "http://localhost:5000/api/mock/users?shape=%7B%22%24error%22%3A404%7D"
# Complex: with message and details
curl "http://localhost:5000/api/mock/users?shape=%7B%22%24error%22%3A%7B%22code%22%3A422%2C%22message%22%3A%22Validation%20failed%22%2C%22details%22%3A%22Email%20invalid%22%7D%7D"
Request body with an error property:
curl -X POST http://localhost:5000/api/mock/users \
-H "Content-Type: application/json" \
-d '{
"error": {
"code": 409,
"message": "Conflict",
"details": "User already exists"
}
}'
Error Response Formats:
Regular/Streaming endpoints:
{
"error": {
"code": 404,
"message": "Not Found",
"details": "Optional additional context"
}
}
GraphQL endpoint:
{
"data": null,
"errors": [
{
"message": "Not Found",
"extensions": {
"code": 404,
"details": "Optional additional context"
}
}
]
}
SignalR Error Simulation:
Configure errors in SignalR contexts for testing real-time error handling:
{
"HubContexts": [
{
"Name": "errors",
"Description": "Error simulation stream",
"ErrorConfig": {
"Code": 500,
"Message": "Server error",
"Details": "Database connection lost"
}
}
]
}
Or dynamically via the management API:
curl -X POST http://localhost:5000/api/management/contexts \
-H "Content-Type: application/json" \
-d '{
"name": "errors",
"description": "Test errors",
"error": 503,
"errorMessage": "Service unavailable",
"errorDetails": "Maintenance in progress"
}'
Supported HTTP Status Codes:
The package includes default messages for common HTTP status codes:
Custom messages and details override the defaults.
Use Cases:
See LLMApi/LLMApi.http for comprehensive examples of all error simulation methods.
Mount multiple mock APIs with different configurations:
// Development data with high randomness
builder.Services.AddLLMockApi("Dev", options =>
{
options.Temperature = 1.5;
options.ModelName = "llm-model";
});
// Stable test data
builder.Services.AddLLMockApi("Test", options =>
{
options.Temperature = 0.3;
options.ModelName = "llm-model";
});
app.MapLLMockApi("/api/dev");
app.MapLLMockApi("/api/test");
Three ways to control response structure:
X-Response-Shape: {"field":"type"}?shape=%7B%22field%22%3A%22type%22%7D (URL-encoded JSON){"shape": {...}, "actualData": ...}You can instruct the middleware to pre-generate and cache multiple response variants for a specific request/shape by adding a special field inside the shape object: "$cache": N.
Examples
Header shape: X-Response-Shape: {"$cache":3,"orderId":"string","status":"string","items":[{"sku":"string","qty":0}]}
Body shape: { "shape": { "$cache": 5, "invoiceId": "string", "customer": { "id": "string", "name": "string" }, "items": [ { "sku": "string", "qty": 0, "price": 0.0 } ], "total": 0.0 } }
Query param (URL-encoded): ?shape=%7B%22%24cache%22%3A2%2C%22users%22%3A%5B%7B%22id%22%3A0%2C%22name%22%3A%22string%22%7D%5D%7D
Configuration
Notes
mostlylucid.mockllmapi.Testing - A companion NuGet package that makes testing with the mock API even easier!
dotnet add package mostlylucid.mockllmapi.Testing
Quick Example:
using mostlylucid.mockllmapi.Testing;
// Create a configured HttpClient for testing
var client = HttpClientExtensions.CreateMockLlmClient(
baseAddress: "http://localhost:5116",
pathPattern: "/users",
configure: endpoint => endpoint
.WithShape(new { id = 0, name = "", email = "" })
.WithCache(5)
.WithError(404) // Simulate errors easily
);
// Use in your tests
var response = await client.GetAsync("/users");
var users = await response.Content.ReadFromJsonAsync<User[]>();
Key Features:
Fluent endpoint configuration: WithShape(), WithError(), WithCache(), etc.
Works with both HttpClient and IHttpClientFactory
Configuration Examples:
// Multiple endpoints with different configurations
var client = HttpClientExtensions.CreateMockLlmClient(
"http://localhost:5116",
configure: handler => handler
.ForEndpoint("/users", config => config
.WithShape(new { id = 0, name = "", email = "" })
.WithCache(10))
.ForEndpoint("/posts", config => config
.WithShape(new { id = 0, title = "", content = "" })
.WithStreaming()
.WithSseMode("CompleteObjects"))
.ForEndpoint("/error", config => config
.WithError(500, "Internal server error"))
);
// Dependency Injection support
services.AddMockLlmHttpClient<IUserApiClient>(
baseApiPath: "/api/mock",
configure: handler => handler
.ForEndpoint("/users", config => config.WithShape(...))
);
How It Works:
The MockLlmHttpHandler is a DelegatingHandler that intercepts HTTP requests and automatically applies your configuration via query parameters and headers before forwarding to the mock API. This means you can use real HttpClient instances in your tests while controlling mock behavior declaratively.
See the Testing Package README for complete documentation and examples.
Use the included LLMApi.http file with:
The project includes comprehensive unit tests:
# Run all tests
dotnet test
# Run with detailed output
dotnet test --verbosity detailed
Test Coverage (228 tests, 100% pass rate):
New in v2.3.0:
See TEST_SUMMARY.md for complete test documentation and coverage metrics.
See Architecture Overview above for the high-level system diagram and component description.
sequenceDiagram
participant C as Client
participant A as LLMApi
participant H as AutoApiHelper
participant O as Ollama
participant M as llm-model
C->>A: GET/POST/PUT/DELETE /api/auto/**
A->>H: Extract context (method, path, body, shape)
H->>H: Generate random seed + timestamp
H->>H: Build prompt with randomness
H-->>A: Prompt + temperature=1.2
A->>O: POST /v1/chat/completions
O->>M: Run inference
M-->>O: Generated JSON
O-->>A: Response
A-->>C: JSON Response
flowchart TD
Start[Request Arrives] --> CheckQuery{Shape in<br/>Query Param?}
CheckQuery -->|Yes| UseQuery[Use Query Shape]
CheckQuery -->|No| CheckHeader{Shape in<br/>Header?}
CheckHeader -->|Yes| UseHeader[Use Header Shape]
CheckHeader -->|No| CheckBody{Shape in<br/>Body Field?}
CheckBody -->|Yes| UseBody[Use Body Shape]
CheckBody -->|No| NoShape[No Shape Constraint]
UseQuery --> BuildPrompt[Build Prompt]
UseHeader --> BuildPrompt
UseBody --> BuildPrompt
NoShape --> BuildPrompt
BuildPrompt --> AddRandom[Add Random Seed<br/>+ Timestamp]
AddRandom --> SendLLM[Send to LLM]
style UseQuery fill:#4CAF50
style UseHeader fill:#4CAF50
style UseBody fill:#4CAF50
style NoShape fill:#FFC107
Projects:
mostlylucid.mockllmapi: NuGet package library
LLMApi: Demo application
LLMApi.Tests: xUnit test suite (228 tests - 100% pass rate)

cd mostlylucid.mockllmapi
dotnet pack -c Release
Package will be in bin/Release/mostlylucid.mockllmapi.{version}.nupkg
This is a sample project demonstrating LLM-powered mock APIs. Feel free to fork and customize!
This is free and unencumbered software released into the public domain. See LICENSE for details or visit unlicense.org.