I'm a .NET developer. When I started building LLM-powered systems, everyone pointed me toward LangChain. "It's the standard," they said. "All the examples use it." And they were right — if you're in the Python ecosystem, LangChain is everywhere.
But here's the thing: I don't avoid LangChain because it's bad. I avoid it because it solves problems I already solve more explicitly, and for my use cases — C#, local inference, privacy, determinism — frameworks add friction rather than value.
This isn't an anti-LangChain post. It's a post about understanding what problems frameworks solve, and realizing you might not need them.
Thesis: If you understand the problems LangChain solves, you don't need LangChain.
Let's be fair first. LangChain excels at several things:
Rapid prototyping - You can have a working demo in minutes. The getting-started examples are genuinely good.
Python ecosystem integration - If you're already in the Python/Jupyter/pandas world, LangChain glues everything together seamlessly.
Lowering the barrier - For people new to LLMs, it provides useful abstractions: prompt templates, tool calling patterns, memory management, vector DB integrations.
LangChain is an integration accelerator, not an AI requirement. It speeds up the path from "I have an idea" to "I have a demo." That's valuable.
But it's also where the problems start for me as a C# developer building production systems.
Before dismissing a framework, you need to understand what problems it's solving. LangChain addresses these real issues:

Context management - Deciding what schema, history, and examples go into each prompt, and keeping it all under the token limit.

Tool orchestration - Letting the LLM call functions and feeding the results back into the conversation.

Retry and error handling - Recovering when the model returns something malformed.

Integrations - Glue code for model providers, vector stores, and document loaders.
These are legitimate problems. The question is: do you need a framework to solve them?
For my work — building production .NET systems with local LLMs, strict privacy requirements, and deterministic behavior — LangChain introduces friction in several areas.
LangChain manages memory and context for you. That sounds convenient until you need to debug why your prompt is 10,000 tokens longer than expected, or why the LLM suddenly has access to conversation history you thought you'd cleared.
The framework concatenates prompts, manages memory, and handles execution order implicitly. When something breaks, you're debugging the framework's behavior, not your code's behavior.
Once you adopt LangChain, you start designing for LangChain. Your architecture becomes coupled to the framework's abstractions: chains, agents, retrievers, memory buffers.
This isn't unique to LangChain — all frameworks do this. But in a fast-moving field like LLMs, where the right abstractions aren't settled yet, coupling to a framework's worldview is risky.
LangChain assumes: Python, dynamic typing, notebook-driven iteration, and a cloud-hosted model a pip install away.

As a .NET developer, I assume: C#, strong typing, dependency injection, explicit async control flow, and often a local model with no network egress at all.
The LangChain .NET ports exist, but they're playing catch-up with the Python version, and the abstractions still feel foreign to idiomatic C#.
When you move from prototype to production, you need: observability into every prompt and response, deterministic retry behavior, predictable token costs, and error handling you can audit.
LangChain optimizes for iteration speed, not production hardening. That's fine for demos; it's a problem for production.
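To make "production hardening" concrete, here's the kind of explicit retry logic I mean. This is a minimal sketch of my own, not code from any framework: the helper name, backoff policy, and timeout are all illustrative choices, but every one of them is visible in the code rather than buried in a framework default.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Explicit retry with a per-attempt timeout and bounded attempts.
// Hypothetical helper -- the point is that every policy decision
// (attempt count, backoff, timeout) is visible and testable.
async Task<T> WithRetryAsync<T>(Func<CancellationToken, Task<T>> call,
    int maxAttempts = 3, TimeSpan? perAttemptTimeout = null)
{
    var timeout = perAttemptTimeout ?? TimeSpan.FromSeconds(30);
    Exception? last = null;
    for (var attempt = 1; attempt <= maxAttempts; attempt++)
    {
        using var cts = new CancellationTokenSource(timeout);
        try
        {
            return await call(cts.Token);
        }
        catch (Exception ex)
        {
            last = ex;
            // Explicit backoff, no hidden framework policy. Skip the
            // delay after the final attempt -- we're about to throw.
            if (attempt < maxAttempts)
                await Task.Delay(TimeSpan.FromMilliseconds(50 * attempt));
        }
    }
    throw new InvalidOperationException($"Failed after {maxAttempts} attempts", last);
}

// Usage: a flaky call that succeeds on the second attempt.
var calls = 0;
var result = await WithRetryAsync(_ =>
{
    calls++;
    if (calls < 2) throw new Exception("transient");
    return Task.FromResult("ok");
});
Console.WriteLine($"{result} after {calls} calls");
```

When a retry policy is thirty lines of your own code, there's nothing to reverse-engineer when it misbehaves.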
Here's the mental model I use: LLMs are reasoning engines, not execution engines.
The principle: LLMs reason. Engines compute.
This separation drives everything I build.
Instead of framework-managed memory, I build context explicitly per request:
```csharp
public class QueryContext
{
    public List<ColumnInfo> Schema { get; set; }
    public List<Dictionary<string, string>> SampleRows { get; set; }
    public List<ConversationTurn> History { get; set; }
    public string UserQuestion { get; set; }
}
```
Every prompt construction is visible. I know exactly what's being sent to the LLM because I built the string myself:
```csharp
private string BuildPrompt(QueryContext context)
{
    var sb = new StringBuilder();
    sb.AppendLine("You are a SQL expert. Generate a query based on:");
    sb.AppendLine();

    // Schema
    sb.AppendLine("Schema:");
    foreach (var col in context.Schema)
        sb.AppendLine($"  - {col.Name}: {col.Type}");

    // History (if any)
    if (context.History.Any())
    {
        sb.AppendLine("\nPrevious conversation:");
        foreach (var turn in context.History.TakeLast(3))
            sb.AppendLine($"  Q: {turn.Question} → SQL: {turn.Sql}");
    }

    // Current question
    sb.AppendLine($"\nQuestion: {context.UserQuestion}");
    sb.AppendLine("Generate SQL (no explanation, just the query):");
    return sb.ToString();
}
```
No hidden state. No magic concatenation. Just explicit string building. When it's wrong, I know why.
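A side benefit worth spelling out: because the prompt is a pure function of the context, the exact prompt text can be asserted in a plain unit test. Here's a pared-down illustration of my own (a simplified builder, not the real one above):

```csharp
using System;

// Pared-down prompt builder: a pure function of its inputs,
// so the exact prompt text can be asserted in a test.
string BuildPrompt(string schema, string question) =>
    "You are a SQL expert. Generate a query based on:\n" +
    $"Schema:\n{schema}\n" +
    $"Question: {question}\n" +
    "Generate SQL (no explanation, just the query):";

var prompt = BuildPrompt("  - Amount: DOUBLE", "total sales?");
Console.WriteLine(prompt);
```

Try writing that test against framework-managed prompt assembly; you end up asserting against internals you don't control.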
Instead of letting the LLM execute anything, I use it to generate intent, then execute that intent through deterministic engines:
The LLM generates SQL. DuckDB executes it. The LLM never sees the data:
```csharp
// LLM generates intent
var sql = await GenerateSqlAsync(context);

// Validate before execution
var error = ValidateSql(connection, sql);
if (error != null)
{
    // Retry once with error feedback
    sql = await GenerateSqlAsync(context, previousError: error);
}

// Execute in sandboxed engine
var results = ExecuteQuery(connection, sql);
```
This is safer, faster, and debuggable. The LLM can't accidentally run DROP TABLE because I validate the SQL first. The LLM can't leak data because it never sees the data — only the schema.
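To make "validate the SQL first" concrete, here's a sketch of what a validator can look like. The names and the two-stage design are mine, not from the original service: a crude deny-list stage in pure C#, then an engine-side check passed in as a delegate so the DuckDB dependency stays at the edge (in the real service, the delegate would run `EXPLAIN {sql}` on the connection and return the error message, or null on success).

```csharp
using System;

// Sketch of a two-stage SQL validator.
// Stage 1: reject anything that isn't a single SELECT (cheap, deterministic).
// Stage 2: ask the engine to explain the query -- a syntax check without execution.
string? ValidateSql(string sql, Func<string, string?> tryExplain)
{
    var trimmed = sql.Trim().TrimEnd(';');
    if (!trimmed.StartsWith("SELECT", StringComparison.OrdinalIgnoreCase))
        return "Only SELECT statements are allowed.";

    // Crude substring deny-list -- a real implementation would tokenize
    // so that e.g. a column named "updated_at" isn't rejected.
    string[] forbidden = { "INSERT", "UPDATE", "DELETE", "DROP", "ALTER", "ATTACH" };
    foreach (var keyword in forbidden)
        if (trimmed.Contains(keyword, StringComparison.OrdinalIgnoreCase))
            return $"Forbidden keyword: {keyword}";

    return tryExplain(trimmed); // null means the engine accepted the query
}

// Stub explain step for demonstration; real code wraps EXPLAIN on the connection.
Console.WriteLine(ValidateSql("SELECT * FROM data", _ => null) ?? "valid");
Console.WriteLine(ValidateSql("DROP TABLE data", _ => null));
```

The returned error string is exactly what gets fed back to the LLM on the retry pass.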
I recently wrote about analyzing large CSV files with local LLMs. The architecture:
User Question → LLM → SQL → DuckDB → Results
The LLM receives: the column names and types, a handful of sample rows, and the user's question.

The LLM generates: a single SQL query.

The system then:

1. Validates the query with EXPLAIN (catches syntax errors without executing)
2. Executes it in DuckDB
3. Returns the results

The LLM never sees the actual data. It only sees structure.
This is what LangChain would call an "agent" — a system that uses an LLM to generate actions, validates them, executes them, and potentially retries on failure.
Except I built it in ~200 lines of C# with no framework:
```csharp
public class CsvQueryService
{
    private readonly OllamaApiClient _ollama;
    private readonly string _model;

    public async Task<QueryResult> QueryAsync(string csvPath, string question)
    {
        using var connection = new DuckDBConnection("DataSource=:memory:");
        connection.Open();

        // 1. Build context
        var context = BuildContext(connection, csvPath, question);

        // 2. Generate SQL
        var sql = await GenerateSqlAsync(context);

        // 3. Validate
        var error = ValidateSql(connection, sql);
        if (error != null)
        {
            // Retry once with error feedback
            sql = await GenerateSqlAsync(context, error);
        }

        // 4. Execute
        return ExecuteQuery(connection, sql);
    }
}
```
That's it. No chains, no agents framework, no magic. Just explicit orchestration of LLM → validation → execution.
The term "agent" gets thrown around constantly, usually to mean "anything involving an LLM." Let's be precise.
An agent is: a loop that generates a next action with an LLM, validates it, executes it through a tool, observes the result, and repeats until the goal is achieved.
An agent is not a library. It's a pattern.
My agent pattern in C#:
```csharp
public class Agent
{
    private readonly List<ConversationTurn> _history = new();

    public async Task<string> RunAsync(string goal)
    {
        while (!IsGoalAchieved(goal))
        {
            // 1. Generate next action based on history
            var action = await GenerateActionAsync(goal, _history);

            // 2. Validate before executing
            if (!IsActionSafe(action))
            {
                _history.Add(new ConversationTurn
                {
                    Action = action,
                    Result = "REJECTED: Unsafe action"
                });
                continue;
            }

            // 3. Execute through deterministic tool
            var result = await ExecuteActionAsync(action);

            // 4. Record and continue
            _history.Add(new ConversationTurn { Action = action, Result = result });
        }

        return GenerateSummary(_history);
    }
}
```
This is an agent. It's a loop with state, tools, and feedback. I wrote it in 30 lines. I didn't need a framework.
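The undefined helpers in that loop are ordinary methods, nothing more. As one illustration of how `ExecuteActionAsync` can stay deterministic, tools can be a plain dictionary from action name to function — the shape below is my own sketch, not code from the post:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Tools as a plain dictionary: action name -> deterministic function.
// Dispatch is explicit; an unknown action is an error, never a guessed call.
var tools = new Dictionary<string, Func<string, Task<string>>>
{
    ["row_count"] = arg => Task.FromResult($"counted rows in {arg}"),
    ["schema"]    = arg => Task.FromResult($"schema of {arg}"),
};

async Task<string> ExecuteActionAsync(string action, string argument) =>
    tools.TryGetValue(action, out var tool)
        ? await tool(argument)
        : $"REJECTED: unknown tool '{action}'";

Console.WriteLine(await ExecuteActionAsync("row_count", "sales.csv"));
Console.WriteLine(await ExecuteActionAsync("rm -rf", "/"));
```

The LLM can only name a tool; it can never invent one. Everything executable lives in the dictionary you wrote.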
To be fair to the .NET ecosystem, Microsoft has released the Microsoft Agent Framework, purpose-built for .NET developers building production AI systems.

The framework builds on the Microsoft.Extensions.AI abstractions and provides provider-agnostic interfaces, built-in telemetry, and first-class dependency injection support.

Key components:

- IChatClient - Unified interface for chat completions
- IEmbeddingGenerator - Vector embeddings across providers
- AIFunction - Type-safe function calling

Example:
```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddChatClient(b =>
    b.UseOllama("llama3.2")
     .UseOpenTelemetry()
     .UseLogging());

var app = builder.Build();

app.MapPost("/chat", async (IChatClient client, string message) =>
{
    var response = await client.CompleteAsync(message);
    return response.Content;
});
```
Even with Microsoft's framework, I prefer to keep core orchestration explicit:
I don't want: hidden prompt assembly, framework-managed memory, or implicit retries I can't see in a stack trace.

I want: explicit context construction, visible prompts, and deterministic execution layers I control.
Microsoft's Agent Framework is closer to how I think than LangChain. It respects .NET patterns, uses dependency injection properly, and doesn't fight the ecosystem. But I still prefer writing the orchestration myself.
When to use the Microsoft Agent Framework: standard ASP.NET Core applications, teams that want provider flexibility, and projects where built-in telemetry and DI integration outweigh the abstraction cost.

When to go framework-less: strict privacy or data-residency requirements, offline and air-gapped deployments, regulated environments that demand auditable behavior, and any system where you need to know exactly what goes into every prompt.
The framework doesn't eliminate architectural decisions. You still choose what to put in context, how to chunk data, and when to retry. It just makes the plumbing easier.
Framework-less systems age better for several reasons:
Performance - No abstraction overhead. My CSV query service runs sub-100ms because there's no framework between the LLM and DuckDB.
Cost predictability - I control exactly what goes to the LLM. No hidden prompt inflation from framework-managed memory.
Debuggability - When something breaks, I'm debugging my code, not reverse-engineering a framework's magic.
Privacy - For systems with strict data residency requirements, knowing exactly what leaves the machine matters.
Offline scenarios - Edge devices, air-gapped networks, regulated environments. Frameworks assume internet access and cloud services.
Regulatory compliance - In finance, healthcare, and government, you often need to explain and audit every decision. "The framework did it" isn't an acceptable answer.
The more constrained your environment, the more you want explicit control.
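"No hidden prompt inflation" can be enforced mechanically rather than by convention. A minimal sketch of my own: drop the oldest history turns until the prompt fits a budget. The 4-characters-per-token estimate is a rough heuristic, not a real tokenizer; a production system would use the model's tokenizer, but the point is that the budget is enforced in code you own.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Trim conversation history to a token budget before building the prompt.
// ~4 characters per token is a rough heuristic, good enough to cap costs.
static int EstimateTokens(string text) => text.Length / 4;

static List<string> TrimToBudget(List<string> turns, int budgetTokens)
{
    var kept = new List<string>(turns);
    while (kept.Count > 0 && kept.Sum(EstimateTokens) > budgetTokens)
        kept.RemoveAt(0); // drop the oldest turn first
    return kept;
}

var history = new List<string> { new string('a', 400), new string('b', 400), new string('c', 40) };
var trimmed = TrimToBudget(history, 110); // 400 chars ≈ 100 tokens each
Console.WriteLine($"kept {trimmed.Count} of {history.Count} turns");
```

With framework-managed memory, this policy exists too — you just didn't choose it, and you find out what it was when the invoice arrives.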
In fairness, there are legitimate cases where I'd reach for LangChain myself.
Hackathons - Speed to demo matters more than architecture.
Throwaway POCs - If you're validating an idea and plan to rewrite for production anyway.
Python-heavy teams - If your team is already fluent in Python, the ecosystem fit is strong.
Teaching concepts - LangChain's abstractions can help beginners understand the agent pattern before building their own.
Knowing when not to use something is as valuable as knowing when to use it.
This isn't really about LangChain. It's about the tradeoff between frameworks and first-principles engineering.
Frameworks accelerate familiar problems. If you're building the 100th CRUD API, reach for Entity Framework or Dapper. The patterns are settled.
But LLM-powered systems? The right abstractions aren't settled yet. We don't know if "chains" or "agents" or "retrievers" are the right mental models. We're still figuring it out.
In that environment, I prefer to build close to the metal: thin client libraries (OllamaSharp, OpenAI SDK), explicit prompt construction, and deterministic execution engines.

As a .NET developer, I have strong opinions about how systems should be built: explicit lifetimes, strong typing, async all the way down, dependency injection for testability.
LangChain's abstractions don't map cleanly to those opinions. So I don't use it.
If you're a .NET developer looking at LangChain and wondering "Do I need this?", here's my answer:
You need to solve the problems LangChain solves - context management, tool orchestration, retry logic, observability.
You don't need LangChain to solve them - Especially if you value explicitness, strong typing, and production hardening over rapid prototyping.
The principle I build on:
"LLMs reason. Engines compute. Orchestration is yours to own."
Or more simply:
"If you understand the problems a framework solves, you often don't need the framework."
Build systems that make sense in your ecosystem, with your constraints, using your language's idioms. For me, that's C#, strong typing, explicit control flow, and deterministic execution layers.
For you, it might be different. And that's fine.
The goal isn't to avoid frameworks. The goal is to choose them consciously, understanding both what they provide and what they cost.
© 2025 Scott Galloway — Unlicense — All content and source code on this site is free to use, copy, modify, and sell.