
The AI Agent Stack is Confusing. Here's How I Pick

People keep asking me: "Should I use n8n or LangGraph?" or "What's the difference between LangChain and ADK?" Wrong question. The right question: What are you actually building? I've shipped agentic systems at Avataar and Neoflo. Here's the mental model that clears up the confusion.

The Core Distinction

These tools solve different problems. The confusion happens because they all "use AI" — but so do a calculator app and a self-driving car.

n8n = Connect systems, AI optional. The workflow is the product. AI might summarize or classify at one step, but you designed the flow.

LangChain = Build with LLMs, workflow fixed. The AI is the product. You're chaining LLM calls to process, transform, or retrieve.

LangGraph = AI decides the workflow. The AI navigates. It evaluates results, chooses next steps, loops back on failure, asks humans when uncertain.

Google ADK = LangGraph, but Gemini-native and multimodal-first. Similar reasoning capabilities, but optimized for Gemini's strengths: processing images, PDFs, mixed-format documents natively.

n8n: The Visual Connector

What it is: A visual workflow builder. Drag boxes, draw connections, automate processes. Your ops team can use it without writing code.

Where it shines: connecting apps, automating repetitive processes with clear triggers, adding AI as a classification step without AI controlling the flow.

Where it struggles: when the AI needs to decide what happens next, when you need memory across steps, when branches can't be pre-defined.

LangChain: The Prototyping Kit

What it is: Developer building blocks for LLM apps. Prompts, retrievers, chains, memory — LEGO pieces you assemble.

Where it shines: RAG pipelines, summarization and extraction chains, fast prototyping, swapping between LLM providers.
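The "chain" idea is easier to see in plain Python than in prose. Here's a minimal sketch — not LangChain's actual API — with a stub standing in for the real LLM call. Every name here is illustrative:

```python
# A chain is just function composition over a FIXED pipeline:
# prompt -> model -> parser. The workflow never changes; only
# the data flowing through it does.

def fake_llm(prompt: str) -> str:
    # Stand-in for an API call to OpenAI, Anthropic, Gemini, etc.
    return f"SUMMARY: {prompt[:40]}"

def make_prompt(doc: str) -> str:
    return f"Summarize the following document:\n{doc}"

def parse_output(raw: str) -> dict:
    return {"summary": raw.strip()}

def chain(doc: str) -> dict:
    # The LEGO assembly: swap any piece (prompt template, provider,
    # parser) without touching the others.
    return parse_output(fake_llm(make_prompt(doc)))

result = chain("Q3 revenue rose 12% quarter over quarter.")
```

Swapping providers means replacing `fake_llm` behind the same string-in, string-out interface — which is exactly why LangChain is good at prototyping and provider-hopping.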

Where it struggles: complex decision-making, production debugging (abstractions hide what's actually happening), when you need explicit control over state.

LangGraph: The Reasoning Engine

What it is: A framework for building stateful, multi-step agents with explicit control flow. You define nodes (actions) and edges (transitions), and the AI navigates the graph.

Where it shines: agents that evaluate results and decide next steps, human-in-the-loop workflows, complex branching based on AI judgment, debugging and replay via checkpoints.

Example: Invoice processing at Neoflo. Agent reads invoice → extracts fields → pulls matching PO → compares line items → if match, auto-approve; if discrepancy, flag with specific questions for human review. Each invoice takes a different path.
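The invoice flow above can be sketched as a graph of nodes and conditional edges. This is plain Python, not the real LangGraph `StateGraph` API — field names, totals, and routing logic are all illustrative — but the shape is the point: the next node depends on what the agent found.

```python
# Nodes mutate a shared state dict; a routing function per node
# picks the next node. The conditional edge after "compare" is
# what makes each invoice take a different path.

def extract_fields(state):
    state["fields"] = {"po_number": "PO-17", "total": state["invoice_total"]}
    return state

def pull_matching_po(state):
    # Stand-in for a lookup keyed on the extracted PO number.
    state["po"] = {"po_number": "PO-17", "total": 1000.0}
    return state

def compare_line_items(state):
    state["match"] = state["fields"]["total"] == state["po"]["total"]
    return state

def route_after_compare(state):
    # The conditional edge: AI judgment decides the branch.
    return "auto_approve" if state["match"] else "flag_for_human"

def auto_approve(state):
    state["outcome"] = "approved"
    return state

def flag_for_human(state):
    state["outcome"] = "needs_review"
    state["questions"] = ["Invoice total differs from PO total. Which is correct?"]
    return state

NODES = {
    "extract":        (extract_fields,     lambda s: "pull_po"),
    "pull_po":        (pull_matching_po,   lambda s: "compare"),
    "compare":        (compare_line_items, route_after_compare),
    "auto_approve":   (auto_approve,       lambda s: None),
    "flag_for_human": (flag_for_human,     lambda s: None),
}

def run(state, entry="extract"):
    node = entry
    while node is not None:
        fn, route = NODES[node]
        state = fn(state)
        node = route(state)
    return state

run({"invoice_total": 1000.0})["outcome"]  # matching totals -> "approved"
run({"invoice_total": 950.0})["outcome"]   # discrepancy -> "needs_review"
```

What LangGraph adds over this hand-rolled loop is the production machinery: checkpointed state you can replay, interrupts for human-in-the-loop review, and explicit graph compilation so the control flow is inspectable.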

Google ADK: The Multimodal Reasoning Engine

What it is: Google's agent framework, built specifically for Gemini.

Where it shines: multimodal processing (documents mixing text, tables, images, handwriting), long context windows, native Google Cloud integration, cost efficiency at high volume.

Where it struggles: model flexibility (Gemini-first), ecosystem maturity (younger than LangChain/LangGraph), fine-grained control over agent behavior.

The Decision Framework

Start with the problem: connecting systems? → n8n. Building LLM features? → LangChain. Building reasoning agents? → LangGraph. Document-heavy + Gemini? → ADK.

Consider your team: non-technical needs to modify it? → n8n. Developers prototyping? → LangChain. Engineers building production agents? → LangGraph or ADK.

Consider timeline: this week? → n8n or LangChain. For scale? → LangGraph or ADK.
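If it helps to see the first question as code, here's the problem-type branch condensed into a lookup — purely illustrative, since real decisions weigh team and timeline together rather than one question at a time:

```python
# Problem type -> tool, per the framework above. Keys are made up
# for this sketch; the fallback echoes the article's real advice.

def pick_tool(problem: str) -> str:
    table = {
        "connect_systems":    "n8n",
        "llm_features":       "LangChain",
        "reasoning_agents":   "LangGraph",
        "documents_on_gemini": "ADK",
    }
    return table.get(problem, "start with the workflow, not the tool")

pick_tool("reasoning_agents")  # "LangGraph"
```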

The Meta-Lesson

The best AI builders I know don't start with frameworks. They start with the workflow.

They ask: What's the trigger? What are the steps? Where does AI add value? What decisions need to be made? By whom?

Then they pick the tool that matches.

Most people do it backwards. They pick the trendy tool, then force their problem to fit. Six months later, they're rewriting everything.

Don't be most people.
