The AI Agent Stack is Confusing. Here's How I Pick
People keep asking me: "Should I use n8n or LangGraph?" or "What's the difference between LangChain and ADK?" Wrong question. The right question: What are you actually building? I've shipped agentic systems at Avataar and Neoflo. Here's the mental model that clears up the confusion.
The Core Distinction
These tools solve different problems. The confusion happens because they all "use AI" — but so does a calculator app and a self-driving car.
n8n = Connect systems, AI optional
The workflow is the product. AI might summarize or classify at one step, but you designed the flow. Think: "When email arrives → extract data → update CRM → notify team."
LangChain = Build with LLMs, workflow fixed
The AI is the product. You're chaining LLM calls to process, transform, or retrieve. Think: "Take this document → chunk it → embed it → answer questions about it."
LangGraph = AI decides the workflow
The AI navigates. It evaluates results, chooses next steps, loops back on failure, asks humans when uncertain. Think: "Figure out if this invoice matches our PO, handle whatever you find."
Google ADK = LangGraph, but Gemini-native and multimodal-first
Similar reasoning capabilities, but optimized for Gemini's strengths: processing images, PDFs, mixed-format documents natively.
n8n: The Visual Connector
What it is: A visual workflow builder. Drag boxes, draw connections, automate processes. Your ops team can use it without writing code.
Where it shines:
Connecting apps: Typeform → Airtable → Slack → Email
Automating repetitive processes with clear triggers and steps
Adding AI as a "step" (classify this, summarize that) without AI controlling the flow
Example: Customer complaint lands in inbox. AI classifies it (refund/shipping/product). Based on classification, routes to the right Slack channel. Deterministic flow, AI just tags.
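In code terms, that pattern is a fixed routing table where the LLM only supplies a label. A minimal Python sketch, with a keyword heuristic standing in for the LLM call (the `classify` function and channel names are illustrative, not n8n's API):

```python
# Deterministic routing: the flow is fixed up front, the AI only supplies a tag.
ROUTES = {
    "refund": "#support-refunds",
    "shipping": "#support-shipping",
    "product": "#support-product",
}

def classify(complaint: str) -> str:
    """Stand-in for the LLM classification step (keyword heuristic here)."""
    text = complaint.lower()
    if "refund" in text or "money back" in text:
        return "refund"
    if "delivery" in text or "shipping" in text:
        return "shipping"
    return "product"

def route_complaint(complaint: str) -> str:
    """The workflow itself never changes; only the label does."""
    return ROUTES[classify(complaint)]

print(route_complaint("Where is my delivery? It's been two weeks."))
# → #support-shipping
```

Notice the AI never touches `ROUTES`: every possible path was enumerated by a human before the first message arrived. That's the n8n sweet spot.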
Where it struggles:
When the AI needs to decide what happens next (not just classify)
When you need memory across steps or complex state management
When branches can't be pre-defined
Example that breaks: "Research this company, decide if we should dig deeper, and write a report based on what you find." That's reasoning, not routing.
"But can't I make n8n reason with LLM nodes?"
Technically, yes. Practically, it breaks down because:
n8n's LLM nodes are stateless — no memory between calls
Branching is still deterministic: "if output contains X, go left"
You can't say "loop until the AI thinks it's done" — you script the loop count
Error recovery is manual, not intelligent
You can force it. But you're fighting the tool.
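The loop point is the clearest way to see the gap. Here's a sketch of both shapes, with toy stand-in functions (nothing here is n8n's or LangGraph's actual API):

```python
# What n8n can express: a retry count decided up front, before anything runs.
def scripted_loop(task, step, max_iters=3):
    result = None
    for _ in range(max_iters):
        result = step(task, result)
    return result

# What an agent framework expresses: loop until the model judges it's done.
def agent_loop(task, step, is_done, max_iters=10):
    result = None
    for _ in range(max_iters):   # hard cap as a safety net, not the exit condition
        result = step(task, result)
        if is_done(result):      # the model's own judgment ends the loop
            break
    return result
```

The difference is who owns the exit condition: in the first, you do; in the second, the model does (with a cap you set). n8n gives you the first shape only.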
LangChain: The Prototyping Kit
What it is: Developer building blocks for LLM apps. Prompts, retrievers, chains, memory — LEGO pieces you assemble.
Where it shines:
RAG pipelines (search your docs, answer questions)
Summarization, extraction, transformation chains
Fast prototyping when you're exploring approaches
Swapping between LLM providers easily
Example: Internal knowledge bot. Employee asks a question → retrieve relevant docs from Notion → pass to LLM with context → return answer. Built in a day.
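Stripped of the framework, that pipeline is retrieve-then-prompt. A self-contained sketch, where naive keyword overlap stands in for embedding search and `build_prompt` is an illustrative helper, not LangChain's API:

```python
# Toy RAG pipeline: retrieve relevant context, then hand it to the LLM.
DOCS = [
    "Expense reports are due by the 5th of each month.",
    "VPN access requires a ticket to the IT helpdesk.",
    "New hires get laptops on their first day.",
]

def retrieve(question: str, docs=DOCS, k=1):
    """Rank docs by keyword overlap (embedding similarity in a real pipeline)."""
    q = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Assemble context + question; this string goes to the LLM."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When are expense reports due?"))
```

The flow is fixed: retrieve, then answer. The LLM never decides to search again or take a different path, and that's exactly why chains stay simple.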
Where it struggles:
Complex decision-making (evaluate → branch → maybe loop back)
Production debugging (abstractions hide what's actually happening)
When you need explicit control over state and flow
Example that breaks: Agent that handles invoice exceptions — needs to pull PO data, compare line items, flag discrepancies differently based on type, escalate uncertain cases. Too many conditional paths for simple chains.
LangGraph: The Reasoning Engine
What it is: A framework for building stateful, multi-step agents with explicit control flow. You define nodes (actions) and edges (transitions), and the AI navigates the graph.
Where it shines:
Agents that evaluate results and decide next steps
Human-in-the-loop workflows (pause, wait for approval, continue)
Complex branching based on AI judgment, not just field values
Debugging and replay (checkpoints at each step)
Example: Invoice processing at Neoflo. Agent reads invoice → extracts fields → pulls matching PO → compares line items → if match, auto-approve; if discrepancy, flag with specific questions for human review. Each invoice takes a different path.
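The pattern underneath is a graph of nodes mutating shared state, with a routing function (the conditional edge) choosing the next node at runtime. A plain-Python sketch of that pattern, not LangGraph's actual API, with illustrative field names rather than Neoflo's schema:

```python
# Nodes read and write a shared state dict; one routing function picks the branch.
def extract(state):
    state["fields"] = state["invoice"]          # stand-in for LLM extraction
    return state

def compare(state):
    po, inv = state["po"], state["fields"]
    state["mismatches"] = [k for k in po if po[k] != inv.get(k)]
    return state

def approve(state):
    state["status"] = "auto-approved"
    return state

def flag(state):
    state["status"] = f"needs review: {state['mismatches']}"
    return state

def route_after_compare(state):                 # the "conditional edge"
    return "approve" if not state["mismatches"] else "flag"

NODES = {"extract": extract, "compare": compare, "approve": approve, "flag": flag}
EDGES = {"extract": "compare", "approve": None, "flag": None}

def run(state, node="extract"):
    while node:
        state = NODES[node](state)
        node = route_after_compare(state) if node == "compare" else EDGES[node]
    return state

result = run({"invoice": {"total": 100}, "po": {"total": 100}})
print(result["status"])  # → auto-approved
```

Swap the dicts for LangGraph's `StateGraph` and you get the same shape plus checkpointing, replay, and human-in-the-loop pauses for free. The key property is that each invoice's path through the graph is decided at runtime, not at design time.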
Where it struggles:
Simple linear workflows (overkill, adds unnecessary complexity)
Non-technical teams (it requires developers)
Quick prototypes where LangChain would be faster
Google ADK: The Multimodal Reasoning Engine
What it is: Google's agent framework, built specifically for Gemini. Similar to LangGraph in capability — stateful agents, multi-step reasoning — but native to Gemini's architecture.
Where it shines:
Multimodal processing: documents mixing text, tables, images, handwriting
Long context windows (process entire documents without chunking)
Native Google Cloud integration (Vertex AI, BigQuery)
Cost efficiency at high volume
Example: We use ADK at Neoflo for vendor invoices that arrive in every format — PDFs, scanned images, phone photos of paper. Gemini handles visual understanding natively. No separate OCR step, fewer error points.
Where it struggles:
Model flexibility (Gemini-first, switching to Claude/GPT means fighting the framework)
Ecosystem maturity (younger than LangChain/LangGraph, fewer community resources)
When you need fine-grained control over agent behavior
LangGraph vs ADK — the real difference:
| | LangGraph | Google ADK |
|---|---|---|
| Model flexibility | Any LLM | Gemini-first |
| Control style | Explicit graph, you define every edge | More declarative, framework handles orchestration |
| Multimodal | Possible, but you wire it yourself | Native, batteries-included |
| Debugging | Checkpoint replay, step-by-step | Vertex AI tracing |
| Best for | Complex reasoning, model-agnostic agents | Document-heavy workflows, Google stack |
If you want maximum control and model flexibility → LangGraph.
If you're processing mixed-format documents on Google Cloud → ADK.
The Decision Framework
Start with the problem:
Connecting systems? → n8n
Building LLM features? → LangChain
Building reasoning agents? → LangGraph
Document-heavy + Gemini? → ADK
Consider your team:
Non-technical people need to modify it? → n8n
Developers prototyping? → LangChain
Engineers building production agents? → LangGraph or ADK
Consider timeline:
This week? → n8n or LangChain
For scale? → LangGraph or ADK
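If it helps, the framework above collapses into a first-pass heuristic. The flags and priorities here are my reading of the questions, not a spec:

```python
# First-pass tool picker: team constraints first, then problem shape.
def pick_tool(connecting_systems=False, reasoning_agent=False,
              document_heavy_gemini=False, non_technical_team=False):
    if connecting_systems or non_technical_team:
        return "n8n"
    if reasoning_agent:
        return "ADK" if document_heavy_gemini else "LangGraph"
    return "LangChain"

print(pick_tool(reasoning_agent=True, document_heavy_gemini=True))  # → ADK
```

It's deliberately crude: the point is that every branch starts from the problem and the team, never from the framework.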
The Meta-Lesson
The best AI builders I know don't start with frameworks. They start with the workflow.
They ask: What's the trigger? What are the steps? Where does AI add value? What decisions need to be made? By whom?
Then they pick the tool that matches.
Most people do it backwards. They pick the trendy tool, then force their problem to fit. Six months later, they're rewriting everything.
Don't be most people.
Building AI workflows? We're deep in this at Neoflo. If you're wrestling with these decisions, reach out.
