Case Study · Side Project

Soul — My AI Digital Twin

I built a version of me that doesn't need sleep. Same memory. Same voice. Same thinking. Runs 24/7 while I'm in meetings pretending to pay attention.

Type: Side Project
Started: May 2025
Stage: v2 — Live
Stack: Python · Postgres · pgvector · MCP
Status: Building in Public
Project Walkthrough
Video walkthrough coming soon
A screen recording explaining the project end-to-end
6 — AI tools connected
~2hrs — Saved daily
500+ — Memory entries
The Idea

I use ChatGPT, Claude, Cursor, Perplexity, Figma AI — sometimes all in the same day. None of them know each other. None of them know me. Every new chat starts from zero.

I got tired of being the context.

So I built Soul — my digital twin. It knows what I'm working on, how I communicate, what I've decided, and why. It writes my blogs, drafts my LinkedIn posts, handles email — and it sounds exactly like me.

Not a chatbot. Not a copilot. A version of me that runs 24/7.

The Real Problem

AI tools remember in silos. ChatGPT has its memory. Claude has its own. Cursor has context. None of it connects. I was spending roughly an hour a day re-explaining myself to tools that technically 'remember' — just not together.

That's 365 hours a year being your own onboarding document.

The Solution

Soul sits above all models. One job: know me completely and never forget.

Memory stored in Postgres with pgvector — structured, governed, not a dump. Voice cloned via ElevenLabs. Visual avatar via Synthesia. MCP as the universal interface so every tool I already use connects without a wrapper app.
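The retrieval side of that memory layer can be sketched in plain Python. In production this would be a pgvector query (`ORDER BY embedding <=> $1 LIMIT k` against Postgres); the in-memory version below shows the same ranking logic self-contained. The entries, embeddings, and function names are illustrative stand-ins, not Soul's actual data or API.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve_context(memories, query_embedding, k=2):
    """Return the k memory entries most similar to the query.
    Stand-in for a pgvector nearest-neighbor query."""
    ranked = sorted(
        memories,
        key=lambda m: cosine_similarity(m["embedding"], query_embedding),
        reverse=True,
    )
    return [m["text"] for m in ranked[:k]]

# Toy 3-dimensional embeddings; real ones would come from an embedding model.
memories = [
    {"text": "Prefers short, direct sentences", "embedding": [0.9, 0.1, 0.0]},
    {"text": "Working on Soul v2",              "embedding": [0.1, 0.9, 0.2]},
    {"text": "Writes LinkedIn posts weekly",    "embedding": [0.2, 0.3, 0.9]},
]

print(retrieve_context(memories, [0.85, 0.15, 0.05], k=1))
# → ['Prefers short, direct sentences']
```

The retrieved entries get injected into whatever prompt the connected tool is building — that's the whole trick: the context travels, the chat app doesn't.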

Every AI tool I open already knows who I am. New chat, new model, new tool — same context. No re-explaining. Ever.

What's Automated Today

Blog posts. LinkedIn content. Email drafts. All in my voice. All without me writing from scratch. Soul doesn't generate generic content — it generates content that sounds like it came from me. Because the memory it pulls from is mine.

What Soul Is (And Isn't)

Soul is: A personal memory system. Model-agnostic. Voice-cloned. Avatar-matched. Character-matched. Boring by design — which is exactly the point.

Soul is not: A chat app. An LLM deciding what to remember. A vector dump. A journal.

The End State

I become the API. My knowledge, my voice, my way of thinking — accessible to any system, any agent, any tool that needs it. Soul keeps evolving as I do. Every conversation, every decision, every opinion — fed back into the memory. The longer it runs, the more accurate the twin gets.

The Core Insight

The hard part isn't storing memory. It's knowing what to store, how to update it, and when to forget it. Soul exists to solve exactly that. Building in public. Breaking things weekly.

The Journey
May 2025 — MVP v1

Built the first working version. Chat UI, memory in a database, vector embeddings, semantic search, context retrieval injected into prompts. It worked. But the extraction was dumb.

Everything went into the database — no filtering, no lifecycle, no relevance scoring, no update logic. Memory got noisy fast: too much context, not enough signal, duplicates, contradictions, outdated facts showing up in responses.

The failure wasn't semantic search. The failure was memory governance.

The Dead End: Build Everything from Scratch

My next instinct was to formalize it as a full product — custom UI, direct model API calls, bring-your-own keys, full RAG pipeline. Soul becomes the app. Valid approach. Wrong problem.

It would mean rebuilding chat UX, auth, model switching, cost controls — things Claude, Cursor, and ChatGPT already do well. And embeddings still wouldn't answer the real questions: what's worth saving? How do preferences update? How do time-based facts expire? How do you prevent contradictions?

The Realization

Memory is not an LLM problem. It's a systems and policy problem. LLMs understand language well. They are terrible at being consistent, auditable decision-makers about what to remember. So I stopped trying to make the model smart about memory. I designed memory as infrastructure instead.

Late 2025 — v2: Memory as Infrastructure

Rebuilt from scratch. No custom chat UI. No model switching logic. Just the memory layer.

Explicit storage policies — what gets saved, how it updates, when it expires. Postgres + pgvector for structured retrieval. MCP as the interface so Claude, Cursor, and every tool I use connects natively without a wrapper.
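A storage policy like this can be made concrete as a small decision table. The sketch below is an illustrative assumption — the category names, TTLs, and rules are mine, not Soul's actual policy — but it shows the shape: every candidate memory gets an explicit save / update / skip decision instead of being dumped into the database.

```python
from dataclasses import dataclass
from datetime import timedelta
from enum import Enum

class Action(Enum):
    SAVE = "save"
    UPDATE = "update"
    SKIP = "skip"

@dataclass
class Candidate:
    category: str   # hypothetical categories: "preference", "fact", "chit_chat"
    key: str        # stable identifier, e.g. "writing_style"
    value: str

# Per-category policy: is it worth saving, and for how long?
# TTL is stored alongside the entry so expiry can be enforced at read time.
POLICIES = {
    "preference": {"store": True, "ttl": None},               # keep until updated
    "fact":       {"store": True, "ttl": timedelta(days=90)}, # time-based, expires
    "chit_chat":  {"store": False, "ttl": None},              # never persisted
}

def decide(candidate: Candidate, existing_keys: set) -> Action:
    """Explicit, auditable decision — no LLM judgment call involved."""
    policy = POLICIES.get(candidate.category, {"store": False})
    if not policy["store"]:
        return Action.SKIP
    # Same key already stored -> update in place instead of duplicating.
    return Action.UPDATE if candidate.key in existing_keys else Action.SAVE

print(decide(Candidate("chit_chat", "greeting", "hey"), set()))                      # Action.SKIP
print(decide(Candidate("preference", "writing_style", "short"), set()))              # Action.SAVE
print(decide(Candidate("preference", "writing_style", "long"), {"writing_style"}))   # Action.UPDATE
```

The point of keying updates on a stable identifier is exactly the v1 failure mode: without it, "prefers short sentences" and "likes concise writing" become two rows that drift apart.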

The result: persistent context across every AI tool I open. New session, same Shubham. v2 is live and running in production. Blog posts, LinkedIn content, Slack drafts — all generated in my voice, from context that already knows who I am.

What's next: Conflict resolution — when two memories contradict, Soul needs to decide which one wins. That's the hard part I'm working on now.
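One possible resolution heuristic — illustrative only, since this part of Soul is still being built — is a two-level rule: an explicitly stated memory beats an inferred one, and among equals the newer memory wins. All names and source types below are assumptions for the sketch.

```python
from dataclasses import dataclass
from datetime import datetime

# Assumed source types: "explicit" (the user said it) vs "inferred"
# (the system deduced it). Higher rank wins a conflict.
SOURCE_RANK = {"explicit": 2, "inferred": 1}

@dataclass
class Memory:
    value: str
    source: str
    updated_at: datetime

def resolve(a: Memory, b: Memory) -> Memory:
    """Pick the winner between two contradicting memories:
    source rank first, recency as the tiebreaker."""
    ra, rb = SOURCE_RANK[a.source], SOURCE_RANK[b.source]
    if ra != rb:
        return a if ra > rb else b
    return a if a.updated_at >= b.updated_at else b

stated   = Memory("lives in Bangalore", "explicit", datetime(2025, 5, 1))
guessed  = Memory("lives in Pune",      "inferred", datetime(2025, 11, 1))
print(resolve(stated, guessed).value)  # explicit beats newer-but-inferred
```

The harder open question, which a static rule like this dodges, is when the newer inferred memory *should* win — people move cities; that's where the real governance work is.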

Want to try Soul?

It's live. Ask it anything. It'll answer exactly like I would.