Soul — I built a version of me that doesn't need sleep.

Same memory. Same voice. Same thinking. Runs 24/7 while I'm in meetings pretending to pay attention.

Curious what talking to me feels like?

Memoria - Shubham Shrivastava's side project

Type: Side Project
Started: May 2025
Stage: v1 → v2 Rebuild
Stack: Python, Postgres, pgvector, MCP
Status: Building in Public

THE IDEA

I use ChatGPT, Claude, Cursor, Perplexity, Figma AI — sometimes all in the same day. None of them know each other. None of them know me. Every new chat starts from zero. I got tired of being the context. So I built Soul — my digital twin. Same memory. Same voice. Same thinking. It knows what I'm working on, how I communicate, what I've decided, and why. It writes my blogs, drafts my LinkedIn posts, handles email — and it sounds exactly like me. Not a chatbot. Not a tool. A version of me that runs 24/7 while I'm in meetings pretending to pay attention.

6 AI tools connected
2 hrs saved daily on average
1 memory layer to rule them all

THE REAL PROBLEM

AI tools remember in silos. ChatGPT has its memory. Claude has its own. Cursor has context. None of it connects. I was spending roughly an hour a day re-explaining myself to tools that technically 'remember' — just not together. That's 365 hours a year being your own onboarding document.

THE SOLUTION — SOUL

Soul sits above all models. One job: know me completely and never forget. Memory stored in a database with embeddings — structured, governed, not a dump. ElevenLabs for my voice. Synthesis for my character. MCP as the universal interface. Every AI tool I open already knows who I am. New chat, new model, new tool — same context. No re-explaining. Ever.
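A minimal sketch of the idea, not Memoria's actual code: one shared store, queried the same way by every tool. The `embed()` function here is a toy stand-in for a real embedding model sitting behind pgvector; all names and the sample memories are illustrative.

```python
import math

def embed(text: str, dims: int = 256) -> list[float]:
    """Toy embedding: bucket each token by its character sum, then normalize.
    A real deployment would call an embedding model and store vectors in pgvector."""
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class MemoryStore:
    """One memory layer; any model or tool queries the same interface."""

    def __init__(self) -> None:
        self.rows: list[tuple[str, list[float]]] = []

    def remember(self, text: str) -> None:
        self.rows.append((text, embed(text)))

    def recall(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.rows, key=lambda row: cosine(row[1], q), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.remember("prefers concise direct writing")
store.remember("current project is memoria a personal memory layer")
store.remember("uses cursor as the main editor")

# New chat, new model, same context:
print(store.recall("what is the current project", k=1))
# -> ['current project is memoria a personal memory layer']
```

The point of the shape: the store, not any individual chat app, owns the memory, so switching models never resets context.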

WHAT'S AUTOMATED TODAY

Blog posts. LinkedIn content. Email drafts. All in my voice. All without me writing from scratch. Soul doesn't generate generic content — it generates content that sounds like it came from me. Because the memory it pulls from is mine.

WHAT SOUL IS (AND ISN'T)

Soul is: a personal memory system. Model-agnostic. Voice-cloned. Character-matched. Boring by design — which is exactly the point.

Soul is not: a chat app. An LLM deciding what to remember. A vector dump. A journal.

WHERE THIS IS GOING

I become the API. My knowledge, my voice, my way of thinking — accessible to any system, any agent, any tool that needs it. Soul keeps evolving as I do. Every conversation, every decision, every opinion — fed back into the memory. The longer it runs, the more accurate the twin gets.


THE CORE INSIGHT

The hard part isn't storing memory. It's knowing what to store, how to update it, and when to forget it. Soul exists to solve exactly that. Building in public. Breaking things weekly.

THE JOURNEY


May 2025 — MVP v1

Built the first working version. It had a chat UI, memory stored in a database, vector embeddings, semantic search, and context retrieval injected into prompts.
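The v1 retrieval-into-prompt flow can be sketched roughly like this (illustrative only, not the actual code): memories found by semantic search get prepended to the prompt before it reaches the model.

```python
def build_prompt(memories: list[str], user_message: str) -> str:
    """Prepend retrieved context so the model starts the chat already briefed."""
    context = "\n".join(f"- {m}" for m in memories)
    return f"Known context about the user:\n{context}\n\nUser: {user_message}"

# In v1, the memories list came from semantic search over the database.
prompt = build_prompt(
    ["Prefers concise writing", "Currently rebuilding Memoria v2"],
    "Draft a LinkedIn post about this week's progress.",
)
print(prompt)
```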

It worked. But the extraction was dumb.

Everything went into the database — no filtering, no lifecycle, no relevance scoring, no update logic. Memory got noisy fast: too much context, not enough signal, duplicates, contradictions, outdated facts showing up in responses.

The failure wasn't semantic search. The failure was lack of memory governance.

The "Build Everything from Scratch" Phase

My next instinct was to formalize it as a full product: custom UI, direct model API usage, bring-your-own API keys, database + embeddings, retrieval-augmented prompting.

In short: Memoria becomes the app.

This approach is valid — but flawed for this problem.

Why it wasn't ideal:

  1. It becomes a platform build, not a memory problem. I'd spend time rebuilding chat UX, auth, model switching, cost controls — things ChatGPT, Claude, and Cursor already do well.

  2. Embeddings don't solve memory correctness. Vector search helps retrieval, but it doesn't answer: what's worth saving? How do preferences update? How do time-based facts change? How do we prevent contradictions?

The real problem wasn't better retrieval. It was knowing what to store, how to update it, and when to forget it.
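One of the missing pieces can be sketched concretely (key names and fields are illustrative, not Memoria's schema): a newer fact about the same subject should supersede the old one instead of accumulating beside it, which is exactly how v1 ended up with contradictions.

```python
from datetime import date

def upsert(memory: dict, key: str, value: str, observed: date) -> dict:
    """Store a fact only if it is newer than what we already know for that key."""
    existing = memory.get(key)
    if existing and existing["observed"] >= observed:
        return memory  # incoming fact is stale: keep the current one
    memory[key] = {"value": value, "observed": observed}
    return memory

memory: dict = {}
upsert(memory, "editor.preference", "VS Code", date(2024, 11, 1))
upsert(memory, "editor.preference", "Cursor", date(2025, 3, 10))  # newer: supersedes
upsert(memory, "editor.preference", "Vim", date(2024, 1, 5))      # stale: ignored

print(memory["editor.preference"]["value"])  # -> Cursor
```

Without something like this, all three facts would coexist and retrieval could surface any of them.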

The Shift in Thinking

That's when the core realization clicked:

Memory is not an LLM problem. It's a systems and policy problem.

LLMs are great at understanding language. They are terrible at being consistent, auditable decision-makers.

So I stopped trying to make the model "smart about memory" and instead designed memory as infrastructure.
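"Memory as infrastructure" might look something like this sketch (rule names and categories are mine, purely illustrative): storage decisions come from explicit, auditable rules, and every outcome is traceable to the rule that produced it — not to a model's judgment on that particular day.

```python
# Each rule: (rule_id, predicate, action). Policy is data, so it can be
# reviewed, versioned, and tested like any other infrastructure.
RULES = [
    ("skip_smalltalk",   lambda fact: fact["kind"] == "smalltalk",  "skip"),
    ("store_decision",   lambda fact: fact["kind"] == "decision",   "store"),
    ("store_preference", lambda fact: fact["kind"] == "preference", "store"),
]

def decide(fact: dict) -> tuple[str, str]:
    """Return (action, rule_id): every outcome is traceable to a named rule."""
    for rule_id, matches, action in RULES:
        if matches(fact):
            return action, rule_id
    return "skip", "default_skip"

print(decide({"kind": "decision", "text": "Chose pgvector over a managed vector DB"}))
# -> ('store', 'store_decision')
```

The same input always yields the same decision and the same audit trail — the consistency an LLM-in-the-loop can't guarantee.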