Linubra
Published in The Linubra Journal

Why We Built Linubra

Most 'second brain' tools make you do all the work. We built one that thinks.

Part 1 of the series: Building Linubra

Patrick Lehmann
· 7 min read
[Figure: Raw inputs — audio waveforms, text snippets, images — flowing through a reasoning layer into a structured knowledge graph]

Every knowledge worker has lived the same moment. You had a conversation three weeks ago — maybe a lunch meeting, a phone call, a quick hallway chat — and now you need a specific detail. A name. A number. A commitment someone made.

You open your note-taking app. Nothing. You search your email. Close, but not quite. You dig through your calendar, trying to reconstruct the context. Twenty minutes later, you either find a fragment or give up entirely.

This is the problem we set out to solve. Not with a better notes app. With a fundamentally different kind of tool.

TL;DR: Knowledge workers spend 9.3 hours per week just searching for information they already have (McKinsey Global Institute, 2012). Manual note-taking tools can’t fix this — productivity apps see a day-30 retention rate of just 4.1% because the maintenance burden is unsustainable. A Reasoning Memory Engine captures raw input and builds the knowledge graph automatically, turning voice notes and quick captures into queryable, structured memory.


Why Does Every Note-Taking System Eventually Fail?

Productivity apps retain only 17.1% of users after day one, dropping to 4.1% by day 30 (Growth-onomics, 2025). The pattern is the same whether you’re using Obsidian, Notion, or Roam Research: the system demands more from you than it gives back.

These tools popularised the idea of a “second brain” — a digital extension of your memory where you capture, organise, and retrieve knowledge. The concept is genuinely powerful. But the execution has a structural problem.

You have to do all the work.

Every insight must be manually typed. Every connection manually linked. Every piece of context manually tagged. The “second brain” isn’t a thinking partner — it’s a filing cabinet that requires constant upkeep.

For the rare disciplined note-taker, this works. But what about the rest of us? The ones with back-to-back meetings, who think while walking, who have their best ideas in the shower? For most people, the second brain stays empty — not from lack of intent, but from the sheer weight of the maintenance tax.

[Figure: The Productivity App Retention Cliff — users drop from 100% at install to 17% on day 1, 11% on day 7, 4% on day 30, and 2% by day 90]


What Happens When You Record Everything but Understand Nothing?

According to the McKinsey Global Institute, knowledge workers spend 1.8 hours every day — 9.3 hours per week — searching for and gathering information (McKinsey, 2012). A separate IDC study put the figure even higher at 2.5 hours per day. The problem isn’t capture. It’s retrieval.

On the opposite end of the spectrum from manual note-taking, tools like Rewind and Limitless take a passive approach. They record everything — screen activity, ambient audio, meeting transcripts — creating an exhaustive log of your digital life.

The problem? Retrieval is shallow. You get keyword search over transcripts. You get timestamped recordings. What you don’t get is understanding.

Ask these tools “What did Mark say about the Q3 budget?” and you’ll get a transcript snippet. Ask “How has Mark’s position on the Q3 budget evolved over the last month?” and you get silence. They capture data. They don’t build knowledge. Is there a meaningful difference between losing a memory and being unable to find it?

[Figure: Hours Per Week Searching for Information at Work — McKinsey 9.3h, IDC 12.5h, Glean/Harris 10h — three independent studies, same conclusion]


What If the System Did the Thinking?

The speech recognition market is projected to reach $47 billion by 2030, growing at a 14.2% CAGR (Statista/Grand View Research, 2024). Voice-first capture is becoming the dominant input mode — and it changes what’s architecturally possible. When capture is effortless, the bottleneck shifts from input to processing.

We built a tool that occupies the space between manual note-taking and passive life-logging. Like life-logging tools, capture is effortless — speak into your phone, share a link, jot a quick note. No manual organisation required.

But unlike life-logging tools, it doesn’t just store what you said. It reasons about it.

When you record a voice note about a lunch meeting, the AI engine:

  1. Extracts structured data — people mentioned, action items, decisions made, sentiment, location, and time
  2. Resolves entities — recognising that “Dr. Schmidt,” “Patricia,” and “the neuroscientist from Berlin” are the same person
  3. Embeds the memory semantically — placing it in a vector space where similar concepts cluster together
  4. Detects contradictions — flagging when new information conflicts with what you’ve previously recorded
  5. Connects to your knowledge graph — linking people to events, projects to organisations, commitments to deadlines

The result isn’t a transcript. It’s a knowledge graph — a living, queryable model of your world that grows smarter with every interaction. The five steps above happen automatically, in the background, while you move on with your day. That’s what separates a Reasoning Memory Engine from a notes app with an AI bolt-on.
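To make the pipeline shape concrete, here is a minimal Python sketch of steps 1–3 — extraction, entity resolution, and embedding. This is an illustration, not Linubra's implementation: the `Memory` dataclass, the hard-coded `ALIASES` table, and the stubbed embedding are all hypothetical stand-ins for what the AI engine derives automatically.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """One processed capture: raw text plus everything the engine derived from it."""
    raw_text: str
    people: list[str] = field(default_factory=list)
    action_items: list[str] = field(default_factory=list)
    embedding: list[float] = field(default_factory=list)
    contradictions: list[str] = field(default_factory=list)

# Toy alias table; the real engine learns these mappings from context.
ALIASES = {
    "dr. schmidt": "Patricia Schmidt",
    "patricia": "Patricia Schmidt",
    "the neuroscientist from berlin": "Patricia Schmidt",
}

def resolve_entity(mention: str) -> str:
    """Step 2: map a surface mention to its canonical entity."""
    return ALIASES.get(mention.lower().strip(), mention)

def process(raw_text: str, mentions: list[str], actions: list[str]) -> Memory:
    """Steps 1-3 in miniature: extract, resolve, embed (embedding stubbed)."""
    people = sorted({resolve_entity(m) for m in mentions})
    return Memory(raw_text=raw_text, people=people, action_items=actions,
                  embedding=[0.0])  # real system: semantic vector from a model

m = process("Lunch with Dr. Schmidt about the Berlin study",
            ["Dr. Schmidt", "Patricia"], ["send follow-up paper"])
print(m.people)  # → ['Patricia Schmidt']
```

Two different mentions collapse into one canonical person, and the structured record — people, action items, embedding slot — is what gets linked into the graph, not the raw transcript.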


How Does This Work in Practice?

Knowledge graph-enhanced retrieval improves factual accuracy by 13.6% and answer quality by 22.9% compared to traditional search approaches (Nature Scientific Reports, 2025). What does that look like in practice? Instead of searching through transcripts, you can ask questions that require synthesis — not just lookup:

  • “Brief me on everything related to Project Aurora before tomorrow’s meeting”
  • “When is Sarah’s birthday? I think she mentioned it last month”
  • “What commitments did I make to the engineering team this quarter?”
  • “How has my running pace changed over the past 6 weeks?”

The system doesn’t just find relevant recordings. It synthesises an answer, cites its sources, and surfaces connections you might have missed. It can even detect cross-domain patterns — like the relationship between your work stress and your running injury risk — because the knowledge graph connects data that would otherwise live in separate silos.

90% of users say voice input feels easier than typing (DemandSage, 2026). We’ve found that when you remove the friction of capture, people naturally start recording more — and the knowledge graph becomes richer, faster.


Why Did We Bet on Long-Context AI?

Google’s Gemini 1.5 Pro demonstrated >99.7% recall in “needle in a haystack” retrieval tests across text, video, and audio at up to 1 million tokens (Google DeepMind, 2024). That capability is what made the architecture possible. We needed a model that could hold hundreds of memories in a single context, resolve ambiguous references across weeks of conversation, and extract structured data from messy, unstructured speech.

Specifically, we bet on Google’s Gemini 3 Pro and its ability to:

  • Process long audio files directly (no separate transcription step)
  • Extract structured data from unstructured speech with high fidelity
  • Maintain consistency across hundreds of memories in a single context window
  • Resolve ambiguous entity references across conversations separated by weeks

So far, the bet is paying off. A 2025 study in Nature Scientific Reports found that knowledge graph-enhanced RAG improves factual accuracy by 13.6% over baseline retrieval-augmented generation (Nature, 2025) — which validates the architectural choice of building a structured graph rather than relying on flat vector search alone.
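The difference between flat vector search and graph-enhanced retrieval can be sketched in a few lines. In this simplified illustration (toy embeddings and entity links, not our production retrieval), vector similarity seeds the result set, and shared graph entities then pull in related memories that pure similarity would miss.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy store: memory id -> (embedding, entities linked in the graph).
MEMORIES = {
    "m1": ([1.0, 0.0], {"Mark", "Q3 budget"}),
    "m2": ([0.9, 0.1], {"Mark"}),
    "m3": ([0.0, 1.0], {"running"}),
}

def hybrid_retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Seed with top-k vector hits, then expand via shared graph entities."""
    ranked = sorted(MEMORIES,
                    key=lambda m: cosine(query_vec, MEMORIES[m][0]),
                    reverse=True)
    seed = ranked[:k]
    entities = set().union(*(MEMORIES[m][1] for m in seed))
    expanded = {m for m, (_, ents) in MEMORIES.items() if ents & entities}
    return sorted(expanded)

print(hybrid_retrieve([1.0, 0.0]))  # → ['m1', 'm2']
```

With k=1, flat vector search alone would return only m1; the graph hop through the shared entity "Mark" also surfaces m2 — the kind of connection that lets the system answer "how has Mark's position evolved" rather than just "find the Mark transcript."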

But we’re honest about the trade-offs — this approach is compute-intensive, and the accuracy of entity resolution across very long time horizons is still an active area of improvement. We’ve written more about our privacy architecture and why your data never trains the model.


What’s Next in This Series?

This post is the first in a series documenting how the system works under the hood. In upcoming posts, we’ll cover:

  • The hidden cost of your second brain — why we eliminated the Maintenance Tax
  • The Knee, the Board Meeting, and a Pattern — how cross-domain pattern detection caught an injury three months early
  • Your Life Is Not Training Data — what data sovereignty actually means when your AI handles your most sensitive memories
  • The Knowledge Graph architecture — PostgreSQL, pgvector, and why we chose a property graph over a triple store
  • Entity resolution at scale — how the system decides that “Mark” and “Marcus from accounting” are the same person

If you’re building in the AI-augmented knowledge space, or if you’re tired of losing important details to the void between your meetings and your notes, we’d love to hear from you.


Frequently Asked Questions

How much time do knowledge workers spend searching for information?

Multiple independent studies converge on the same finding: knowledge workers lose 20-30% of their workweek to information retrieval. McKinsey Global Institute measured 9.3 hours per week (McKinsey, 2012), IDC found 2.5 hours per day, and a 2022 Glean/Harris Poll reported 25% of the workweek (Glean, 2022). The problem isn’t new — it’s persistent.

What is a Reasoning Memory Engine?

A Reasoning Memory Engine captures raw inputs — voice, text, images — and automatically extracts entities, builds connections, detects contradictions, and surfaces patterns. Unlike traditional note-taking tools that require manual tagging and linking, it constructs the knowledge graph from your raw experience rather than from your administrative effort.

How is this different from a notes app with AI features?

Most AI-powered notes apps add features like summarisation or search on top of a traditional note-taking model. You still have to capture, file, and maintain your notes. A Reasoning Memory Engine replaces the entire pipeline — capture is voice-first, processing is automatic, and the knowledge graph builds itself. The difference is architectural, not cosmetic.

Why do most people abandon their productivity apps?

Productivity apps see a day-30 retention rate of just 4.1% (Growth-onomics, 2025). The core issue is that the maintenance burden exceeds the retrieval value. People stop using the tool not because they don’t need it, but because the overhead of keeping it useful is unsustainable. The solution isn’t better habits — it’s a system that doesn’t require maintenance.

Does voice input actually work for capturing knowledge?

90% of users report that voice input feels easier than typing (DemandSage, 2026). The speech recognition market is projected to reach $47 billion by 2030 (Grand View Research, 2024), driven by accuracy improvements that have made voice-first workflows viable for professional use. Combined with AI that can extract structured data from natural speech, voice becomes the lowest-friction capture method available.


A Reasoning Memory Engine captures raw life logs — voice, text, images — and builds a structured Knowledge Graph automatically. Your data stays yours.


Written by Patrick Lehmann

Software Architect & AI Engineer

Founder of Linubra. Building tools that capture reality and retrieve wisdom. Software architect with a passion for AI-powered knowledge systems and the intersection of memory science and technology.

