The Enterprise Brain: Why Someone Will Make a Fortune Solving Knowledge Work’s Biggest Problem
Someone is going to build a world-class “Brain” for enterprises and make a stupid amount of money.
Why? As David Fant said, “coding with AI is solved because all context is in the git repo. Knowledge work is difficult because context is spread out. An AI system that creates a git repo with all context for a knowledge worker will be able to 100% automate the work.”
When companies talk about being data ready for AI, this is what they’re implicitly saying.
Engineering’s Advantage
Engineering has been prepared for this moment for a long time because of the deterministic nature of code, the centralization and versioning of data (read: GitHub), and AI tools that are largely built by engineers for engineers.
But for the rest of white-collar work, there’s a TON of catching up to do to properly harness the power of the technology.
The Big Challenge
The big challenge, and the reason no one has truly cracked “an AI system that creates a git repo with all context for a knowledge worker,” is that unlike code, most knowledge is 1) distributed, 2) unstructured, and 3) unverifiable.
It’s Distributed
Transcripts live in Granola. Documents in Notion. Customer data in HubSpot.
Building an ingestion engine that connects to your disparate data sources and auto-updates based on the shelf life of the data is the first, and frankly the easiest, step of the process.
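To make the shelf-life idea concrete, here’s a minimal sketch of that kind of ingestion engine. Everything here is an assumption for illustration: the source names, the `fetch` callables standing in for real connectors, and the per-source TTLs are all hypothetical, not a real integration with any of the tools mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    """A connected data source with a shelf life: how long its data stays fresh."""
    name: str
    shelf_life_s: float                       # e.g. call transcripts go stale faster than old proposals
    last_synced: float = float("-inf")        # never synced yet
    records: list = field(default_factory=list)

class IngestionEngine:
    def __init__(self, fetchers):
        # fetchers maps a source name -> (shelf_life_seconds, callable returning fresh records)
        self.fetchers = fetchers
        self.sources = {name: Source(name, ttl) for name, (ttl, _) in fetchers.items()}

    def sync(self, now):
        """Re-pull only the sources whose data has outlived its shelf life."""
        refreshed = []
        for name, (ttl, fetch) in self.fetchers.items():
            src = self.sources[name]
            if now - src.last_synced >= ttl:
                src.records = fetch()
                src.last_synced = now
                refreshed.append(name)
        return refreshed
```

A scheduler would call `sync` on a timer; a source with a one-hour shelf life gets re-pulled hourly while a slow-moving archive is left alone, which keeps the brain current without hammering every API on every tick.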
It’s Unstructured
Let’s say I want to create a proposal for a potential client. To nail the proposal, I want it to pull important information from a variety of sources. The specific asks and background from our initial sales call. Previous proposals to anchor ourselves to a proven format. And completed sprint boards from Linear, so the pricing and timeline in the document is grounded in truth.
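The steps above amount to assembling a per-task “repo” of context. A minimal sketch of that assembly, with hypothetical record shapes and source keys (nothing here reflects the actual Granola, Notion, or Linear APIs):

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str   # hypothetical source key, e.g. "granola", "notion", "linear"
    kind: str     # "transcript", "proposal", "sprint_board", ...
    client: str
    text: str

def build_proposal_context(records, client):
    """Gather the context a proposal needs: the client's sales calls,
    past proposals as format anchors, and completed sprint boards so
    pricing and timeline are grounded in what actually shipped."""
    return {
        "sales_calls": [r for r in records if r.kind == "transcript" and r.client == client],
        "past_proposals": [r for r in records if r.kind == "proposal"],
        "sprint_boards": [r for r in records if r.kind == "sprint_board"],
    }
```

The hard part, of course, is not the filtering itself but knowing which records belong in each bucket in the first place, which is exactly the structuring problem this section is about.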
Whether it’s a thoughtful filesystem (a la Obsidian) or an OpenClaw-esque memory structure, the brain needs to be great at self-organizing into a thoughtful schema. This is very hard, especially if you want to build a generalizable brain that can be shaped to an array of different enterprises.
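For the filesystem flavor of this idea, one simple building block is deriving a stable path from a record’s metadata, so new knowledge always lands in a predictable, browsable place. A sketch, with an assumed path template (client, then record kind) that a real system would need to adapt per enterprise:

```python
import re

def slug(s):
    """Lowercase and collapse non-alphanumerics into hyphens, for safe path segments."""
    return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")

def route(record):
    # The template itself is the schema decision; this one is a hypothetical default.
    return f"clients/{slug(record['client'])}/{slug(record['kind'])}/{slug(record['title'])}.md"
```

The point of a deterministic router like this is that both humans and retrieval tools can find things without a search step; the generalizability problem is that every enterprise wants a different template.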
It’s Unverifiable
Writing a function, running a unit test, and seeing if the code works is easy. It works or it doesn’t. Using AI to accelerate your content creation process is highly subjective. What is a good or bad idea? Is the content in your voice or not? Does it feel like slop or like something novel? Answering these questions is both difficult and non-verifiable.
That same system doesn’t just have to be great at organizing and forming coherent relationships; it also has to be great at self-improving based on feedback from the user. Memory systems (like those introduced by OpenClaw) work to a point, but as the corpus of data within your company’s brain grows, compaction and cleaning become wildly important to avoid the needle-in-a-haystack problem.
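A toy version of that compaction step might look like the following. The scoring formula (feedback plus recency) and the memory fields are assumptions for illustration, not how OpenClaw or any real memory system actually works:

```python
def compact(memories, max_items):
    """Naive compaction: drop near-duplicate entries (same text after
    whitespace/case normalization), then keep only the highest-signal
    memories so retrieval doesn't degrade as the corpus grows."""
    seen, unique = set(), []
    for m in memories:
        key = " ".join(m["text"].lower().split())
        if key not in seen:
            seen.add(key)
            unique.append(m)
    # Hypothetical signal score: explicit user feedback plus a recency weight.
    unique.sort(key=lambda m: m["feedback"] + m["recency"], reverse=True)
    return unique[:max_items]
```

Real systems would fold in semantic deduplication and summarization rather than exact-match keys, but the shape of the loop is the same: user feedback flows back in as a ranking signal, and low-signal entries get pruned before they bury the needle.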
The Opportunity
Someone is going to figure out how to solve this problem. And when they do, not only will they make a ton of money, but they’ll be Robinhood for knowledge workers.
The company that cracks this will need to solve three interconnected challenges: building ingestion engines for distributed data, creating self-organizing schemas for unstructured information, and developing feedback loops for unverifiable outputs.
This isn’t just a business opportunity. It’s the key to unlocking AI’s full potential for everyone outside of engineering. The race is on to build the enterprise brain that finally brings knowledge workers into the AI era.
Keep an eye on this space.
