We stand at a peculiar moment. AI can generate poetry that captures emotion, create images indistinguishable from reality, and answer every question on the SAT math section correctly. Yet it struggles to perform the professional tasks that occur thousands of times each day: an analyst at Goldman Sachs building a DCF, a partner at Latham & Watkins structuring a merger, a radiologist at Mayo Clinic identifying a tumor pattern.
This reveals an intelligence paradox. We are solving problems we once thought were hard (general reasoning, pattern recognition, creativity) while the problems individual experts find routine remain out of reach. The bottleneck is not computational, it's epistemological: we don't lack processing power, we lack knowledge and understanding.
Professional knowledge doesn't live in textbooks. It exists in the space between keystrokes, in the pause before a diagnosis, in the intuition that tells a developer this architecture will scale and that one won't. Models have never been trained on this knowledge because, until now, there was no way to capture it.
Consider what happens when an expert performs their craft: A quant at Citadel doesn't just run models; they navigate a complex decision tree built from thousands of market scenarios, each weighted by experience.
This tacit knowledge ("knowing more than we can tell") represents trillions of dollars of economic value locked inside human minds. It's the difference between a model that can pass the bar exam and a model that can actually practice law.
The breakthrough isn't coming from bigger models or more compute. It's coming from a reimagining of how we teach machines to think.
Reinforcement learning transformed coding (see incredible products like Cursor and Claude Code) because code is verifiable: it either compiles or it doesn't, and tests either pass or fail. This binary feedback created a clear reward signal that models could optimize against. But most professional work isn't binary. It exists in gradients of correctness that only experts can perceive.
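To make the contrast concrete, here is a minimal sketch of that kind of binary reward, assuming a hypothetical harness in which the candidate program and its tests are written to temporary files and checked with py_compile and pytest; the file names and tooling are illustrative, not a description of any particular training pipeline.

```python
import os
import subprocess
import tempfile


def binary_code_reward(candidate_code: str, test_code: str) -> float:
    """Return 1.0 only if the candidate compiles and every test passes, else 0.0.

    Illustrative sketch: the temp-file layout and the pytest dependency are
    assumptions, not a description of any specific RL training stack.
    """
    with tempfile.TemporaryDirectory() as workdir:
        with open(os.path.join(workdir, "solution.py"), "w") as f:
            f.write(candidate_code)
        with open(os.path.join(workdir, "test_solution.py"), "w") as f:
            f.write(test_code)

        # "Does it compile?" check: byte-compile the candidate file.
        compiled = subprocess.run(
            ["python", "-m", "py_compile", "solution.py"],
            cwd=workdir, capture_output=True,
        )
        if compiled.returncode != 0:
            return 0.0

        # "Do the tests pass?" check: run the test file with pytest.
        tested = subprocess.run(
            ["python", "-m", "pytest", "test_solution.py", "-q"],
            cwd=workdir, capture_output=True,
        )
        return 1.0 if tested.returncode == 0 else 0.0
```

A reward like this is all-or-nothing by construction, which is exactly why it works so well for code and fails for work whose quality lives in the reasoning.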
We're now entering the age of granular evaluation. Instead of asking "Is this answer correct?", we're building grading rubrics that capture how experts actually think:
Not just whether the answer to "calculate NVIDIA's return on equity for FY 2025" is exactly 106.48%, but whether FY 2025 net income is calculated correctly, whether the FY 2024 and FY 2025 shareholders' equity balances are correctly identified, and whether the average-equity formula is applied to them.
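A minimal sketch of what such a rubric-based grader could look like, assuming a hypothetical setup in which the model's worked solution is parsed into a dict and each expert-written criterion is a weighted check; the field names, weights, and reference values are placeholders for illustration, not AfterQuery's actual grading format.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class RubricCriterion:
    """One expert-written check with a weight (names here are illustrative)."""
    description: str
    weight: float
    check: Callable[[dict], bool]  # inspects the model's worked solution


def rubric_reward(solution: dict, rubric: list[RubricCriterion]) -> float:
    """Weighted partial credit over reasoning steps, not just the final answer."""
    total = sum(c.weight for c in rubric)
    earned = sum(c.weight for c in rubric if c.check(solution))
    return earned / total


# Placeholder reference values; a real rubric would take them from the filings.
REF_NET_INCOME_M = 72_880   # FY 2025 net income in $M (illustrative figure)
REF_ROE = 1.0648            # the 106.48% target answer from the prompt above

roe_rubric = [
    RubricCriterion("FY 2025 net income is correct", 0.3,
                    lambda s: abs(s.get("net_income_m", 0) - REF_NET_INCOME_M) < 50),
    RubricCriterion("FY 2024 and FY 2025 equity are averaged", 0.3,
                    lambda s: "avg_equity_m" in s),
    RubricCriterion("Final ROE matches the reference answer", 0.4,
                    lambda s: abs(s.get("roe", 0) - REF_ROE) < 0.01),
]
```

An outcome-only grader would score a solution that nails the inputs and the averaging but fumbles the final division as 0.0; the rubric above still pays out 0.6 for the correct intermediate steps.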
Without signals on their reasoning, models will never learn complex solutions. This shift from outcome to process is how we'll crack the professional domains that create real economic value. With grading rubrics and RL, models will quickly improve in less verifiable but crucial domains (finance, medicine, law, etc.).
The foundation model layer is rapidly commoditizing. The differences between GPT, Claude, and Gemini are shrinking with each model release. When everyone has access to the same transformer architectures and training compute (Google and OpenAI are very well capitalized!), competitive advantages must come from somewhere else.
Data is that somewhere else. Data that captures how a Supreme Court clerk crafts arguments or how a surgeon decides between approaches. This is data that can't be scraped from the internet because it was never written down. This is data that can't be synthetically generated because models have never seen examples of it. It lives in the minds of the 1% of professionals who define excellence in their fields.
The companies that win the next decade will be those that help extract, structure, and encode professional expertise into formats that machines can learn from. Immense enterprise value will accrue to the models that are useful for professional work.
Redefining the future of work is inevitable. Economic singularity is inevitable. We're helping to create a future in which technology completely reshapes the economy.
Today, professional expertise is scarce. A company can only hire so many 10x engineers. This scarcity creates economic inefficiency: not because the work is impossible, but because there aren't enough experts to do it.
Now imagine a world where expertise is abundant. Where every startup has access to partner-level legal counsel. Where every patient has access to Mayo Clinic-level diagnosis on their smartphones at near-zero cost.
We're building the bridge between the current age of impressive but impractical AI and the coming age of professional automation. We're creating the training infrastructure that will turn foundation models from brilliant generalists into domain-specific experts.
In a few years, the question won't be whether AI can do professional work. It will be whether any professional work requires humans. When expertise becomes infinitely scalable, human judgment becomes infinitely valuable. When basic professional tasks are automated, humans are freed to tackle problems we can't even conceive of today. We're building the data infrastructure for this future. Domain by domain, profession by profession, we're encoding excellence into forms that machines can learn from.
The company that successfully captures and encodes professional knowledge will reshape the economy. That company is AfterQuery.
— Spencer Mateega, Carlos Georgescu, Danny Tang
We're assembling a team of exceptional individuals who understand that data is the blueprint for intelligence.
AfterQuery is hiring engineers, researchers, and domain experts who want to build the bridge to artificial professional intelligence.
The future of work is being written. Help us write it.