Mercor positions itself as a $10B middleman in AI’s data rush
Mercor, a three-year-old startup that has surged to a reported $10 billion valuation, is building a business around a simple premise: the next wave of AI progress depends less on cheap, crowdsourced labeling and more on scarce, high-skilled expertise. The company connects leading AI labs such as OpenAI and Anthropic with experienced professionals—often alumni of firms like Goldman Sachs, McKinsey, and elite law practices—who can provide domain knowledge, evaluate outputs, and help train models that may ultimately automate parts of the very industries they came from.
The company’s approach was highlighted in a recent on-stage conversation featuring Mercor CEO Brendan Foody, recorded at a major startup conference. In that discussion, Foody laid out why he believes the market is shifting toward “expert-in-the-loop” model improvement, how disruptions among large data and labeling vendors opened space for new intermediaries, and why he expects the broader economy to “converge” on training AI agents as a core activity.
Why AI labs are paying up to $200 an hour for expertise
Foody argued that frontier models increasingly benefit from specialized judgment rather than the massive datasets and low-cost labeling at scale that delivered step-function improvements in earlier phases of AI development. In fields such as finance, consulting, and law, small nuances can determine whether an answer is merely plausible or truly correct, compliant, and useful. That is where Mercor aims to operate: sourcing vetted experts who can provide high-quality feedback that helps models learn industry-specific reasoning and language.
Foody said that a relatively small share of contributors can drive an outsized share of progress. In his telling, the top tier of contractors, roughly the best 10% to 20%, produces the majority of meaningful model improvement. If that dynamic holds, it changes the economics of AI training: it becomes less about maximizing throughput and more about finding and retaining the rare people who can reliably spot subtle errors, craft high-signal examples, and set standards for what "good" looks like in a professional context.
From crowdsourcing to “elite contracting”
Mercor is effectively betting that AI development is entering a phase where “who” provides feedback matters as much as “how much” feedback is collected. The startup’s pitch is that elite contractors can outperform large pools of anonymous crowd labor when tasks require contextual judgment, deep familiarity with regulations, or an ability to reason through edge cases.
This shift also reflects the rising stakes for AI labs racing to deploy systems into real workflows. As models move from demos into revenue-generating products, errors can become expensive: a hallucinated legal citation, a flawed financial calculation, or a misinterpreted policy can create compliance and reputational risks. Hiring experienced professionals to stress-test and refine outputs is one way labs attempt to make models more dependable.
Scale AI turmoil and a market opening for new intermediaries
Foody also pointed to industry turbulence as a catalyst for Mercor’s rise, referencing reported troubles at Scale AI, one of the best-known providers of data-labeling infrastructure. While the details and competitive dynamics vary by customer and use case, the broader point is that AI labs are increasingly diversifying suppliers and experimenting with new training pipelines—especially as they push into harder problems that require expert-level evaluation.
In that environment, a company that can quickly assemble specialized labor—financial analysts, former consultants, seasoned attorneys, or other credentialed professionals—can become strategically valuable. Mercor is positioning itself as the broker that can deliver that workforce on demand, at a quality level that labs believe improves model performance faster than generalized labeling programs.
The gray zone: employee knowledge vs. corporate secrets
One of the most sensitive issues raised by Foody’s comments is the boundary between legitimate professional expertise and protected corporate information. When former employees of major institutions help train AI systems, they may draw on lessons learned inside those organizations. That can be entirely lawful—people are allowed to use general skills and knowledge—but it can also drift into risky territory if it involves confidential data, proprietary processes, or non-public strategies.
Foody’s framing acknowledged a “gray area” that companies and contractors must navigate. For large employers, the concern is not only whether trade secrets are disclosed, but whether AI systems could be indirectly tuned to replicate internal best practices at scale. For contractors, the risk is that a well-intentioned attempt to be helpful could cross compliance lines, especially in heavily regulated sectors.
For firms such as Goldman Sachs and top consultancies, the question is increasingly practical: should they treat outside model training work as a routine side gig, or as a potential leakage channel that demands tighter policies, monitoring, and enforcement?
Foody’s thesis: all knowledge work becomes training data
Foody argued that the long-term arc of AI will pull more of the economy into the act of training AI agents. In this view, “knowledge work” does not disappear overnight; instead, it transforms. Professionals spend more time supervising, evaluating, and shaping AI outputs—creating feedback loops that turn expertise into structured training signals.
That thesis implies a future in which many workers are, in effect, part-time AI trainers, and specialized intermediaries like Mercor become a new labor layer between individuals and the labs building models. It also suggests a new kind of career leverage: people with rare, high-value judgment may be able to monetize their expertise directly, while more routine work becomes increasingly automated.
What comes next for Mercor and the AI labor market
Mercor’s growth highlights a key tension in the AI boom: the models are built to reduce the cost of knowledge work, yet their improvement increasingly relies on paying top experts premium rates. Whether that remains true as models mature—or whether new techniques reduce the need for human expert feedback—will shape how durable this “elite contracting” market becomes.
In the near term, the company’s rise underscores how AI’s infrastructure is expanding beyond chips and cloud. The less visible layer—human expertise, evaluation, and domain-specific training—may prove just as decisive in determining which labs build the most capable, trusted systems, and which workers benefit from the transition.