Meta Platforms (META) CEO Mark Zuckerberg's ambitious pursuit of artificial general intelligence (AGI) is fueling a spending spree reminiscent of past tech bubbles. The company's aggressive recruitment of top AI researchers, including a reported $250 million, four-year contract for a 24-year-old researcher, signals a prioritization of spending over profitability.
This mirrors the dot-com boom, when companies spent heavily, particularly on personnel, to signal their importance. Zuckerberg's strategy of dangling outsized bonuses to lure talent from competitors such as OpenAI, Google (GOOG)(GOOGL), and Anthropic underscores this approach. While some sources claim that Anthropic's CEO rejected Meta's offers, the overall impact on the AI talent market is undeniable.
Meta's spending is not unique. Other tech giants, such as Microsoft (MSFT), are running similar recruitment drives, producing a fiercely competitive market for AI engineers and researchers. Reports indicate a significant rise in average salaries for these roles, with some top candidates receiving compensation packages exceeding $900,000 annually.
These salaries dwarf those of prominent researchers in the past. The compensation of Meta's latest recruit, for example, significantly exceeds the historical earnings of renowned figures like Robert Oppenheimer and Thomas Watson, even when adjusted for inflation.
This exorbitant spending is predicated on several assumptions. The belief in the imminent arrival of AGI, the expectation of substantial commercial value from large language models (LLMs), and the assumption that only top-tier researchers can achieve AGI are all central to this investment strategy. However, experts question the validity of these assumptions.
Some leading AI researchers, including Yann LeCun, Meta's chief AI scientist, express skepticism about the timeline for achieving AGI. LeCun has described current AI as less intelligent than a cat and indicated that true AGI remains years away. The limitations of current LLMs are also causing concern: these models often struggle with critical thinking and real-world understanding, requiring extensive human intervention to correct errors and ensure accuracy.
The commercial viability of LLMs also faces uncertainty. Their inherent limitations raise concerns about their reliability for critical decision-making. Incidents such as a reported case of near-fatal bromide poisoning, attributed to reliance on ChatGPT for medical advice, highlight the risks of deploying LLMs in high-stakes applications.
The assumption that only elite researchers can advance AGI is also questionable. Historical examples, such as the success of Bell Labs, which prioritized diverse talent over strictly academic credentials, demonstrate that significant technological breakthroughs can emerge from a far broader pool of people.
The current AI spending spree raises questions about its long-term sustainability. The high cost of talent, coupled with the inherent uncertainties surrounding AGI and the commercial viability of LLMs, casts doubt on the economic rationality of this approach. The situation appears to mirror historical tech bubbles, where rapid spending eventually gave way to a correction. Whether the current AI boom will follow a similar trajectory remains to be seen.










