By now, news about artificial intelligence (AI) comes out on a daily, if not intra-daily, basis: a signal that it is indeed a topic of broad interest as well as a leading investment theme. However, we believe that only a few of these stories are truly noteworthy, at least from a financial point of view. This week we highlight two that are markedly opposed and, for that very reason, all the more interesting.
The first concerns Meta and its planned $15 billion investment to acquire 49 percent of Scale AI, a startup specializing in data labeling and model evaluation services for machine learning. This acquisition is positioned within a larger project that sees Meta Platforms Inc. engaged in the creation of a new artificial intelligence research lab (about which very little is known) dedicated to achieving “superintelligence,” a hypothetical AI system capable of surpassing human cognitive abilities in all domains. Known as Artificial Superintelligence (ASI), it is a theoretical (and we emphasize theoretical) form of AI that would far surpass human intelligence in every respect, including problem solving, reasoning, creativity, and emotional understanding. It represents the highest stage of AI development, beyond so-called Artificial General Intelligence (AGI), which aims to replicate human cognitive abilities and is itself, for the time being, a purely theoretical challenge.
The impetus behind this new lab seems to stem from Zuckerberg’s frustration with Meta’s perceived shortcomings in AI, including the mixed reception of the Llama 4 model and delays in its most ambitious model, Behemoth. Meta’s bold goal is to outdo other tech companies in this race: competitors such as Google, OpenAI, and Anthropic have unveiled a new generation of powerful “reasoning” models that solve problems by breaking them down step by step, and here we are already talking about AGI. However, Meta wants to go beyond AGI, with the long-term vision of integrating ASI’s advanced capabilities into its vast suite of products, including social media platforms, communication tools, chatbots, and AI-enabled devices such as Ray-Ban smart glasses. Meta has also factored in the enormous amount of energy these models require and has signed an agreement with Constellation Energy, the leading operator of nuclear power plants in the United States, to purchase all of the energy produced by the Clinton power plant in Illinois for 20 years starting in 2027.
And now the cold shower. In a paper published last weekend, Apple claimed that Large Reasoning Models (LRMs), the presumed basis of AGI and perhaps even ASI (because plain LLMs surely cannot be), suffer a “complete collapse in accuracy” when subjected to very complex problems. These are puzzle problems such as the famous Tower of Hanoi.
It turns out that standard AI models outperform LRMs on low-complexity problems, while both types of model suffer a “complete collapse” on high-complexity ones. LRMs attempt to solve complex questions by generating detailed thought processes that break the problem down into smaller sub-problems to be tackled sequentially. It is worth pointing out that when we talk about AI, complexity here means “computational difficulty,” or, simplifying for explanatory purposes, combinatorial problems.
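To make this notion of combinatorial difficulty concrete, here is a minimal sketch (our illustration, not code from the Apple paper) of the classic recursive solution to the Tower of Hanoi, the kind of puzzle used in the study. The point is that the optimal solution requires 2^n − 1 moves for n disks, so difficulty here means combinatorial explosion, not conceptual subtlety: the solving algorithm itself is trivial.

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from `source` to `target`, using `spare` as auxiliary.

    Appends each move as a (from_peg, to_peg) pair to `moves`.
    """
    if n == 0:
        return
    # Move the top n-1 disks out of the way...
    hanoi(n - 1, source, spare, target, moves)
    # ...move the largest disk to its destination...
    moves.append((source, target))
    # ...then move the n-1 disks on top of it.
    hanoi(n - 1, spare, target, source, moves)


for n in (3, 10, 20):
    moves = []
    hanoi(n, "A", "C", "B", moves)
    # The optimal solution length is 2**n - 1: exponential in n.
    assert len(moves) == 2**n - 1
    print(f"{n} disks -> {len(moves)} moves")
```

Going from 10 disks (1,023 moves) to 20 disks (1,048,575 moves) multiplies the work by roughly a thousand: this is the scale of growth an LRM faces as puzzle size increases, even though the algorithm never changes.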
The study, which tested the models’ ability to solve puzzles, added that as LRMs approached performance collapse they began to “reduce their reasoning effort.” Models such as OpenAI’s o3, Google’s Gemini Thinking, Anthropic’s Claude 3.7 Sonnet-Thinking, and DeepSeek-R1 were tested. Apple researchers said they found this evidence “particularly troubling.”
The paper also found that LRMs waste computational power (and thus electricity) on simpler problems, finding the right solution early in their “thinking” and then continuing to explore. When problems become slightly more complex, the models explore wrong solutions first and arrive at the correct ones later. For problems of greater complexity, the models enter “collapse,” failing to generate any correct solution. In one case, the models failed even when provided with an algorithm that would solve the problem. As they approach a critical threshold (which corresponds closely to the point where their accuracy collapses), the models counter-intuitively begin to reduce their reasoning effort despite the increasing difficulty of the problem. This portends a substantial scalability limit in the reasoning capabilities of current LRMs, that is, a fundamental limit in generalization, a property that any AI or machine learning model must possess.
These results are certainly not encouraging, and above all it is hard to see how Meta can succeed in developing superintelligence models, even while devoting significant resources (largely to be financed through advertising revenues) to grabbing the best minds in the field. However, the history of AI is punctuated by cycles of abandonment and revival, as in the case of neural networks: set aside for a long time once it was established that adding many layers of “neurons” contributed little beyond the value provided by the last one, and then back in a big way since 2017 with deep learning. In the meantime, machine learning research had focused on other areas and models (e.g., kernel methods) that are no less useful and important.
What seems likely to date is that AGI/ASI development will not run through LRMs, but rather through different types of models or revisions of those currently under study. What seems to be emerging under the investor’s lens is a reconsideration and careful weighing of AI’s potential as an investment theme. That in no way means abandoning the theme or underweighting portfolio positions tied to it. It simply means more careful weighting and analysis of the “value” profile of these investments.
Disclaimer
This post expresses the personal opinion of the employees of Custody Wealth Management who wrote it. It is not investment advice, a recommendation, or personalized advice, and should not be considered an invitation to conduct transactions in financial instruments.