And indeed, last week Sam Altman, CEO of OpenAI, delivered the promised high five: GPT-5. An hour and a half of live streaming to present the new model, which brings five significant changes or, in Altman’s words, “a major update” to the GPT family.
We are a long way from the unforgettable presentation of the iPhone in 2007. This was a more intimate setting (a living room) where, after a brief introduction by Sam Altman, various collaborators of the CEO took turns discussing the five fundamental changes to the model; those are what interest us here.
1. Vibe coding. This seems to us the most important aspect, perhaps the only one worth mentioning. GPT is asked to produce software by specifying its characteristics, and the new model appears able to deliver a virtually finished product. It even seems that, given a description of a feeling or mood, the AI can produce a dynamic website (not only HTML and CSS but also the relevant JavaScript) consistent with those emotions.
2. Size. GPT-5 is available in three versions: standard, mini, and nano (the latter available only via API). Free ChatGPT users will have access to both standard and mini, while ChatGPT Plus subscribers will enjoy higher usage limits across the board.
A US Pro user who pays $200 per month will have unlimited access to GPT-5, as well as the more powerful gpt-5-pro and gpt-5-thinking models. Both take longer to provide answers, but return more in-depth and thoughtful responses.
3. Customization. It’s the tone that makes the music. And in fact, with GPT-5, users will be able to choose the type of interaction they want. In addition to the standard interaction offered by ChatGPT, users can request “Cynic” for sarcastic responses, “Listener” to interact with an understanding interlocutor who also allows for venting, “Nerd” for responses that need no explanation, and “Robot” for purely mechanical responses, which we imagine will be used for scientific consultations.
4. Voice. OpenAI is launching a significantly improved version of voice interaction that not only works with custom GPTs but also adapts the tone and style of conversation based on user instructions and the overall atmosphere. Users can ask the AI for a more lively, slower, warmer voice interaction, or whatever else they want. For ChatGPT Plus users, voice responses are already almost unlimited. Free users also continue to have access, with a few hours a day to chat hands-free. To simplify things, the old standard voice mode will be completely eliminated within a month. After that, everyone will be able to enjoy the same updated experience.
5. Google. Probably as early as this week, ChatGPT Pro users will be able to connect Gmail, Google Calendar, and Google Contacts directly to ChatGPT. This means you’ll no longer need to switch between tabs to check if you’re free next Tuesday or search through conversations to find that email you definitely forgot to reply to. Once connected, ChatGPT will retrieve what it needs to help you answer questions. OpenAI has assured users that it will only retrieve the minimum necessary and only when it is useful. You won’t need to say “check my calendar” or “retrieve that contact.” The AI will do it based on your request: let’s say you need to schedule an appointment; the AI will choose a time and write the email on your behalf. Other subscription tiers will have access to connections in the near future, expanding this feature beyond the Pro version.
However, the main limitation of this “major update” seems to be that it does not cross the AGI threshold. As already mentioned in our in-depth analysis of June 13, 2025, Artificial General Intelligence aims to replicate human cognitive abilities, but this remains a purely theoretical challenge that GPT-5 has certainly not overcome. The main technical innovation behind GPT-5 seems to be the introduction of a “router”: when a question is asked, it decides which GPT model to delegate it to, in essence deciding how much effort to invest in computing the answer. Delegation options include previous models, current ones (including gpt-5-thinking, and it is not yet clear what this actually is), and future models integrated into GPT. One could therefore hypothesize that this model is, in reality, just another way of controlling existing models, querying them repeatedly and pushing them to work harder until they produce better results, and that this in itself gives well-founded grounds for suspecting a limit that LLMs will not be able to overcome (namely AGI).
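To make the routing idea concrete, here is a minimal sketch of a dispatcher that scores a query and sends it to a cheaper or more deliberate model. Everything in it is an illustrative assumption: OpenAI has not published how GPT-5’s real router scores queries, and the effort heuristic below is deliberately crude.

```python
# Hypothetical sketch of a "router" in front of several models.
# The effort heuristic is invented for illustration; it is NOT
# OpenAI's actual routing logic.

def estimate_effort(query: str) -> int:
    """Crude proxy for how much reasoning a query needs."""
    score = 0
    if len(query.split()) > 40:          # long queries get more effort
        score += 1
    if any(k in query.lower() for k in ("prove", "step by step", "debug", "optimize")):
        score += 2                        # reasoning-heavy keywords
    return score

def route(query: str) -> str:
    """Pick a backend model based on the estimated effort."""
    effort = estimate_effort(query)
    if effort >= 2:
        return "gpt-5-thinking"   # slow, deliberate answers
    if effort == 1:
        return "gpt-5"            # standard model
    return "gpt-5-mini"           # cheap, fast answers

print(route("What time is it in Tokyo?"))                        # gpt-5-mini
print(route("Prove that the sum of two even numbers is even."))  # gpt-5-thinking
```

The point of the sketch is that the “intelligence” added by such a layer lives in the dispatch policy, not in the underlying models, which is precisely the suspicion raised above.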
To better understand this point, and considering that we took it for granted in our previous in-depth analysis of June 13, 2025, we believe it is worth saying a few words about what Large Language Models (LLMs) are and how they work.
In 2017, researchers at Google developed new machine learning (ML) models capable of capturing the extremely complex patterns within long sequences of words that underlie the structure of human language. Trained on vast amounts of text (hence the name “large language models”), these models respond to user queries by mapping a sequence of words to its most likely continuation, in accordance with the patterns found in the training text. These patterns can be pictured as huge tables of stimuli and responses: a user’s query is a stimulus that triggers a search in the table for the best response.

While traditional ML models learn by calibrating their parameters to arrive at the best answer among those the table predicts, with LLMs the requests themselves can be modified as needed, directing them to the most appropriate models or prompting them several times to obtain the most appropriate answer. This is exactly what the GPT-5 “router” does: it breaks more complex requests down into simpler ones and dispatches them. This could justify the claim, supported by many, that LLMs have no further evolutionary capacity of their own: to provide better answers, various models (each specialized in certain topics) must collaborate, “orchestrating” their work on queries that are appropriately broken down and directed to the best model, with the final answer being a composition of the specific answers. If this is the case, we would have to agree with those scientists and experts who have long argued that the current limitations of AI cannot be overcome without going beyond LLM architectures.
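The “most likely continuation” idea can be shown in miniature with a toy bigram table: count which word follows which, then predict the most frequent successor. Real LLMs learn billions of parameters in a transformer rather than storing an explicit lookup table, but the prediction objective is the same; the corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration of "mapping a sequence of words to its most likely
# continuation" using an explicit table, the simplified picture used above.

corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# For each word, count which word follows it and how often (a bigram table).
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def continuation(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return follow[word].most_common(1)[0][0]

print(continuation("the"))  # 'cat' follows 'the' twice, more than any other word
print(continuation("sat"))  # 'on' always follows 'sat' in this corpus
```

Scaling this table-based picture up is exactly what LLMs replace with learned parameters, which is why the table metaphor, while simplified, captures the stimulus-response character of the prediction.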
Returning to GPT 5 and continuing our examination, we note that even the quality of writing seems to have made only linear progress; not bad, but certainly not compatible with the typically exponential improvements we have come to expect from technology.
On the other hand, the model’s performance in mathematics (and science more broadly) and in programming, understood as the production of software solutions, does appear interesting.
Anthropic, OpenAI’s competitor, has dominated this market to date, but GPT-5 has slightly outperformed Anthropic’s latest model on SWE-bench Verified, an industry-standard test of programming capabilities. This has allowed OpenAI to win over important users, including the Anysphere team, whose popular coding assistant Cursor makes it one of Anthropic’s biggest customers: successfully migrating its users to GPT-5 would provide a significant boost to OpenAI’s annual recurring revenue (ARR), which already stands at $12 billion, with the prospect of reaching $20 billion by the end of 2025.
OpenAI is currently valued at around $300 billion, with 700 million weekly active users, many of whom pay monthly subscriptions ranging from $20 to $200, which constitute the product’s main source of revenue.
GPT-5 is expected, precisely because of its positioning relative to the competition, to significantly increase the San Francisco-based company’s valuation: a new valuation of $500 billion is being discussed, which would make it the most valuable private technology company in the world. But apart from the marked improvement in programming, there are many shadows that suggest caution: the lack of improvement in reasoning and knowledge compared to competitors such as xAI’s Grok 4 Heavy (GPT-5 scores 42% on “Humanity’s Last Exam” against 44% for its rival); the technological limits that LLMs, the backbone of GPT, seem to represent; the risk, considered high, of dangerous use (such as the creation of biological weapons or new viruses by amateurs); and the ever-looming Chinese competition (DeepSeek).
Disclaimer
This post expresses the personal opinion of the Custodia Wealth Management staff who wrote it. It is not investment advice or recommendations, nor is it personalized advice, and should not be considered an invitation to carry out transactions on financial instruments.