Here's What We WON'T Get From AI in 2025
Published: January 5, 2025
My predictions about what we should see in AI in 2025, but probably won't.
Like every new year, the online game is to see who can make the most accurate predictions about what will happen in a given sector during the year ahead; in my case, that sector is artificial intelligence. It's a game I can't play, because I would always lose: it's well known that I'm always ahead of the market by at least a few years. So in the rest of this article I'll do the exact opposite and talk about what won't happen in the AI world in 2025. Not everything that won't happen, of course; that would make no sense and the list would be endless. Instead, I'll cover everything that I believe could and, above all, should happen, because in my view the time is ripe for it, and yet won't happen, due to the lack of foresight of the biggest players in the sector.

I know this all sounds very arrogant and boastful: who am I to claim greater foresight in AI than Sam Altman, Jensen Huang, and many other giants of the field? Well, the point isn't who I am or who I'm not. The point is that after a few decades of watching my predictions come true 100% of the time, just on average 3-5 years later than I had predicted, I have to accept it: I have this gift, so I try to use it. Maybe one day the big players will start reading me and become a bit more forward-thinking and a bit less behind the times. Or maybe I'll become even bigger than them. 🙂

For each item on the list, I'll explain why it would already be feasible and, where applicable, what the market will do instead. What I'll leave shrouded in mystery, however, is why these things won't happen. Is my idea perhaps idiotic? Is the idea good, but nobody, not even the brightest minds on the planet, managed to have it before me? Boo, I'll let you decide, since I've been wondering for ages why the biggest names so often have my exact same winning ideas, just years after I've had them.

One last note before diving into the list: within a few dozen hours I'll be launching my next startup. I'll announce it officially here and on my social media, and it will operate in the AI sector. Since the things I know won't happen in AI in 2025, which I'm about to list, are also things that would be great for everyone if they did happen, it's inevitable that my nascent startup will try to implement as many of them as possible. There's obviously no guarantee it will succeed, but there is a guarantee that they are the foundation of its mission. Priority, of course, will go to the ones that achieve the most commercial success, so as to more easily fund the others as well.
Here's the list of what won't happen in the AI sector in 2025:
In 2025 We Won't See Any Standardization of AI Agents
Everyone's talking about AI Agents, and everyone insists that 2025 will be their year. I agree completely, but their real effectiveness will be severely hampered by the lack of standards that would allow them to be managed and made interoperable in an optimal, open, multi-vendor way. Let me explain: standards like TCP/IP for the internet, ODBC for databases, OpenGL for graphics, the W3C standards for the web, and dozens of others across IT have always been the foundation for the explosive adoption and development of entire sectors. Today, not in three years, is the time to start designing these standards for AI Agents. Why? Because history teaches us that, especially in IT, until free and standard protocols are adopted, a sector remains fragmented, exposed to monopolies and lock-in of every kind, and its development is severely stifled, no matter how powerful or amazing any individual vendor's solution becomes.

We're already witnessing this fragmentation, and it greatly limits the more advanced uses of the most powerful AI models. Some models, for example, maintain a sort of memory that persists across chats; others don't. Some offer advanced features such as project management or canvases, and so on. All of them, however, expose these advanced features only if you access the model through that vendor's own platform, giving up any interoperability with other vendors, and the APIs they do offer are generally poorly compatible with one another. The same applies to "tools," meaning the functions that models can call directly to extend their capabilities. There are already various libraries that try to standardize how models from different vendors are programmed, but none of them has, as its main mission, becoming the de facto standard of the sector, that is, coordinating the library's development together with as many vendors as possible. The result is often proprietary solutions, sometimes not even open source, or open only for certain features.
What we won't see in 2025, in short, is a set of market-recognized standards for managing agent memory and RAG systems, for agents' access to tools, for managing chats and calls with multiple human and AI participants, and for interoperability between the agents themselves at various levels.
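Just to make the idea concrete, here is a minimal sketch of the kind of vendor-neutral contract such a standard could define for tools, memory, and agent calls. Everything in it, names and shapes included, is a hypothetical assumption of mine; no such spec exists today, which is precisely the problem.

```python
# Hypothetical sketch of a vendor-neutral agent contract.
# All names and shapes here are illustrative assumptions, not an existing standard.
from collections.abc import Sequence
from dataclasses import dataclass
from typing import Any, Protocol


@dataclass
class ToolSpec:
    """Describes a tool that any vendor's model could be asked to call."""
    name: str
    description: str
    parameters_schema: dict[str, Any]  # JSON Schema for the tool's arguments


@dataclass
class ToolCall:
    """A model's request to invoke a tool, expressed in a vendor-independent way."""
    tool_name: str
    arguments: dict[str, Any]


class AgentMemory(Protocol):
    """A shared memory contract so any agent can persist and recall information."""
    def store(self, key: str, value: str) -> None: ...
    def retrieve(self, query: str, top_k: int = 5) -> list[str]: ...


class Agent(Protocol):
    """A minimal interoperable agent: the same call shape for every vendor."""
    def respond(
        self,
        messages: Sequence[dict[str, str]],   # e.g. [{"role": "user", "content": "..."}]
        tools: Sequence[ToolSpec] = (),
    ) -> str | ToolCall: ...
```

With contracts like these, an agent backed by a closed frontier model and one backed by a small local model could be swapped behind the same orchestration code, which is exactly the kind of interoperability that is missing today.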
Yet AI Agents really will explode in 2025 and in the years that follow, so what we'll witness is a proliferation of chaos and incompatibility. Do we really want to relive, in the AI field, the same nightmares we went through until a few years ago on the web because of incompatibilities between the most popular browsers? I really hope not, please!
In 2025 We Won't See Any Revolutionary New AI Model Architectures
It's a fact that progress in artificial intelligence has always followed a cyclical pattern: golden periods followed by periods commonly called "winters." I'm not saying another AI winter is coming; things today are very different from the past, and we've certainly passed the point of no return in the adoption of AI technologies, so progress will continue regardless. Still, many are lately rushing to predict the arrival of the famous WALL, or stagnation if you prefer, whose supporters claim that everything that could be achieved with AI has already been achieved and that not much more will come of it. No AGI, then, and, not even as a joke, no ASI. Now, anyone with an ounce of sense would have no difficulty comparing those who think this way to a primitive who, after inventing the wheel and spending a couple of weeks wedging it under big boulders to slide them along more easily, declares: "Well, this is what the wheel is; we won't get much more out of it," never imagining that a wheel is also what today allows an Airbus A380-800 to take off and land.
And yet it's a fact that these short-sighted people are numerous, perhaps even the majority, and they manage to influence the sector to the point of convincing even the biggest players to dig in and keep building the wheel for that Airbus's landing gear out of the most primordial technologies, without venturing into risky research in still-unproven areas that would actually be well worth exploring to turn the wheel into something that really can go on an Airbus's landing gear. So in 2025 everyone will be elbowing each other to achieve the most striking and spectacular progress in "Test Time Compute," "Chain of Thought," "Forest of Thought" and similar areas, while any ambition to explore completely new architectures, or fundamental low-level extensions to existing ones, will be left to wither. Not a true winter, then, but it will be chilly nonetheless. In practice, although papers like Meta's on Large Concept Models bring a faint glimmer of hope, many other potentially very fertile areas are being totally ignored.

Specifically, what we'll see least in 2025 is the industry realizing that current models, even the most advanced ones, still lack components that are absolutely essential for any candidate for AGI or beyond. The sector seems to forget that no model today has a mechanism for learning in a truly iterative way and retaining what it has learned. It's not that "Test Time Compute" and the other techniques above aren't good; it's that they should come after. It's like trying to teach a robot to crochet before you've built its hands! What's missing today, what we should be working on today, is the entire neuroplasticity system typical of biological brains. This is what's completely absent from today's models. And it's serious, because even if "Test Time Compute," "Chain of Thought," "Forest of Thought" and a healthy dose of brute-force search make these models look increasingly intelligent, in reality they aren't becoming intelligent at all: they lack the mechanism that would let them execute learned tasks automatically, the way we humans, slow and clumsy the first few times we drive a car, end up driving on autopilot after a while. Without this mechanism, even an ASI would remain forever on its knees, forced to relearn every task each time it has to execute it, even when the tasks are always the same. Unfortunately, in 2025 no one else besides me will understand what I've just written. In five years, though, maybe...
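To make the point a bit more concrete, here is a deliberately toy sketch of the consolidation loop I'm describing. Everything in it is an assumption of mine: the idea that an expensive reasoning pass could distill a reusable procedure, and all the names used, correspond to no existing model capability.

```python
# Toy illustration of the missing "consolidation" mechanism described above.
# Hypothetical throughout: no current model can distill a reusable skill
# from an expensive reasoning pass and then execute it automatically afterwards.
from typing import Callable

# A "skill" is just a function from a task input to a result.
Skill = Callable[[str], str]


class ConsolidatingAgent:
    def __init__(self, expensive_reasoning: Callable[[str], Skill]):
        # Slow, deliberate reasoning (think "Test Time Compute"), paid only once per task type.
        self._reason = expensive_reasoning
        # Consolidated skills that afterwards run automatically, like driving a car.
        self._skills: dict[str, Skill] = {}

    def perform(self, task_type: str, task_input: str) -> str:
        if task_type not in self._skills:
            # First encounter: slow and clumsy, like a novice driver.
            self._skills[task_type] = self._reason(task_type)
        # Every later encounter: the learned skill runs automatically, with no re-learning.
        return self._skills[task_type](task_input)
```

Today's models do the opposite: they pay the full reasoning cost on every single invocation, which is exactly the point of the crochet-without-hands analogy.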
In 2025 We Won't See Any Multi-Model AI Agent
Although, as already mentioned, AI Agents will explode in 2025, no one seems to have realized yet that another absolutely essential element for bringing current models closer to AGI or ASI is what Marvin Minsky, describing the human mind, theorized as the "Society of Mind": a model of the mind not as a monolith but as a collection of numerous specialized cognitive agents, mental processes that collaborate and sometimes compete with one another. This can be done today and must be done today, especially with the most advanced models, because it would not only bring major improvements to the cognitive abilities of AI Agents, but would also allow their computational costs to be managed in a highly optimized way. One example above all: why have an expensive model like OpenAI's o3 do the preliminary RAG work for a response when, given the simplicity of the task, a small open-source Mistral of just a few billion parameters could do it perfectly well? And so on for many other examples. In essence, it must become possible to build AI Agents composed of multiple internal models, each assigned to a specific task in the agent's "thinking" process. Don't worry, we won't see this in 2025 either. At most, someone else besides me will come to understand it in two or three years.
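As a rough sketch of what such a multi-model agent could look like, here is a minimal Python example. The TextModel contract, the keyword-based retrieval, and all the names are hypothetical placeholders of mine, not any vendor's actual API.

```python
# Minimal sketch of a multi-model agent: a cheap model handles simple preprocessing,
# an expensive model is reserved for the final reasoning step.
# The TextModel contract and the toy retrieval below are hypothetical placeholders.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...


class MultiModelAgent:
    def __init__(self, cheap: TextModel, expensive: TextModel, documents: list[str]):
        self.cheap = cheap            # e.g. a small open-source model for trivial subtasks
        self.expensive = expensive    # e.g. a frontier model, used only where it matters
        self.documents = documents    # the corpus a real agent would index properly

    def answer(self, question: str) -> str:
        # Step 1: the cheap model turns the question into search keywords.
        keywords = self.cheap.complete(f"Extract search keywords from: {question}").split()
        # Step 2: naive keyword matching stands in for a real RAG pipeline.
        context = "\n".join(
            doc for doc in self.documents
            if any(k.lower() in doc.lower() for k in keywords)
        )
        # Step 3: only the final answer pays for the expensive model.
        return self.expensive.complete(f"Context:\n{context}\n\nQuestion: {question}")
```

The cost argument from the paragraph above falls out naturally: the expensive model is called exactly once per question, while everything mechanical runs on the cheap one.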
In 2025 We Won't See Any Programming Language Designed Specifically for AI
It's a widespread opinion, pushed by plenty of YouTube gurus but also by the industry giants themselves, that one of the areas where the most advanced AI models excel is software programming. We're told that the quality of the code they write now surpasses that of the average human. Now, I don't know which humans were used to establish the average quality of human-written code, but if that statement is true, they must have computed an average that includes people who aren't programmers.
The truth is that if you compare the best currently available model with a programmer with 40 years of experience like me, the model makes a poor showing: nine times out of ten you have to fix its code by hand, otherwise it either doesn't work at all or works terribly. It's also very true, however, that a programmer like me now finds it much easier to have the model write the software and then make the few, albeit essential, corrections it needs. And that's exactly the point: why aren't these models perfect at writing code yet? To me the answer is obvious: because they're forced to use programming languages designed for humans, not for LLMs. I know that videos like "This app was made by AI and I sold a million copies in a month! Without being a programmer!" are trending on YouTube, but they're false. Not a single non-programmer friend of mine has ever managed to build an app with AI, despite my strong encouragement to try and my assurances that it really is possible, and even though many of them were already technical, just not programmers.
LLMs don't program well because the programming languages they use were created for human brains, on the assumption that whoever uses them can switch between a 360-degree view of the entire platform being developed and a view narrow enough to build its individual components. LLMs don't have this capability. People try to simulate it by piling on Agents and multistep thinking strategies that let LLMs range from reflections on the application as a whole down to implementation details, but the truth is that a vastly better result would be achieved by letting the LLM write software in a programming language designed around the cognitive capabilities of LLMs rather than those of human programmers.

And that's not the only point. Since, no matter how good they become, we'll always want to personally check the software an LLM has produced before putting it into production, having LLMs write code in traditional programming languages effectively prevents non-programmers from understanding what the LLM has actually built, and therefore from truly using LLMs to write software, unless they're willing to put something into production that they know absolutely nothing about, with all the risks that entails.
Unfortunately, no one else in the world has understood this yet, so everyone is racking their brains to build the best LLM for writing code in Python, Java, and JavaScript, when what should be done is to create a new programming language with two simple characteristics: 1) it's much easier for LLMs to use; 2) it's incredibly easy to check and validate, even for non-programmers.
Let's not kid ourselves: until this happens, the use of LLMs for writing software will remain severely handicapped. But don't worry, we won't see any of this in 2025 either. Unfortunately.
And here my list ends. As mentioned, I've focused only on what I believe is feasible today and should be built today, even though no one seems to be aware of it. But I'm not stopping here. As anticipated, within a few hours I'll launch my next startup, which sets itself the goal of pursuing exactly the things listed here, precisely because I believe that if I don't do them, no one in the world will. I'm aware that this time, too, there's a real risk of being ahead of the market, as has happened to me before. For that reason, this time I won't put everything on the table at once; I'll pay much closer attention to capturing the real needs of the people who choose to use my platform.

In short, I'll listen to you, and I'll give priority to whatever direction you, the users of my platform, tell me to follow. So I invite you to take part and to explicitly request the features you need most. I'm certain that through the strategic use of everything that already exists, plus the futuristic things still only in my mind, like those described here, we can build together a platform that really makes a difference in the AI sector, yes, even starting from zero. After all, isn't that the great promise of AI? To give us superpowers that let us achieve in very little time what used to take decades.