5 epochal changes humanity must embrace ASAP to secure a bright future - Part 2 of 5
Published: February 16, 2025
In this long, five-part article, I'm focusing on what, in my humble opinion, humanity must do as soon as possible to guarantee ourselves a future so bright it seems utopian while effectively averting the worst that could happen.
Wait: this is Part II of the article. If you missed it, Part I is here.
In this second part of the article, I'm putting forward a proposal even more controversial than the previous one; I'm aware of that. This time, its implementation is also much more complex. And even if we wanted to embrace it, we'd face objective difficulties that are neither marginal nor easy to resolve. At the same time, I maintain that it's an essential proposal, one that we must all strongly demand our governments translate into law as soon as possible, and I hope this article fully convinces you of that. If it does, then you'll agree with me that the difficulties and problems of implementation should only push us to address this proposal sooner and with greater urgency, precisely because it requires so much work to bring to fruition. So, let's get straight to the point.
Artificial intelligence software must be mandatorily open source, by law.
Years ago, a group of brilliant individuals, counting some of the world's most powerful entrepreneurs, accomplished computer scientists, and geniuses of unquestionable integrity, came up with an idea as brilliant as it was just and opportune: to create a non-profit organization dedicated to the development of "open" AI that would be beneficial to humanity. In their initial announcement post, they declared that they would make their research and patents freely available. The manifesto spoke of "advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." The reasoning behind the initiative? AI needs to be open source because it's too dangerous to leave this power concentrated in the hands of a few corporations!
Simple. Clear. Direct. Very, very right.
Above all, this was back in 2015, and none of it was declared, desired, and supported by a bunch of hippies, flower children, or out-of-touch weirdos, dazed on a sweltering summer Sunday by too many joints and too much scorching sun at the beach. No way: we're talking about the cream of the crop of technological and entrepreneurial humanity: Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, John Schulman, soon joined by other researchers of dizzying prestige like Andrej Karpathy, Durk Kingma, and numerous others. In an unexpected and utopian twist, the world was shaken and moved by the vibrant fervor with which individuals usually known for their greedy talent for amassing billions, legally dodging taxes and duties, and optimizing labor costs with the most acrobatic financial techniques transformed themselves, with high-sounding proclamations, into champions of artificial intelligence for the benefit of all humanity, with no aim of profit. What made the initiative even more implausible, and the astonishment it caused even more resounding, was the flood of billions that flowed into OpenAI from Microsoft, from Elon Musk and Sam Altman themselves, and also from Peter Thiel, Reid Hoffman (co-founder of LinkedIn), Jessica Livingston, Amazon Web Services (providing cloud computing credits), Y Combinator Research, and countless others who, having participated privately, have never been disclosed. We're talking about many billions of dollars raised in the early years to finance an initiative with no profit motive. Can you believe it?
Yeah!
We all certainly believed it!
But unfortunately, it wasn't true.
The story of OpenAI should teach us that if something is truly essential for the good of the community, we cannot and must not leave its implementation to the hypothetical enlightened benevolence of whichever tycoon is in fashion. The story of OpenAI must be studied, remembered forever, and taken as the perfect example of the grotesque derailment of a company born with the very noble intention of strangling at birth exactly the kind of company it itself became. Had the premises on which OpenAI was founded been law, we would never have arrived at the corporate oxymoron of "OpenAI" being the name of one of the most closed, centralized, and profit-oriented companies in the sector. I'll skip over, because there are more important things to discuss, the vile and deceitful narrative Sam Altman has trotted out from time to time to justify OpenAI's transformation from champion of open source to proprietary guardian of "dangerous" technology, but you'll all agree it was a masterpiece of Orwellian doublethink.
What must be remembered, however, is that the motivations behind the birth of OpenAI were extremely valid, and its metamorphosis must, if anything, serve for us as irrefutable proof of how real the danger is and how insidious and unpredictable the drift of companies active in AI research can be. So let's start from this: what are the reasons why AI software must be open source? Here they are:
- It is widely believed in the academic and scientific world that sooner or later AI will surpass human intelligence. Not having the source code of such a technology would mean increasing the probability of being at its mercy if it somehow turned against us.
- AI supremacy is serious business. A private actor who first managed to achieve an ASI (artificial superintelligence) could use it for private interests that are potentially contrary and disadvantageous to the rest of humanity, exploiting the "unfair advantage" of having on their side an intelligence that no one else in the world could match, confront, or oppose.
- AGI (artificial general intelligence) and ASI are achievements potentially capable of substantially disrupting the social and economic balance of all humanity. An era of great abundance, of affordable manufactured goods, works, and products, and of a previously unimaginable acceleration of scientific and technological progress could open up for everyone. Such a good must be the patrimony of all humanity; it cannot and must not be the prerogative of an elite.
- Releasing the source code of artificial intelligence products under open-source licenses is the only way to be sure that producers are not driven by malicious intent, and it would also go a long way toward limiting the use of these technologies in military fields. And even where that use were not limited, the availability of the source code would at least blunt the unfair advantage concentrated in the hands of a single nation. AI is too risky a field for the presumption of innocence to apply; those who build AI must do so in a way that is transparent and verifiable by anyone.
- Although the idea that AI and humans are destined to merge into a single species is mainly typical of currents like transhumanism, many of the most influential and prominent people in the technology sector and the world of AI share and support this hypothesis, including Elon Musk, Peter Thiel, Sergey Brin, Larry Page, Jeff Bezos, and many others. Compared to other, far more catastrophic scenarios, this hypothesis is often also the most desirable one. The question, however, becomes: are we really willing to merge with software whose source code we can't even read? It's one thing to use closed software: if it really causes us problems, we stop using it and look for alternatives, since we can't modify it. It's quite another thing to be, ourselves and in part, something over which we have no control. And while it's true that for millions of years we have lived with brains over which we had very little control, at least we knew that control was in the hands of nature and its natural selection, whereas now it would be in the hands of whichever producer wrote the software running our brains. So, no thanks!
AI has the potential to become the most disruptive discovery humanity has ever made. And disruptive is precisely the right adjective. On the one hand, it can be used terribly, with devastating effects like nothing before it, worse even than atomic weapons, if you will. On the other hand, it can prove a true panacea for all, if we know how to manage it in the most favorable way. Our common goal must be the latter, and the only way to achieve it is to ensure that research in this field is transparent, free, and open to all. Obviously, this also involves a series of major difficulties. For example:
- What do we really mean by AI? How do we determine which software falls into this category and must therefore be published as open source?
- What should we do if not all countries in the world adhere to this proposal?
- How do we prevent the mandatory publication of source code from undermining private companies' willingness and ability to innovate?
Certainly, there are ways to mitigate, resolve, or even neutralize these and other side effects and difficulties, but it's not easy, and that's why the sooner we start working on this goal, the better. After all, we've already had a first example of disaster, precisely with OpenAI. We cannot implement this on our own, as private individuals, however right it is. So let governments step in and work on appropriate laws, shared among the countries of the world. AI is not the next game to play on your phone while waiting for the subway: it is either something whose plaything we will become, and let's hope not, or something that will be a good half, or even more, of ourselves. Let's be careful not to pretend it's a trivial matter.