A Realistic Path to Artificial General Intelligence - Part I
Published: January 4, 2020
A special kind of AI is progressing in a way that is reminiscent of how human intelligence evolved. In this post I propose that we leverage it as the most promising path to Artificial General Intelligence.
The widespread opinion among AI's pioneers and experts in the 1960s was that within twenty years machines would be capable of doing any work a man can do. These are the machines that in 2020, namely 60 years after that prediction, can do any work a man can do:
In fact, in the last 60 years there have been only 4 other areas in which we humans have failed harder than in AGI research:
- eradicating world hunger
- wiping out obesity
- distributing wealth more fairly without resorting to totalitarian regimes
- producing energy from nuclear fusion
with the creepy endnote that, had we solved AGI, those other four would probably have been long solved by now too. And the bad news doesn't end here. In fact, not only have we not invented AGI, not only do we not have the slightest idea of how to achieve it, but above all, and that's why I'm writing this article, it seems to me that one of the most promising paths we could follow to get there is still not being leveraged enough.
Ok, but let's start by framing all the bad news first, with one premise however: personally I am very optimistic about how AI will develop and even reach AGI. I focus on so much bad news only because, without knowing what the obstacles are, there is no way to overcome them.
So let's start with the main question.
Why haven't we reached AGI yet?
Below is a list of the reasons why, in my opinion, we have not yet reached AGI. But again, a premise: there is one largely predominant reason, and I'm going to list it last so that it stays impressed in your mind, also because I'm very surprised that even today this reason is so overlooked and forgotten, despite being extremely trivial and obvious.
- blurred goal: while intuitively we know what we would like to achieve when we talk about Artificial General Intelligence, things quickly become confusing when one gets to the details. Even the word intelligence itself is still at least ambiguous today. Does intelligence imply self-awareness? Would it be enough for us to call an agent intelligent if it showed adaptive behavior and problem-solving abilities in any human field? Or should we first be sure that this agent really understands the most intimate meaning of what it is doing? While it's easy to define our goal if we want to invent a flying car, the same is not true with AGI. And if you can't even define exactly what you're trying to achieve, the chances that you succeed are practically random.
- we still lack any idea: even if a clear definition of our AGI goal existed, we would still lack any idea of how to proceed to at least try to build something that could be called AGI. Why? Put simply: intelligence is an extremely complicated thing. Nature took billions of years to give birth to a species with a brain powerful enough to embrace technological development, and we can't see anything similar in the universe that would let us compare different types of intelligence at our own level. For many decades now we have been studying artificial neural networks, originally inspired by the biological neural networks of the brain, but we are not yet able to tell whether this approach is ultimately valid, or whether it is just as inappropriate as studying how animals move in order to invent the wheel.
- intelligent design: most of the progress made so far in Artificial Intelligence has been the result of inventions and discoveries made by scientists and researchers. Neural networks themselves were intelligently designed as a computational model of biological neural networks, and still today all of the many different architectures, like Perceptrons, Convolutional Neural Networks, Recurrent Neural Networks, Autoencoders and Generative Adversarial Networks, just to name a few, have been conceived by humans who used their own abilities, knowledge and intelligence to create them. This is good, we are all extremely grateful for these accomplishments and, after all, this strictly reflects the usual way we humans make progress in any field. However, it is limiting. If we were intelligent enough to discover on our own all the possible architectures necessary to develop AGI, we probably would not even need AGI. Now, this is not to say that research into machines that can design better versions of themselves on their own is not ongoing all around the world; it just happens that, unfortunately, there is an additional problem: when a new algorithm or even a new architecture emerges or is created by another algorithm, it usually appears to us as a black box, meaning we find it very hard to understand why and how it works and how we can develop it further. Just to give an example: had GANs emerged on their own from an evolutionary algorithm or another optimization method, it would probably have taken us years to figure out what they are, how they work, how their training should be organized, and so on.
- barrier to field entry: while it's normal that entering a bleeding-edge field of human research usually requires a lot of knowledge, patience, intelligence, and most of the time also a lot of hardware, infrastructure, computing power and money in general, the AI field suffers from an additional twist: it's falsely promoted as something that everyone can embrace and easily start working on, no matter what background or resources one has. Not only is this not true, but it's false for reasons that are very different from what non-experts usually imagine. For example: it's not that AI math is too hard, really; it's not that AI requires superhuman programming skills, especially with today's open source frameworks; it's not that datasets for Machine Learning are expensive, hard to find or never enough. The real barriers to entering this field are: a) unless you seriously study for many months from one of the very few good and recent sources of AI knowledge, you'll waste a lot of time and get nowhere, because most of the online courses, tutorials, GitHub repositories, YouTube gurus' videos and so on are useless: either they don't work when you personally try them, or they work but are so narrowly conceived that you can never extend them to do something useful and different from what the tutorial was designed to do; b) learning from research papers and academic source code poses a similar problem: they are rarely applicable to real-world problems, since most of the time they focus on refining or superseding previous state-of-the-art architectures and techniques from earlier papers, or they target reference datasets that usually have nothing to do with your problems; c) unless one just wants to use an already trained model from the many that can be found online, which all suffer from the problems just mentioned, creating custom models and architectures, not to mention the datasets themselves, requires a lot of real field experience, a lot of time, and either extensive hardware resources or money. In fact, training AI models in reasonable times requires GPUs. For real problems, many GPUs with a lot of dedicated RAM are needed, as the rough back-of-envelope sketch below illustrates. This hardware can be bought, and it's expensive, seriously expensive to do something real, or it can be rented in the cloud, which at the moment is even more expensive due to commercial policies from the market leader: the less expensive NVIDIA gaming cards, some of which would be more than enough to do AI, are forbidden from being deployed in cloud services by the cloud providers.
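To give an idea of the scale involved, here is a minimal sketch of the GPU memory needed just to hold a model and its optimizer state during training. The model sizes are illustrative assumptions of mine, not measurements, and the estimate deliberately ignores activations and framework overhead, which in practice often dominate.

```python
# Back-of-envelope estimate of the GPU memory needed just to train a model.
# It counts only fp32 weights, gradients and two Adam moment buffers (4 copies
# of the parameters) and ignores activations, which usually add a lot more.

def training_vram_gb(n_params, bytes_per_value=4, copies=4):
    """Rough GB of VRAM for weights + gradients + optimizer state."""
    return n_params * bytes_per_value * copies / 1e9

# Illustrative model sizes (assumptions, not benchmarks).
for name, n_params in [("small CNN", 5e6),
                       ("ResNet-50-class model", 25e6),
                       ("large 2019-era language model", 1.5e9)]:
    print(f"{name}: ~{training_vram_gb(n_params):.1f} GB before activations")
```

Even before counting activations, the largest example already exceeds the memory of any single consumer GPU available in 2020, which is exactly why multiple expensive cards, or expensive cloud rentals, become necessary.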
- AI is counterintuitive: so far most of the progress we have made in the so-called field of narrow AI, the kind most employed today, has been counterintuitive, meaning that many tasks we would have bet should be easy to teach to machines have turned out to be extremely hard for machines to grasp, while other things that we humans usually have a very hard time doing ourselves are often quickly learned and mastered by an algorithm. I could give many examples of this, the famous car-driving task and so on, but just to mention one: every one of us can do a great job washing dishes even without any training at all, which no machine can do today. On the other hand, very few of us could paint in the style of Picasso, probably even after taking painting classes for years, which instead today's GANs can do quite well (see the short sketch after the image).
Picasso's "Girl Before a Mirror" and a GAN interpretation.
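Just to make the "easy for machines" side concrete, here is a minimal sketch of how little code it takes today to repaint an arbitrary photo in the style of a given painting. It uses a pretrained fast style-transfer network published on TensorFlow Hub rather than a GAN proper, and the image file names are hypothetical placeholders.

```python
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    """Read an image file into a float32 batch tensor with values in [0, 1]."""
    img = tf.image.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

# Hypothetical local files: any photo and any painting will do.
content = load_image("my_photo.jpg")
style = tf.image.resize(load_image("girl_before_a_mirror.jpg"), (256, 256))

# Pretrained arbitrary style-transfer model from TensorFlow Hub.
stylize = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = stylize(tf.constant(content), tf.constant(style))[0]

tf.keras.preprocessing.image.save_img("stylized.png", stylized[0])
```

A dozen lines for a task most humans cannot do, while dish washing remains out of reach for any machine: that is the counterintuitive asymmetry in a nutshell.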
- AGI research investments are hard to justify: even if we reached AGI, how could we possibly monetize it? I know this seems a nonsensical question, since thousands of examples come to mind: selling business personal assistants, sex robots, lawyers, personal trainers and, more generally, consultants in any field one could imagine. The problem is: whatever one can imagine, there is probably a cheaper and quicker way to do it with easier narrow AI than by developing the whole complex AGI. This is why narrow AI is progressing but no one outside of big corporations and academic institutions really cares that much about AGI. This is also why most AI experts, including Ray Kurzweil, nowadays believe that AGI will simply emerge on its own at a certain point from narrow Artificial Intelligence as it increases in sophistication.
- Investments in AI go in fits and starts: anyone who knows a minimum of AI history knows the concept of the "AI winter", because this cycle of enthusiasm followed by disappointment has repeated itself many times over the decades. This is how it works: suddenly a new advance is achieved, often in a specific area previously declared incapable of further progress, so the uproar arises, hence the enthusiasm and the desire to invest in AI again: academies and the R&D departments of large companies, new startups, all start focusing on the new development, and investors shower big money on anyone who works on it. We have witnessed this phenomenon for many years now with Deep Learning, and for a while also with chatbots. What happens next? After a while everyone falls back to Earth. The limits of the innovation inevitably emerge, realism returns to dominate, soon followed by genuine disappointment and a sudden counter-race to disinvestment: welcome to the new AI winter. According to the sentiment of some of the most expert AI researchers around the world, this is starting to happen again today.
- Too much focus on overhyped trends: a direct consequence of the previous point is that AI trends form rapidly and quickly become overhyped. This causes at least four kinds of problems: a) if you are building something in the industry that is different from what is trending right now, you are perceived as a loser; no one is interested in your company, no customers, no investors. In this way non-trending work has a hard time surviving the lack of attention from the industry and the market, and promising paths are often abandoned prematurely; b) if you tell everyone you are working on what is trendy just to get attention, but you are in fact not doing any of it, you simply contribute later to the disillusionment phase that leads to the next AI winter; c) if you really are working on the hyped trend, expectations mount so quickly that if you don't get spectacular results before anyone else in the industry, you are quickly dismissed as the worst on the market; d) hyper-marketing and buzzword proliferation: words become far more important than the concepts underlying them. People, often even those in top decision-making positions, confuse facts with hype. This means that every time a new AI winter comes and goes, new names have to be adopted to revamp the very same old concepts of the previous springs, which need to change names just to keep people from remembering the past winters. The perfect example of this is Deep Learning: neural networks with many hidden layers (that's all a Deep Neural Network is; the minimal sketch right after this point makes it concrete) and their training techniques (that's all Deep Learning is) were invented many decades ago but never proved to be of much use until recently, to the point that from 2000 to 2011 papers about them were not even accepted anymore at AI events and conferences. Then in 2012 everything changed, but at that point it was certainly better to use a different name than the ones used in the past, and Deep Learning seemed to be the most appropriate. This is why today we call Deep Learning something that for the previous 50 years of its life was called by a different name. Even today I meet young people who can't believe this story, but anyone can verify it on their own: the term Deep Learning was first introduced decades ago but only started to be really used very recently, as you can see on the Wayback Machine and Google Trends. But the most interesting part of this story, as well as of the AI winters in general and of the real reasons we still don't have any AGI at all, is the next, definitive, most important and very last point. After all, there has been a very valid reason why Deep Learning was of almost no use until recent years: we lacked the hardware to run it. In fact...
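To back up that "that's all it is" claim, here is a minimal sketch of a deep neural network: nothing more than a stack of hidden layers. The layer widths and the 784-dimensional input are arbitrary choices made only for illustration.

```python
import tensorflow as tf

# A "deep" neural network is simply a network with several hidden layers.
# The widths (256, 128, 64) and the 784-dimensional input are arbitrary.
deep_net = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),  # hidden layer 1
    tf.keras.layers.Dense(128, activation="relu"),                      # hidden layer 2
    tf.keras.layers.Dense(64, activation="relu"),                       # hidden layer 3
    tf.keras.layers.Dense(10, activation="softmax"),                    # output layer
])
deep_net.summary()
```

Structurally this is the same multi-layer perceptron idea that has been around for decades; what changed around 2012 was the hardware and the data available to train it, not the concept itself.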
- Our computers are ludicrously weak: it's really incredible that this aspect is so rarely brought to attention. I want to be very clear on this: if we had computers powerful enough today to emulate 100 billion human brain neurons in real time, we would reach AGI in a matter of months in the worst case. People largely ignore the fact that even today our most sophisticated neural network models have just a few million fake neurons (not fully emulated biological ones), and that those big models can still only be trained on very expensive machines, surely not on your brand new notebook. Our hardware today is many orders of magnitude inferior to what is needed to build neural networks that could at least quantitatively be compared to a single human brain; the small calculation right after this point gives a sense of the gap. 98% of the reason we still have no AGI is exactly this: our hardware power is ludicrously small compared to what would be needed. Of course one could argue that we don't need the human level of neuronal sophistication to create AGI, that maybe we could do it even with fewer than 100 billion neurons. After all, a bird is made up of billions of cells while an Airbus A380 has about four million individual parts, and the latter carries more than 800 people, a bird not even one. It's a fact that nature does not evolve the best possible solution but just the one that best adapts to the environment, while we can go much further. On the other hand, it's also true that an Airbus A380 is still an incredibly complex machine that required almost 100 years of industry development to get to the point of building planes of that kind. It could be that not all 100 billion neurons are really necessary to make an artificial generally intelligent machine, and maybe we can make it with a very simplified model of the neuron instead of an emulation of a full-blown biological one; we cannot know this today. What we can say with certainty today is that our current computational power is still way too small to achieve the goal.
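As a rough illustration of how large that gap is, here is a small back-of-envelope comparison. All of the figures are coarse, commonly cited estimates (the brain numbers are good to an order of magnitude at best), and the model sizes are illustrative assumptions; the point is only the scale of the mismatch.

```python
# Order-of-magnitude comparison between the human brain and a typical
# large deep learning model of 2020. All numbers are rough estimates.

brain_neurons = 86e9     # commonly cited estimate, ~10^11 neurons
brain_synapses = 1e14    # ~100 trillion connections, within an order of magnitude

model_units = 5e6        # "a few million" artificial neurons, as in the text above
model_weights = 100e6    # a large 2020-era vision model, ~10^8 trainable parameters

print(f"neuron gap:         ~{brain_neurons / model_units:,.0f}x")      # ~17,000x
print(f"synapse/weight gap: ~{brain_synapses / model_weights:,.0f}x")   # ~1,000,000x
```

Even under these generous assumptions the shortfall is four to six orders of magnitude, and that is counting only quantity, not the far greater complexity of a biological neuron compared to an artificial one.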
These are all the sad facts we should be aware of when planning the next steps to take toward AGI.
But fortunately this is only the first half of the story. In recent years AI has gone very far in many sectors, but one in particular has truly exploded in a striking way, and in a field that honestly very few would ever have imagined. Go to Part II to read about it.
Also follow me on Twitter; I always tweet about my new articles when I publish them.