One common idea when discussing the future of Artificial Intelligence (AI) is that, if humans ever develop an AI that reaches above some threshold of reasoning power, then the AI will increase its own reasoning power and become progressively “smarter” until human intelligence pales in comparison.
The following quote is from the synopsis of Nick Bostrom’s Superintelligence on Wikipedia:
[…] once human-level machine intelligence is developed, a “superintelligent” system that “greatly exceeds the cognitive performance of humans in virtually all domains of interest” would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.
This idea, that an AI singularity will occur if we just develop an AI that is “smart” enough, is attractive but naive. With our current technology, I don’t believe that type of scenario is possible.
Let’s deconstruct the idea a bit. My interpretation of the core hypothesis is this:
AI Singularity Hypothesis
If there is an AI running on a processing unit, and the AI is intelligent enough to take control of other processing units, it could take advantage of those other processing units to increase its own computing power and thereby increase its own intelligence.
The hypothesis assumes something I think we should not take for granted: namely that adding more computing power will necessarily increase reasoning power.
One of the main problems in large-scale computation today is communication. Many interesting computational problems are bottlenecked not by the available computational power, but by the communication latency between processing units. Some problems can be trivially solved in parallel because they require no communication between processing units; all other problems are fundamentally harder, because communication must also be handled efficiently. Communication is needed to share partial results between the participating processing units.
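To make the distinction concrete, here is a small sketch (my own illustration, not from any particular system). Squaring every element of a list is embarrassingly parallel: each unit can work on its own slice with zero communication. Summing the list is not: partial sums must be exchanged and combined, which takes roughly log2(p) rounds of communication when p units combine results pairwise.

```python
import math

def parallel_square(data, p):
    # Embarrassingly parallel: conceptually, each of the p units squares
    # its own slice; zero rounds of communication are needed.
    return [x * x for x in data], 0

def parallel_sum(data, p):
    # Tree reduction: p partial sums are combined pairwise, which takes
    # ceil(log2(p)) rounds of exchanging partial results between units.
    chunk = math.ceil(len(data) / p)
    partials = [sum(data[i:i + chunk]) for i in range(0, len(data), chunk)]
    rounds = 0
    while len(partials) > 1:
        partials = [sum(partials[i:i + 2]) for i in range(0, len(partials), 2)]
        rounds += 1
    return partials[0], rounds

data = list(range(1024))
squares, sq_rounds = parallel_square(data, 8)   # sq_rounds == 0
total, sum_rounds = parallel_sum(data, 8)       # 523776 after 3 rounds
print(sq_rounds, total, sum_rounds)
```

Even this simple reduction already needs communication every round; a computation whose steps are tightly interdependent needs far more.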
Superintelligence <-> Supercommunication
I believe that superintelligent AI is not trivially parallelizable, and that it in fact requires a high degree of communication between processing units, hence it will be bottlenecked by communication. This is of course speculation, but you should not assume the opposite either, and the AI Singularity Hypothesis rests on exactly that opposite assumption.
If communication is the bottleneck for superintelligent AI, then spreading the computation over more processing units won’t help: it would increase the amount of communication needed, making the bottleneck worse. What you need instead is a very compact processing unit with very fast, short-distance communication.
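A toy strong-scaling model shows the shape of this argument. Assume a fixed workload divided perfectly across p units, plus a communication term that grows with p because each unit must exchange data with the others. The constants are made-up assumptions purely to illustrate the curve, not measurements of any real system.

```python
def total_time(work, p, comm_per_pair=1.0):
    # Perfect division of labor for the compute part...
    compute = work / p
    # ...but each unit talks to the other p - 1 units every step.
    communication = comm_per_pair * (p - 1)
    return compute + communication

work = 10_000.0
times = {p: total_time(work, p) for p in (1, 10, 100, 1000)}
for p, t in times.items():
    print(p, t)   # fastest at p = 100; p = 1000 is slower again
```

Under these assumptions the run time improves up to p = 100 and then gets worse: past a certain point, adding processing units costs more in communication than it gains in compute. A communication-bound AI would hit the same wall.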
Consider the human brain: a very dense network of highly interconnected neurons, and seemingly an ideal structure for communication-intensive computation. This suggests that human-level reasoning requires enormous amounts of communication; it is hard for me to imagine otherwise.
I am of course just speculating about AI here. I am not an expert in this field; I have merely written a few simple machine learning applications in my spare time. However, I felt I had to voice my opinion, because it always annoys me when people take it for granted that an AI with higher-than-human intelligence would automatically become all-powerful.