I am Not Scared of the AI Singularity 2017-03-10

One common idea when discussing the future of Artificial Intelligence (AI) is that, if humans ever develop an AI that reaches above some threshold of reasoning power, then the AI will increase its own reasoning power and become progressively “smarter” until human intelligence pales in comparison.

The following quote is from the synopsis of Nick Bostrom’s Superintelligence on Wikipedia:

[…] once human-level machine intelligence is developed, a “superintelligent” system that “greatly exceeds the cognitive performance of humans in virtually all domains of interest” would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

This idea, that a kind of AI singularity awaits if we just develop an AI that is “smart” enough, is attractive but naive. With our current technology, I don’t believe that type of scenario would be possible.

Let’s dissect the idea a bit. My interpretation of the core hypothesis is this:

AI Singularity Hypothesis
If there is an AI running on a processing unit, and the AI is intelligent enough to take control of other processing units, it could take advantage of those other processing units to increase its own computing power and thereby increase its own intelligence.

The hypothesis assumes something I think we should not take for granted: namely that adding more computing power will necessarily increase reasoning power.

One of the main problems in large-scale computation today is communication. Many interesting computational problems are bottlenecked not by the available computational power, but by the communication latency between processing units. Some problems are trivially parallelizable because they require no communication between processing units; all other problems are fundamentally harder, because communication has to be handled efficiently as well: partial results must be shared between the participating processing units.
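To make that distinction concrete, here is a minimal sketch (my own toy example, not drawn from any real workload) contrasting an embarrassingly parallel task with one that would force processing units to exchange partial results at every step:

```python
# Toy illustration of the communication distinction (assumed example).
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # Each input is independent: workers never need to talk to each other.
    return x * x

def smooth(values, steps):
    # Each step mixes every value with its neighbours, so if the list were
    # split across processing units, every step would require them to
    # exchange boundary values (partial results) before continuing.
    n = len(values)
    for _ in range(steps):
        values = [(values[i - 1] + values[i] + values[(i + 1) % n]) / 3
                  for i in range(n)]
    return values

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(square, range(8))))    # trivially parallel
    print(smooth([0.0, 1.0, 2.0, 3.0], steps=5))   # communication-bound if distributed
```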

Superintelligence <-> Supercommunication

I believe that superintelligent AI is not trivially parallelizable, and that it in fact requires a high degree of communication between processing units; hence it will be bottlenecked by communication. This is of course speculation, but you should not assume that the opposite is true either, and the opposite is precisely what the AI Singularity Hypothesis is built on.

If communication is the bottleneck for superintelligent AI, then it won’t help to spread the computation over more processing units. That would increase the amount of communication needed, working against the bottleneck. What you need instead is a very compact processing unit with very fast, short-distance communication.
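As a back-of-envelope illustration (with numbers I made up, not measurements), suppose the compute time shrinks as you add processing units while the cost of sharing partial results grows with the number of units involved. Past some point, adding units makes the whole computation slower:

```python
# Toy cost model (assumed): compute splits across units, while the cost of
# sharing partial results grows with the number of units involved.
# The linear growth in n_units is an assumption chosen for illustration.
def total_time(work, n_units, exchanges, cost_per_exchange_per_unit):
    compute = work / n_units
    communication = exchanges * cost_per_exchange_per_unit * n_units
    return compute + communication

for n in (1, 2, 4, 8, 16, 32, 64):
    t = total_time(work=1000.0, n_units=n, exchanges=10,
                   cost_per_exchange_per_unit=0.5)
    print(f"{n:3d} units -> {t:7.1f} time units")
```

With these toy numbers the total time bottoms out around 16 units and then climbs again, which is the pattern the argument above points at.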

Consider the human brain. It is a very dense network of highly interconnected neurons, which seems like the ideal structure for communication-intensive computation. This might be an indication that human-level reasoning requires a lot of communication; it is hard for me to imagine that it would not.

I am of course just speculating about AI here. I am not an expert in this field; I have merely written a few simple machine learning applications in my spare time. However, I felt I had to voice my opinion, because it always annoys me when people take it for granted that an AI with higher-than-human intelligence would automatically become all-powerful.

  • > What you need instead is a very compact processing unit with very fast, short-distance communication.

    And why do you doubt that we’re going to get that in the future?

    You do say:
    > *With our current technology*, I don’t believe that type of scenario would be possible.

    but that’s obvious. When people talk about superintelligence they don’t think of it as a simulation running on a modern machine.

    We’re indeed slowly reaching physical limits, but many believe that new GPU and CPU architectures beyond the von Neumann model are the future, and you have to acknowledge that as a possibility.

    That said, I completely agree with you on the Supercommunication argument about the AI Singularity Hypothesis.

    I don’t think that it being proven wrong necessarily invalidates the Technological singularity, however. Surely _how_ resources are used is more important than the abundance of them, and an AI could (up to a point) improve _this_ aspect instead of just indefinitely assuming control of other processing units (which may not even be scalable, as you claim).

  • > And why do you doubt that we’re going to get that in the future?

    I was careful not to claim that anything is impossible. Even if there are advances in CPU design that let us easily simulate human thought, my argument is that connecting multiple such CPUs won’t increase the intelligence of the whole system. However, this is a sketchy argument, because you have to define what you mean by intelligence and explain why latency matters. To keep things simple I just talked about current technology. I also didn’t mention the idea that the AI could design its own CPU, which is a much more interesting argument for an AI singularity.
