
Computer scientists have asked whether we would be able to control a superintelligent AI at all, and ensure it would not pose a threat to humanity.

“A super-intelligent machine that controls the world sounds like science fiction.

But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it.

The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity”, says study co-author Manuel Cebrian.

In their study, the team conceived a theoretical containment algorithm that ensures a superintelligent AI cannot harm people under any circumstances: it would first simulate the behavior of the AI, and halt it if that behavior were deemed harmful.
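In rough terms, the proposed procedure looks something like the following sketch. This is a minimal illustration, not the paper's formal construction; every helper here (`initial_state`, `finished`, `simulate_step`, and above all the harm decider `would_cause_harm`) is a hypothetical placeholder:

    # Hypothetical placeholders -- none of these come from the study:
    def initial_state(program, data): ...   # set up a simulation of program(data)
    def finished(state): ...                # has the simulated run ended?
    def simulate_step(state): ...           # advance the simulation one step
    def would_cause_harm(state): ...        # the decider the study rules out

    def contain(ai_program, world_input):
        """Simulate ai_program on world_input, halting it before any
        step that is predicted to cause harm."""
        state = initial_state(ai_program, world_input)
        while not finished(state):
            if would_cause_harm(state):
                return "halted: harm predicted"
            state = simulate_step(state)
        return "ran to completion safely"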

But careful analysis shows that, in our current paradigm of computing, such an algorithm cannot be built.

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations.

If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI.

In effect, this makes the containment algorithm unusable”, says Iyad Rahwan.

Based on these calculations, the containment problem is incomputable: no single algorithm can determine whether an AI would produce harm to the world.
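To see why, consider a hedged sketch of the standard reduction to Turing's halting problem (the names `is_harmful`, `do_harm`, and `halts` are illustrative, not the study's notation). If a perfect harm decider existed, it could be turned into a halting decider:

    # Sketch of the reduction, under the assumption that a perfect harm
    # decider is_harmful(program, data) exists. All names are
    # illustrative; the study's formal construction differs in detail.

    def do_harm():
        pass  # stands in for some concretely harmful action

    def is_harmful(program, data):
        """Hypothetical: True iff program(data) ever causes harm.
        The incomputability result says this cannot be implemented
        for arbitrary programs."""
        raise NotImplementedError

    def halts(program, data):
        """If is_harmful existed, halting would be decidable."""
        def wrapper(_ignored=None):
            program(data)   # runs forever if program(data) never halts
            do_harm()       # reached only if program(data) halts
        # wrapper is harmful exactly when program(data) halts, so
        # is_harmful would decide the halting problem.
        return is_harmful(wrapper, None)

Since the halting problem is known to be undecidable, the assumed harm decider cannot exist for arbitrary programs.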
