Superintelligent AI cannot be achieved
How can we engineer something that we cannot even define? In all of human history, we have never managed to work out what natural human intelligence is, so it is not clear what engineers are trying to imitate in machines. Rather than intelligence being a single, physical parameter, there are many types of intelligence, including emotional, musical, sporting and mathematical intelligences.
The Argument
AI is parasitic on human intelligence. It indiscriminately gorges on whatever human creators have produced and extracts the patterns, including some of our most detrimental habits. These machines lack the goals, strategies, and capacity for self-criticism and innovation that would let them transcend their databases by reflecting on their own thinking and their own goals. Humans will always have to control AI in order for it to accomplish tasks. AI cannot think for itself, and therefore cannot and will not end humankind.
They are helpless in the sense of not being agents at all: they do not have the capacity to be “moved by reasons” presented to them.[1]
In the long term, artificial superintelligence (ASI) programmed with self-motivation is possible in principle, but it is not desirable. The far more constrained AI that is practically possible today is not necessarily evil, but it poses its own set of dangers, chiefly that it might be mistaken for strong AI.
Counter arguments
Premises
[P1] AI cannot encompass emotional or reflective intelligence the way humans do.
[P2] AI needs humans to operate its systems; otherwise it is an empty agent.