Code red


Could AI spell doom?

By Chitwan Khosla, Features Editor

“The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

This warning comes from none other than one of the world’s leading theoretical physicists, Professor Stephen Hawking. Ironically, the renowned scientist, who suffers from the motor neuron disease amyotrophic lateral sclerosis (ALS), communicates using a voice synthesizer that employs artificial intelligence (AI). It works by learning from Hawking’s previous word usage and then suggesting words he might want to use next.
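To get a feel for the idea, here is a minimal sketch of a next-word predictor of that kind, assuming a simple bigram frequency model trained on a user’s past writing. The function names and toy corpus are illustrative only; the actual system behind Hawking’s synthesizer is far more sophisticated.

    from collections import Counter, defaultdict

    def train_bigram_model(text):
        # For each word, count which words have followed it in past writing.
        words = text.lower().split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def suggest_next(model, word, k=3):
        # Suggest the k words most often seen after `word`.
        return [w for w, _ in model[word.lower()].most_common(k)]

    # Toy example: train on past sentences, then predict.
    corpus = "the universe is expanding and the universe is not static"
    model = train_bigram_model(corpus)
    print(suggest_next(model, "the"))       # ['universe']
    print(suggest_next(model, "universe"))  # ['is']

The more a user writes, the better such a model reflects their habits, which is why the article describes the synthesizer as learning from previous word usage.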

Somewhat similar sentiments were echoed by Elon Musk, the founder of SpaceX and a co-founder of PayPal and Tesla Motors, at a conference at MIT in October: he said that “AI is more risky than nuclear weapons” and could be “our biggest existential threat.” Comparing AI to a demon, he called for national and international regulation of AI development. Others have voiced similar concerns, among them Nick Bostrom, the Swedish philosopher and director of Oxford’s Future of Humanity Institute, who expressed his worry in no uncertain terms, describing such a future as “a society of economic miracles and technological awesomeness with nobody to benefit.”

Such statements have revived the debate over the continued pursuit of more advanced AI. Could AI replace humans as the most intelligent species on the planet? It is a question of deep concern.

Fears about intelligent machines go back a long way. Fiction and pop culture are rife with depictions of machines taking over: Colossus: The Forbin Project (1970), Westworld (1973), The Terminator (1984), and, perhaps most famously, Stanley Kubrick’s 2001: A Space Odyssey (1968), in which the helpful shipboard computer HAL grows increasingly destructive and horrifying as the film goes on.

Ray Solomonoff, a pioneer in AI, warned in 1967 about delegating responsibility to machines and was among the first to touch on the notion of a technological singularity, now understood as the point at which AI passes beyond human control, which Solomonoff loosely called the “infinity point.” He feared that the realization of AI would be a sudden occurrence: “At a certain point in the development of the research we will have had no practical experience with machine intelligence of any serious level: a month or so later, we will have a very intelligent machine and all the problems and dangers associated with our inexperience.”

Ray Kurzweil, one of the leading thinkers on future technology and a director of engineering at Google, predicts the singularity may arrive in 2045. Kurzweil shares Solomonoff’s and Hawking’s concerns about the consequences of ever faster and more intelligent machines, but he points to their advantages as well. He notes that an increasing number of responsibilities are being handed to machines, and that the benefits are real: the routine calculations behind GPS navigation, air traffic control systems, guided missiles, and driverless vehicles. Still, he agrees that delegating responsibility to intelligent machines has its own nightmares.

Another risk of computers taking over from humans is the enormity of the mistakes machines can make. Program trading’s large role in the stock market crash of 1987 and power grid shutdowns caused by computer error are often cited as examples. Hardware and software glitches are extremely difficult to detect in advance and could wreak havoc in large-scale systems. Combine these with the ill intentions of hackers, and the results could be disastrous. There are also a variety of other ways in which computer systems can get out of control, creating situations that may be hard to remedy.

Some experts may call the singularity “light-years away” or “way overblown,” but others think that human devolution has already begun. Humans have already entrusted many intelligent tasks, such as writing, calculating, navigating, and memorizing facts, to machines. Is that proof enough of intelligent machines taking over? In the words of microbiologist and science fiction writer Joan Slonczewski, “the question is, could we evolve ourselves out of existence, being gradually replaced by the machines?”

Whether the singularity arrives in the next couple of decades or not until 2100, it may not be far off.