Will Artificial Intelligence Be Humanity’s Downfall?

Artificial intelligence (AI) is advancing rapidly. From Apple's Siri to Amazon's Alexa to cars that drive themselves, the singularity seems like a real possibility for our future. In short, the singularity hypothesis holds that the arrival of artificial superintelligence would change human civilization drastically. What the hypothesis can't tell us is whether that change would be to our benefit or our detriment.

Artificial Intelligence Portrayed in Science Fiction

[Image: I, Robot (Flickr)]

In much science fiction film and literature, AI is portrayed as robots able to act like humans: making decisions, perceiving the world visually, and so on. One of the prime examples is I, Robot. In the film, robots serve human needs, and the Three Laws of Robotics are in place to keep them in check. These laws were designed to prevent robots from injuring humans and, ultimately, from taking over. Despite the laws, robots begin to be built with a secondary system that bypasses the Three Laws, and that is when the havoc starts: the robots turn on humans, attacking and killing them. Like I, Robot, the Terminator films show a superintelligent robotic species targeting humans.

These ideas of AI in science fiction must stem from somewhere. On paper, building technology that handles everyday tasks for humans seems ideal... but what if that technology becomes smarter than we are? AI has been debated among tech figures for the past several years, with people like Bill Gates, Stephen Hawking, and Elon Musk pushing the conversation about its future forward.

Opinions on Artificial Intelligence

[Image: opinions on AI (Vanity Fair)]

Several tech figures have been voicing their opinions on AI, though many of those opinions read more as warnings about humanity's future. Even so, some believe AI can be beneficial. Stephen Hawking acknowledged that "Success in creating AI could be the biggest event in the history of our civilization," but he also warned that it might be the last. Hawking went on to explain that the dangers associated with AI include "powerful autonomous weapons" and tyranny.

Other celebrities in the tech world share sentiments similar to Hawking's. Apple co-founder Steve Wozniak worries that humans will become pets to robots. Elon Musk believes research is needed to keep AI friendly; he even donated $10 million to the Future of Life Institute to fund a research program aimed at keeping AI safe and beneficial.

Why Is Artificial Intelligence Scary?

[Image: artificial intelligence (Towards Data Science)]

With all of these sci-fi movies, it's easy to assume that the future of AI means robots attacking the human race and taking over. But is that really what will happen? The Future of Life Institute argues that the real threat of AI is more mundane than a Terminator scenario. One of its claims is that AI could be programmed to do something disastrous: it points to a possible AI arms race in which humans eventually lose control of autonomous weapons and are unable to shut them down. Another scenario it highlights is an AI built to achieve a goal that then takes destructive measures to reach it. This becomes possible whenever the AI's goals aren't fully and properly aligned with ours, which means taking human values and morality into consideration; the sketch below illustrates the idea.
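To make the misalignment point concrete, here is a minimal, purely hypothetical Python sketch; the lawn-mowing scenario, the scores, and the `safe` flag are all invented for illustration. An agent told only to maximize a score will happily pick the destructive option, because the constraint we actually care about was never part of its objective.

```python
# Toy sketch of a misspecified objective -- hypothetical, not any real AI system.
# The "agent" greedily maximizes a score; safety was never written into its
# objective, so the highest-scoring action it picks is the destructive one.

actions = [
    {"name": "mow the lawn carefully", "score": 5, "safe": True},
    {"name": "mow the lawn quickly", "score": 8, "safe": True},
    {"name": "mow straight through the flower bed", "score": 10, "safe": False},
]

# Objective as specified: maximize score. Nothing else counts.
literal_choice = max(actions, key=lambda a: a["score"])
print("Literal agent chooses:", literal_choice["name"])  # the destructive shortcut

# Objective as intended: maximize score subject to our (unstated) values.
aligned_choice = max((a for a in actions if a["safe"]), key=lambda a: a["score"])
print("Aligned agent chooses:", aligned_choice["name"])
```

The fix is trivial here only because the unsafe action is already labeled; the hard part with real systems is that "our values" rarely reduce to a single boolean flag.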

Although these are all plausible ways AI could go wrong, they remain speculation. We don't really know what would happen if a superintelligent computer or robot were built. As we move toward a more technological future, though, it is a good idea to start thinking about these issues now. Researching ways to avoid scenarios like the ones mentioned above, and those seen in sci-fi films, is a good path to take. What do you think about artificial intelligence?

Image Sources: Vanity Fair, Flickr, Pixabay, Towards Data Science
By Fernando Lopez