What is AI?

Artificial intelligence (AI, also machine intelligence, MI) is intelligent behaviour demonstrated by machines, in contrast to the natural intelligence (NI) of humans and other animals. AI is relevant to any intellectual task. From Siri to self-driving cars, healthcare and medical diagnosis, creating art, proving mathematical theorems, playing games (such as chess or Go), search engines such as Google, image recognition in photographs, spam filtering, prediction of judicial decisions, and targeting online advertisements, artificial intelligence is progressing rapidly.

While science fiction often portrays AI as robots with human-like characteristics, in reality companies like Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit organisation, the Partnership on AI, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. Apple joined these companies as a founding member of the Partnership on AI in January 2017.

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. In reality, artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Can Artificial Intelligence (AI) be dangerous?

Despite all of our advances in AI technology, it is truly difficult to make an android indistinguishable from a human, both in appearance and behaviour. And even if androids achieved superintelligence, this would not necessarily mean that they would experience the same range and types of human emotions and emotional intelligence that the replicants in Blade Runner seem to display. Any sort of advanced emotional intelligence would likely require a separate and different programming approach, and such capabilities would only arrive in a later technological evolution. Most importantly, any synthetic human-android would almost certainly include fail-safe programming, preventing it from directly or indirectly harming humans.

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts at the Future of Life Institute think two scenarios most likely:

The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties.

The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult.

Recently there has been a huge interest in AI safety, with Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and many other big names in science and technology expressing concern in the media and via open letters about the risks posed by AI. Thanks to recent breakthroughs, many AI milestones which experts viewed as decades away merely five years ago have now been reached, making many experts take seriously the possibility of superintelligence in our lifetime. While some experts still guess that human-level AI is centuries away, others guess that it could happen before 2060.

The Future of Life Institute explains, “Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave”. And they give the perfect example: our own evolution. “People now control the planet, not because we’re the strongest, fastest or biggest, but because we’re the smartest. If we’re no longer the smartest, are we assured to remain in control?”

Max Tegmark, the President of the Future of Life Institute, puts it this way:

“Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial.”

Predicting the future is hard, but movies like Blade Runner make it fun to try, ultimately helping us think about how AI can improve quality of life for everyone.