The Dangers of Artificial Intelligence

by freespirit

Technological advancements are happening at a rapid pace. Go back ten years, to 2005, and you will notice that much of what we enjoy now, YouTube, Netflix, smartphones and tablets, was non-existent back then. Go further back, to 1900, and the Wright brothers were still working toward their first flight and the Ford Model T had yet to be built. This goes to show how rapidly we are progressing in terms of technology. In fact, some experts predict that in this century we will make a thousand times the technological progress of the last century. This has prompted many to ask: where are we headed?

At this point, the question of the singularity arises. To put it simply, the singularity is the moment when an intelligence smarter than any human on the planet is created and begins to make smarter copies of itself at an ever-increasing rate. Such an intelligence would quickly become smarter than every human combined, making it the dominant intellectual force on Earth.

But how will it be created? Some theorists predict that the singularity will arrive via a hive mind (a human-computer hybrid), while others maintain that, given the rapid advancement of computer technology, the singularity will stem from an artificial intelligence system.

As of now, humans are only dealing with Artificial Narrow Intelligence (ANI). ANI systems are highly specialized and comparable to human intelligence only in select niches. For example, Deep Blue, the chess computer that defeated grandmaster Garry Kasparov, is an ANI: it is good at chess but does not perform intelligently at other tasks.

Then there is Artificial General Intelligence (AGI). An AGI would be comparable to the human brain in every aspect. In fact, many scientists think that AGI will be created once we can simulate the human brain on computers. As of now, computers lack the capacity for such a simulation; however, some experts predict that the era after 2025 will likely see the advent of AGI. The work toward it is already progressing at a rapid pace: in August of this year, Boston Dynamics released footage of a humanoid robot running freely through the woods!

Finally, there is Artificial Super Intelligence (ASI). At this level, the AI is smarter than humans, and if given access to the outside world, its actions would be unstoppable and unpredictable. An ASI could supposedly arise from an AGI in two different ways: a soft takeoff or a hard takeoff. A soft takeoff occurs when the AGI realizes that it can make smarter copies of itself and keeps iterating on those copies over a lengthy period until it reaches the level of ASI, while a hard takeoff would occur in the form of an intelligence explosion, with the AGI becoming an ASI in a matter of milliseconds.
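To make the difference between the two takeoffs concrete, here is a toy numerical sketch in Python. Everything in it, the 5% improvement rate, the feedback rule, and the arbitrary "superintelligence" threshold, is an invented assumption for illustration, not a model anyone has actually proposed:

```python
# Toy model of recursive self-improvement (illustrative only; every
# number below is an invented assumption, not a prediction).

THRESHOLD = 1_000_000  # arbitrary stand-in for "far beyond human level"

def generations_until(threshold, step):
    """Count self-improvement cycles until intelligence passes threshold."""
    intelligence, generations = 1.0, 0
    while intelligence < threshold:
        intelligence = step(intelligence)
        generations += 1
    return generations

# Soft takeoff: each copy is a fixed 5% smarter than the last,
# so intelligence compounds steadily, like interest.
soft = generations_until(THRESHOLD, lambda x: x * 1.05)

# Hard takeoff: the smarter the system, the bigger its next gain,
# so the growth rate itself grows and improvement explodes.
hard = generations_until(THRESHOLD, lambda x: x * (1 + 0.05 * x))

print(f"soft takeoff: {soft} cycles to pass the threshold")  # 284
print(f"hard takeoff: {hard} cycles to pass the threshold")  # 27
```

The point of the sketch is only the shape of the curves: when each improvement feeds back into the rate of improvement, almost all of the growth happens in the last few cycles, which is why a hard takeoff could look like nothing happening and then everything happening at once.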

You can easily see the unpredictability and danger that an Artificial Super Intelligence brings to the table. This is why experts have already started warning about it. At a recent UN meeting on emerging global risks, prominent scientists, including MIT physicist Max Tegmark and the founder of Oxford’s Future of Humanity Institute, Nick Bostrom, shed light on the unpredictable nature of ASI. According to them, ASI could have positive impacts on our world at first, but in the long term it would become an uncontrollable machine whose actions no one can predict. It could quite possibly manipulate financial markets, out-manipulate politicians, establish an unparalleled surveillance system, and even create inventions, including weapons, that we can’t even comprehend. Bostrom concluded the meeting with the following warning about our technological exploits: “All the really big existential risks are in the anthropogenic category. Humans have survived earthquakes, plagues, asteroid strikes, but in this century we will introduce entirely new phenomena and factors into the world. Most of the plausible threats have to do with anticipated future technologies.”

Furthermore, in an opinion piece that appeared in The Independent, the preeminent theoretical physicist Stephen Hawking wrote about the danger of creating an Artificial Super Intelligence: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets.” His point was that biological evolution would not be able to keep up with the intellectual advancement of an ASI, leaving us humans as nothing more than slaves.

In conclusion, the advent of Artificial Super Intelligence may not be far away if our ravenous appetite for technological advancement continues unchecked. After all, an ASI doesn’t need to be created deliberately; it could spring from one small misstep by a research scientist in a secluded lab, effectively ending the reign of humans on this great planet.

This article, The Dangers of Artificial Intelligence, was originally published at isoulscience.com.
