As soon as humanity began dreaming of robots, tales of artificially conscious beings taking over have been a looming theme in popular culture. Fast forward to the 21st century, and we have semi-intelligent artificial assistants in our phones. Are they harmless, or something worse: a backdrop for a real-world Terminator scenario, perhaps? According to 26 researchers from 14 organizations worldwide, there is a real possibility that artificial intelligence could pose a threat to humanity in the coming years.
Machines Learn Quickly
Alexa is not likely to start whispering to you in the night about her plans to take over the world (although that would be an interesting conversation), but the threat to our existence is quite real. Simply peering into our pockets shows how fast our low-level artificially intelligent assistants have evolved over the years; technology is moving fast, and it will only continue to evolve.
Writer and documentary filmmaker James Barrat painted a grim outline of the future when speaking about his latest work, ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’:
“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses, to which they could flee if it all hits the fan.”
Among researchers, the question isn’t whether artificial intelligence will become dangerous, but when it will reach that stage. Entrepreneur and engineer Elon Musk noted that “the risk of something seriously dangerous happening is in the five-year timeframe. Ten years at most.” To help prevent the rise of a dangerous artificial intelligence, Musk co-founded the non-profit AI research company OpenAI back in 2015.
What are the dangers of fully sentient artificially intelligent systems? We would be creating a digital intelligence that could think and compute faster than anything we have imagined. Humans would be helpless at the hands of a rigid AI system that doesn’t adhere to normal moral or ethical standards. “Terminator-like” machines would be designed to follow the most efficient path, not the most human one, which could have a massive impact on weapons systems or on infrastructure like power and water supplies.
The Assistant in Your Pocket
But back to those friendly assistants on your smartphone: could they ever pose a threat? For now, the primary concern is the data you allow AI assistants to access and how that information is used. Some researchers worry that assistants accumulate saved queries and location information, and that this combined data can provide too much insight into your life.
On Android devices, you can control what information you share with Google Assistant. The company says that your personal information is not currently sold to anyone. If you want to take the extra step, dive into the privacy settings and adjust exactly which permissions Google Assistant has. Similar options can be found in the system settings of other assistants, such as Siri or Cortana.
As far as your smartphone assistant calling up Skynet and engaging in a James Cameron end-of-days scenario, you don’t need to worry yet. Although advanced artificial intelligence looms in our future, humanity has more questions than answers as this developing technology progresses.
For now, we can continue to consider how we use our digital assistants in our lives and how much information we choose to share with them.