Since artificial intelligence became a hot topic, some people have raised concerns about it. According to them, creating something smarter and more capable than humans could be very dangerous.
Rich Walker, director of the Shadow Robot Company, is one of those warning that the unregulated development of AI technology could be hazardous for people.
Speaking to express.co.uk, Walker said that before deciding to develop AI and embed more of it into our lives, we must be aware of its negative sides.
"If that technology starts to be used we should treat it the same way we have treated other technologies people have used in desperate situations. We don’t allow people to use chemical weapons in war, we don’t allow people to use biological weapons and the use of land mines and cluster mines is heavily regulated or controlled. We have already decided that certain types of weapons are not acceptable, maybe artificial intelligence just fits into the ‘Yeah that's not acceptable either’ bucket."
Walker isn't alone in his opinions about the negative sides of artificial intelligence. Many people argue that building AI into every aspect of our lives could be dangerous if the right precautions aren't taken.
In August 2019, at the World AI Conference in Shanghai, Elon Musk made his views on artificial intelligence clear. He said, "I think generally, people underestimate the capability of AI. They sort of think like, it's a smart human. But it's, it's really much—it's going to be much more than that. It’ll be much smarter than the smartest human. It’ll be like, can a chimpanzee really understand humans? Not really, you know. We just seem like strange aliens. They mostly just care about other chimpanzees. And this will be how it is more or less in relativity. In fact, if the difference is only that small, that would be amazing. Probably it's much, much greater. So like, the biggest mistake that I see artificial intelligence researchers making is assuming that they're intelligent. Yeah, they're not, compared to AI. And so like, a lot of them cannot imagine something smarter than themselves, but AI will be vastly smarter—vastly."
"If advanced AI (beyond basic bots) hasn’t been applied to manipulate social media, it won’t be long before it is." — Elon Musk (@elonmusk), September 26, 2019
The ways artificial intelligence could go wrong vary widely: it could enable autonomous weapons, drive social manipulation through social media, or erode privacy through ubiquitous cameras and facial recognition algorithms, among other risks.
RT reported that Russian President Vladimir Putin said, "Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."
Physicist Stephen Hawking also believed artificial intelligence could bring harm to our lives. He once said, "Unless we learn how to prepare for, and avoid the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."
So, even though some believe artificial intelligence will be good for our future, some scientists warn it could cause great harm if it isn't developed and programmed wisely and we ignore the potential risks. What are your opinions on artificial intelligence?