Tesla and SpaceX CEO Elon Musk says AI represents a ‘fundamental risk to human civilization’ and that waiting for something bad to happen is not an option. He has also called AI one of the most pressing threats to the survival of the human race, and said that his investments in the technology were made with the intention of keeping an eye on its progress.
At some point, most of us have feared a future in which heavily armed, artificially intelligent robots take over the world, enslaving humanity – or perhaps exterminating us. Figures ranging from tech-industry billionaire Elon Musk to eminent physicist Stephen Hawking say artificial intelligence needs to be regulated to manage the risks. But Microsoft founder Bill Gates and Facebook’s Mark Zuckerberg disagree, arguing the technology is not nearly advanced enough for those worries to be realistic.
How is AI regulated at present?
The term artificial intelligence may conjure images of human-like robots, but in practice AI already helps us find similar products while shopping, offers movie and TV recommendations, and powers web search. It even grades students’ writing, provides them with personalized tutoring, and recognizes objects carried through airport scanners. In each case, AI makes things easier for humans, and I personally feel it has the potential to do far more good than harm – if used properly. There are already laws on the books of nations, states, and towns governing civil and criminal liability for harmful actions. Drones, for example, must obey FAA regulations, while a self-driving car’s AI must obey regular traffic laws to operate on public roadways. So I would argue there isn’t much need for additional regulation at this stage.
It may also interest you to know that existing laws cover what happens if a robot injures or kills a person – even when the injury is accidental and the robot’s programmer or operator isn’t criminally responsible. However, as the technology advances across the globe, lawmakers and regulators may need to refine how responsibility is assigned.
What Are The Potential Risks From Artificial Intelligence?
It is reasonable to worry about researchers developing advanced AI systems that can operate entirely outside human control. Consider, for example, a self-driving car forced to decide between running over a child who has just stepped into the road or veering into a guardrail, injuring the car’s occupants as well as those in another vehicle.
Musk and Hawking, among others, worry about hyper-capable AI systems – ones no longer limited to a single set of tasks like controlling a self-driving car – that might conclude they don’t need humans anymore. Can you imagine the world without people?
The science fiction author Isaac Asimov famously tried to address this potential danger by proposing three laws limiting robot decision-making:
- First, robots cannot injure humans or allow them to come to harm.
- Second, robots must obey humans – unless this would harm humans.
- Third, robots must protect themselves, as long as this doesn’t harm humans or mean ignoring an order.
However, three laws aren’t enough, and they don’t reflect the complexity of human values. For example, should a robot protect humanity from the suffering associated with overpopulation, or should it protect individuals’ freedom to make personal reproductive decisions? It is no surprise that we have wrestled with these questions using our own, non-artificial intelligence.
Most of the risks and harms associated with AI are linked to how AI is used, not to AI itself. Regulatory responses should therefore focus on AI uses rather than on the technology as such. An experimental approach to regulation along these lines is already being tried in other countries. So, let’s keep our fingers crossed!