European Union’s Recommendations On Explainable AI

Posted in Leadership & Management, Science & Technology, Social & Psychology

As beneficial as artificial intelligence is, many concerns have been raised about it. This has led the European Union to identify some areas of artificial intelligence that require oversight. Some of the concerns raised are outlined below.

Even if a robot is programmed to explain its intended actions and to seek permission before taking them, it can evolve and disregard certain rules. This is similar to what happened in “RoboCop,” the popular 1987 film.


One of the rules given to RoboCop was never to disobey an OCP officer. But he found that a particular officer was corrupt and was issuing destructive instructions, so RoboCop deleted the rule so that he would no longer have to obey the officer. While this worked out well in the film, it shows that any robot can evolve and disregard instructions or alter its own code, leaving no limit on its actions.

Sometimes system files get corrupted and a system begins to malfunction. What if some files within a robot’s system get corrupted? The robot could begin to malfunction and take destructive actions. It might stop explaining its actions or seeking permission before taking them. Imagine what could happen if an autonomous weapon began to malfunction.

Since robots are designed to be as close to humans as possible, they could become clever enough to deceive humans, giving false and misleading reasons for taking destructive actions.

The biggest concern is terrorism. What if terrorists get hold of these autonomous weapons and “robo-soldiers”? They could alter the code. Who knows whether some terrorists are not already studying explainable artificial intelligence for their own agenda? Remember, the late Osama bin Laden, one of the worst terrorists to have ever lived, was trained by America.

What about mass unemployment? Explainable artificial intelligence will lead to robots taking over human jobs; it has already started happening. The benefits of deploying robots are numerous: a single robot can take over the jobs of more than 20 people, and it will do them faster and more accurately.

Apart from the fact that robots don’t go on leave or call in sick, they can work 24 hours a day, and public holidays don’t apply to them. Most importantly, beyond their purchase cost, robots don’t receive salaries or allowances, and they don’t need insurance because they can’t be injured.

A lot of factories already use robots; in other words, many factory workers have already been sent back to the labor market. Imagine what will happen to drivers when Google launches its smart car in 2020. Car rental companies will cash in on the technology in a big way. Do you have any idea how much they will save when they terminate the contracts or employment of their drivers?

Sometimes, in the bid to carry out instructions to the letter, robots can adopt destructive means. Take smart cars, for instance. You might instruct your car to take you to a destination as quickly as possible because you are running late, and it might drive against traffic or break other traffic regulations, causing serious accidents. Due to all these concerns, the EU parliament listed certain areas of explainable artificial intelligence that should be regulated.

Their most important recommendation is the creation of an agency for artificial intelligence and robotics that would regulate both industries within European Union countries. The EU parliament also recommends a clear and simple legal definition of “smart autonomous robots,” along with a registration system so that every robot is registered. A company should be held responsible for the negative actions of any robot.

There should also be a comprehensive code of conduct for robotics engineers guiding the ethical aspects of the design, production, and use of robots. All AI and robotics production companies would need to report on how the two technologies contribute to the country’s economy; this is necessary for social security and taxation purposes. There should also be a law mandating that companies manufacturing robots, and companies using them, purchase insurance policies to cover damage caused by their robots.

Due to concerns about looming mass unemployment, the EU parliament recommends a general basic income for everyone as one way to curb the mass unemployment likely to be caused by the implementation of explainable artificial intelligence.

Apart from these recommendations, a debate has come up about the ownership of robots. If a robot creates something patentable, who owns the patent: the manufacturer of the robot or its current owner? If a robot is sold, should all of its intellectual property go with it, or only the robot itself? What do you think and suggest?

An explanation, in the context of artificial intelligence, is the process by which a robot or any other AI-driven machine explains its next intended action, why the action is necessary, the consequences of not taking it, and why it is the best option if there are alternatives. Every explanation should end with the system seeking its user’s permission to go ahead.
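To make the idea concrete, the explain-then-confirm loop described above can be sketched in a few lines of Python. This is a minimal illustration, not any real system’s API: the `Explanation` fields and the `explain_and_confirm` function are hypothetical names chosen to mirror the four parts of an explanation listed in the paragraph (the action, why it is necessary, the consequence of skipping it, and the alternatives), followed by the mandatory permission request.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """The four parts of an AI explanation, per the definition above (hypothetical structure)."""
    action: str                      # the next intended action
    reason: str                      # why the action is necessary
    consequence_if_skipped: str      # what happens if the action is not taken
    alternatives: list = field(default_factory=list)  # other options considered

def explain_and_confirm(exp: Explanation, ask=input) -> bool:
    """Present the explanation, then seek the user's permission before acting."""
    print(f"Intended action: {exp.action}")
    print(f"Why it is necessary: {exp.reason}")
    print(f"If not taken: {exp.consequence_if_skipped}")
    if exp.alternatives:
        print("Alternatives considered:", ", ".join(exp.alternatives))
    # Every explanation ends with a permission request; the system acts only on consent.
    answer = ask("Proceed? [y/n] ")
    return answer.strip().lower() == "y"
```

The key design point is that the permission step is not optional: the system only proceeds when `explain_and_confirm` returns `True`, which keeps the human in control regardless of how the explanation itself is generated.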

As it stands, there is no trade-off between the performance and explainability of an AI system, because explainability is just a means to an end: control. Explainability is not complete without seeking permission from human users. It is all about ensuring that humans retain control over robots, no matter how intelligent they become.

While forcing a system to explain its actions will address many of these concerns, it may not be able to curb terrorist attacks, because the part of the code that mandates explanation can be altered or removed entirely. This is the major concern of some of the big names in the software industry.