The European Union on Wednesday proposed strict regulations on the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can deploy a technology regarded as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.
The draft regulation would limit the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and judicial systems — areas classified as “high risk” because they could endanger people’s safety or fundamental rights.
Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.
The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook, and Microsoft, as well as scores of other companies that use the software to develop medicine, underwrite insurance policies, and judge creditworthiness. Governments have used versions of the technology in criminal justice and in allocating public services such as income support.
Companies that violate the new regulations, which could take several years to move through the European Union’s policymaking process, could face fines of up to 6 percent of global sales.
“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, executive vice president of the European Commission, which oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the EU is leading the way in developing new global standards to ensure that AI can be trusted.”
The European Union rules would require companies deploying artificial intelligence in high-risk areas to provide regulators with evidence of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies must also guarantee human oversight in how the systems are created and used.
Some applications, such as chatbots that carry on human-like conversations in customer service situations, and software that creates hard-to-detect manipulated images such as “deepfakes,” would have to make clear to users that what they are seeing is computer generated.
For years, the European Union has been the world’s most aggressive watchdog of the technology industry, and other nations often use its policies as blueprints. The bloc has already enacted the world’s most far-reaching data protection regulations and is debating additional antitrust and content-moderation laws.
But Europe is no longer alone in pushing for tougher oversight. The largest technology companies now face a broader reckoning from governments around the world, each with its own political and policy motivations for curbing the industry’s power.
In the United States, President Biden has filled his administration with industry critics. Britain is creating a tech regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic tech giants like Alibaba and Tencent.
The outcome in the coming years could reshape how the global internet works and how new technologies are used, with people having access to different content, digital services, or online freedoms depending on where they are.
Artificial intelligence — in which machines are trained to perform jobs and make decisions by studying huge volumes of data — is seen by technologists, business leaders, and government officials as one of the world’s most transformative technologies, one promising major gains in productivity.
But as the systems become more sophisticated, it can be harder to understand why the software is making a decision, a problem that could worsen as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy, or result in more jobs being automated.
The release of the draft law by the European Commission, the bloc’s executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.
“There has been a lot of discussion in recent years about what it would mean to regulate AI, and the fallback option so far has been to do nothing and see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time a country or regional bloc has tried.”
Ms. Kind said many worried that the policy was too broad and left too much discretion to companies and technology developers to regulate themselves.
“When it doesn’t have strict red lines and guidelines and very firm boundaries on what is acceptable, it opens up a lot for interpretation,” she said.
The development of fair and ethical artificial intelligence has become one of the most contentious issues in Silicon Valley. In December, the co-lead of a Google team studying the ethical use of the software said she had been fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software. Debates have raged inside Google and other companies about selling the cutting-edge software to governments for military use.
In the United States, government agencies are also weighing the risks of artificial intelligence.
This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that “could deny people employment, housing, credit, insurance, or other benefits.”
Elsewhere, governments in Massachusetts and in cities like Oakland, Calif.; Portland, Ore.; and San Francisco have taken steps to restrict police use of facial recognition.