Artificial intelligence (AI) conjures a range of images, from the astonishing to the abominable. AI refers to a variety of technologies capable of analyzing large sets of data and using what they learn to inform decisions. Machine learning has proved useful in medicine, helping to discover and develop new treatments and thereby saving lives, yet AI also carries real dangers: applications such as facial recognition in public spaces have the potential to threaten citizens’ fundamental rights.
Aiming to channel AI’s massive potential for social good while limiting its most dangerous applications, the European Union (EU) unveiled on April 21 an array of proposed regulations intended to make Europe the “gold standard” for AI innovation and consumer protection.
While the EU is eager to foster technological innovation and compete with global tech leaders China and the U.S., its sweeping legislation aims to curtail the development and implementation of AI with ethically questionable and even dystopian-sounding uses. One example that has proven especially alarming in Western circles is China’s application of biometric identification AI to its “social credit system,” in which the technology can track people’s actions and punish those caught doing wrong. Less Orwellian but still dangerous “high-risk” uses of AI, in areas such as critical infrastructure, college admissions, and loan applications, will be strictly regulated as well.
In order to ensure that AI technology can develop without harming consumers, the EU is taking a “risk-based approach” that seeks to regulate or ban the implementation of technologies capable of infringing on people’s rights. Margrethe Vestager, the European Commission’s executive vice president for the digital age, stressed that the regulations were designed not to stymie innovation but rather to “make sure AI can be trusted.” As such, EU regulators will investigate companies that create AI software, and companies will be required to provide risk assessments and documentation explaining how their technology makes decisions. Furthermore, the EU will mandate that AI systems be overseen by humans as an additional safety precaution. As part of the EU’s pursuit of greater transparency for consumers, applications such as chatbots must clarify to users that they are not interacting with humans, and software that creates “deepfakes,” digitally manipulated images, must tell users that the images are computer generated. While the EU is cracking down on “high-risk” applications of AI, it is decidedly reluctant to regulate less consequential forms such as video games and spam filters.
The EU’s hesitance to regulate low-risk forms of AI demonstrates its desire to balance fostering technological innovation, in a region competing with both the U.S. and China, against preventing dangerous software from reaching the market. The EU is not inimical to AI; it even plans to invest billions of dollars annually in the technology’s development, though it will accept that technology only on appropriate terms. To oversee AI regulations, the EU is proposing a European Artificial Intelligence Board, which would develop standards applying to anyone providing AI software in the EU.
As Europe aims to fulfill its goal of becoming the “global hub for trustworthy Artificial Intelligence,” the EU is prepared to enforce its new regulations with severe penalties for violators. Companies that violate the rules can be fined up to 6% of their global annual revenue. However, these penalties will be enforced only if companies refuse to heed the warnings of EU authorities; officials will ask companies to remove unapproved software from the market before punishing them. Additionally, the EU rarely demands maximum fines from violators of its regulations, suggesting that businesses are unlikely to face the full proposed 6% penalty.
Although the proposed legislation has attracted much praise from digital-rights activists, advocates for tighter regulation and critics alike have pointed to several flaws that may undermine the regulatory mission of balancing innovation and caution. Although Vestager recognizes the dangers of biometric identification technologies, the legislation contains loopholes, gaping ones in critics’ eyes, that permit limited use of such technologies: law enforcement may deploy this AI software when searching for missing children or when necessary to prevent terror plots.
Furthermore, because the strict legislation will test AI producers operating in Europe, critics fear that strong regulations will hurt innovation on the continent and hand companies in places like China a significant advantage. Benjamin Mueller, a critic of the legislation, remarked that “the U.S. and China are going to look on with amusement as the EU kneecaps its own startups,” arguing that the EU is putting itself at a competitive disadvantage by hampering innovation and making it “technologically infeasible to build AI in Europe.”
The EU’s tech regulations have frequently served as a blueprint for those implemented throughout the world, and that pattern appears likely to continue as AI faces major legal restrictions. The Biden administration counts several critics of “Big Tech” among its ranks, making the environment suitable for stricter regulations, and Britain, India, and China have all begun to strengthen regulations in various arenas of the tech world. The EU’s “gold standard” marks a crucial step toward shaping the future of AI technology and its role in society. Prioritizing rights while advocating for innovation, the EU is attempting a difficult balance in a hypercompetitive world, doing its best to preserve the freedoms enjoyed by its citizens.
Jacob Rosenzweig is from New York, New York, and studies History and Classics.