EU Reaches Agreement on AI Act
EU Parliament and Council negotiators have reached a provisional agreement on the Artificial Intelligence Act, a bill intended to ensure that AI in Europe is safe, respects fundamental rights and democracy, and allows businesses to thrive and expand.
The agreement includes:
- Safeguards on general purpose artificial intelligence
- Limitations on the use of biometric identification systems by law enforcement
- Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
- Right of consumers to launch complaints and receive meaningful explanations
- Fines ranging from 7.5 million euro or 1.5% of turnover to 35 million euro or 7% of global turnover
Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:
- Biometric categorisation systems that use sensitive characteristics
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- Emotion recognition in the workplace and educational institutions
- Social scoring based on social behaviour or personal characteristics
- AI systems that manipulate human behaviour to circumvent free will
- AI used to exploit the vulnerabilities of people
Negotiators also agreed on a series of safeguards and narrow exceptions for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and limited to strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime.
Clear obligations were agreed for AI systems classified as high-risk. MEPs successfully included a mandatory fundamental rights impact assessment, among other requirements, which also applies to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk. Citizens will have the right to launch complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights.
The new Act has come in for criticism from a number of quarters. Speaking to Reuters, Daniel Castro, Vice President of the Information Technology and Innovation Foundation (ITIF) said, ‘Given how rapidly AI is developing, EU lawmakers should have hit pause on any legislation until they better understand what exactly it is they are regulating. There is likely an equal, if not greater, risk of unintended consequences from poorly conceived legislation than there is from poorly conceived technology. And unfortunately, fixing technology is usually much easier than fixing bad laws.’