The European Commission says it designed its proposed AI Regulation to protect Europeans from AI-driven harms. The proposals set out a series of banned or regulated AI systems and attempt to eliminate bias from AI training data.
But would the regulation go far enough to protect people’s fundamental rights? Is the Commission right to take a “product safety” approach to AI systems? Would the law justify certain applications of biometric surveillance, “emotion recognition” and psychological manipulation?
Without proper oversight, AI can exacerbate human biases, intrude on people’s privacy, and deepen social inequality. Does the EU’s proposed AI Regulation address these problems—or could it make them worse?