Data Privacy Happenings 📰
Hello from MineOS's monthly newsletter, The Privacy Mindset! 👋
The European Union’s Artificial Intelligence Act is here: the world’s first comprehensive law regulating AI. As with GDPR, the EU AI Act reaches well beyond Europe, since its applicability threshold covers any company serving EU markets or citizens.
This means that any company selling to even a single citizen of an EU country must comply, which has set off a chain reaction of global enterprises beginning to prioritize AI governance. While the drive to comply is admirable and a step in the right direction, compliance will be a complicated undertaking, as the AI Act itself has notable flaws.
Kai Zenner, a Digital Policy Advisor for the European Parliament and a key player in the creation of the AI Act, noted its conceptual flaws: “[Conceptually] … mixing product safety and fundamental rights as well as using New Legislative Framework concepts such as ‘substantial modification’ is not working for evolving AI systems.”
Granted, this was an unprecedented law that took three years and thousands of contributors to hammer out, inevitably relying on innumerable compromises to cross the finish line. Still, those imperfections will likely produce conflicting interpretations of the law, much as EU member states have enforced GDPR differently. Zenner writes, “the AI Act is creating an overcomplicated governance system … As a result, Member States will designate very different national competent authorities, which will - despite the Union Safeguard Procedure in Article 66 - lead to very different interpretations and enforcement activities.”
Despite these criticisms, the EU approach is the favorite to emerge as the global standard-bearer, given the activity – or lack thereof – on AI regulation elsewhere.
India is currently letting generative AI developers self-regulate and label their own products, after recent regulatory updates dropped the requirement to obtain government permission before making products available to users within India.
The UK is also taking a looser approach than the EU, despite sharing a risk-based framework. Rather than creating a single AI regulator, the UK will leave it to existing sector-specific regulators to assess AI risks as they see fit within their areas of expertise, guided by principles such as safety and transparency.
Much of what the US will do on AI regulation remains stuck at the posturing stage, with guidelines and endless commentary on how to approach AI but little legislative progress. Given how data privacy regulation has unfolded within the country, expect a decentralized approach that few other countries will look to emulate.