AI Act Compliance Checker now live online
This excellent tool will tell you whether your AI system complies with the EU AI Act:
Concerns Over New Loophole in AI Act
A coalition of 118 civil society organisations, including BEUC, European Digital Rights (EDRi), and Access Now, has appealed to EU lawmakers regarding what they consider a critical oversight in the AI Act. The group points to a loophole that potentially jeopardizes the entire legislation.
In its initial form, the European Commission’s draft categorically defined ‘high-risk’ AI systems using the criteria laid out in Annex III. These included systems such as AI controlling medical implants, where failure could result in death, and systems responsible for traffic control.
Subsequent amendments by the Council and European Parliament, however, have blurred these clear lines by allowing developers to assess for themselves whether their system qualifies as high-risk. In effect, developers get to decide whether they are bound by the law at all.
To ensure a consistent, objective, and legally sound framework, the coalition has urged EU lawmakers to reject the changes made to Article 6 and to return to the Commission’s original risk-classification method for identifying ‘high-risk’ AI systems within the AI Act.
Academics Call for a Rigorous Rights Impact Assessment in AI Act
The Brussels Privacy Hub, in collaboration with over 110 prominent academics, has issued a public letter championing the need for a stringent fundamental rights impact assessment within the AI Act. Their recommendation aligns with the European Parliament’s draft of the Act, emphasizing the importance of:
- Defining Clear Criteria: The Act should provide unequivocal parameters to assess the repercussions of AI on fundamental human rights.
- Promoting Transparency: Results from the impact assessment should be made publicly available in detailed, comprehensible summaries.
- Prioritizing End-user Participation: There’s a pressing need to incorporate feedback from affected end-users, with particular attention to those in vulnerable situations.
- Engaging Independent Authorities: The process should actively involve impartial public bodies in the assessment proceedings and ensure auditing mechanisms are in place.