Regulating High Risk AI in EU
Ensuring trust, safety, security, innovation, integrity, democracy, growth, sustainability, and accountability?
With the proposed AI Act, the EU seeks to ensure trustworthy AI in the European single market. Taken together with existing and planned measures such as the GDPR, the Digital Services Act, the Data Governance Act, and the AI Liability Directive, the Act forms part of a comprehensive regulatory framework for our fast-developing digital world: a new Code Napoléon for the digital age, intended to ensure safety, security, integrity, democracy, growth, sustainability, and accountability, all while promoting innovation and facilitating integration in a vibrant single market.

The AI Act takes a 'Risk-Based Approach': AI applications will be regulated if, and only if, they pose a risk to fundamental rights and important public interests. Applications considered 'High Risk' will be subject to extensive regulatory burdens. This workshop will bring together legal experts to explore what 'High Risk' AI means, how the AI Act proposes to regulate it, and what public authorities, industry, scholars, and the general public need to know before the Act comes into force.