AI technologies, particularly generative AI, and automated decision-making could bring immense change to legal practice. Bird & Bird’s recent webinar “Unpacking generative AI, automated decision-making, and the AI Act” provided a comprehensive view of these technologies in a legal context.
Takeaways on the AI Act and Automated Decision-Making:
The proposed European AI Act aims to provide comprehensive, risk-based regulation of artificial intelligence applications. AI has become an increasingly critical part of automated decision-making processes, and regulators are responding; President Biden’s Executive Order and the G7 guiding principles are prominent examples. Automated decisions, as defined by the General Data Protection Regulation (GDPR), are those made without human involvement.
The G7’s AI guiding principles and President Biden’s executive order both stress transparency, responsible information sharing, and the use of AI for beneficial purposes such as combating climate change.
These developments reflect an international push to establish frameworks for the ethical and secure deployment of AI technologies.
Data Protection & Intellectual Property:
The GDPR already sets standards that apply to data processed and generated by AI. Intellectual property rights also play a significant role when generative AI is used for product development. A key takeaway was to evaluate whether personal data is involved in AI processes and, if so, to ensure compliance with GDPR obligations such as data minimization and a lawful basis for processing.
Risk-Based Approach:
The session provided an in-depth breakdown of how the AI Act addresses risk, categorizing AI systems into prohibited, high-risk, and transparency-obligation tiers, with the latter requiring clear disclosure that content was produced by AI. High-risk systems that could harm health or fundamental rights will face stringent requirements, with potentially hefty fines for non-compliance.
Division of Labor Within the AI Value Chain:
The webinar clarified the AI value chain, from foundation model providers through downstream system providers to end users. Shima Abadi, an AI lawyer, emphasized the importance of understanding where one falls in the AI value chain, as different roles come with different obligations. Under the AI Act, there will be separate obligations on every entity in that value chain, including internal users who deploy AI systems.
Generative AI & Foundation Models:
Generative AI technologies (such as GPT and DALL·E) rely on foundation models that underpin many other systems. Much work remains on the regulation of foundation models, including potential separate rules for foundation model providers.
Enforcement & Compliance:
The AI Act and the GDPR require compliance with data protection principles throughout an AI system’s lifecycle. Legal practitioners implementing AI should be aware of its ramifications for data protection and of all of their obligations under the GDPR.
Big Picture:
The panelists offered insightful analysis and gave lawyers valuable guidance for integrating AI into their workflows while adhering to legal standards and ethical practices. Legal professionals need a firm grasp of the rapidly evolving regulatory environment, including in-depth knowledge of the GDPR and the AI Act. Staying on top of changes to intellectual property and data protection law will also become important as AI becomes part of legal workflows.
I also recommend that lawyers and others affected by these changes attend such webinars for in-depth insights, and attend future events to stay abreast of developments. Here’s the link again for you to watch the webinar yourself: Link to replay of live event