Securiti has launched an AI Regulation digest providing a comprehensive overview of the most significant recent global developments, announcements, and changes in the field of AI regulation. We will update this information regularly on our website, presented as a monthly roundup of key activities. Each regulatory update includes links to related resources at the bottom for your reference.
EMEA Jurisdiction
1. Kenyan ODPC Issues Fines Against Companies For Unlawful Access & Sharing Of Former CEO's Personal Data
Date: 4 November, 2024
Summary: The Office of the Data Protection Commissioner (ODPC) has issued a decision requiring WPP Scangroup Plc, WPP plc, and Control Risks Group Limited (CRG) to pay KES 1,950,000 (approximately $15,108) for violating the Data Protection Act. The decision comes after three complaints from a former CEO of Scangroup, who alleged that CRG accessed their personal data on a company laptop without consent and shared it with third parties.
WPP reportedly contacted CRG to access the former CEO's laptop after their resignation in 2021. The accessed data included personal messages unrelated to their work. The former CEO's subsequent data access requests to both WPP and Scangroup were rejected on the grounds that they were too broad and covered legally privileged information.
The ODPC has now declared that the complainant's right of access is not constrained by the presence of confidential or legally privileged data, and that WPP and Scangroup could have redacted such information without hindering access to the personal data. Non-compliance with the complainant's request was deemed a violation of Section 26(b) of the Data Protection Act. The ODPC further found that CRG, WPP, and Scangroup had all breached lawful processing principles and data minimization requirements by accessing private messages beyond the necessary scope.
The awarded compensation was divided as follows: Scangroup and WPP are each to pay KES 700,000 (approximately $5,425), while CRG is to pay KES 550,000 (approximately $4,258). Read More.
2. ICO Publishes Recommendations On the Use Of AI Tools In Recruitment Processes In The UK
Date: 6 November, 2024
Summary: The Information Commissioner's Office (ICO) has published recommendations for AI developers and recruitment providers. The report is based on findings from a recent audit, which revealed concerning practices such as AI tools being used to process personal data unfairly, allowing recruiters to filter candidates by protected characteristics or to infer attributes such as gender or ethnicity from candidates' names rather than asking them directly. These tools have also been collecting and retaining excessive data without users' consent or knowledge.
To address these issues, the ICO recommends regularly monitoring and correcting any bias, inaccuracy, or unfairness in AI outputs. It also emphasizes transparency: candidates should be appropriately informed about what data is collected, how it is processed, and the logic behind the AI's decisions. The ICO urges minimal data collection, limited to what is necessary for recruitment purposes, and advises that Data Protection Impact Assessments (DPIAs) be conducted before deploying AI systems. Additionally, the roles of controllers and processors should be clearly defined, with written instructions on how data is to be handled.
Lastly, the ICO advised organizations considering AI recruitment tools to ask critical questions such as whether they have completed a DPIA, documented responsibilities, established a lawful basis for processing, and ensured bias mitigation. Read More.
Securiti's AI Regulation roundup is an invaluable resource for staying ahead of the latest global developments in the AI industry. Our commitment to timely updates ensures that you have access to crucial information and a better understanding of the evolving AI regulatory landscape.
The team has also created a dedicated page showcasing 'An Overview of Emerging Global AI Regulations' worldwide. Click here to delve deeper and learn more about the evolving landscape of global AI regulations.