Implementing Responsible AI Assistants in Denmark: A Guide for Public and Private Sectors

Author

Syed Tatheer Kazmi

Associate Data Privacy Analyst, Securiti

The increasing use of AI assistants across various sectors has created both exciting opportunities and significant challenges. Recognizing the need for responsible and ethical AI development, the Danish Digital Agency (Digitaliseringsstyrelsen) has published a white paper titled "Responsible Use of AI Assistants in the Public and Private Sector". This document, developed through a unique collaboration between public and private sector stakeholders, outlines a nine-point process for responsibly developing and implementing AI assistants, particularly those leveraging generative AI and language models, in Denmark's public and private sectors.

An AI assistant is defined as a system, or part of a system, that uses language models to perform specific tasks with a certain degree of autonomy. AI assistants are designed to enhance efficiency and quality in task execution and often aim to replace human-driven processes that require judgment, leveraging tools and data from existing business systems.

The nine-point process addresses legal frameworks such as the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act), emphasizing data security, quality assurance, and ethical considerations. Specific examples of AI assistant implementations from the organizations that contributed to the whitepaper are included for illustrative purposes. The whitepaper aims to guide organizations through the challenges and opportunities of integrating AI assistants while adhering to legal and ethical standards, and a checklist towards the end assists organizations in navigating the legal requirements of the GDPR and the AI Act.

9 Steps of Responsible AI Assistant Integration

The white paper presents a nine-point process designed to guide organizations in the responsible development, implementation, and maintenance of AI assistants. These points address various aspects of the AI assistant's lifecycle, from initial planning and design to ongoing operation and optimization. The process emphasizes a flexible and iterative approach, acknowledging that organizations will often need to re-evaluate and adapt as their understanding of AI grows, new challenges emerge, or the scope of the AI assistant expands.

The nine points of the process are as follows:

1. Defining the Use Case

Clearly defining the AI assistant's use case is the first step. This involves identifying a specific problem or opportunity within the organization that the AI assistant can address. For instance, an organization might use an AI assistant to enhance customer service, automate repetitive tasks, or streamline a complex business process. This initial step helps to align the AI assistant with the organization's overall objectives and sets a clear direction for the implementation process.

2. Supporting with a Flexible Technical Platform

Building the AI assistant on a flexible technical platform is crucial for seamless integration, scalability, and adaptation to future advancements in AI technology.

Organizations have three primary approaches to developing the technical platform:

  • In-house development;
  • Collaboration with external suppliers; or
  • Outsourcing the entire development process.

Each approach has its own implications for cost, control over the solution, and the level of technical expertise required. It's important to consider the long-term strategy and goals of the organization when selecting such a platform.

3. Assessing Necessary Data and Data Processing

Data forms the foundation of any AI assistant, and organizations need to thoroughly assess the type, quality, and accessibility of the data required for the AI assistant to function effectively. This involves considering the accuracy, relevance, and timeliness of the data. Data preparation, validation, and security are crucial, especially when handling sensitive data. Techniques like Retrieval-Augmented Generation (RAG) can enhance accuracy and mitigate the risk of AI hallucinations, which occur when AI generates incorrect or misleading information. Systematic data masking, which replaces sensitive data in prompts with temporary values, can also help to protect privacy.
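To make the data-masking technique concrete, the sketch below (in Python, not taken from the whitepaper) replaces sensitive values in a prompt with temporary placeholders before the prompt reaches the language model and restores them in the returned text. The regular expressions and function names are illustrative assumptions.

import re

# A minimal sketch of systematic data masking, assuming a simple regex-based
# approach: sensitive values in a prompt are replaced with temporary
# placeholders before the prompt is sent to a language model, and restored in
# the returned text afterwards. The patterns below (email addresses and Danish
# CPR numbers) are illustrative, not taken from the whitepaper.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CPR": re.compile(r"\b\d{6}-\d{4}\b"),  # Danish personal identification number
}

def mask(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive substrings with placeholders such as <EMAIL_1>."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(pattern.findall(prompt), start=1):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = value
            prompt = prompt.replace(value, placeholder)
    return prompt, mapping

def unmask(text: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the model's response."""
    for placeholder, value in mapping.items():
        text = text.replace(placeholder, value)
    return text

masked_prompt, mapping = mask("Summarise the case for jens@example.dk, CPR 010190-1234.")
# masked_prompt can be sent to the model; unmask() re-inserts the real values
# into the draft it returns.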

4. Ensuring Compliance with the Legal Framework

Compliance with the legal frameworks, particularly the GDPR and the AI Act, is non-negotiable. The whitepaper provides a detailed checklist in Appendix A to help organizations understand and implement the specific requirements of these regulations. While not exhaustive, it serves as a valuable starting point for ensuring compliance, both for in-house development and when purchasing AI assistants as standard solutions. However, a thorough legal assessment should still be conducted for each specific assistant.

Legal compliance is not just a box to check off but an ongoing process that must be integrated into every stage of the AI assistant's lifecycle, especially as laws concerning AI are rapidly evolving.

By their very nature, AI assistants will process data to learn and carry out their designated tasks. This data can often include personal information, which brings the GDPR into play.

The GDPR sets out strict rules for how this data must be collected, stored, and used. Key considerations in connection with GDPR include:

  • Data minimization;
  • Security;
  • Transparency;
  • Accountability;
  • Lawful basis for processing;
  • Data subject rights; and
  • Handling transfers of personal data to third countries.

If no personal data is processed, the GDPR does not apply. For example, an AI assistant that helps users select industry codes does not process personal data, so the GDPR would not apply.

Beyond data processing, AI assistants are also being used to automate tasks, make recommendations, or even take actions based on the data they process. This ability to make decisions, particularly when those decisions could significantly impact individuals or society, brings the AI Act into the picture.

The AI Act uses a risk-based approach, meaning that the level of scrutiny and compliance requirements for an AI assistant depends on the potential harm it could cause. The whitepaper outlines four risk levels defined in the AI Act:

  • Unacceptable Risk: AI systems that pose a substantial threat to human safety, fundamental rights, or dignity are prohibited. These might include systems like social scoring systems, or those that employ manipulative techniques to harm users.
  • High Risk: AI systems with a high potential to impact safety or fundamental rights, such as those used in critical infrastructure, education, employment, or law enforcement, are subject to stringent requirements before they can be placed on the market. These requirements include conformity assessments, risk management systems, high-quality datasets, thorough documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.

An example of high-risk AI assistants can be found in the financial sector, where the handling of sensitive personal data makes the implications of AI use particularly significant for individual rights and safety. For instance, AI assistants developed to automate aspects of accounting analysis at Jyske Bank use generative AI to produce draft reports based on financial data drawn from bank systems and public sources. Before implementing this application, the bank conducted a comprehensive risk assessment involving its AI committee and multiple departments, which considered factors such as IT security, data protection, data ethics, data quality, and operational risks.

  • Limited Risk: AI systems that primarily pose risks related to transparency are subject to specific obligations designed to ensure that users are informed about their interaction with an AI system. Examples of AI assistants that fall under this category are TopGPT, an AI assistant developed by Topdanmark to provide information about personal insurance, and Netcompany's documentation assistant, EASLEY AI, designed to enhance the efficiency and consistency of project documentation. These AI assistants meet the transparency requirements by informing users that they are interacting with an AI system.
  • Minimal or No Risk: As per the whitepaper, the majority of AI systems currently in use in the EU fall under this category, including applications like AI-enabled video games or spam filters. The AI Act allows the free use of minimal-risk AI.

Along with the risk level of an AI system, the specific role an organization plays in relation to an AI assistant is also crucial for determining its legal obligations. The whitepaper details two primary roles within the AI Act framework:

  • Provider: If the organization develops, commissions, or places the AI assistant on the market, it's considered the provider and bears the majority of the obligations under the AI Act, especially for high-risk systems.
  • Deployer: If the organization uses an AI assistant under its authority, it's considered the deployer and is responsible for ensuring the AI assistant is used safely and lawfully in its context.

To better understand these risk thresholds and the respective obligations of providers and deployers, refer to Securiti’s Whitepaper on the AI Act.

5. Setting Boundaries for the AI Assistant's Abilities and Responsibilities

Defining clear boundaries for the AI assistant's capabilities and responsibilities is important to ensure responsible use and minimize the risk of errors or unintended consequences. Organizations must outline the specific tasks, responsibilities, and tools the AI assistant can access. Prompt engineering, the use of precise instructions to guide the AI assistant's work, is a key technique for setting boundaries and ensuring that the assistant performs within its defined scope. By systematically defining and implementing these boundaries, organizations can ensure that their AI assistants operate effectively, safely, and ethically within their intended roles.
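As an illustration of this technique, the sketch below shows how a system prompt can encode the assistant's permitted tasks, tools, and refusals before each user query is sent to a chat-style model. The scope and tool names are hypothetical and not drawn from the whitepaper.

# A minimal sketch of boundary-setting through prompt engineering, assuming a
# chat-style model API that accepts system and user messages. The scope, tool
# names, and wording are illustrative assumptions; the whitepaper does not
# prescribe a specific prompt format.

SYSTEM_PROMPT = """\
You are an internal assistant for the customer service team.

Scope:
- Answer questions about order status and the return policy only.
- If a request falls outside this scope, say so and refer the user to a human agent.

Tools you may use:
- lookup_order(order_id): read-only order status lookup.
- get_return_policy(): the current return policy text.

You must not:
- Give legal, medical, or financial advice.
- Reveal or infer personal data beyond what the user has provided.
"""

def build_messages(user_query: str) -> list[dict[str, str]]:
    """Assemble the message list passed to a chat-style language model."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]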

6. Building Structured Quality Assurance

Robust quality assurance processes are essential to validate the AI assistant's accuracy, reliability, and compliance with ethical and legal standards. This involves implementing both traditional testing methods and AI-specific techniques. For example, red teaming, in which experts attempt to exploit the AI assistant's vulnerabilities, helps identify potential risks. Pilot programs allow organizations to test the AI assistant with a limited user group in a real-world setting. Additionally, various quality control methods are recommended, such as rules-based control, which uses predefined rules to assess output, and model-based validation, which uses a separate AI model to validate the AI assistant's responses.
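The sketch below illustrates how these two quality-control methods might be combined in practice; the rules, thresholds, and validator stub are illustrative assumptions rather than the whitepaper's prescriptions.

# A minimal sketch of the two quality-control methods named above: a
# rules-based check applies predefined rules to the assistant's output, and a
# model-based check asks a separate validator model whether the answer is
# supported by its sources. The specific rules and the validate_with_model()
# stub are illustrative assumptions.

BANNED_PHRASES = ["guaranteed return", "legal advice"]
MAX_LENGTH = 2000

def rules_based_check(answer: str, sources: list[str]) -> list[str]:
    """Return a list of rule violations; an empty list means the answer passes."""
    violations = []
    if len(answer) > MAX_LENGTH:
        violations.append("answer exceeds the maximum length")
    if not sources:
        violations.append("answer does not reference any retrieved source")
    for phrase in BANNED_PHRASES:
        if phrase in answer.lower():
            violations.append(f"answer contains banned phrase: {phrase!r}")
    return violations

def validate_with_model(question: str, answer: str, sources: list[str]) -> bool:
    """Placeholder for model-based validation: a second model judges whether
    the answer is grounded in the sources. Wire this to your own LLM client."""
    raise NotImplementedError

def quality_gate(question: str, answer: str, sources: list[str]) -> bool:
    """Release an answer only if it passes both the rules and the validator."""
    if rules_based_check(answer, sources):
        return False
    return validate_with_model(question, answer, sources)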

7. Tracking Relevant Data on the AI Assistant’s Use

Collecting and analyzing data on the AI assistant's usage provides insights into its performance, user behavior, and potential areas for improvement. Organizations should implement robust logging systems that record relevant data points, such as user queries, AI responses, the context of interactions, and the sources of information used by the AI assistant. This data can be invaluable for auditing, troubleshooting, and ensuring compliance with evolving regulations.
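A minimal sketch of such a logging system, assuming JSON records written through Python's standard logging module, might look as follows; the field names are illustrative.

import json
import logging
from datetime import datetime, timezone

# A minimal sketch of structured interaction logging. Each record captures the
# data points mentioned above: the user query, the assistant's response, the
# context of the interaction, and the sources the assistant relied on. The
# field names are illustrative assumptions.

logger = logging.getLogger("ai_assistant.audit")
logging.basicConfig(level=logging.INFO)

def log_interaction(session_id: str, query: str, response: str,
                    sources: list[str], context: dict[str, str]) -> None:
    """Emit one JSON audit record per assistant interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session_id": session_id,
        "query": query,
        "response": response,
        "sources": sources,    # e.g. document IDs retrieved for the answer
        "context": context,    # e.g. assistant version, department, channel
    }
    logger.info(json.dumps(record, ensure_ascii=False))

log_interaction(
    session_id="demo-session",
    query="What is the return policy?",
    response="Orders can be returned within 30 days.",
    sources=["policy-doc-42"],
    context={"assistant_version": "1.0", "channel": "web"},
)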

8. Planning Organizational Implementation and Training

Successful implementation requires a well-planned approach that considers the impact of the AI assistant on the organization and its employees. Organizations need to develop a clear narrative to communicate the purpose and benefits of the AI assistant, address potential concerns, and foster a positive perception of the change. Adequate training of the employees is crucial to ensure that they understand how to use the AI assistant effectively, interpret its output, and recognize potential limitations or biases. Building trust with users is essential to encourage adoption and maximize the benefits of the AI assistant.

9. Establishing Follow-up and Support Structures

Ongoing support and maintenance are essential to ensure the AI assistant’s continued effectiveness and reliability. This involves establishing dedicated support structures that include expert guidance, feedback mechanisms, automated monitoring systems, and procedures for human follow-up. Feedback mechanisms allow users to report errors or suggest improvements, while monitoring systems help to identify potential issues, such as security breaches or instances of AI hallucinations. The whitepaper highlights the importance of a proactive approach to maintenance, with regular evaluation of data and expert-driven adjustments to optimize performance and ensure the AI assistant remains aligned with the organization's evolving needs.
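As one possible illustration, the sketch below routes interactions that fail automated monitoring checks or receive negative user feedback into a queue for human follow-up; the signals and thresholds are assumptions for illustration only.

from dataclasses import dataclass, field
from typing import Optional

# A minimal sketch of an automated follow-up queue, assuming a validator
# signal and optional user ratings as monitoring inputs: interactions that
# fail validation (a possible hallucination) or receive negative feedback are
# queued for human review. Thresholds and field names are illustrative
# assumptions.

@dataclass
class ReviewQueue:
    items: list[dict] = field(default_factory=list)

    def flag(self, interaction_id: str, reason: str) -> None:
        """Queue an interaction for expert follow-up with the triggering reason."""
        self.items.append({"interaction_id": interaction_id, "reason": reason})

def monitor(interaction_id: str, validator_passed: bool,
            user_rating: Optional[int], queue: ReviewQueue) -> None:
    """Route problematic interactions to human follow-up."""
    if not validator_passed:
        queue.flag(interaction_id, "failed model-based validation (possible hallucination)")
    if user_rating is not None and user_rating <= 2:
        queue.flag(interaction_id, "negative user feedback")

queue = ReviewQueue()
monitor("int-001", validator_passed=False, user_rating=None, queue=queue)
# queue.items now holds the interactions that need expert follow-up.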

How Securiti Can Help

Organizations that process personal data through AI systems must ensure that their practices comply with the GDPR and the AI Act. Leveraging Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration — organizations can ensure the safe use of data and AI while navigating both existing and future regulatory compliance by:

  • Complying with the GDPR:
    • Unifying data controls across security, privacy, compliance, and governance through a single, fully integrated platform.
    • Leveraging contextual data intelligence and automation to ensure compliance with GDPR principles such as data minimization, purpose limitation, and accountability.
  • Managing AI Compliance:
    • Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
    • Conducting AI risk assessments to identify and classify AI systems by risk level.
    • Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
    • Implementing appropriate privacy, security, and governance guardrails to protect data and AI systems.
    • Ensuring compliance with applicable data and AI regulations.

Request a demo to learn more.
