Implementing Responsible AI Assistants in Denmark: A Guide for Public and Private Sectors

Author: Syed Tatheer Kazmi, Associate Data Privacy Analyst, Securiti (CIPP/Europe)


The increasing use of AI assistants across various sectors has created both exciting opportunities and significant challenges. Recognizing the need for responsible and ethical AI development, the Danish Digital Agency (Digitaliseringsstyrelsen) has published a white paper titled "Responsible Use of AI Assistants in the Public and Private Sector". This document, developed through a unique collaboration between public and private sector stakeholders, outlines a nine-point process for responsibly developing and implementing AI assistants, particularly those leveraging generative AI and language models, in Denmark's public and private sectors.

An AI assistant is defined as a system, or part of a system, that utilizes language models to perform specific tasks with a certain degree of autonomy. AI assistants are designed to enhance efficiency and quality in task execution, often replacing human-driven processes that require judgment and leveraging tools and data from existing business systems.

The nine-point process addresses legal frameworks such as the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act), emphasizing data security, quality assurance, and ethical considerations. It aims to guide organizations through the challenges and opportunities of integrating AI assistants while adhering to legal and ethical standards. Specific examples of AI assistant implementations from the organizations that contributed to the white paper are included for illustrative purposes, and a checklist towards the end of the white paper helps organizations navigate the legal requirements of the GDPR and the AI Act.

9 Steps of Responsible AI Assistant Integration

The white paper presents a nine-point process designed to guide organizations in the responsible development, implementation, and maintenance of AI assistants. These points address various aspects of the AI assistant's lifecycle, from the initial planning and design phase to ongoing operation and optimization. The process emphasizes a flexible and iterative approach and acknowledges that organizations will often need to re-evaluate and adapt as their understanding of AI grows, new challenges emerge, or the scope of the AI assistant expands.

The nine points of the process are as follows:

1. Defining the Use Case

Clearly defining the AI assistant's use case is the first step. This involves identifying a specific problem or opportunity within the organization that the AI assistant can address. For instance, an organization might use an AI assistant to enhance customer service, automate repetitive tasks, or streamline a complex business process. This initial step helps to align the AI assistant with the organization's overall objectives and sets a clear direction for the implementation process.

2. Supporting with a Flexible Technical Platform

Building the AI assistant on a flexible technical platform is crucial for seamless integration, scalability, and adaptation to future advancements in AI technology.

Organizations have three primary approaches to developing the technical platform:

  • In-house development;
  • Collaboration with external suppliers; or
  • Outsourcing the entire development process.

Each approach has its own implications for cost, control over the solution, and the level of technical expertise required. It's important to consider the long-term strategy and goals of the organization when selecting such a platform.

3. Assessing Necessary Data and Data Processing

Data forms the foundation of any AI assistant, and organizations need to thoroughly assess the type, quality, and accessibility of the data required for the AI assistant to function effectively. This involves considering the accuracy, relevance, and timeliness of the data. Data preparation, validation, and security are crucial, especially when handling sensitive data. Techniques like Retrieval-Augmented Generation (RAG) can enhance accuracy and mitigate the risk of AI hallucinations, which occur when AI generates incorrect or misleading information. Systematic data masking, which replaces sensitive data in prompts with temporary values, can also help to protect privacy.
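
As a rough illustration of the data masking technique described above, the sketch below replaces sensitive values in a prompt with temporary placeholders before the prompt is sent to a language model, then restores them in the response. The patterns and function names (`mask_prompt`, `unmask_response`) are illustrative assumptions, not taken from the white paper; a production system would rely on a proper PII classifier rather than hand-written regular expressions.

```python
import re

# Illustrative patterns only; a production system would use a proper
# PII classifier rather than hand-written regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CPR": re.compile(r"\b\d{6}-\d{4}\b"),  # Danish personal ID number format
}

def mask_prompt(prompt: str) -> tuple[str, dict]:
    """Replace sensitive values with temporary placeholders before the
    prompt is sent to the language model."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(prompt)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def unmask_response(response: str, mapping: dict) -> str:
    """Restore the original values in the model's response."""
    for placeholder, value in mapping.items():
        response = response.replace(placeholder, value)
    return response

masked, mapping = mask_prompt("Contact jens@example.dk (CPR 010190-1234) about the claim.")
print(masked)  # Contact <EMAIL_0> (CPR <CPR_0>) about the claim.
```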

4. Ensuring Compliance with Legal Frameworks

Compliance with legal frameworks, particularly the GDPR and the AI Act, is non-negotiable. The white paper provides a detailed checklist in Appendix A to help organizations understand and implement the specific requirements of these regulations. While not exhaustive, it serves as a valuable starting point for ensuring compliance, both for in-house development and when purchasing AI assistants as standard solutions. However, a thorough legal assessment should still be conducted for each specific assistant.

Legal compliance is not just a box to check off but an ongoing process that must be integrated into every stage of the AI assistant's lifecycle. This is especially important given that laws concerning AI are rapidly evolving.

By their very nature, AI assistants will process data to learn and carry out their designated tasks. This data can often include personal information, which brings the GDPR into play.

The GDPR sets out strict rules for how this data must be collected, stored, and used. Key considerations in connection with GDPR include:

  • Data minimization;
  • Security;
  • Transparency;
  • Accountability;
  • Lawful basis for processing;
  • Data subject rights; and
  • Handling transfers of personal data to third countries.

If no personal data is processed, the GDPR does not apply; for example, an AI assistant that helps users select industry codes does not process personal data and so falls outside its scope.

Beyond data processing, AI assistants are also being used to automate tasks, make recommendations, or even take actions based on the data they process. This ability to make decisions, particularly when those decisions could significantly impact individuals or society, brings the AI Act into the picture.

The AI Act uses a risk-based approach, meaning that the level of scrutiny and the compliance requirements for an AI assistant depend on the potential harm it could cause. The white paper outlines the four risk levels defined in the AI Act:

  • Unacceptable Risk: AI systems that pose a substantial threat to human safety, fundamental rights, or dignity are prohibited. These might include social scoring systems or systems that employ manipulative techniques to harm users.
  • High Risk: AI systems with a high potential to impact safety or fundamental rights, such as those used in critical infrastructure, education, employment, or law enforcement, are subject to stringent requirements before they can be placed on the market. These requirements include conformity assessments, risk management systems, high-quality datasets, thorough documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.

An example of high-risk AI assistants can be found in the financial sector, where sensitive personal data is handled and the implications of AI use for individual rights and safety are particularly significant. For instance, AI assistants developed to automate aspects of accounting analysis at Jyske Bank use generative AI to produce draft reports based on financial data drawn from bank systems and public sources. Before implementing this application, the bank conducted a comprehensive risk assessment involving its AI committee and multiple departments, which collaborated to consider factors such as IT security, data protection, data ethics, data quality, and operational risks.

  • Limited Risk: AI systems that primarily pose risks related to transparency are subject to specific obligations designed to ensure that users are informed about their interaction with an AI system. Examples of AI assistants that fall under this category are TopGPT, an AI assistant developed by Topdanmark to provide information about personal insurance, and Netcompany's documentation assistant, EASLEY AI, designed to enhance the efficiency and consistency of project documentation. These AI assistants meet the transparency requirements by informing users that they are interacting with an AI system.
  • Minimal or No Risk: As per the white paper, the majority of AI systems currently in use in the EU fall under this category, including applications like AI-enabled video games and spam filters. The AI Act allows the free use of such minimal-risk AI.

Along with the risk level of an AI system, understanding the specific role an organization plays in relation to an AI assistant is crucial for determining its legal obligations. The white paper details two primary roles within the AI Act framework:

  • Provider: If the organization develops, commissions, or places the AI assistant on the market, it's considered the provider and bears the majority of the obligations under the AI Act, especially for high-risk systems.
  • Deployer: If the organization uses an AI assistant under its authority, it's considered the deployer and is responsible for ensuring the AI assistant is used safely and lawfully in its context.

To better understand these risk thresholds and the respective obligations of providers and deployers, refer to Securiti’s Whitepaper on the AI Act.

5. Setting Boundaries for the AI Assistant's Abilities and Responsibilities

Defining clear boundaries for the AI assistant's capabilities and responsibilities is important to ensure responsible use and minimize the risk of errors or unintended consequences. Organizations must outline the specific tasks, responsibilities, and tools the AI assistant can access. Prompt engineering, the use of precise instructions to guide the AI assistant's work, is a key technique for setting these boundaries and keeping the AI assistant within its defined scope. By systematically defining and implementing these boundaries, organizations can ensure that their AI assistants operate effectively, safely, and ethically within their intended roles.
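
As a concrete illustration, the hypothetical system prompt below shows one way such boundaries might be expressed in practice. The wording, the `lookup_invoice` tool, and the billing scope are assumptions for illustration only, not content from the white paper.

```python
# A hypothetical system prompt scoping an assistant to billing queries.
# The wording, the lookup_invoice tool, and the scope are illustrative
# assumptions, not content from the white paper.
SYSTEM_PROMPT = """\
You are an internal assistant that drafts responses to customer billing queries.

Boundaries:
- Only answer questions about invoices and payment status.
- Use only the lookup_invoice tool for account data; never guess amounts.
- If a request falls outside billing, reply: "This is outside my scope;
  please contact the relevant department."
- Never reveal personal data beyond what the user has already provided.
"""

def build_messages(user_query: str) -> list[dict]:
    """Prepend the boundary-setting system prompt to every conversation."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
```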

6. Building Structured Quality Assurance

Robust quality assurance processes are essential to validate the AI assistant's accuracy, reliability, and compliance with ethical and legal standards. This involves implementing both traditional testing methods and AI-specific techniques. For example, red teaming, in which experts attempt to exploit the AI assistant's vulnerabilities, helps identify potential risks, while pilot programs allow organizations to test the AI assistant with a limited user group in a real-world setting. Additionally, various quality control methods are recommended, such as rules-based control, which uses predefined rules to assess output, and model-based validation, which uses a separate AI model to validate the AI assistant's responses.
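
The sketch below illustrates those last two quality-control methods under stated assumptions: the rules and banned phrases are invented for illustration, and `client.chat(...)` is a placeholder for whatever model API the organization actually uses.

```python
BANNED_PHRASES = ["guaranteed return", "legal advice"]  # illustrative rules
MAX_LENGTH = 2000

def rules_based_check(answer: str) -> bool:
    """Rules-based control: assess output against predefined rules,
    here a length limit and a list of phrases the assistant must avoid."""
    if len(answer) > MAX_LENGTH:
        return False
    return not any(phrase in answer.lower() for phrase in BANNED_PHRASES)

def model_based_check(question: str, answer: str, client) -> bool:
    """Model-based validation: ask a separate validator model whether the
    answer is on-topic and grounded. `client.chat` is a placeholder for
    whatever model API the organization actually uses."""
    verdict = client.chat(
        f"Question: {question}\nAnswer: {answer}\n"
        "Reply YES if the answer is on-topic and makes no unsupported "
        "claims, otherwise reply NO."
    )
    return verdict.strip().upper().startswith("YES")
```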

7. Tracking Relevant Data on the AI Assistant’s Use

Collecting and analyzing data on the AI assistant's usage provides insights into its performance, user behavior, and potential areas for improvement. Organizations should implement robust logging systems that record relevant data points, such as user queries, AI responses, the context of interactions, and the sources of information used by the AI assistant. This data can be invaluable for auditing, troubleshooting, and ensuring compliance with evolving regulations.
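
One common way to implement such logging is with structured JSON records, sketched below. The field names and logger setup are illustrative assumptions rather than recommendations from the white paper.

```python
import datetime
import json
import logging

logging.basicConfig(level=logging.INFO)  # make INFO-level records visible
logger = logging.getLogger("ai_assistant.audit")

def log_interaction(session_id: str, user_query: str,
                    response: str, sources: list[str]) -> None:
    """Record the data points discussed above: the query, the response,
    the interaction context, and the sources the assistant drew on."""
    logger.info(json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "session_id": session_id,
        "user_query": user_query,  # consider masking personal data here too
        "response": response,
        "sources": sources,
    }))

log_interaction("abc-123", "When was invoice 42 paid?",
                "Invoice 42 was paid on 3 March.", ["erp://invoices/42"])
```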

8. Planning Organizational Implementation and Training

Successful implementation requires a well-planned approach that considers the impact of the AI assistant on the organization and its employees. Organizations need to develop a clear narrative to communicate the purpose and benefits of the AI assistant, address potential concerns, and foster a positive perception of the change. Adequate employee training is crucial to ensure that staff understand how to use the AI assistant effectively, interpret its output, and recognize its potential limitations or biases. Building trust with users is essential to encourage adoption and maximize the benefits of the AI assistant.

9. Establishing Follow-up and Support Structures

Ongoing support and maintenance are essential to ensure the AI assistant’s continued effectiveness and reliability. This involves establishing dedicated support structures that include expert guidance, feedback mechanisms, automated monitoring systems, and procedures for human follow-up. Feedback mechanisms allow users to report errors or suggest improvements, while monitoring systems help to identify potential issues, such as security breaches or instances of AI hallucinations. The white paper highlights the importance of a proactive approach to maintenance, with regular evaluation of data and expert-driven adjustments to optimize performance and ensure the AI assistant remains aligned with the organization's evolving needs.
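
A minimal sketch of automated monitoring with human follow-up appears below. Treating answers that cite no sources as a crude hallucination signal is an assumption made for illustration, not a method prescribed by the white paper.

```python
# Interactions flagged here would be routed to expert human follow-up.
review_queue: list[dict] = []

def monitor(interaction: dict) -> None:
    """Flag suspicious interactions for human review."""
    flags = []
    if not interaction.get("sources"):
        flags.append("no supporting sources cited")
    if interaction.get("user_feedback") == "reported_error":
        flags.append("user reported an error")
    if flags:
        review_queue.append({**interaction, "flags": flags})

monitor({"user_query": "What is our refund policy?",
         "response": "Refunds are always granted.",
         "sources": []})
print(review_queue[0]["flags"])  # ['no supporting sources cited']
```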

How Securiti Can Help

Organizations that process personal data through AI systems must ensure that their practices comply with the GDPR and the AI Act. Leveraging Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration — organizations can ensure the safe use of data and AI while navigating both existing and future regulatory compliance by:

  • Complying with the GDPR:
    • Unifying data controls across security, privacy, compliance, and governance through a single, fully integrated platform.
    • Leveraging contextual data intelligence and automation to ensure compliance with GDPR principles such as data minimization, purpose limitation, and accountability.
  • Managing AI Compliance:
    • Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
    • Conducting AI risk assessments to identify and classify AI systems by risk level.
    • Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
    • Implementing appropriate privacy, security, and governance guardrails to protect data and AI systems.
    • Ensuring compliance with applicable data and AI regulations.

Request a demo to learn more.
