Implementing Responsible AI Assistants in Denmark: A Guide for Public and Private Sectors

Author

Syed Tatheer Kazmi

Data Privacy Analyst

CIPP/Europe

Published December 4, 2024


The increasing use of AI assistants across various sectors has created both exciting opportunities and significant challenges. Recognizing the need for responsible and ethical AI development, the Danish Digital Agency (Digitaliseringsstyrelsen) has published a white paper titled "Responsible Use of AI Assistants in the Public and Private Sector". This document, developed through a unique collaboration between public and private sector stakeholders, outlines a nine-point process for responsibly developing and implementing AI assistants, particularly those leveraging generative AI and language models, in Denmark's public and private sectors.

An AI Assistant is defined as a system, or part of a system, that utilizes language models to perform specific tasks with a certain degree of autonomy. AI assistants are designed to enhance efficiency and quality in task execution and often aim to replace human-driven processes that require judgment, leveraging tools and data from existing business systems.

The nine-point process addresses legal frameworks like the General Data Protection Regulation (GDPR) and the EU Artificial Intelligence Act (AI Act), emphasizing data security, quality assurance, and ethical considerations. Specific examples of AI assistant implementations from the organizations that contributed to this whitepaper are included for illustrative purposes. It aims to guide organizations through the challenges and opportunities of integrating AI assistants while adhering to legal and ethical standards. A checklist is also provided towards the end of the whitepaper to assist organizations in navigating the legal requirements of the GDPR and AI Act.

9 Steps of Responsible AI Assistant Integration

The white paper presents a nine-point process designed to guide organizations in the responsible development, implementation, and maintenance of AI assistants. These points address various aspects of the AI assistant's lifecycle, from the initial planning and design phase to ongoing operation and optimization. The process emphasizes a flexible and iterative approach and acknowledges that organizations will often need to re-evaluate and adapt as their understanding of AI grows, new challenges emerge, or the scope of the AI assistant expands.

The nine points of the process are as follows:

1. Defining the Use Case

Clearly defining the AI assistant's use case is the first step. This involves identifying a specific problem or opportunity within the organization that the AI assistant can address. For instance, an organization might use an AI assistant to enhance customer service, automate repetitive tasks, or streamline a complex business process. This initial step helps to align the AI assistant with the organization's overall objectives and sets a clear direction for the implementation process.

2. Supporting with a Flexible Technical Platform

Building the AI assistant on a flexible technical platform is crucial for seamless integration, scalability, and adaptation to future advancements in AI technology.

Organizations have three primary approaches to developing the technical platform:

  • In-house development;
  • Collaboration with external suppliers; or
  • Outsourcing the entire development process.

Each approach has its own implications for cost, control over the solution, and the level of technical expertise required. It's important to consider the long-term strategy and goals of the organization when selecting such a platform.

3. Assessing Necessary Data and Data Processing

Data forms the foundation of any AI assistant, and organizations need to thoroughly assess the type, quality, and accessibility of the data required for the AI assistant to function effectively. This involves considering the accuracy, relevance, and timeliness of the data. Data preparation, validation, and security are crucial, especially when handling sensitive data. Techniques like Retrieval-Augmented Generation (RAG) can enhance accuracy and mitigate the risk of AI hallucinations, which occur when AI generates incorrect or misleading information. Systematic data masking, which replaces sensitive data in prompts with temporary values, can also help to protect privacy.
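The systematic data masking described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the whitepaper's method: the e-mail pattern, token format, and function names are assumptions, and a real deployment would cover many more categories of sensitive data.

```python
import re
import uuid

# Illustrative sketch: replace sensitive values in a prompt with temporary
# placeholders before it reaches the language model, and restore the
# original values in the model's response afterwards.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # assumption: only e-mails masked

def mask_prompt(prompt: str) -> tuple[str, dict[str, str]]:
    """Swap each e-mail address for a temporary token; return the masked
    prompt and the mapping needed to restore the original values."""
    mapping: dict[str, str] = {}

    def _swap(match: re.Match) -> str:
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL.sub(_swap, prompt), mapping

def unmask_response(response: str, mapping: dict[str, str]) -> str:
    """Put the original values back into the model's response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response
```

The model only ever sees the placeholder tokens, so the sensitive values never leave the organization's boundary.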

4. Ensuring Compliance with Legal Frameworks

Compliance with the legal frameworks, particularly the GDPR and the AI Act, is non-negotiable. The whitepaper provides a detailed checklist in Appendix A to help organizations understand and implement the specific requirements of these regulations. While not exhaustive, it serves as a valuable starting point for ensuring compliance, both for in-house development and when purchasing AI assistants as standard solutions. However, a thorough legal assessment should still be conducted for each specific assistant.

Legal compliance is not just a box to check off but an ongoing process that must be integrated into every stage of the AI assistant's lifecycle. This is especially important given that laws concerning AI are rapidly evolving.

By their very nature, AI assistants will process data to learn and carry out their designated tasks. This data can often include personal information, which brings the GDPR into play.

The GDPR sets out strict rules for how this data must be collected, stored, and used. Key considerations in connection with GDPR include:

  • Data minimization;
  • Security;
  • Transparency;
  • Accountability;
  • Lawful basis for processing;
  • Data subject rights; and
  • Handling transfers of personal data to third countries.

The GDPR does not apply where no personal data is processed; for example, an AI assistant that helps users select industry codes falls outside its scope.

Beyond data processing, AI assistants are also being used to automate tasks, make recommendations, or even take actions based on the data they process. This ability to make decisions, particularly when those decisions could significantly impact individuals or society, brings the AI Act into the picture.

The AI Act uses a risk-based approach, meaning that the level of scrutiny and compliance requirements for an AI assistant depends on the potential harm it could cause. The whitepaper outlines four risk levels defined in the AI Act:

  • Unacceptable Risk: AI systems that pose a substantial threat to human safety, fundamental rights, or dignity are prohibited. These might include systems like social scoring systems, or those that employ manipulative techniques to harm users.
  • High Risk: AI systems with a high potential to impact safety or fundamental rights, such as those used in critical infrastructure, education, employment, or law enforcement, are subject to stringent requirements before they can be placed on the market. These requirements include conformity assessments, risk management systems, high-quality datasets, thorough documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.

An example of high-risk AI assistants can be found in the financial sector, which involves handling sensitive personal data, making the implications of AI use particularly significant in terms of potential impact on individual rights and safety. For instance, AI assistants developed to automate aspects of accounting analysis at Jyske Bank use generative AI to produce draft reports based on the financial data drawn from bank systems and public sources. Before implementing this application, the bank conducted a comprehensive risk assessment, involving its AI committee and multiple departments. This process required collaboration among various departments to consider factors such as IT security, data protection, data ethics, data quality, and operational risks.

  • Limited Risk: AI systems that primarily pose risks related to transparency are subject to specific obligations designed to ensure that users are informed about their interaction with an AI system. Examples of AI assistants that fall under this category are TopGPT, an AI assistant developed by Topdanmark to provide information about personal insurance, and Netcompany's documentation assistant, EASLEY AI, designed to enhance the efficiency and consistency of project documentation. These AI assistants meet the transparency requirements by informing users that they are interacting with an AI system.
  • Minimal or No Risk: As per the whitepaper, the majority of AI systems currently in use in the EU fall under this category, including applications like AI-enabled video games or spam filters. The AIA allows for the free use of artificial intelligence with minimal risk.
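As a rough illustration only, the four risk tiers above might be encoded as a simple triage helper. The attribute names and rules below are assumptions made for this sketch; an actual classification requires legal analysis of the AI Act's annexes and cannot be reduced to three booleans.

```python
from dataclasses import dataclass

@dataclass
class AssistantProfile:
    """Illustrative characteristics of an AI assistant (assumed fields)."""
    social_scoring: bool = False        # a prohibited practice under the AI Act
    high_risk_domain: bool = False      # e.g. employment, credit, law enforcement
    interacts_with_users: bool = False  # triggers transparency obligations

def risk_tier(profile: AssistantProfile) -> str:
    """Map a profile to one of the four AI Act risk tiers (sketch only)."""
    if profile.social_scoring:
        return "unacceptable"
    if profile.high_risk_domain:
        return "high"
    if profile.interacts_with_users:
        return "limited"
    return "minimal"
```

A customer-facing chatbot with no high-risk function would land in the "limited" tier under this sketch, matching the TopGPT and EASLEY AI examples above.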

Along with the risk associated with AI systems, understanding the specific role an organization plays in relation to an AI assistant is also crucial for determining legal obligations. The whitepaper details two primary roles within the AIA framework:

  • Provider: If the organization develops, commissions, or places the AI assistant on the market, it's considered the provider and bears the majority of the obligations under AIA, especially for high-risk systems.
  • Deployer: If the organization uses an AI assistant under its authority, it's considered the deployer and is responsible for ensuring the AI assistant is used safely and lawfully in its context.

To better understand these risk thresholds and the respective obligations of providers and deployers, refer to Securiti’s Whitepaper on the AI Act.

5. Setting Boundaries for the AI Assistant's Abilities and Responsibilities

Defining clear boundaries for the AI assistant's capabilities and responsibilities is important to ensure responsible use and minimize the risk of errors or unintended consequences. Organizations must outline the specific tasks, responsibilities, and tools the AI assistant can access. Prompt engineering, which uses precise instructions to guide the AI assistant's work, is a key technique for setting boundaries and ensuring that the AI assistant performs within its defined scope. By systematically defining and implementing these boundaries, organizations can ensure that their AI assistants operate effectively, safely, and ethically within their intended roles.
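A minimal sketch of prompt-engineered boundaries, assuming a hypothetical internal accounting assistant: the system prompt, topic allow-list, and keyword check below are illustrative assumptions, and a production system would replace the keyword check with a proper intent classifier.

```python
# Assumed system prompt pinning the assistant to a defined scope.
SYSTEM_PROMPT = """You are an internal accounting assistant.
You may: summarize invoices and draft financial reports.
You must not: give legal advice, access HR data, or act outside
the accounting domain. If asked to, decline and refer to a human."""

# Illustrative allow-list of in-scope topics.
ALLOWED_TOPICS = {"invoice", "report", "ledger", "reconciliation"}

def within_scope(user_query: str) -> bool:
    # Crude keyword check; a production system would use a classifier.
    return any(topic in user_query.lower() for topic in ALLOWED_TOPICS)

def build_messages(user_query: str) -> list[dict]:
    """Refuse out-of-scope queries; otherwise place the system prompt first."""
    if not within_scope(user_query):
        raise ValueError("Query falls outside the assistant's defined scope.")
    return [{"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_query}]
```

Combining an explicit system prompt with an input-side guard means the boundary holds even if the model itself drifts from its instructions.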

6. Building Structured Quality Assurance

Robust quality assurance processes are essential to validate the AI assistant's accuracy, reliability, and compliance with ethical and legal standards. This involves implementing both traditional testing methods and AI-specific techniques. For example, red teaming, in which experts attempt to exploit the AI assistant's vulnerabilities, helps identify potential risks. Pilot programs allow organizations to test the AI assistant with a limited user group in a real-world setting. Additionally, various quality control methods are recommended, such as rules-based control, which uses predefined rules to assess output, and model-based validation, which uses a separate AI model to validate the AI assistant's responses.
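Rules-based control can be sketched as a list of predefined checks applied to each response before it reaches the user. The three rules below (a Danish CPR-like number pattern, a source-citation marker, and a length cap) are assumptions chosen for illustration, not rules from the whitepaper.

```python
import re

# Each rule is a (name, predicate) pair; the predicate must hold for a
# response to pass. All rules here are illustrative assumptions.
RULES = [
    ("no_pii", lambda text: not re.search(r"\b\d{6}-\d{4}\b", text)),  # CPR-like
    ("cites_source", lambda text: "[source:" in text),
    ("length_ok", lambda text: len(text) <= 2000),
]

def check_output(text: str) -> list[str]:
    """Return the names of the rules a response violates (empty = pass)."""
    return [name for name, passes in RULES if not passes(text)]
```

A failing response can then be blocked, regenerated, or escalated to a human, and model-based validation can be layered on top for checks that are hard to express as rules.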

7. Tracking Relevant Data on the AI Assistant’s Use

Collecting and analyzing data on the AI assistant's usage provides insights into its performance, user behavior, and potential areas for improvement. Organizations should implement robust logging systems that record relevant data points, such as user queries, AI responses, the context of interactions, and the sources of information used by the AI assistant. This data can be invaluable for auditing, troubleshooting, and ensuring compliance with evolving regulations.
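A logging system along these lines might record each interaction as a structured JSON record. The field names below are assumptions for this sketch; adapt them to your own audit and retention requirements (and mask personal data before logging, per step 3).

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_assistant.usage")

def log_interaction(user_query: str, response: str,
                    sources: list[str], context: str) -> dict:
    """Emit one structured audit record per interaction and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_query": user_query,
        "response": response,
        "sources": sources,   # documents the assistant drew on
        "context": context,   # e.g. which workflow invoked the assistant
    }
    logger.info(json.dumps(record))
    return record
```

Structured records like this make later auditing and troubleshooting a matter of querying logs rather than reconstructing interactions by hand.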

8. Planning Organizational Implementation and Training

Successful implementation requires a well-planned approach that considers the impact of the AI assistant on the organization and its employees. Organizations need to develop a clear narrative to communicate the purpose and benefits of the AI assistant, address potential concerns, and foster a positive perception of the change. Adequate training of the employees is crucial to ensure that they understand how to use the AI assistant effectively, interpret its output, and recognize potential limitations or biases. Building trust with users is essential to encourage adoption and maximize the benefits of the AI assistant.

9. Establishing Follow-up and Support Structures

Ongoing support and maintenance are essential to ensure the AI assistant’s continued effectiveness and reliability. This involves establishing dedicated support structures that include expert guidance, feedback mechanisms, automated monitoring systems, and procedures for human follow-up. Feedback mechanisms allow users to report errors or suggest improvements, while monitoring systems help to identify potential issues, such as security breaches or instances of AI hallucinations. The whitepaper highlights the importance of a proactive approach to maintenance, with regular evaluation of data and expert-driven adjustments to optimize performance and ensure the AI assistant remains aligned with the organization's evolving needs.

How Securiti Can Help

Organizations that process personal data through AI systems must ensure that their practices comply with the GDPR and the AI Act. Leveraging Securiti’s Data Command Center — a centralized platform designed to deliver contextual intelligence, controls, and orchestration — organizations can ensure the safe use of data and AI while navigating both existing and future regulatory compliance by:

  • Complying with the GDPR:
    • Unifying data controls across security, privacy, compliance, and governance through a single, fully integrated platform.
    • Leveraging contextual data intelligence and automation to ensure compliance with GDPR principles such as data minimization, purpose limitation, and accountability.
  • Managing AI Compliance:
    • Discovering, cataloging, and identifying the purpose and characteristics of sanctioned and unsanctioned AI models across public clouds, private clouds, and SaaS applications.
    • Conducting AI risk assessments to identify and classify AI systems by risk level.
    • Mapping AI models to data sources, processes, applications, potential risks, and compliance obligations.
    • Implementing appropriate privacy, security, and governance guardrails to protect data and AI systems.
    • Ensuring compliance with applicable data and AI regulations.

Request a demo to learn more.
