Securiti leads GigaOm's DSPM Vendor Evaluation with top ratings across technical capabilities & business value.

View

Singapore’s Model AI Governance Framework

Published October 8, 2024
Contributors

Anas Baig

Product Marketing Manager at Securiti

Salma Khan

Data Privacy Analyst at Securiti

CIPP/Asia

Syeda Eimaan Gardezi

Associate Data Privacy Analyst at Securiti

Listen to the content

In 2019, Singapore became one of the first countries to publish its National AI Strategy. This was followed by the release of the updated edition of the Model AI Governance Framework in 2020. These foundational documents aimed to provide guidelines for the ethical and responsible deployment and development of AI technologies across industries. The recently released Model AI Governance Framework for Generative AI (GenAI Framework) in Singapore has set a new benchmark for AI governance.

The Gen AI framework was released on 30 May 2024 by the Infocomm Media Development Authority (IMDA) and AI Verify Foundation. It incorporates the latest technological advancements, emergent principles, and concerns in Gen AI. Through a systematic and balanced approach, it is designed to promote responsible AI innovation and protect the public interest and ethical standards.

The framework encompasses nine key dimensions that must be evaluated comprehensively to build a trusted ecosystem. It calls for the collective efforts of critical stakeholders—including policymakers, industry leaders, researchers, and the public—to collaborate in this endeavor. Through this approach, Singapore seeks to ensure that the development and deployment of generative AI technologies uphold core principles such as transparency, accountability, security, and fairness. Continue reading to explore these aspects further.

9 Dimensions of Model AI Governance Framework for GenAI

The GenAI Framework outlines 9 key dimensions that should be considered holistically to foster a trusted ecosystem. These include:

1. Accountability

Ensuring accountability is essential for AI systems to function as it helps build a trusted AI ecosystem. All participants in the AI development process, including model developers, application deployers, and cloud service providers, must be responsible for end-users.

Generative AI involves multiple layers in the tech stack, similar to traditional software development. As a best practice, a comprehensive strategy calls for assigning responsibility throughout the development process (ex-ante) and includes guidelines on how to seek redress if problems surface later (ex-post).

  • Ex Ante — Allocation Upfront: Responsibility should be assigned based on each stakeholder's level of control in the generative AI development process, ensuring that those with the ability to act can take necessary measures to protect end-users. Inspiration can be taken from the cloud industry's shared responsibility models and developers, who best understand such products, should lead this initiative to promote a safer AI ecosystem.
  • Ex Post — Safety Nets: Shared responsibility models are crucial for establishing accountability and providing clarity on redress when issues arise but may not cover all scenarios, especially unforeseen risks. To address gaps, additional measures like indemnity and insurance should be considered. Some developers already underwrite risks like third-party copyright claims, acknowledging responsibility for their AI models.

2. Data

Data has a direct impact on the quality of the model outputs, making it essential for model development. It is crucial to ensure reliable, high-quality data sources are used. It's also crucial to maintain fair treatment, manage the data pragmatically, and give corporate clarity while using sensitive data, such as private or copyrighted content.

To ensure that generative AI respects individual rights, policymakers should clearly explain how current personal data rules apply to generative AI. This includes outlining the conditions for consent, privacy rights, and any relevant exclusions and guiding AI ethical data usage practices.

Moreover, Privacy Enhancing Technologies (PETs) can help protect data confidentiality during model training. The use of copyright material in training datasets also raises questions about consent and fair use, especially when models generate content mimicking creators' styles. Thus, balancing copyright concerns with data accessibility remains essential for responsible AI development.

AI developers should adopt data quality control mechanisms and follow best practices in data governance. This involves ensuring datasets are consistently annotated and cleaned. Internationally, expanding trusted datasets—especially for benchmarking and cultural representation—can enhance model reliability. Governments could also help curate datasets reflecting local contexts, improving data availability.

3. Trusted Development and Deployment

Model development and application deployment are key to AI-driven innovation. Developers and deployers must ensure transparency on safety and hygiene, embrace best practices in development and assessment, and provide 'food label'- type transparency and disclosures in areas such as data use, evaluation, results, and risks.

Model developers and application deployers should implement safety best practices throughout the AI development lifecycle. After pre-training, methods like Reinforcement Learning from Human Feedback (RLHF) and fine-tuning with input/output filters can reduce harmful outputs, while methods like Retrieval-Augmented Generation (RAG) and few-shot learning improve accuracy by minimizing hallucinations. Moreover, conducting a risk assessment based on the use case context is crucial for ensuring safety.

It is also important to note that while generative AI is typically evaluated through benchmarking and red teaming, these methods often overlook back-end risks. A more standardized approach with baseline safety tests and domain-specific evaluations is needed. Thus, industry and policymakers must collaborate to enhance safety benchmarks and ensure robust evaluation across sectors.

4. Incident Reporting

No software, including AI, is totally failsafe, even with sophisticated development procedures and protections. Incident reporting is critical for timely notification and remediation. Establishing mechanisms for event monitoring and reporting is vital since it helps the continual enhancement of AI systems. Moreover, reporting needs to be proportionate, it should combine thoroughness with practicality, ensuring that reports are informative enough without being unduly burdensome.

Vulnerability Reporting — Incentive to Act Pre-Emptively

Software product owners should implement vulnerability reporting as a proactive security strategy. They should employ white hats or independent researchers to detect vulnerabilities in the software. They typically encourage these efforts via curated bug bounty programs. Once a vulnerability is found, the product owner typically has 90 days to fix the software and disclose the issue, allowing both owners and users to improve security.

Incident Reporting

After events occur, organizations must have internal protocols to report the issue swiftly for timely notification and remediation. If AI is substantially engaged, the organization may need to inform both the public and government authorities, depending on the incident's impact. Thus, defining “severe AI incidents” or setting thresholds for mandatory reporting is important.

5. Testing and Assurance

Third-party testing and assurance are vital for developing a credible ecosystem, equivalent to processes in the finance and healthcare sectors (accreditation mechanism). While AI testing is still evolving, firms must embrace these practices to establish trust with end-users. Developing universal standards for AI testing is also vital to assure quality and consistency.

Fostering a third-party testing ecosystem requires focusing on two key aspects:

  • How to Test: Establishing an effective and uniform testing technique while specifying the testing scope to enhance internal initiatives.
  • Who to Test: Opting for independent organizations to examine to ensure fairness.

To enhance AI testing, greater emphasis should be placed on developing universal benchmarks and methods. This may be supported by providing standard tools to facilitate testing across multiple models or applications. For more advanced domains, AI testing might someday be standardized by organizations like ISO/IEC and IEEE, enabling more uniform and rigorous third-party testing. Over time, an "accreditation mechanism" could potentially be established to guarantee both autonomy and capability.

6. Security

Generative AI introduces new threat vectors that go beyond standard software security risks. Although this is a developing domain, current information security frameworks must be updated, and new testing techniques must be established to address these risks.

Adapt Security-by-Design

Organizations must adopt ‘security-by-design’ – a key concept that involves integrating security into every phase of the systems development life cycle (SDLC) to minimize vulnerabilities and reduce the attack surface. Important SDLC stages include development, evaluation, operations, and maintenance. Additionally, new tools need to be developed to enhance security, including:

  • Input Filters: Tools that filter input by detecting risky prompts, such as blocking malicious code. These tools should be programmed to address domain-specific risks.
  • Digital Forensics Tools for Generative AI: Tools for examining digital data to reconstruct cybersecurity events must be developed. New forensics techniques must also be created to identify and remove potentially malicious codes concealed in generative AI models.

7. Content Provenance

AI-generated content may promote the spread of disinformation owing to its ease of generation. A notable example of this concern is deepfakes, which have heightened risks such as misinformation. Thus, transparency regarding the source and process of information development helps individuals make educated decisions. Governments are researching technology solutions like digital watermarking and cryptographic provenance, which, once finalized, should be utilized correctly to meet evolving risks.

Collaboration with key stakeholders, such as publishers, is crucial to support digital watermarks and provenance details. Since most content is consumed via social media and media outlets, publishers play a key role in verifying content authenticity. Moreover, standardizing how AI edits are labeled would also help users distinguish between AI and non-AI content.

8. Safety and Alignment Research & Development (R&D)

Current model safety measures do not address all risks, requiring faster R&D investment to better align models with human objectives and values  One research focus is on creating more aligned models using approaches like Reinforcement Learning from AI Feedback (RLAIF), which aims to enhance feedback and oversight. Another area involves evaluating models after training to identify risks, such as dangerous capabilities, and using mechanistic interpretability to trace the source of problematic behaviors.

While most alignment research is conducted by AI companies, the establishment of AI safety R&D institutes in countries like the UK, US, Japan, and Singapore is a positive step. Additionally, global collaboration among AI safety research organizations is crucial to maximize resources and keep pace with the fast rise in model capabilities driven by commercial interests.

9. AI for Public Good

Responsible AI comprises more than simply risk mitigation; it's about enabling individuals and organizations to prosper in an AI-driven future. This involves four key areas for impact:

  1. Democratising Access to Technology: Ensuring all individuals have trusted access to generative AI, supported by digital literacy initiatives.
  2. Public Service Delivery: Enhancing public services through efficient AI integration, with coordinated resources and responsible data sharing.
  3. Workforce Development: Upskilling the workforce to adapt to AI technologies, focusing on both technical and core competencies.
  4. Sustainability: Addressing the environmental impact of AI by developing energy-efficient technologies and tracking carbon footprints.

Moreover, collaboration among governments, industry, and educational institutions is essential for maximizing AI's positive effects.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti’s Genstack AI Suite removes the complexities and risks inherent in the GenAI lifecycle, empowering organizations to swiftly and safely utilize their structured and unstructured data anywhere with any AI and LLMs. It provides features such as secure data ingestion and extraction, data masking, anonymization, redaction, and indexing and retrieval capabilities. Additionally, it facilitates the configuration of LLMs for Q&A, inline data controls for governance, privacy, and security, and LLM firewalls to enable the safe adoption of GenAI.

Request a demo to learn more.

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox


Share


More Stories that May Interest You

Videos

View More

Mitigating OWASP Top 10 for LLM Applications 2025

Generative AI (GenAI) has transformed how enterprises operate, scale, and grow. There’s an AI application for every purpose, from increasing employee productivity to streamlining...

View More

DSPM vs. CSPM – What’s the Difference?

While the cloud has offered the world immense growth opportunities, it has also introduced unprecedented challenges and risks. Solutions like Cloud Security Posture Management...

View More

Top 6 DSPM Use Cases

With the advent of Generative AI (GenAI), data has become more dynamic. New data is generated faster than ever, transmitted to various systems, applications,...

View More

Colorado Privacy Act (CPA)

What is the Colorado Privacy Act? The CPA is a comprehensive privacy law signed on July 7, 2021. It established new standards for personal...

View More

Securiti for Copilot in SaaS

Accelerate Copilot Adoption Securely & Confidently Organizations are eager to adopt Microsoft 365 Copilot for increased productivity and efficiency. However, security concerns like data...

View More

Top 10 Considerations for Safely Using Unstructured Data with GenAI

A staggering 90% of an organization's data is unstructured. This data is rapidly being used to fuel GenAI applications like chatbots and AI search....

View More

Gencore AI: Building Safe, Enterprise-grade AI Systems in Minutes

As enterprises adopt generative AI, data and AI teams face numerous hurdles: securely connecting unstructured and structured data sources, maintaining proper controls and governance,...

View More

Navigating CPRA: Key Insights for Businesses

What is CPRA? The California Privacy Rights Act (CPRA) is California's state legislation aimed at protecting residents' digital privacy. It became effective on January...

View More

Navigating the Shift: Transitioning to PCI DSS v4.0

What is PCI DSS? PCI DSS (Payment Card Industry Data Security Standard) is a set of security standards to ensure safe processing, storage, and...

View More

Securing Data+AI : Playbook for Trust, Risk, and Security Management (TRiSM)

AI's growing security risks have 48% of global CISOs alarmed. Join this keynote to learn about a practical playbook for enabling AI Trust, Risk,...

Spotlight Talks

Spotlight 11:29

Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like

Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like
Watch Now View
Spotlight 11:18

Rewiring Real Estate Finance — How Walker & Dunlop Is Giving Its $135B Portfolio a Data-First Refresh

Watch Now View
Spotlight 13:38

Accelerating Miracles — How Sanofi is Embedding AI to Significantly Reduce Drug Development Timelines

Sanofi Thumbnail
Watch Now View
Spotlight 10:35

There’s Been a Material Shift in the Data Center of Gravity

Watch Now View
Spotlight 14:21

AI Governance Is Much More than Technology Risk Mitigation

AI Governance Is Much More than Technology Risk Mitigation
Watch Now View
Spotlight 12:!3

You Can’t Build Pipelines, Warehouses, or AI Platforms Without Business Knowledge

Watch Now View
Spotlight 47:42

Cybersecurity – Where Leaders are Buying, Building, and Partnering

Rehan Jalil
Watch Now View
Spotlight 27:29

Building Safe AI with Databricks and Gencore

Rehan Jalil
Watch Now View
Spotlight 46:02

Building Safe Enterprise AI: A Practical Roadmap

Watch Now View
Spotlight 13:32

Ensuring Solid Governance Is Like Squeezing Jello

Watch Now View

Latest

Inside Echoleak View More

Inside Echoleak

How Indirect Prompt Injections Exploit the AI Layer and How to Secure Your Data What is Echoleak? Echoleak (CVE-2025-32711) is a vulnerability discovered in...

The Overprivileged Access Crisis: A CISO’s Guide to Data Access Governance View More

The Overprivileged Access Crisis: A CISO’s Guide to Data Access Governance

Overprivileged data access has quietly become a systemic risk, where users, groups, and machines routinely hold far broader permissions than their jobs require. Approximately...

What is SSPM? (SaaS Security Posture Management) View More

What is SSPM? (SaaS Security Posture Management)

This blog covers all the important details related to SSPM, including why it matters, how it works, and how organizations can choose the best...

View More

“Scraping Almost Always Illegal”, Netherlands DPA Declares

Explore the Dutch Data Protection Authority's guidelines on web scraping, its legal complexities, privacy risks, and other relevant details important to your organization.

Beyond DLP: Guide to Modern Data Protection with DSPM View More

Beyond DLP: Guide to Modern Data Protection with DSPM

Learn why traditional data security tools fall short in the cloud and AI era. Learn how DSPM helps secure sensitive data and ensure compliance.

Mastering Cookie Consent: Global Compliance & Customer Trust View More

Mastering Cookie Consent: Global Compliance & Customer Trust

Discover how to master cookie consent with strategies for global compliance and building customer trust while aligning with key data privacy regulations.

ROI of Data Minimization: Save Millions in Cost, Risk & AI With DSPM View More

ROI of Data Minimization: Save Millions in Cost, Risk & AI With DSPM

ROT data is a costly liability. Discover how DSPM-powered data minimization reduces risk and how Securiti’s two-phase framework helps.

From AI Risk to AI Readiness: Why Enterprises Need DSPM Now View More

From AI Risk to AI Readiness: Why Enterprises Need DSPM Now

Discover why shifting focus from AI risk to AI readiness is critical for enterprises. Learn how Data Security Posture Management (DSPM) empowers organizations to...

Gencore AI and Amazon Bedrock View More

Building Enterprise-Grade AI with Gencore AI and Amazon Bedrock

Learn how to build secure enterprise AI copilots with Amazon Bedrock models, protect AI interactions with LLM Firewalls, and apply OWASP Top 10 LLM...

DSPM Vendor Due Diligence View More

DSPM Vendor Due Diligence

DSPM’s Buyer Guide ebook is designed to help CISOs and their teams ask the right questions and consider the right capabilities when looking for...

What's
New