Securiti launches Gencore AI, a holistic solution to build Safe Enterprise AI with proprietary data - easily

View

Singapore’s Model AI Governance Framework

Published October 8, 2024

Listen to the content

In 2019, Singapore became one of the first countries to publish its National AI Strategy. This was followed by the release of the updated edition of the Model AI Governance Framework in 2020. These foundational documents aimed to provide guidelines for the ethical and responsible deployment and development of AI technologies across industries. The recently released Model AI Governance Framework for Generative AI (GenAI Framework) in Singapore has set a new benchmark for AI governance.

The Gen AI framework was released on 30 May 2024 by the Infocomm Media Development Authority (IMDA) and AI Verify Foundation. It incorporates the latest technological advancements, emergent principles, and concerns in Gen AI. Through a systematic and balanced approach, it is designed to promote responsible AI innovation and protect the public interest and ethical standards.

The framework encompasses nine key dimensions that must be evaluated comprehensively to build a trusted ecosystem. It calls for the collective efforts of critical stakeholders—including policymakers, industry leaders, researchers, and the public—to collaborate in this endeavor. Through this approach, Singapore seeks to ensure that the development and deployment of generative AI technologies uphold core principles such as transparency, accountability, security, and fairness. Continue reading to explore these aspects further.

9 Dimensions of Model AI Governance Framework for GenAI

The GenAI Framework outlines 9 key dimensions that should be considered holistically to foster a trusted ecosystem. These include:

1. Accountability

Ensuring accountability is essential for AI systems to function as it helps build a trusted AI ecosystem. All participants in the AI development process, including model developers, application deployers, and cloud service providers, must be responsible for end-users.

Generative AI involves multiple layers in the tech stack, similar to traditional software development. As a best practice, a comprehensive strategy calls for assigning responsibility throughout the development process (ex-ante) and includes guidelines on how to seek redress if problems surface later (ex-post).

  • Ex Ante — Allocation Upfront: Responsibility should be assigned based on each stakeholder's level of control in the generative AI development process, ensuring that those with the ability to act can take necessary measures to protect end-users. Inspiration can be taken from the cloud industry's shared responsibility models and developers, who best understand such products, should lead this initiative to promote a safer AI ecosystem.
  • Ex Post — Safety Nets: Shared responsibility models are crucial for establishing accountability and providing clarity on redress when issues arise but may not cover all scenarios, especially unforeseen risks. To address gaps, additional measures like indemnity and insurance should be considered. Some developers already underwrite risks like third-party copyright claims, acknowledging responsibility for their AI models.

2. Data

Data has a direct impact on the quality of the model outputs, making it essential for model development. It is crucial to ensure reliable, high-quality data sources are used. It's also crucial to maintain fair treatment, manage the data pragmatically, and give corporate clarity while using sensitive data, such as private or copyrighted content.

To ensure that generative AI respects individual rights, policymakers should clearly explain how current personal data rules apply to generative AI. This includes outlining the conditions for consent, privacy rights, and any relevant exclusions and guiding AI ethical data usage practices.

Moreover, Privacy Enhancing Technologies (PETs) can help protect data confidentiality during model training. The use of copyright material in training datasets also raises questions about consent and fair use, especially when models generate content mimicking creators' styles. Thus, balancing copyright concerns with data accessibility remains essential for responsible AI development.

AI developers should adopt data quality control mechanisms and follow best practices in data governance. This involves ensuring datasets are consistently annotated and cleaned. Internationally, expanding trusted datasets—especially for benchmarking and cultural representation—can enhance model reliability. Governments could also help curate datasets reflecting local contexts, improving data availability.

3. Trusted Development and Deployment

Model development and application deployment are key to AI-driven innovation. Developers and deployers must ensure transparency on safety and hygiene, embrace best practices in development and assessment, and provide 'food label'- type transparency and disclosures in areas such as data use, evaluation, results, and risks.

Model developers and application deployers should implement safety best practices throughout the AI development lifecycle. After pre-training, methods like Reinforcement Learning from Human Feedback (RLHF) and fine-tuning with input/output filters can reduce harmful outputs, while methods like Retrieval-Augmented Generation (RAG) and few-shot learning improve accuracy by minimizing hallucinations. Moreover, conducting a risk assessment based on the use case context is crucial for ensuring safety.

It is also important to note that while generative AI is typically evaluated through benchmarking and red teaming, these methods often overlook back-end risks. A more standardized approach with baseline safety tests and domain-specific evaluations is needed. Thus, industry and policymakers must collaborate to enhance safety benchmarks and ensure robust evaluation across sectors.

4. Incident Reporting

No software, including AI, is totally failsafe, even with sophisticated development procedures and protections. Incident reporting is critical for timely notification and remediation. Establishing mechanisms for event monitoring and reporting is vital since it helps the continual enhancement of AI systems. Moreover, reporting needs to be proportionate, it should combine thoroughness with practicality, ensuring that reports are informative enough without being unduly burdensome.

Vulnerability Reporting — Incentive to Act Pre-Emptively

Software product owners should implement vulnerability reporting as a proactive security strategy. They should employ white hats or independent researchers to detect vulnerabilities in the software. They typically encourage these efforts via curated bug bounty programs. Once a vulnerability is found, the product owner typically has 90 days to fix the software and disclose the issue, allowing both owners and users to improve security.

Incident Reporting

After events occur, organizations must have internal protocols to report the issue swiftly for timely notification and remediation. If AI is substantially engaged, the organization may need to inform both the public and government authorities, depending on the incident's impact. Thus, defining “severe AI incidents” or setting thresholds for mandatory reporting is important.

5. Testing and Assurance

Third-party testing and assurance are vital for developing a credible ecosystem, equivalent to processes in the finance and healthcare sectors (accreditation mechanism). While AI testing is still evolving, firms must embrace these practices to establish trust with end-users. Developing universal standards for AI testing is also vital to assure quality and consistency.

Fostering a third-party testing ecosystem requires focusing on two key aspects:

  • How to Test: Establishing an effective and uniform testing technique while specifying the testing scope to enhance internal initiatives.
  • Who to Test: Opting for independent organizations to examine to ensure fairness.

To enhance AI testing, greater emphasis should be placed on developing universal benchmarks and methods. This may be supported by providing standard tools to facilitate testing across multiple models or applications. For more advanced domains, AI testing might someday be standardized by organizations like ISO/IEC and IEEE, enabling more uniform and rigorous third-party testing. Over time, an "accreditation mechanism" could potentially be established to guarantee both autonomy and capability.

6. Security

Generative AI introduces new threat vectors that go beyond standard software security risks. Although this is a developing domain, current information security frameworks must be updated, and new testing techniques must be established to address these risks.

Adapt Security-by-Design

Organizations must adopt ‘security-by-design’ – a key concept that involves integrating security into every phase of the systems development life cycle (SDLC) to minimize vulnerabilities and reduce the attack surface. Important SDLC stages include development, evaluation, operations, and maintenance. Additionally, new tools need to be developed to enhance security, including:

  • Input Filters: Tools that filter input by detecting risky prompts, such as blocking malicious code. These tools should be programmed to address domain-specific risks.
  • Digital Forensics Tools for Generative AI: Tools for examining digital data to reconstruct cybersecurity events must be developed. New forensics techniques must also be created to identify and remove potentially malicious codes concealed in generative AI models.

7. Content Provenance

AI-generated content may promote the spread of disinformation owing to its ease of generation. A notable example of this concern is deepfakes, which have heightened risks such as misinformation. Thus, transparency regarding the source and process of information development helps individuals make educated decisions. Governments are researching technology solutions like digital watermarking and cryptographic provenance, which, once finalized, should be utilized correctly to meet evolving risks.

Collaboration with key stakeholders, such as publishers, is crucial to support digital watermarks and provenance details. Since most content is consumed via social media and media outlets, publishers play a key role in verifying content authenticity. Moreover, standardizing how AI edits are labeled would also help users distinguish between AI and non-AI content.

8. Safety and Alignment Research & Development (R&D)

Current model safety measures do not address all risks, requiring faster R&D investment to better align models with human objectives and values  One research focus is on creating more aligned models using approaches like Reinforcement Learning from AI Feedback (RLAIF), which aims to enhance feedback and oversight. Another area involves evaluating models after training to identify risks, such as dangerous capabilities, and using mechanistic interpretability to trace the source of problematic behaviors.

While most alignment research is conducted by AI companies, the establishment of AI safety R&D institutes in countries like the UK, US, Japan, and Singapore is a positive step. Additionally, global collaboration among AI safety research organizations is crucial to maximize resources and keep pace with the fast rise in model capabilities driven by commercial interests.

9. AI for Public Good

Responsible AI comprises more than simply risk mitigation; it's about enabling individuals and organizations to prosper in an AI-driven future. This involves four key areas for impact:

  1. Democratising Access to Technology: Ensuring all individuals have trusted access to generative AI, supported by digital literacy initiatives.
  2. Public Service Delivery: Enhancing public services through efficient AI integration, with coordinated resources and responsible data sharing.
  3. Workforce Development: Upskilling the workforce to adapt to AI technologies, focusing on both technical and core competencies.
  4. Sustainability: Addressing the environmental impact of AI by developing energy-efficient technologies and tracking carbon footprints.

Moreover, collaboration among governments, industry, and educational institutions is essential for maximizing AI's positive effects.

How Securiti Can Help

Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multi-cloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.

Securiti’s Genstack AI Suite removes the complexities and risks inherent in the GenAI lifecycle, empowering organizations to swiftly and safely utilize their structured and unstructured data anywhere with any AI and LLMs. It provides features such as secure data ingestion and extraction, data masking, anonymization, redaction, and indexing and retrieval capabilities. Additionally, it facilitates the configuration of LLMs for Q&A, inline data controls for governance, privacy, and security, and LLM firewalls to enable the safe adoption of GenAI.

Request a demo to learn more.

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox


Share


More Stories that May Interest You

What's
New