Introduction
India is poised for an AI revolution. Its diverse socio-economic landscape and thriving digital ecosystem offer immense potential for growth and innovation. To responsibly harness this potential, the Government launched the IndiaAI Mission on March 7, 2024, with a significant investment of INR 10,371.92 crore.
This mission aims to create a comprehensive AI ecosystem through initiatives like IndiaAI Compute Capacity, Safe & Trusted AI, and the IndiaAI Innovation Centre, focusing on ethical development, bias mitigation, and privacy enhancement. Building on these efforts, the Ministry of Electronics and Information Technology (MeitY) constituted a Subcommittee in November 2023 to address the governance challenges posed by AI. On January 6, 2025, the Subcommittee released an AI Governance Guidelines Report (Report) for public consultation.
This blog concisely summarizes the Report, highlighting its key principles and recommendations for India’s AI ecosystem.
AI Governance Principles
Rapid advancements in machine learning, natural language processing, and computing capabilities have integrated AI into daily life. The Report underscores that the governance of AI is centered on minimizing risks while harnessing its vast potential. However, the complexity of AI systems and the possibility of unintended consequences present unique challenges that require careful oversight. The Report proposes a set of eight principles designed to guide AI governance in India, drawing from global frameworks like the OECD AI Principles and India-specific initiatives such as the NITI Aayog Responsible AI Guidelines. These principles include the following:
- Transparency: AI systems must provide meaningful information about their development, processes, and limitations. Users should always know when they are interacting with AI.
- Accountability: Developers and deployers must be responsible for AI outcomes, ensuring respect for user rights and adherence to the rule of law.
- Safety, Reliability, and Robustness: AI systems should operate as intended, mitigating risks of errors, misuse, or adverse outcomes through regular monitoring.
- Privacy and Security: Compliance with data protection laws is critical, and mechanisms for security-by-design must be incorporated to ensure data integrity and data quality.
- Fairness and Non-Discrimination: AI systems must prevent biases and promote inclusivity, ensuring decisions do not perpetuate inequality.
- Human-Centered Values & ‘Do No Harm’: Human oversight and judgment should guide AI development to address ethical dilemmas and prevent undue reliance on AI.
- Inclusive and Sustainable Innovation: AI should contribute to equitable benefits and align with sustainable development goals.
- Digital-by-Design Governance: Governance frameworks should leverage digital technologies to enhance regulation, compliance, and risk mitigation.
Implementation Strategies
To implement these principles, the Report emphasizes three critical strategies:
1. Examining AI systems using a lifecycle approach
Adopting a lifecycle approach is essential for understanding how AI systems evolve and how risks manifest at different stages, including:
- Development: This stage focuses on designing, training, and testing AI systems. Ideally, ethical and technical risks should be assessed early on.
- Deployment: This phase involves operationalizing AI systems, where risks related to misuse, accountability, and transparency become more prominent.
- Diffusion: At this stage, the focus shifts to the long-term effects of widely deployed AI systems across multiple sectors, raising concerns about system interoperability, data integrity, and societal impact.
The AI governance structure should address risks at all three stages to ensure that the AI governance principles are effectively operationalized.
2. Taking an ecosystem view of AI actors
The AI ecosystem comprises a wide range of stakeholders involved throughout the lifecycle of AI systems, each playing distinct roles in shaping governance. Key actors include:
- Data Principals: The individuals or entities whose data is used in AI systems.
- Data Providers: Entities that supply data for training AI models.
- AI Developers: Organizations or individuals responsible for creating and refining AI models.
- AI Deployers: Those who deploy AI systems into practical applications, such as app builders and distributors.
- End-users: Businesses and consumers who interact with or are affected by AI systems.
A holistic AI governance approach requires considering the entire ecosystem to effectively distribute responsibilities, clarify liabilities, and foster collaboration across stakeholders.
3. Leveraging technology for governance
The complexity and rapid growth of AI technologies necessitate integrating technology into governance frameworks to improve oversight and compliance. A "techno-legal" approach combines legal regulations with technological tools to address the fast-paced development and deployment of AI. This may include automated compliance tools, governance technology, and human oversight mechanisms to monitor the AI ecosystem at scale. An example of this approach is using “consent artifacts” to create immutable digital identities for participants, enabling the tracking of activities and establishing liability chains.
Such a strategy allows for the distribution of regulatory responsibilities and enables self-regulation within the ecosystem. However, periodic reviews of the technology used for compliance are necessary to ensure fairness, security, and respect for fundamental rights like privacy and free speech.
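To make the consent-artifact idea concrete, here is a minimal sketch in Python. The fields, names, and hashing scheme are illustrative assumptions, not drawn from the Report: each artifact records who consented to what and carries the hash of the previous artifact, so the resulting chain is tamper-evident and can support the kind of liability tracing the Report describes.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ConsentArtifact:
    """Illustrative consent artifact; all fields are hypothetical."""
    actor_id: str    # unique digital identity of the participant
    purpose: str     # declared purpose for which data may be used
    timestamp: str   # ISO-8601 time the consent was granted
    prev_hash: str   # digest of the preceding artifact ("" for the first)

    def digest(self) -> str:
        # Hash the full record so any later alteration is detectable.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Chaining artifacts: each new record commits to the one before it,
# giving auditors an append-only trail of who authorized what, and when.
first = ConsentArtifact("user-42", "model-training", "2025-01-06T10:00:00Z", "")
second = ConsentArtifact("user-42", "inference-logging",
                         "2025-02-01T09:30:00Z", first.digest())
```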
The Gap in India’s Current AI Governance Landscape
The Report identifies several critical gaps in India’s current AI governance landscape, focusing on the need to enhance compliance, transparency, and coordination across the AI ecosystem. These gaps fall into three key areas:
1. The Need to Enable Effective Compliance and Enforcement of Existing Laws
a. Deepfakes and Malicious Content
AI technologies, particularly generative AI, have facilitated the creation of malicious synthetic media such as deepfakes. Existing legal frameworks, including the Information Technology Act, 2000 (IT Act) and the Indian Penal Code (IPC), address certain issues like identity theft, impersonation, and defamation. Specific provisions include:
- Section 66D of the IT Act: Penalizes cheating by personation using a computer resource.
- Sections 67A and 67B of the IT Act: Address obscene and sexually explicit content, including material that may be generated using deepfake technologies.
- Sections 419 and 499 of the IPC: Cover cheating by personation and defamation, respectively.
However, enforcing these laws requires enhanced technological capabilities for timely detection, prevention, and enforcement. For instance, assigning unique digital identifiers to participants and implementing watermarking technologies can improve traceability and accountability in content creation and distribution.
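As a rough illustration of how a unique identifier can make content traceable, the sketch below tags content with a creator ID and a keyed hash. This is a simplified metadata-level scheme of our own devising, not the watermarking technique the Report envisions; robust media watermarks embed signals in the content itself and must survive edits.

```python
import hashlib
import hmac

# Hypothetical signing key held by a trusted registry, not by creators.
REGISTRY_KEY = b"example-registry-signing-key"

def tag_content(content: bytes, creator_id: str) -> str:
    """Bind content to a registered creator ID with a keyed hash."""
    mac = hmac.new(REGISTRY_KEY, creator_id.encode() + content, hashlib.sha256)
    return f"{creator_id}:{mac.hexdigest()}"

def verify_tag(content: bytes, tag: str) -> bool:
    """Check that the tag matches the content and the claimed creator."""
    creator_id, _, _ = tag.partition(":")
    return hmac.compare_digest(tag, tag_content(content, creator_id))
```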
b. Cybersecurity Risks
AI systems introduce new complexities in cybersecurity, as they can amplify existing vulnerabilities or enable sophisticated attacks by non-technical actors. Current measures under the IT Act, such as the Indian Computer Emergency Response Team (CERT-In) and the National Critical Information Infrastructure Protection Centre (NCIIPC), govern cybersecurity. Moreover, sectoral guidelines from regulators like the Reserve Bank of India, the Securities and Exchange Board of India, and the Insurance Regulatory and Development Authority also establish safeguards. Despite these measures, there is a need to strengthen compliance capabilities and introduce AI-specific standards, including secure-by-design frameworks, to address AI-driven cyber threats effectively.
c. Intellectual Property Rights (IPR)
The Report's analysis of IPR under Indian copyright law focuses on two main areas related to AI. First, it examines the use of copyrighted data for training AI models without the copyright holder's permission, which could amount to infringement. The Copyright Act, 1957 allows limited exceptions, such as for personal research, but does not permit commercial or institutional research use without approval. The Report also raises concerns about enforcing compliance and determining liability when multiple parties are involved in generating infringing AI outputs.
Second, the Report explores the copyrightability of AI-generated work. It states that ‘human authorship’ is required for copyright protection, leading to uncertainty about whether AI-generated work can be considered eligible for copyright. It concludes that clear policy guidance and potential legal reforms are necessary to address these issues and ensure a balanced approach to intellectual property protection in the age of AI.
d. AI Bias and Discrimination
AI systems can inadvertently reinforce pre-existing biases, leading to discriminatory outcomes. While existing laws, such as the Equal Remuneration Act and consumer protection regulations, address discrimination, these frameworks may not fully account for the unique risks posed by “black-box” AI models. Consequently, regulators and deployers must prioritize risk mitigation measures, such as robust testing and transparency in decision-making processes.
2. The Need for Transparency and Responsibility Across the AI Ecosystem
The Report emphasizes the need for transparency and responsibility across India's AI ecosystem. It notes that regulators need adequate information on the traceability of data, models, systems, and actors throughout the lifecycle of AI systems. It also highlights the need for stakeholders to be transparent about how liability is allocated and how risk management responsibilities are shared.
A broader baseline framework is essential to ensure transparency and responsibility across all sectors, as risks may spill over beyond sectoral boundaries. This approach would help design effective governance mechanisms for high-risk AI scenarios.
3. The Need for a Whole-of-Government Approach
India’s fragmented regulatory landscape, with multiple agencies overseeing different aspects of AI governance, presents inefficiencies and potential gaps. While sectoral specialization is beneficial, the rapid pace of technological advancements and AI’s widespread application expose limitations in this approach. Departments and regulators often operate in isolation, examining AI systems within their individual domains. This hinders the development of a unified understanding of cross-cutting issues, making it challenging for the government to align diverse initiatives under a cohesive roadmap.
Recommendations
To address the identified gaps in AI governance, the Report proposes a series of targeted recommendations aimed at establishing a robust and cohesive framework:
1. Implementing a Whole-of-Government Approach
The Report recommends establishing an Inter-Ministerial AI Coordination Committee or Governance Group. This group, to be led by MeitY and the Principal Scientific Adviser to the Government of India, would harmonize AI governance efforts across sectors and assist regulators in understanding and mitigating AI-related risks. Key responsibilities of the Committee include:
- Coordinated Oversight: It should bring together regulators, government departments, and external experts to align efforts and share knowledge on cross-sectoral AI risks.
- Common Roadmap: It should develop a unified approach for applying existing laws to AI systems, ensuring clarity and efficiency in addressing sector-specific and cross-cutting issues.
2. Establishing a Technical Secretariat
To build a systems-level understanding of India’s AI ecosystem, the Report proposes creating a Technical Secretariat housed within MeitY. Its primary functions would include:
- Horizon Scanning: Regularly monitoring AI advancements to identify emerging risks and opportunities.
- Risk Assessment and Mitigation: Evaluating societal and consumer risks, including issues like antitrust, data governance, and cybersecurity, across various AI applications.
- Standardization and Metrics Development: Facilitating the creation of industry-wide metrics and frameworks for AI governance, such as data provenance, transparency reports, and system cards (a minimal system-card sketch follows this list).
- Industry Collaboration: Engaging with stakeholders to co-develop solutions like labeling synthetic media and implementing privacy-enhancing technologies.
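For a sense of what a standardized "system card" might capture, here is a minimal sketch. The fields are illustrative assumptions, since the Report does not prescribe a schema.

```python
from dataclasses import dataclass

@dataclass
class SystemCard:
    """Hypothetical minimal system card for a deployed AI system."""
    system_name: str
    deployer: str
    intended_use: str
    known_limitations: list[str]
    data_provenance: list[str]            # sources and licences of training data
    evaluation_metrics: dict[str, float]  # e.g. {"accuracy": 0.92}
    human_oversight: str                  # how outputs are reviewed or escalated
```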
3. Developing an AI Incident Database
The Report recommends establishing an AI incident database to enhance understanding of real-world AI risks. Key features include (a possible record structure is sketched after this list):
- Comprehensive Reporting: The database would collect reports on adverse AI incidents, including malfunctions, discriminatory outcomes, and privacy violations, from both public and private entities.
- Confidentiality and Learning: Reporting protocols would ensure confidentiality to encourage voluntary submissions and focus on harm mitigation rather than punitive measures.
- Evidence-Based Policy: Insights from the database would guide regulatory and governance strategies, enabling data-driven responses to recurring issues.
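A hypothetical record structure for such a database might look like the following; the harm categories and fields are assumptions for illustration, not drawn from the Report.

```python
from dataclasses import dataclass
from enum import Enum

class HarmType(Enum):
    MALFUNCTION = "malfunction"
    DISCRIMINATION = "discriminatory_outcome"
    PRIVACY = "privacy_violation"

@dataclass
class IncidentReport:
    """Illustrative AI incident record."""
    incident_id: str
    sector: str              # e.g. "finance", "health"
    harm_type: HarmType
    description: str         # what happened, scrubbed of personal data
    mitigation: str          # remedial steps taken so far
    reporter_confidential: bool = True  # keep reporter identity out of shared views
```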
4. Driving Voluntary Commitments for Transparency
The Report calls for engaging industry stakeholders to develop voluntary commitments aimed at enhancing transparency and governance. These commitments would include:
- Disclosures: Public reporting on the intended use, capabilities, and limitations of AI systems.
- Monitoring and Validation: Implementing mechanisms for assessing data quality, model robustness, and system outcomes.
- Peer Reviews and Audits: Encouraging third-party evaluations to ensure adherence to responsible AI principles.
5. Examining Technological Measures for Risk Mitigation
The Report highlights the importance of leveraging technological tools to address AI-related risks, such as:
- Watermarking and Labeling: Ensuring traceability of content generated by AI systems to prevent misuse, such as in deepfakes.
- Content Provenance Standards: Developing standards and mechanisms to trace content modifications and identify the source, even across different platforms and tools (a simplified manifest sketch follows this list).
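Content provenance schemes (the C2PA standard is one real-world example) typically attach a manifest of edit records to a piece of content. The sketch below shows the general shape of such a manifest under simplified, assumed field names; it is not an implementation of any particular standard.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ProvenanceEntry:
    """One step in a content's modification history (fields are illustrative)."""
    tool: str          # e.g. "image-editor-x", "genai-model-y"
    action: str        # e.g. "created", "ai_generated", "cropped"
    timestamp: str     # ISO-8601 time of the action
    content_hash: str  # digest of the content after this action

def record_action(manifest: list[ProvenanceEntry], tool: str,
                  action: str, timestamp: str, content: bytes) -> None:
    """Append an entry so the modification history travels with the content."""
    digest = hashlib.sha256(content).hexdigest()
    manifest.append(ProvenanceEntry(tool, action, timestamp, digest))
```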
6. Strengthening the Legal and Regulatory Framework
The Report recommends forming a subgroup to collaborate with MeitY on integrating AI governance into the proposed Digital India Act (DIA). Key aspects include:
- Harmonizing Regulations: Ensuring consistency across legal, regulatory, and technical frameworks to address AI-related challenges effectively.
- Enhanced Grievance Redressal: Proposing digital-by-design mechanisms, such as online dispute resolution systems and grievance appellate committees, to streamline and modernize redressal processes.
- Capacity Building: Reviewing and enhancing the qualifications and resources for adjudicating officers to address AI-specific cases comprehensively.
Conclusion
The governance of AI demands a comprehensive and coordinated approach that addresses the complexities of AI systems throughout their lifecycle, from development to deployment and diffusion. By engaging all ecosystem actors—data principals, developers, deployers, and end-users—and leveraging technology to enhance monitoring, compliance, and risk mitigation, India can create a robust framework for responsible AI adoption. A forward-thinking, whole-of-government strategy not only ensures adherence to ethical standards but also fosters innovation and builds public trust, paving the way for AI’s transformative potential to contribute meaningfully to economic and societal progress.
How Securiti Can Help
Securiti is the pioneer of the Data + AI Command Center, a centralized platform that enables the safe use of data and GenAI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Large global enterprises rely on Securiti's Data Command Center for data security, privacy, governance, and compliance.
Securiti Gencore AI enables organizations to safely connect to hundreds of data systems while preserving data controls and governance as data flows into modern GenAI systems. It is powered by a unique knowledge graph that maintains granular contextual insights about data and AI systems.
Gencore AI provides robust controls throughout the AI system to align with corporate policies and entitlements, safeguard against malicious attacks and protect sensitive data. This enables organizations to comply with India’s evolving AI landscape.
Request a demo to learn more.