What is Ethical AI & How to Enable Its Responsible Use

Published June 6, 2024

Ethical consciousness is what drives humans to build a better and safer society. But what does it mean for an Artificial Intelligence (AI) system to have ethical awareness?

For a while now, businesses have leveraged data-driven decisions to enhance operations and further innovation. The introduction of Generative AI (GenAI) shifted this to the next level, scaling speed and efficiency. Notably, that’s not the only thing AI has boosted in recent years—it has also brought unprecedented risks.

For example, a globally renowned e-commerce giant had to shut down its experimental AI recruitment tool when it showed signs of discrimination against women applicants, consistently favoring men. Similarly, in 2019, New York regulators investigated a US healthcare company for developing an AI model that produced biased recommendations, prioritizing patients based on skin tone.

This is where AI ethics come into the picture. Once confined to academic discussions and a few TED talks, ethical AI has now garnered serious attention around the world. If enterprises have learned anything about AI in the past few years, it is that they cannot afford to fail at using data and AI safely and ethically, because the stakes are too high.

What is Ethical AI - Definition & Importance

Ethical AI is a framework, approach, or set of guidelines that emphasizes human and societal values in the adoption or development of AI models. In other words, ethics in AI systems isn’t just about determining the legitimacy of AI use but also its safety, transparency, fairness, explainability, accountability, and trustworthiness.

Gartner provides a similar definition of Ethical AI under the category of Responsible AI, citing it as:

“an umbrella term for aspects of making appropriate business and ethical choices when adopting AI. These include business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, sustainability, accountability, safety, privacy, and regulatory compliance.”

Standing at the zenith of technological advancement, AI offers a plethora of significant benefits. However, the technology can be used for both good and ill. AI can benefit organizations by producing better and cleaner products, improving operational efficiency, enhancing healthcare processes, accelerating innovation, and contributing to positive environmental impact. Conversely, if the technology isn’t ethically designed and trained from the outset, it could cause serious political, environmental, and societal harm.

Ethical Risks & Challenges in AI

The path to innovation with AI is laden with various ethical challenges. Unless organizations overcome those challenges strategically, they could face significant complexities and risks when democratizing data and AI. Let’s take a closer look at some of the common ethical challenges that organizations face in the use or adoption of AI.

Explainability

AI algorithms are often black boxes: the decisions some of them make are effectively inexplicable. However, when AI-based decisions go awry or are fraught with bias, it becomes paramount to understand why the system made such a decision. There needs to be a logical map of the process, akin to a breadcrumb trail, that helps stakeholders trace their steps back to the actual source of the issue. This is where Explainable AI becomes a necessity.

Explainable AI refers to the logical approach to understanding an AI algorithm's decision-making process. Once the process is understood, it enables AI actors to trust the application's output. Simplifying explainability or traceability is crucial for AI actors to pinpoint the source of the problem and suggest corrective actions.

Additionally, explainability drives transparency and accountability in the use, development, and adoption of AI.
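
As a simple illustration of what explainability tooling can surface, the sketch below uses permutation importance, one common model-agnostic technique, to rank which input features most influence a trained model's predictions. The dataset and model here are illustrative stand-ins chosen purely for demonstration, not a recommendation of any particular approach.

```python
# A minimal sketch of model explainability using permutation importance.
# The dataset and model below are illustrative stand-ins, not a real use case.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each input feature drives the
# model's predictions, giving reviewers a trail back to influential inputs.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```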

Bias and Discrimination

Removing unfairness from AI can be just as challenging as eliminating bias and the resulting discrimination in the real world. Bias in AI refers to any systematic divergence from accurate, fair outputs that leads to socioeconomic prejudice.

For instance, if a recruitment AI algorithm is trained only on male-dominated resumes or datasets, the system may become biased against women applicants, eventually leading to discrimination. When such bias is left unchecked or unresolved, it tends to cause societal and environmental harm. Notably, businesses running biased AI applications may see distorted results that eventually lead to reputational damage or loss of customer trust.

Bias can be introduced to the AI model at any stage during its lifecycle. For instance, it could result from a corrupt dataset containing prejudice or discrimination against a particular gender, race, ethnicity, or religion. It could also result from a poorly trained AI model that fails to detect bias accurately.
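
One way teams make this concrete is by measuring selection rates across groups. The sketch below computes a simple demographic parity gap on made-up recruitment outputs; the data, group labels, and interpretation are illustrative assumptions, not a complete fairness audit.

```python
# A minimal sketch of one common fairness check: comparing selection rates
# across groups (demographic parity). All data below is made up for illustration.
def selection_rate(predictions, groups, group):
    """Share of applicants in `group` that the model selects (prediction == 1)."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

# 1 = recommended for interview, 0 = rejected (hypothetical model outputs)
predictions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
groups      = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

rate_m = selection_rate(predictions, groups, "m")
rate_f = selection_rate(predictions, groups, "f")

# A large gap between groups is a signal worth investigating, not proof of
# discrimination on its own.
print(f"selection rate (men):   {rate_m:.2f}")   # 0.80
print(f"selection rate (women): {rate_f:.2f}")   # 0.20
print(f"demographic parity gap: {abs(rate_m - rate_f):.2f}")  # 0.60
```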

Privacy and Security

The performance and efficiency of a Large Language Model (LLM) hinge on the data it is trained on. Trillions of bytes of data are used to fuel LLMs, which in turn power the impressive AI applications found across the internet. However, concerns arise when that data is collected, used, and processed without the owners’ consent. Such privacy lapses expose enterprises to serious legal risk.

Take, for instance, Lensa AI, which drew scrutiny from regulators and the public worldwide for using digital artists’ work to train its AI models without the artists’ express consent.

Similarly, sensitive data exposure cases have made headlines, where employees were found feeding sensitive data into prompts without realizing the serious privacy and security consequences. Once an LLM has been trained on sensitive data, it cannot simply be made to untrain itself. Exposure cases like these open enterprises up to emerging AI threats such as data exfiltration attacks.
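
A common first line of defense is to redact obvious identifiers before a prompt ever leaves the organization. The sketch below shows a minimal, regex-based redaction pass; the patterns are illustrative and far from exhaustive, and production systems typically rely on dedicated sensitive-data classification rather than hand-written rules.

```python
# A minimal sketch of redacting obvious PII from a prompt before it is sent
# to an external LLM. The patterns are illustrative and far from exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarize the claim filed by jane.doe@example.com, SSN 123-45-6789."))
# Summarize the claim filed by [EMAIL REDACTED], SSN [SSN REDACTED].
```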

AI Toxicity

Large language models have achieved commendable advancements in the area of AI-powered chatbots. These bots help enterprises enhance customer experience and make interactions more engaging. However, it is equally crucial for enterprises to keep their AI chatbots non-toxic, which is a difficult feat for LLMs that rely heavily on prompts and user-generated queries for training.

Toxicity refers to the possibility of an LLM producing hateful, derogatory, abusive, or otherwise harmful outputs. Spreading abuse and hate speech is unethical in itself, and operating a model that spreads toxicity can also be illegal and damaging to the organization’s reputation.

A well-known example of a toxic LLM is Microsoft’s AI chatbot Tay. The chatbot was intended as a “conversational experiment” in which the bot would be continuously trained through “casual and playful conversation.” The experiment didn’t go as planned: swarms of users trolled the chatbot with misogynistic and hateful remarks, which it soon began to echo, and it was taken offline within a day.
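
In practice, enterprises often place a moderation gate on both sides of the chatbot. The sketch below outlines that pattern; the toxicity_score function here is a placeholder assumption standing in for a trained toxicity classifier or a moderation API, and the threshold is arbitrary.

```python
# A minimal sketch of a moderation gate around a chatbot. The toxicity_score
# function is a placeholder assumption; in practice it would call a trained
# toxicity classifier or a moderation API.
BLOCK_THRESHOLD = 0.7

def toxicity_score(text: str) -> float:
    """Hypothetical scorer returning a toxicity estimate in [0, 1]."""
    flagged_terms = {"hate", "slur"}  # placeholder vocabulary only
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.8)

def respond(user_message: str, generate) -> str:
    # Screen the incoming prompt before it reaches the model, and screen the
    # model's reply before it reaches the user.
    if toxicity_score(user_message) >= BLOCK_THRESHOLD:
        return "Sorry, I can't engage with that request."
    reply = generate(user_message)
    if toxicity_score(reply) >= BLOCK_THRESHOLD:
        return "Sorry, I can't share that response."
    return reply

# `generate` stands in for whatever model call the chatbot actually makes.
print(respond("What are your support hours?", lambda msg: f"Echo: {msg}"))
```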

The Significant Role of Ethical AI Frameworks

As the potential risks of AI surfaced in a growing list of “AI-gone-wrong” cases, world leaders and tech giants acknowledged the need for new regulations and, in particular, AI risk management frameworks. These frameworks are intended to help enterprises focused on AI advancement identify, assess, and mitigate risks in AI systems to ensure the responsible use and development of AI technologies.

Let’s take a quick look at some of the popular frameworks to get a high-level overview of how AI risk frameworks help develop trustworthy AI systems.

NIST AI Risk Management Framework

The National Institute of Standards and Technology (NIST) rolled out the initial version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0) in January 2023. The framework is aimed at government agencies and private enterprises, equipping them with a set of strategic guidelines on identifying and mitigating AI risks while enabling their responsible development and use.

The framework is divided into two core parts. The first part introduces organizations to the categories of harm that may emerge from the use or implementation of AI systems, such as harm to people, harm to an organization, and harm to an ecosystem. It further discusses challenges enterprises may face when applying the framework, such as risk measurement, risk tolerance, and risk prioritization.

The second part of the framework describes the “Core”, defined as a set of distinct functions that enable organizations to address AI risks. The Core functions are Govern, Map, Measure, and Manage. The Govern function applies holistically across an organization’s entire risk management practice, while the remaining functions can be applied to specific AI systems or stages of the AI lifecycle.
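
To make the Core more tangible, the sketch below shows how an organization might track example activities under each function. The activity names are assumptions made for this sketch, not text quoted from the NIST framework itself.

```python
# An illustrative (not official) way to track example activities under the
# four Core functions. The activity names are assumptions made for this
# sketch, not text quoted from the NIST framework.
AI_RMF_CORE = {
    "Govern":  ["Define AI risk policies", "Assign accountability for AI systems"],
    "Map":     ["Inventory AI systems", "Document intended context of use"],
    "Measure": ["Track bias and performance metrics", "Record risk assessments"],
    "Manage":  ["Prioritize identified risks", "Plan incident response"],
}

def open_items(status: dict) -> list:
    """Return Core activities not yet marked complete in `status`."""
    return [activity
            for activities in AI_RMF_CORE.values()
            for activity in activities
            if not status.get(activity, False)]

# Example: only two activities are done so far; the rest still need owners.
print(open_items({"Define AI risk policies": True, "Inventory AI systems": True}))
```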

Learn More About NIST AI RMF 1.0

Presidio AI Framework

The AI Governance Alliance (AIGA) introduced the Presidio AI Framework at the World Economic Forum’s (WEF) 54th annual meeting in Davos-Klosters in January 2024. The framework aims to provide a standard methodology for managing the lifecycle of Generative AI models.

The Presidio AI Framework provides a comprehensive set of guidelines intended to act not merely as a static strategy but as an overarching vision guiding the ethical principles and values that drive the development of future AI capabilities.

These guidelines are built around three critical tenets: the expanded AI lifecycle, comprehensive risk guardrails, and shift-left methodology.

Each of these provides vital operational, functional, and philosophical considerations for AI development to ensure a nuanced and proactive approach without compromising on innovation and reliability.

Moreover, adopting these three critical considerations enables greater collaboration within the organization, allowing for dynamic cooperation across teams and stakeholders that not only incorporates diverse perspectives and expertise but also promotes shared responsibility and accountability.

Learn More About Presidio AI Framework

EU Artificial Intelligence Act

The European Union Artificial Intelligence Act, also referred to as the EU AI Act, is a comprehensive legislation that aims to balance innovation with ethical considerations, addressing both the challenges and opportunities of AI technologies. The objective of the Act is to ensure that the AI systems in the EU are safe, uphold fundamental rights, foster legal clarity for AI investment and innovation, enhance governance and enforcement, and create a unified market for trustworthy AI, preventing market fragmentation. The Act establishes obligations for AI actors—including providers, deployers, importers, distributors, and manufacturers—ensuring accountability throughout the AI system lifecycle. It also applies to non-EU entities, like those in Switzerland, if their AI system output is intended for use in the EU.

The Act adopts a risk-based approach to governing AI systems across their lifecycle, classifying them into four categories: Unacceptable Risk (prohibited uses), High-Risk AI Systems, Limited-Risk Systems, and Minimal- or No-Risk Systems.

Under the EU AI Act, providers of AI systems must ensure staff AI literacy and adhere to compliance protocols, including documenting AI assessments and EU database registration.  For high-risk AI systems, additional obligations include clear labeling, maintaining documentation, implementing quality management systems, and performing conformity assessments. Deployers must ensure systems are used correctly and assess risks, while importers and distributors are required to verify compliance and handle non-compliant systems responsibly.

Article 70 of the Act specifies that each EU member state shall establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities.

Under the AI Act, penalties for breaches include fines up to €35 million or 7% of global annual turnover for major prohibitions, up to €15 million or 3% for non-compliance with high-risk AI systems, and up to €7.5 million or 1% for providing false information. SMEs will pay the lesser applicable amount.
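
As a rough worked example of how those ceilings scale, the sketch below computes the applicable fine cap from the tier amounts listed above. The turnover figures are invented for illustration, and the SME treatment is simplified to “the lesser applicable amount” as described in this article.

```python
# A rough worked example of the fine ceilings described above. Turnover
# figures are invented, and the SME rule is simplified to "the lesser
# applicable amount" as stated in this article.
TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "false_information":     (7_500_000, 0.01),
}

def fine_ceiling(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    fixed, pct = TIERS[tier]
    pct_amount = annual_turnover_eur * pct
    # Larger firms face whichever figure is higher; SMEs the lesser of the two.
    return min(fixed, pct_amount) if is_sme else max(fixed, pct_amount)

print(fine_ceiling("prohibited_practice", 2_000_000_000))       # 140,000,000.0
print(fine_ceiling("high_risk_obligations", 50_000_000, True))  # 1,500,000.0
```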

Learn More About the Global AI Risk Management Frameworks

Best Practices to Ensure Ethical AI

Manage Trust, Risk, and Security in AI Models with Gartner’s AI TRiSM

Industry experts have developed robust AI risk management frameworks to tackle the complexities surrounding AI governance. One such framework that enterprises should champion is Gartner’s AI TRiSM. The acronym stands for AI Trust, Risk, and Security Management. It was developed to help enterprises gain complete visibility and understanding of how their AI models operate, to what extent they are consistent with the original purpose or goal, and what outcomes can be expected. The following are its core pillars.

Explainability/Model Monitoring

Transparency is paramount to the safe development and use of AI models or GenAI applications. It is critical to have complete insights into AI systems' functionality and how they process information and use that analysis to make decisions. AI model explainability further enables enterprises to meet the legal requirements of transparency as provided under various data protection and AI regulations.

Model Operations

With AI system governance and classification, enterprises can improve AI models throughout their lifecycle. ModelOps gives teams better insight into how models are refined and tuned after deployment.
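
One concrete ModelOps practice is monitoring how a model's behavior shifts after deployment. The sketch below compares the average prediction score in a live window against a baseline window; the scores and the drift threshold are illustrative assumptions, not a prescribed standard.

```python
# A minimal sketch of one post-deployment check: comparing the live average
# prediction score against a baseline window. Scores and threshold are
# illustrative assumptions, not a prescribed ModelOps standard.
from statistics import mean

def score_drift(baseline_scores, live_scores, threshold=0.1):
    """Flag the model for review if its average score shifts beyond the threshold."""
    shift = abs(mean(live_scores) - mean(baseline_scores))
    return {"shift": round(shift, 3), "needs_review": shift > threshold}

baseline = [0.62, 0.58, 0.65, 0.60, 0.59]  # scores captured at deployment time
live     = [0.81, 0.77, 0.84, 0.79, 0.80]  # scores from the current week

print(score_drift(baseline, live))
# {'shift': 0.194, 'needs_review': True}
```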

AI Application Security

AI models are powered by high volumes of data, which may include personally identifiable information (PII) and other sensitive data. Hence, it is crucial to protect not only the model against unauthorized access or tampering but also the data used to train or fine-tune it.

Model Privacy

Businesses must develop policies and processes to enable the safe collection, storage, and usage of data for training or fine-tuning AI models. As governments worldwide are turning to strict AI laws to better control AI systems’ development and use, it is imperative to have policies and controls in place to demonstrate compliance and enhance trust.

Enable Responsible Use of AI with Our AI Security & Governance Solution

For enterprises to effectively ensure the ethical design, development, and use of AI systems, Securiti suggests the following five essential steps.

  1. Discover AI Models: Discover and catalog AI models (sanctioned and unsanctioned) running across your public and private clouds and SaaS environments; an illustrative inventory record is sketched after this list.
  2. Assess AI Models’ Risks: Evaluate the risks associated with your AI models and the related data from your environment and classify models according to global laws and standards.
  3. Map Data+AI Flows: Continuously monitor the data flow by connecting the models with processes, policies, vendors, and risks.
  4. Implement Data+AI Controls: Establish privacy, governance, security, and compliance controls on model inputs and outputs.
  5. Comply with Global Regulations: Conduct assessments to comply with global frameworks such as the NIST AI RMF and the EU AI Act.
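
For illustration, a catalogued model from step 1 might be captured as a record like the one sketched below. The field names and values are assumptions made for this example rather than an actual Securiti schema.

```python
# A minimal sketch of what a catalogued AI model record might contain.
# Field names and values are illustrative assumptions, not a Securiti schema.
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    name: str
    environment: str                 # e.g. public cloud, private cloud, SaaS
    sanctioned: bool
    training_data_sources: list = field(default_factory=list)
    risk_classification: str = "unassessed"   # e.g. an EU AI Act tier
    applicable_frameworks: list = field(default_factory=list)

inventory = [
    AIModelRecord(
        name="support-chatbot",
        environment="SaaS",
        sanctioned=True,
        training_data_sources=["support tickets", "product docs"],
        risk_classification="limited risk",
        applicable_frameworks=["NIST AI RMF", "EU AI Act"],
    ),
]
print(inventory[0])
```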

AI Security and Governance, part of the Securiti Data Command Center, enables enterprises to democratize data and AI ethically and responsibly through contextual data+AI intelligence and automated controls. Our solution aligns with the industry's leading risk management approaches and frameworks, such as AI TRiSM and the NIST AI RMF, providing enterprises with:

  • Complete visibility into their AI landscape.
  • Deep insights into risks associated with AI systems and data.
  • Detailed oversight of AI data processing.
  • Adequate safeguards around AI models.
  • Compliance with global AI regulations and frameworks.
