Generative AI remains a fascinating yet largely unexplored territory for most businesses and organizations. While it has enabled remarkable leaps in computational capability and productivity, it can only continue to make such positive contributions if it is used and managed responsibly. "Responsibly" is the key word, given how many organizations have found themselves violating data and AI regulations through misuse of AI or misrepresentations about how they deploy it.
Texas Attorney General Ken Paxton recently investigated Pieces Technology over such misrepresentations. Pieces is a healthcare AI firm based in Dallas that claims to provide solutions connecting health and community, and several of its GenAI products were in use at hospitals across Texas. The Attorney General's investigation found that Pieces made several critical misrepresentations about the exact capabilities of its products and the internal metrics used to measure their effectiveness.
This is the first investigation in which an organization has faced such scrutiny over its development of AI-driven solutions in the healthcare sector. Naturally, it will set a significant precedent, both for the healthcare industry itself and for organizations developing and deploying AI products in healthcare environments.
However, Pieces will not face a financial penalty. The Attorney General has announced a settlement with Pieces that places several disclosure-related obligations on the company with respect to its products and services.
Interestingly, this settlement comes just days after the Federal Government published its report on the benefits and risks of GenAI usage in the healthcare sector.
Together, these developments highlight how seriously regulators at both the state and federal levels take GenAI usage within healthcare. GenAI usage and applicability will only increase with time, making regulatory oversight and guidance ever more urgent: organizations need clarity on how to leverage GenAI capabilities within their operations in a manner that is safe, reliable, and, of course, compliant with regulatory requirements.
Read on to learn more about the details of the investigation, its findings, and, most importantly, the lessons other organizations can draw from this episode.
Overview of the Investigation
In their press release, the Texas Attorney General described the settlement with Pieces Technology as the "first of its kind," and it is easy to see why.
Pieces Technology is a healthcare artificial intelligence and technology company whose products and services, in its own words, "summarizes, charts, and drafts clinical notes for your doctors and nurses…so they don't have to". In other words, Pieces claimed its services allow doctors and other healthcare professionals to quickly draw relevant insights from a barrage of patient data. In theory, this not only allows for swifter action but can also be critical in urgent cases, where time can be the difference between life and death.
Among other claims, Pieces asserted that its services demonstrated a "severe hallucination rate" of less than 1 in 100,000 outputs, or 0.001%. Hallucination refers to instances where a Generative AI model produces an entirely false or inaccurate output but presents it as fact.
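To put that figure in perspective, here is a minimal, purely illustrative sketch of how a severe hallucination rate might be measured against such an advertised bound. The counts, function name, and numbers below are hypothetical assumptions, not drawn from Pieces' actual evaluations; the point is simply that even verifying a claim of less than 1 in 100,000 would require clinically reviewing an enormous number of outputs.

```python
# Illustrative only: the counts below are hypothetical, used to show what a claimed
# severe-hallucination rate of "less than 1 in 100,000 (0.001%)" means numerically.

CLAIMED_MAX_RATE = 1 / 100_000  # 0.001%, the advertised upper bound

def observed_hallucination_rate(severe_hallucinations: int, total_outputs: int) -> float:
    """Fraction of reviewed outputs flagged as severe hallucinations."""
    if total_outputs == 0:
        raise ValueError("no outputs reviewed")
    return severe_hallucinations / total_outputs

# Hypothetical review: 3 severe hallucinations found across 50,000 reviewed summaries.
rate = observed_hallucination_rate(severe_hallucinations=3, total_outputs=50_000)

print(f"Observed rate: {rate:.4%} vs. advertised bound: {CLAIMED_MAX_RATE:.4%}")
print("Within advertised bound" if rate <= CLAIMED_MAX_RATE else "Exceeds advertised bound")
```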
These were bold claims, considering that even the titans of the tech industry continue to struggle with hallucinations and other issues in GenAI outputs. Nonetheless, they were enough to get Pieces' GenAI products into several Texas hospitals, at least four of which used Pieces' technology to analyze patient data in real time.
The Texas Attorney General's subsequent investigation found that Pieces Technology's claims about the accuracy and effectiveness of its products were inaccurate. Making such false representations also violated provisions of the Texas Deceptive Trade Practices—Consumer Protection Act (DTPA).
The DTPA allows for civil penalties of up to $10,000 per violation, along with injunctive relief. However, the Attorney General's office and Pieces have agreed to a settlement in the form of an Assurance of Voluntary Compliance.
Under this settlement, Pieces will avoid a monetary fine but must undertake the following measures for a period of five years:
- Provide clear and conspicuous disclosures in all its advertising materials related to its output metrics, benchmarks, and other internal measures, which must be consistent with the findings of an independent auditor;
- Provide clear and conspicuous disclosures to all current and future customers regarding the harmful or potentially harmful uses and misuses of its products and services;
- Provide clear and conspicuous disclosures to all current and future customers regarding the risks and limitations of its products and services;
- Respond to current and future requests from the Attorney General for further information related to compliance with the Assurance within 30 days of receipt of the written request;
- Avoid false and misleading claims about the accuracy, reliability, or effectiveness of its products or services.
Lessons to be Learnt
Here are the most important lessons an organization can learn from the Pieces settlement:
Transparency Above All
While Pieces may have dodged a financial penalty, the reputational damage and its consequences can still be severe. Its settlement with the Attorney General highlights the critical importance of objectivity and transparency in AI development.
Organizations must adopt a culture of transparency around how they develop their AI products and services, including providing insight into their internal performance and other assessment metrics. Pieces' claims exaggerated the effectiveness of its products, leading to their prompt adoption in several Texas hospitals. While no adverse incidents attributable to Pieces' AI products were reported, the misrepresentation was still a significant ethical breach.
Furthermore, this culture of transparency must go beyond the development phase and be maintained throughout the product lifecycle, especially with regard to the potential risks of using the products.
In the aftermath of the Pieces settlement, healthcare institutions are likely to take measures to avoid a repeat of such an episode, including stronger validation processes and demands for detailed documentation from AI vendors. Transparency about the development process behind AI products and services, how the underlying models are trained, and their exact capabilities and limitations may well become both a business and an ethical necessity.
Accountability in AI Development
While it is true that AI models can sometimes behave in unpredictable ways, the Pieces case was not an example of that. It can be summarized as an organization's failure to accurately inform potential customers of its products' and services' exact capabilities and limitations. This highlights the importance of holding organizations accountable for the tools they create, especially those driven by AI.
Under its settlement with the Attorney General, Pieces must ensure its products live up to any promises made in its advertising materials. These cover individual product features as well as critical back-end information such as model accuracy, error reports, and regular assessment results. To meet such obligations, organizations must implement appropriate accountability mechanisms throughout the AI development process, from initial design to the post-deployment phase.
Such a process not only increases an organization's ability to detect and prevent errors before they cause critical damage but also fosters a culture of continuous improvement. It can and should include additional steps such as regular training and appropriate resources for staff, ensuring they stay up to date on the latest developments and improvements to the products they work on.
This, in turn, supports a more effective internal review process in which all stakeholders are well informed about the capabilities, limitations, and risks of their products and services, as in the sketch below.
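As a purely illustrative sketch, an accountability mechanism of this kind could be as simple as an automated check that compares observed performance against advertised claims and surfaces any gap for internal review. The metric names, thresholds, and values below are hypothetical assumptions, not Pieces' or any vendor's actual figures.

```python
# Hypothetical post-deployment check: compare observed metrics against what advertising
# materials promise, so discrepancies surface in internal review rather than in an
# external investigation. All names and numbers here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MetricClaim:
    name: str              # e.g. "severe_hallucination_rate"
    advertised_max: float  # the upper bound promised in advertising or disclosures

def audit_claims(claims: list[MetricClaim], observed: dict[str, float]) -> list[str]:
    """Return a finding for every advertised claim the observed data fails to support."""
    findings = []
    for claim in claims:
        value = observed.get(claim.name)
        if value is None:
            findings.append(f"{claim.name}: no measurement available to support the claim")
        elif value > claim.advertised_max:
            findings.append(
                f"{claim.name}: observed {value:.5%} exceeds advertised {claim.advertised_max:.5%}"
            )
    return findings

if __name__ == "__main__":
    claims = [MetricClaim("severe_hallucination_rate", advertised_max=0.00001)]
    observed = {"severe_hallucination_rate": 0.00004}  # hypothetical monitoring result
    for finding in audit_claims(claims, observed):
        print("REVIEW:", finding)
```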
Urgency for Regulatory Oversight
While corporations will draw a range of lessons from this episode, it should also serve as yet another reminder to those charged with regulating AI, in the US and globally, of how urgent effective oversight of AI use has become.
It has been clear for some time that there needs to be a uniform, coordinated approach to regulating AI that does not inhibit innovation. The difficulty is that such matters require extensive deliberation, the involvement of multiple stakeholders, and agreement on compromises among all involved. These challenges are further exacerbated by the dynamic nature of AI, with almost every week heralding a new breakthrough in what AI is capable of.
All that being said, these challenges can no longer serve as excuses to avoid firm regulation that would deter other corporations and developers from following a path similar to Pieces', where market entry and profits are prioritized over integrity and the well-being of people.
Humans In The Equation
Human oversight remains an important piece of the responsible AI usage puzzle. The Pieces episode reiterates its importance regardless of how effective AI solutions become: complete reliance on AI-generated outputs and recommendations could have seriously jeopardized patient well-being.
Human expertise must always play a decisive role in interpreting and validating AI-generated results; such involvement is necessary to ensure that AI tools are used in a manner that is effective and respects patients' wishes. This places the onus on organizations adopting such AI-powered products and services to appropriately train their staff on proper use, limitations, and the other risks involved.
Lastly, human oversight serves as an effective means of mitigating the risk of AI errors. This requires the humans providing oversight to be knowledgeable enough in their field to identify AI errors, however minor they may seem. Used this way, AI becomes a supplement to human expertise rather than a replacement for it, which is especially important in sensitive fields such as healthcare.
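The following is a minimal, hypothetical sketch of such a human-in-the-loop arrangement; the field names, confidence threshold, and review queues are assumptions for illustration only. AI-generated drafts are routed to clinicians based on the model's reported confidence, and nothing enters the record without explicit human sign-off.

```python
# Hypothetical human-in-the-loop gate: AI-generated clinical drafts are never
# auto-finalized. Low-confidence drafts are escalated, and every draft requires
# clinician approval before it enters the record. Names and threshold are assumed.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # assumed confidence cutoff for routine vs. priority review

@dataclass
class DraftSummary:
    patient_id: str
    text: str
    model_confidence: float  # assumed to be reported by the generating model

def route_for_review(draft: DraftSummary) -> str:
    """Decide which human review queue a draft goes to; none are skipped."""
    if draft.model_confidence < REVIEW_THRESHOLD:
        return "priority_clinician_review"   # flagged as higher risk
    return "standard_clinician_review"       # still requires sign-off

def finalize(draft: DraftSummary, clinician_approved: bool) -> bool:
    """Only a clinician's explicit approval allows the draft into the record."""
    return clinician_approved

draft = DraftSummary("patient-001", "Example draft summary...", model_confidence=0.82)
print(route_for_review(draft))  # -> priority_clinician_review
```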
Ethical AI Implementation
Above all else, this case illustrates just how complicated the ethical implications of AI implementation within healthcare are. While Pieces' primary violation was misrepresenting its products' capabilities, it is easy to see how other, arguably more serious, violations could occur. For organizations, this should highlight the critical necessity of ethical AI practices.
In recent months, ethical AI has evolved rapidly alongside developments in AI capabilities. It now goes beyond ensuring that algorithms and models are accurate; organizations must also work to guarantee fairness and neutrality, and to appropriately protect the data rights of the users whose data is used to train these algorithms and models.
Hallucinations and biases within AI models can have serious consequences. In the healthcare sector, they may result in a misdiagnosis, a wrongful discharge of a patient, or any number of recommendations and decisions with life-or-death stakes. They may also result in the unfair and unequal treatment of patients based on factors such as race, gender, sexual preference, or socioeconomic status. Hence, the datasets used to train such models must be free of historical biases that would otherwise be perpetuated in the generated outputs.
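As an illustration, a basic subgroup audit can surface such disparities before deployment. The sketch below uses entirely hypothetical data and an assumed disparity threshold; it simply compares a model's error rate across patient groups and flags gaps that would warrant further investigation.

```python
# Illustrative subgroup audit with hypothetical data: compare a model's error rate
# across patient groups and flag disparities above an assumed threshold.
from collections import defaultdict

DISPARITY_THRESHOLD = 0.05  # assumed maximum acceptable gap in error rates

def error_rates_by_group(records: list[dict]) -> dict[str, float]:
    """records: each has a 'group' label and a boolean 'model_error' flag."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += int(r["model_error"])
    return {g: errors[g] / totals[g] for g in totals}

records = [
    {"group": "A", "model_error": False}, {"group": "A", "model_error": True},
    {"group": "A", "model_error": False}, {"group": "A", "model_error": False},
    {"group": "B", "model_error": True},  {"group": "B", "model_error": True},
    {"group": "B", "model_error": False}, {"group": "B", "model_error": False},
]
rates = error_rates_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates, "disparity flagged" if gap > DISPARITY_THRESHOLD else "within threshold")
```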
Furthermore, it is just as important that models be explainable and easy to understand for those who use them directly. Organizations developing these models must ensure the timely availability of appropriate resources explaining how and why AI models make certain recommendations, what factors they take into account, and, most importantly, what the chances are of a recommendation being biased or otherwise affected in a manner that could harm the patient.
How Securiti Can Help
Securiti is the pioneer of the Data Command Center, a centralized platform that enables the safe use of data+AI. It provides unified data intelligence, controls, and orchestration across hybrid multicloud environments. Some of the world's most renowned and prestigious corporations rely on Securiti's Data Command Center for their data security, privacy, governance, and compliance needs.
The Data Command Center comes equipped with several individual modules and solutions designed to ensure effective compliance with an organization's obligations under all major data and AI regulations globally.
The AI Security & Governance module enables organizations to discover and catalog all the AI models in use across public and private clouds as well as SaaS applications. These cataloged models can then be mapped to their data sources, potential risks, and relevant compliance obligations. Furthermore, the module enables automated assessments that help ensure compliance with frameworks and regulations such as the NIST AI RMF and the EU AI Act, reducing an organization's legal and reputational risks.
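For illustration only, and not a representation of Securiti's actual schema or APIs, the kind of inventory record such a catalog maintains might conceptually resemble the following sketch, where each discovered model is tied to its data sources, identified risks, and applicable obligations.

```python
# Purely illustrative (not Securiti's actual schema or API): a generic record an
# AI inventory might keep so each discovered model can be mapped to its data
# sources, risks, and applicable frameworks or regulations.
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    name: str
    environment: str                                   # e.g. "private cloud", "SaaS app"
    data_sources: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    applicable_frameworks: list[str] = field(default_factory=list)

record = AIModelRecord(
    name="clinical-summary-generator",
    environment="private cloud",
    data_sources=["ehr_notes", "lab_results"],
    identified_risks=["hallucination", "PHI exposure"],
    applicable_frameworks=["NIST AI RMF", "EU AI Act"],
)
print(record)
```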
Request a demo today to learn more about how Securiti can help you ensure that your AI development, deployment, and usage practices comply with the relevant regulations.