US Treasury Examines AI-Related Security Risk in FinServ: What You Need to Know

Published April 8, 2024


The rampant adoption of GenAI is changing the data landscape, offering untold value for organizations looking to drive efficiency and unlock business insights using GenAI systems, but also introducing uncharted security concerns. Those concerns have already inspired global regulatory action, including the recent adoption of the EU AI Act and the Biden administration's October 2023 Executive Order on ensuring the "safe, secure, and trustworthy" use of AI.

At the direction of that order, the US Treasury has released a report entitled “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector” on the importance of responsibly managing AI-related cybersecurity risk in the FinServ industry, noting that “the financial services sector is increasingly subject to costly cybersecurity threats and cyber-enabled fraud.” The report, published March 27, 2024, recognizes the value that responsible technological advancement and innovation represents in FinServ while outlining the risks that accompany emerging technologies, and ultimately urging financial institutions to address AI-related risk, cybersecurity, and fraud threats.

Why it matters: The report lays the groundwork for future regulations

Financial services professionals might wonder what relevance or immediate applicability a Treasury report that is not a regulation has in the here and now. In the era of continuous regulatory oversight, regulators worldwide and at the US federal and state levels use these reports to signal to the industry what changes are on the horizon and may soon be mandated.

While not all the items in the report will ultimately turn into regulations, many will. This signaling helps companies prepare for regulation without losing focus on their core business, and it gives regulators the benefit of the industry's response and commentary. Not every recommendation in this report will become a regulatory requirement, but the report does tell us that regulators will be more focused on the risks of GenAI than ever before, and will be moved to take action in that direction.

Recommendations from the report

In examining direct and indirect AI-related challenges and opportunities, the Treasury’s report identifies ten recommendations and takeaways for agencies, regulators, and private FinServ companies to consider in addressing immediate AI-related risk. They are:

  1. The need for a common AI lexicon: Establishing a much-needed AI-specific lexicon and consistency across the sector in what “artificial intelligence” is would benefit financial institutions, regulators, and consumers.
  2. Addressing the growing capability gap: When it comes to developing in-house AI systems, large financial organizations far outpace small ones due to the disparity in data resources, and those that have migrated to the cloud also hold an advantage over those that have not.
  3. Narrowing the fraud data divide: As with the capability gap, large institutions have more historical data than small ones, giving them an advantage in building in-house anti-fraud AI models.
  4. Regulatory coordination: With FinServ’s highly complicated and overlapping international, federal, and state-level regulatory landscape, financial institutions and regulators are collaborating on how to address oversight concerns and regulatory fragmentation in a rapidly changing AI environment.
  5. Expanding the NIST AI RMF: Opportunities exist to expand and tailor the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) to include more content on AI governance and risk management for financial services.
  6. Best practices for data supply chain mapping and “nutrition labels”: There is a need to develop best practices for mapping data supply chains and to standardize descriptions — similar to nutrition labels on food — for what data was used to train an AI model, where the data originated, and how the data is being used for AI systems and data providers.
  7. Explainability for black-box AI solutions: Additional research and development is needed on explainability solutions for black-box systems such as generative AI, covering inputs, outputs, and robust testing.
  8. Gaps in human capital: The AI workforce talent gap needs to be addressed by establishing best practices for using AI systems safely for less skilled practitioners and providing role-specific AI training for employees in legal, compliance, and other fields outside of information technology.
  9. Untangling digital identity solutions: Establishing robust international, national, and industry digital identity technical standards can help financial institutions combat fraud and strengthen cybersecurity.
  10. International coordination: The view ahead for AI regulation in financial services remains murky. The Treasury will continue to engage with foreign counterparts on the risks and benefits of AI in financial services.

While these guidelines and takeaways provide an excellent framework for scoping out the road ahead, many of them are easier said than done. Financial institutions have a lot of moving parts (to use an understatement), so addressing these recommendations — even those that are under their purview — is less a matter of “if” they should than “how” they should. At the same time, there’s no doubt that, as AI-related compliance becomes increasingly critical, now is the time for these companies to look at their data environments with an eye toward responsible AI practices.

Top challenges to managing AI-specific cyber threats in FinServ

Technology advancements from AI systems are dependent on data, and while AI systems present new security challenges, many of the challenges that financial institutions face in utilizing safe and secure AI systems have data at their core. CISOs, CDOs, and CPOs in FinServ often struggle to effectively and comprehensively protect sensitive data, establish and enforce safe AI governance, and achieve compliance when it comes to orchestrating GenAI systems — all while enabling the scalability of sound data + AI practices. The top challenges preventing financial institutions from moving forward with safe, trustworthy AI include:

Uncontrolled and shadow AI: AI models function on enormous amounts of often unknown data and cannot “forget” or “unlearn” what has already gone into their training. Furthermore, since the models run not just on straightforward code but also on logic learned from that data, their output is a moving target. Without the right controls and oversight in place, financial institutions encounter risks related to this data opacity, as well as blind spots, shadow AI systems, uncontrolled interactions, compliance violations, and other vulnerabilities.

Sprawling, dark, and uncontrolled data: Before financial institutions can even begin to effectively determine and intelligently prioritize AI-specific security threats to their data environment, they must have consistent controls across all their data + AI environments — and know where their sensitive data lives. If organizations lack a comprehensive view into their data environment, the conversation around managing AI is a non-starter. While this is no insignificant challenge, a back-to-basics approach that involves discovering and mapping all data at scale across the enterprise — spanning on-prem, hybrid multicloud, and third-party environments — is a necessary step for some financial institutions as they look to innovate by bringing AI into the fold.

Siloed operations: Addressing regulatory trends requires context across lines of business, organizations, and data. Most financial institutions are a conglomeration of acquisitions and lines of business, with very little interaction or commonality between them — and their data tells the same story. Before unlocking trustworthy contextual insights from their data, organizations must gain a holistic view into all their data, unlocking the ability to apply a common grammar and taxonomy to classification systems.

Overprivileged access: Linking user permissions back to identity and ensuring least privileged access across an enterprise is becoming more of a challenge. Permissive approaches that grant wide access to internal users put sensitive information at risk, and having overprivileged users who have the ability to access data that they don’t use increases vulnerability.

Managing breach response: Being secure does not mean being immune to a breach incident, but it does require managing the incident with a fast, transparent, comprehensive, and compliant response. Breach response is more important than ever, especially in cloud and AI environments that move very quickly. Financial institutions need to know which data was compromised, where it is located, and what sensitive information it contains — and manage their response effectively and efficiently.

Prioritizing remediation: Remediation of vulnerable data is a heavy lift for any enterprise, and it’s become too much for a person, group of people, or team to handle. Automated processes are necessary to successfully identify, track, prioritize, and address data and systems that require remediation.

Navigating compliance: The regulatory landscape is complex and uncertain, and it is growing more layered and nuanced all the time, at a faster rate than ever before. Many financial institutions have gaps in their compliance processes, leaving them open to penalties, legal fees, and damaged brand reputation. Achieving and maintaining compliance in the fast-moving world of data + AI requires continuous regulatory monitoring.

Complete Contextual Data and AI Intelligence Is Essential

With the right data infrastructure in place, organizations can protect all of their data + AI environments across the enterprise — cloud, on-prem, SaaS, third-party — at scale, finding and mapping sensitive data wherever it lives, defining taxonomies to create much-needed standardization; establishing cross-coordinated operations among lines of business as well as privacy, governance, security, and compliance teams; enabling safe data sharing; and effectively expanding the use of their data environment to unlock more trustworthy insights that will drive innovation and translate to quantifiable business value. Protecting data + AI and minimizing risk is the baseline for success, but turning data from a risk into an asset is the promised next step — and well within financial institutions’ reach.

FinServ companies need a unified Data Command Center that allows them to centralize data + AI operations and answer such key questions as what sensitive data exists in which of their cloud, on-prem, and third-party and vendor environments, who has access to it, which cross-border or other regulations apply to it, which AI models it is tied to, where it is located, who it belongs to, and so on.

Enter contextual data and AI intelligence

Financial institutions need to unlock deep contextual insight into their data and AI systems in order to apply the right controls and effectively manage and reduce risk. Understanding the context around data and gaining transparency into AI systems yields a better understanding of what goes into those systems and better control over how they operate.

Data mapping: Identify and define the relationships between different data elements across your databases and systems, and understand data flows across various applications and sources to integrate, consolidate, or migrate data effectively.
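The idea behind data mapping can be sketched in a few lines: index which systems hold each data element, so any element that appears in more than one place can be traced and given consistent controls. This is a minimal illustration only; the system and field names (core_banking, ssn, and so on) are hypothetical, not drawn from the report.

```python
# Minimal data-mapping sketch: index which systems hold which data elements.
# System and field names are illustrative assumptions, not real schemas.

def build_field_map(sources):
    """Return {field: set of systems that store it}."""
    field_map = {}
    for system, fields in sources.items():
        for field in fields:
            field_map.setdefault(field, set()).add(system)
    return field_map

sources = {
    "core_banking": ["customer_id", "ssn", "balance"],
    "crm":          ["customer_id", "email", "phone"],
    "fraud_engine": ["customer_id", "ssn", "device_id"],
}

field_map = build_field_map(sources)

# Elements held in more than one system must carry the same controls everywhere.
shared_elements = {f: s for f, s in field_map.items() if len(s) > 1}
```

In practice this index would be fed by automated discovery across databases and applications rather than a hand-written dictionary, but the output is the same: a map of where each element lives and which flows connect systems.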

Least privileged access: Enterprise security teams need to be able to manage internal access controls and ensure the least privileged access across the company. With unified data controls, they can gain granular insights into user permissions and access patterns, enforce stringent controls by dynamically masking sensitive data and enabling secure data sharing, and adhere to important compliance requirements across multiple global regulations.
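One common least-privilege technique is comparing permissions granted against permissions actually exercised, then flagging the difference for revocation. The sketch below assumes hypothetical users and permission names; real enterprise tooling would derive these from identity-provider and access-log data.

```python
# Least-privilege sketch: flag granted-but-unused permissions per user.
# User and permission names are illustrative assumptions.

def overprivileged(granted, used):
    """Return, for each user, permissions granted but never exercised."""
    return {user: perms - used.get(user, set()) for user, perms in granted.items()}

granted = {"alice": {"read", "write", "export"}, "bob": {"read"}}
used    = {"alice": {"read"},                    "bob": {"read"}}

stale = overprivileged(granted, used)
# stale["alice"] holds "write" and "export": candidates for revocation review.
```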

Data Security Posture Management (DSPM): Discover and catalog all unknown data in the cloud; gain contextual data insights like people ownership, regulatory obligations, and security and privacy metadata; identify security and privacy risks to the data; prioritize and remediate misconfigurations; prevent unauthorized data access; use data lineage to map data across structured and unstructured sources; reduce data breach risk; and secure data sharing.
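At the core of discovery and cataloging is classification: scanning records for sensitive data types so that context and controls can be attached. The sketch below uses two simple regex detectors as stand-ins; production DSPM relies on far richer classifiers (validation logic, ML models, metadata), and the patterns here are illustrative assumptions.

```python
import re

# Toy detectors for two sensitive data types. Real DSPM classification is far
# more sophisticated; these regexes are illustrative only.
PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(record):
    """Tag a free-text record with the sensitive data types found in it."""
    return {label for label, rx in PATTERNS.items() if rx.search(record)}

tags = classify("Contact jane@example.com, SSN 123-45-6789")
```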

Breach remediation: Prioritize data security risks before a breach happens by remediating the riskiest violations first, be in a better position to detect threats when they arise, respond quickly in the event of a breach, and manage potential breach incidents in an efficient, smooth, compliant manner.

Risk ranking: Even if financial institutions have visibility into their data + AI systems, they may not know their systems’ risk ratings. Organizations need to unlock clear visibility into the risk their systems pose — especially the various risk parameters around each of their AI models so they can identify and mitigate potential risks effectively and understand which AI systems they should sanction and which they should block.
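One simple way to turn risk parameters into an actionable ranking is a weighted score per AI system. The factors and weights below are hypothetical assumptions for illustration, not anything prescribed by the Treasury report; an organization would choose its own parameters and thresholds for sanctioning or blocking systems.

```python
# Risk-ranking sketch: combine per-system risk factors (each scored 0-1) into
# a single weighted score. Factors and weights are illustrative assumptions.
WEIGHTS = {
    "sensitive_data":     0.4,  # how much sensitive data the system touches
    "external_exposure":  0.3,  # whether outputs reach external users
    "explainability_gap": 0.2,  # how opaque the model is
    "vendor_dependency":  0.1,  # reliance on third-party components
}

def risk_score(factors):
    """Weighted sum of 0-1 risk factors; missing factors count as 0."""
    return sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)

systems = {
    "chatbot": {"sensitive_data": 0.9, "external_exposure": 1.0,
                "explainability_gap": 0.8, "vendor_dependency": 0.7},
    "doc_summarizer": {"sensitive_data": 0.3, "external_exposure": 0.0,
                       "explainability_gap": 0.5, "vendor_dependency": 0.2},
}

# Highest-risk systems first: these get reviewed for sanction or blocking.
ranked = sorted(systems, key=lambda s: risk_score(systems[s]), reverse=True)
```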

Compliance and audit: Comply with increasingly complex, growing, overlapping US federal, state, and global regulations while meeting your own organization’s reviews for internal security controls, policies, and procedures.

The time is now for FinServ to put the proper guardrails in place

Whether the US Treasury report puts the writing on the wall for FinServ organizations about future regulations coming their way or financial institutions just have the good sense to understand that establishing trustworthy, secure, responsible, and compliant AI practices is ultimately good for business, now is the time for financial institutions to start paying attention, especially if they have not historically made data compliance, governance, security, and privacy a priority. The adoption of AI means, more than ever, that the data risks and regulations facing FinServ are accelerating, and so is the opportunity open to these institutions.

Putting guardrails in place now is not only an important move toward best practices, but is critical for achieving and maintaining a competitive advantage — or even staying relevant — in a world where the bar for required protections is rising every day and the ceiling for value is still undetermined. Enterprise leaders from CISOs to CDOs to CPOs will find value in the cross-coordination, expanded data use, and improved transparency and data protection that the journey toward compliance unlocks, as their responsible use of AI generates more trustworthy insights, better-informed decisions, more efficient innovation, and an elevated reputation for their brand.

Want to learn more about the next steps? Check out Securiti's "Navigating AI Compliance: An Integrated Approach to the NIST AI RMF & EU AI Act" to fortify yourself, your team, and your company with the knowledge, strategies, and next steps you need to unleash the power of GenAI data, safely.

Explore AI Governance Center https://securiti.ai/ai-governance/
