Speaking session
AI Demands a New Risk Architecture: Why Privacy Must Orchestrate the Convergence
AI is accelerating faster than our governance models can adapt, and it’s exposing the fault lines in how organizations manage risk.
Right now, privacy, security, and risk teams are often solving for the same data with different playbooks. GenAI systems like Microsoft Copilot, Amazon Q, and internal LLM deployments don’t create neatly categorized issues; they trigger privacy, security, legal, and ethical risks simultaneously. Treating those risks as separate domains no longer works. It slows oversight, scatters accountability, and leaves real gaps in control.
This is where privacy leaders can and must step forward. We already understand data at its most granular, contextual, and consequential levels. That makes us uniquely positioned to architect a new model of converged data risk governance: one that unifies oversight across domains without diluting the expertise of each.
In this session, we will explore how privacy leaders can drive this shift by:
- Reframing privacy from a compliance function to a core node in enterprise risk architecture
- Designing shared risk taxonomies and intelligence systems across functions
- Harmonizing legal and technical controls to govern AI systems at scale
- Creating unified command structures that enable fast, coordinated risk decisions
As AI transforms the enterprise, risk itself must converge. Privacy professionals are best placed to design that future.
Cassandra Maldini
VP, AI Governance & Privacy, Securiti