AI Risk Management
Keynote Speaker
AI deployment is outpacing AI risk management in most organizations. Mark Lynd is a Top 5 global AI thought leader and 5x CIO/CISO who covers model risk, AI governance, NIST AI RMF, and EU AI Act compliance in the language of risk professionals. Fee range: $15,000–$50,000+.
AI Is Being Deployed Faster Than It Is Being Governed.
Most organizations are deploying AI faster than their risk management programs can assess it. Models are making decisions that affect customers, employees, and partners — and the risk frameworks to govern those decisions are still being written.
AI risk is different from traditional enterprise risk. Models are opaque. Their behavior can be emergent and unexpected. The liability questions are novel. And the regulatory landscape is moving fast — the EU AI Act is in force, US frameworks are developing, and sector-specific rules are coming.
Mark Lynd advises enterprise executives on AI risk and governance daily at Netsync. As a Top 5 global AI thought leader and 5x CIO/CISO, he translates AI risk into the language that risk management professionals, compliance teams, and boards can act on.
AI Risk Management Keynote Topics
AI Risk Management: Building a Program That Keeps Pace with Deployment
Most AI risk management programs were designed for a world where AI deployment was slow and deliberate. Generative AI changed that. Mark covers how to build an AI risk management program that can assess and govern AI at deployment speed — without becoming a bottleneck.
Best for: Risk management conferences, ERM forums, compliance summits, technology risk events
Length: 45–90 minutes
The EU AI Act and What It Means for US Organizations
The EU AI Act is the world's first comprehensive AI regulation. It applies to any organization deploying AI that affects EU residents — which includes most large US enterprises. Mark covers the risk tiers, prohibited uses, compliance requirements, and what US organizations need to do now.
Best for: Compliance conferences, legal and regulatory forums, privacy events, enterprise risk summits
Length: 45–60 minutes
Model Risk: The AI Risk Category Most Enterprises Are Missing
Model risk — the risk that an AI model produces incorrect, biased, or harmful outputs — is one of the fastest-growing categories in enterprise risk. Mark covers model validation, bias testing, explainability requirements, and the governance structures that manage model risk at scale.
Best for: Financial services risk events, audit conferences, technology risk forums, CISO summits
Length: 45–60 minutes
Why Risk and Compliance Events Book Mark for AI Risk
Top 5 global AI thought leader — consistently ranked by Thinkers360
5x CIO/CISO — has managed AI risk from the executive seat
Current advisory work — advises enterprises on AI governance daily at Netsync
Risk language, not tech language — translates AI into probability, impact, and mitigation
Regulatory expertise — EU AI Act, NIST AI RMF, sector-specific frameworks
Practitioner, not researcher — operational experience with real AI deployments
Frequently Asked Questions
What AI risk management topics does Mark Lynd cover?
Mark covers model risk and AI decision liability, AI regulatory compliance (EU AI Act, NIST AI RMF, emerging US frameworks), AI bias and fairness risk, data privacy risk in AI systems, vendor AI risk and third-party model exposure, AI governance frameworks, and how to build an AI risk management program that keeps pace with deployment speed.
What is the NIST AI Risk Management Framework and why does it matter?
The NIST AI RMF is the US government's framework for managing AI risk across four functions: Govern, Map, Measure, and Manage. It provides organizations with a structured approach to identifying, assessing, and mitigating AI risks. Mark covers how to implement the NIST AI RMF in practice, not just as a compliance exercise.
Can Mark Lynd speak at a risk management or compliance conference about AI?
Yes. Mark speaks at risk management conferences, compliance forums, audit committee briefings, and enterprise risk summits. He translates AI risk into the language of risk management professionals — probability, impact, mitigation, and governance — without requiring technical AI background.
What makes AI risk different from traditional enterprise risk?
AI risk has several characteristics that make it different: model opacity (you can't always explain why an AI made a decision), emergent behavior (AI systems can behave unexpectedly), rapid deployment speed (risk management can't keep up with deployment pace), and novel liability questions (who is responsible when an AI makes a harmful decision). Mark covers all of these dimensions.