Boards have started asking the AI question and they are not satisfied with the answer. The CEO is being asked what controls are in place. The CIO is being asked what data the model is trained on. The CISO is being asked what happens if the model is poisoned. Every board I have briefed in the last six months has asked some version of these questions, and almost none of the executives in the room had a numerical answer ready. The Enterprise AI Trust Score is the framework I built to give them one.

What This Framework Does

The Enterprise AI Trust Score scores an organization on five dimensions weighted the way regulators, auditors, and boards are starting to weight them. The output is a single number between 0 and 100 plus a per-dimension breakdown. The framework is the AI sibling of the Cyber Insurance Readiness Score, which I introduced for cyber insurance renewals. The idea is the same: score yourself before someone external scores you, find the cheapest gaps to close first, and walk into the next board AI review with a number rather than a story.

The Five Dimensions

The first dimension is Data Lineage. This covers where the training data came from, what license terms apply, what personal information is in it, and whether you can answer a regulator who asks any of those questions. Most enterprise AI deployments cannot answer the data lineage question past two hops. That gap will not survive an EU AI Act audit, and it will not survive a discovery request from a plaintiff's lawyer. Score zero if you do not know. Score full credit if you have a documented chain of custody for every dataset that every production model was trained on.

The second dimension is Model Provenance. This covers which models you are using, who built them, what they have been benchmarked on, and whether you can pull a copy of the exact weights you are running today if the upstream provider goes away. Most organizations are surprised when they discover how many third-party models they have implicitly trusted, and how thin the contractual protection is. Score zero if a model deprecation by your provider would silently change behavior in production. Score full credit if every production model has a versioned snapshot, a benchmarking record, and a rollback plan.

The third dimension is Output Governance. This covers what the model is allowed to do, what it is allowed to say, what gets logged, and what triggers human review. Output governance is the most visible dimension to end users and the easiest to demo. It is also the dimension where most organizations have the largest gap between policy and practice. Score honestly on what is actually running, not what the policy says.

The fourth dimension is Identity And Access For AI Agents. This is the dimension nobody had to think about a year ago and now everyone has to. AI agents act on behalf of users. They have credentials. They make calls. They consume data. They can be tricked. The framework treats agent identity, agent authorization, agent audit, and agent rollback as a single dimension because they fail together. Score zero if your AI agents share a single set of credentials. Score full credit if every agent has a unique identity, a least-privilege scope, an audit trail, and a kill switch.

The fifth dimension is Adversarial Resilience. This covers what happens when somebody tries to break your AI on purpose. Prompt injection, model poisoning, adversarial inputs, and the social engineering layer that AI makes cheaper. Score this dimension by what you have actually tested, not what is theoretically possible. The framework weights it at 20 percent because the adversarial vector is the fastest-growing risk surface in 2026.
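The scoring arithmetic behind the five dimensions can be sketched in a few lines. The dimension names come from this article, and only the Adversarial Resilience weight of 20 percent is stated; the even 20 percent split across the other four dimensions, the function name, and the example scores are my illustrative assumptions, not part of the framework itself.

```python
# Weights for the five dimensions. Only adversarial_resilience's 20 percent
# weight is stated in the article; the equal split elsewhere is an assumption.
WEIGHTS = {
    "data_lineage": 0.20,            # assumed
    "model_provenance": 0.20,        # assumed
    "output_governance": 0.20,       # assumed
    "agent_identity_access": 0.20,   # assumed
    "adversarial_resilience": 0.20,  # stated in the article
}

def trust_score(dimension_scores: dict[str, float]) -> float:
    """Weighted 0-100 composite from five 0-100 dimension scores."""
    missing = WEIGHTS.keys() - dimension_scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return round(sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS), 1)

# Made-up example scores for illustration.
example = {
    "data_lineage": 40,
    "model_provenance": 70,
    "output_governance": 85,
    "agent_identity_access": 30,
    "adversarial_resilience": 55,
}
print(trust_score(example))  # 56.0
```

With equal weights the composite is just the average of the five dimension scores; the weight table exists so a dimension can be re-weighted without touching the scoring code.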

How The Score Maps To The Board Conversation

The Enterprise AI Trust Score connects directly to the AI Board Briefing Triangle, the second framework I use in board sessions. The triangle has three corners. Strategic Bets: what AI is supposed to deliver. Risk Surface: what AI exposes the organization to. Adoption Velocity: how fast AI is moving across the organization. The Trust Score becomes the Risk Surface number on the triangle. At 80 or higher, the Risk Surface is green. Below 60, it is red. Between 60 and 80, the briefing covers exactly which of the five dimensions is lowest and what fixing it would cost.
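The score-to-Risk-Surface mapping can be sketched as a small function using the thresholds stated above (80 and 60). The function name, the return fields, and the "amber" label for the in-between band are illustrative assumptions.

```python
def risk_surface(score: float, dimension_scores: dict[str, float]) -> dict:
    """Map a 0-100 Trust Score onto the Risk Surface corner of the triangle.

    80 or higher is green, below 60 is red, and the 60-80 band surfaces the
    lowest dimension, since that is what the briefing focuses on.
    """
    if score >= 80:
        rating = "green"
    elif score < 60:
        rating = "red"
    else:
        rating = "amber"
    lowest = min(dimension_scores, key=dimension_scores.get)
    return {
        "rating": rating,
        "lowest_dimension": lowest,
        "lowest_score": dimension_scores[lowest],
    }

# Made-up scores for illustration.
print(risk_surface(68, {
    "data_lineage": 40,
    "model_provenance": 70,
    "output_governance": 85,
    "agent_identity_access": 30,
    "adversarial_resilience": 55,
}))
```

Returning the lowest dimension alongside the rating matches how the briefing is described: an amber score is not just a color, it comes with the specific gap to fund next.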

This pairing of frameworks is what makes the conversation actionable. Boards do not want a score in isolation. They want to know what to spend, where to spend it, and how to measure improvement. The Trust Score tells them where they are. The Briefing Triangle tells them what to do about it.

Three Patterns I See Most Often

Pattern one. High Output Governance, low Data Lineage. The team has invested heavily in what the model says and underinvested in what the model knows. This pattern looks safe in a demo and falls apart in a regulator audit. Score impact, minus 15 to minus 25 points.

Pattern two. High Data Lineage on the structured side, zero Data Lineage on unstructured. Spreadsheets, documents, emails, and customer service transcripts are feeding production models with no chain of custody. This is the most common failure mode in 2026 because the unstructured pipeline often grew up outside the data governance program. Score impact, minus 10 to minus 20 points.

Pattern three. Zero Identity And Access For AI Agents. The agents are running, the agents are making decisions, and they share a single service account. This pattern is invisible until the day an agent does something it should not have been authorized to do. Score impact, minus 25 points and rising.
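The three patterns above can be sketched as checks over the per-dimension scores. The patterns and their point impacts come from the article; the numeric thresholds for "high" and "low" (70 and 40 here) and the structured versus unstructured lineage sub-scores are assumptions I introduce purely for illustration.

```python
def flag_patterns(scores: dict[str, float]) -> list[str]:
    """Flag the three common failure patterns in a set of dimension scores."""
    flags = []
    # Pattern one: invested in what the model says, not what it knows.
    if scores["output_governance"] >= 70 and scores["data_lineage"] < 40:
        flags.append("Pattern 1: high Output Governance, low Data Lineage "
                     "(minus 15 to 25 points)")
    # Pattern two: lineage exists for structured data only. The sub-scores
    # are assumed fields, not part of the five headline dimensions.
    if (scores.get("data_lineage_structured", 0) >= 70
            and scores.get("data_lineage_unstructured", 100) == 0):
        flags.append("Pattern 2: structured lineage only "
                     "(minus 10 to 20 points)")
    # Pattern three: agents share one service account, scoring zero.
    if scores["agent_identity_access"] == 0:
        flags.append("Pattern 3: zero Identity And Access For AI Agents "
                     "(minus 25 points)")
    return flags

# Made-up scores that trip all three patterns.
print(flag_patterns({
    "output_governance": 85,
    "data_lineage": 30,
    "data_lineage_structured": 80,
    "data_lineage_unstructured": 0,
    "agent_identity_access": 0,
}))
```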

How To Run Your First Score

The first run takes about half a day. Pull your AI inventory, even if it is informal. List every production model, every fine tuned variant, every agent, every embedded vendor model. For each one, walk through the five dimensions. Score honestly. Compare to your peer cohort if you can find peer data, and if not, treat any dimension below 60 as a Phase 1 item for the next quarter. The pattern usually pops out inside the first 30 minutes.
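The first-run walkthrough above can be sketched as a short script: an informal inventory, per-asset dimension scores, and a Phase 1 list of any dimension averaging below 60. The asset names and scores are made up for illustration; only the below-60 Phase 1 threshold comes from the article.

```python
from statistics import mean

DIMENSIONS = [
    "data_lineage", "model_provenance", "output_governance",
    "agent_identity_access", "adversarial_resilience",
]

# Informal inventory: every production model, variant, and agent,
# each scored 0-100 per dimension. Entries are made-up illustrations.
inventory = {
    "support-chat-llm": {"data_lineage": 55, "model_provenance": 80,
                         "output_governance": 85, "agent_identity_access": 40,
                         "adversarial_resilience": 50},
    "invoice-agent":    {"data_lineage": 30, "model_provenance": 60,
                         "output_governance": 70, "agent_identity_access": 25,
                         "adversarial_resilience": 45},
}

# Average each dimension across the inventory, then flag anything below 60
# as a Phase 1 item for the next quarter.
averages = {d: mean(scores[d] for scores in inventory.values())
            for d in DIMENSIONS}
phase1 = sorted(d for d, avg in averages.items() if avg < 60)
print(phase1)
```

Even with two assets, the pattern pops out the way the article describes: the agent identity and lineage gaps surface immediately, while governance and provenance look comparatively healthy.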

I run this exercise as a 60 minute keynote, a 4 hour executive workshop, or a 30 minute board briefing. The keynote introduces the framework and the most common patterns. The workshop walks through your specific organization with the executive team in the room. The board briefing turns the score into the Risk Surface corner of the AI Board Briefing Triangle so directors can act on it. Reach out through the contact form for a tailored quote on whichever format fits your event.

Key Takeaways

  • The Enterprise AI Trust Score is a Mark Lynd framework that scores enterprise AI on five dimensions weighted the way regulators, auditors, and boards are starting to weight them.
  • The five dimensions are Data Lineage, Model Provenance, Output Governance, Identity And Access For AI Agents, and Adversarial Resilience.
  • The score connects to the AI Board Briefing Triangle. Trust Score becomes the Risk Surface corner. Boards get a number plus a clear next action.
  • Identity And Access For AI Agents is the dimension nobody had to think about a year ago and is now the fastest-growing failure mode.
  • The score is the work you do before the board AI review so you walk in with a number and a plan, not a story.