Established 2025

DiFilippo's Law

"Only actions and content that can prove who created them and where they came from can be trusted in the AI era."

A governance framework by Michael DiFilippo

New: DiFilippo's Law has been submitted for presentation at the UNIDIR Global Conference on AI Security and Ethics, Geneva, June 18-19, 2026.

The Crisis

Trust is broken. Appearance is no longer evidence.

01

Content is Forgeable

Generative AI can produce text, images, audio, and video indistinguishable from human-created content. Visual inspection is no longer sufficient.

02

Agents Act Autonomously

AI agents execute production work, make API calls, send emails, and modify data. Most operate on borrowed credentials with no audit trail.

03

Identity is Missing

Organizations cannot answer: Who authorized this action? What data informed it? Did the agent stay within scope? There is no infrastructure for non-human identity.

The Law

A First Principle for Digital Trust

Only cryptographically identifiable, policy-enforced actions with verifiable provenance merit trust. Not intent, interface claims, or vendor assurances.
DiFilippo's Law, 2025

Three Core Principles

1

Provenance Over Appearance

Content should not be trusted based on how convincing it looks, but on whether its origin can be verified. A blurry photo with cryptographic provenance is more trustworthy than a perfect deepfake.

2

Identity as Infrastructure

Trust requires identity. Without knowing who created something, verification is impossible. Identity management is foundational infrastructure for the AI era, not an afterthought.

3

The Verification Burden Shift

The traditional model asks consumers to verify: "Is this real?" DiFilippo's Law shifts the burden to creators: "Can I prove this is real?" Systems must make verification easy and forgery hard.
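The burden shift can be sketched in a few lines. This is a minimal, hypothetical example using an HMAC as a stand-in for a real asymmetric signature (a production system would use a key pair such as Ed25519 so consumers never hold the creator's secret); the key and content names are illustrative only.

```python
import hmac
import hashlib

# Hypothetical creator-held key; a real deployment would use an
# asymmetric key pair so verifiers never see the signing secret.
CREATOR_KEY = b"creator-signing-key"

def sign_content(content: bytes, key: bytes) -> str:
    """Creator side: attach a proof of origin at creation time."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str, key: bytes) -> bool:
    """Consumer side: verification is a cheap, mechanical check."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"blurry-but-signed.jpg bytes"
sig = sign_content(photo, CREATOR_KEY)

print(verify_content(photo, sig, CREATOR_KEY))         # True: provenance holds
print(verify_content(photo + b"x", sig, CREATOR_KEY))  # False: any tampering fails
```

The asymmetry is the point: the creator does the work once at creation, and every downstream consumer gets a cheap yes/no answer instead of a forensic judgment call.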

Reference Architecture

Proof of Lineage: The Three-Layer Stack

A reference architecture for implementing DiFilippo's Law in production systems. Three layers, each addressing a distinct failure mode.

Layer 1: Execution

Prompts, agents, tools, data flows, and external actions. Everything that happens at runtime. This is where AI does its work, and where trust breaks without the layers below.

Layer 2: Identity Control Plane

Cryptographic identity, scoped tokens, policy evaluation, and delegation budgets. Every action must be traceable to an identity with proven authorization. No shared secrets. No implicit trust.
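The control-plane idea of scoped tokens with delegation budgets can be illustrated with a small sketch. All names here (`ScopedToken`, `authorize`, the agent id) are hypothetical, not a real product API; real systems would bind the identity cryptographically rather than as a plain string.

```python
from dataclasses import dataclass

@dataclass
class ScopedToken:
    agent_id: str        # identity (stand-in: a string; real systems use keys)
    scopes: frozenset    # actions this delegation explicitly permits
    budget: int          # delegation budget: actions remaining

def authorize(token: ScopedToken, action: str) -> bool:
    """Policy evaluation: every action must trace to an identity with an
    explicit scope and a remaining budget. No implicit trust."""
    if action not in token.scopes:
        return False     # agent exceeding scope -> denied
    if token.budget <= 0:
        return False     # delegation budget exhausted -> denied
    token.budget -= 1
    return True

token = ScopedToken("agent:mailer-01", frozenset({"email.send"}), budget=2)
print(authorize(token, "email.send"))  # True: in scope, budget available
print(authorize(token, "db.write"))    # False: out of scope
```

Even this toy version answers the question most organizations cannot: every allow or deny decision is tied to a named identity, a declared scope, and a finite delegation.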

Layer 3: Provenance

Audit trails, lineage graphs, receipts, and defensible proof. A cryptographically sealed record of who did what, when, why, and with what authority. Enables replay, investigation, and compliance.
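A tamper-evident receipt chain is the core mechanism here. The sketch below is a minimal, hypothetical illustration (function and field names are invented): each receipt records who did what with what authority and commits to the hash of the previous receipt, so altering history breaks the chain on replay.

```python
import hashlib
import json

def make_receipt(prev_hash: str, agent_id: str, action: str, authority: str) -> dict:
    """Seal one action into a receipt that commits to the prior receipt."""
    body = {
        "prev": prev_hash,
        "agent": agent_id,
        "action": action,
        "authority": authority,  # e.g. the scoped token that permitted it
        "ts": 0,                 # fixed timestamp to keep this example deterministic
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(receipts: list) -> bool:
    """Replay the log: recompute every hash and check every link."""
    prev = "genesis"
    for r in receipts:
        body = {k: v for k, v in r.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != r["hash"]:
            return False
        prev = r["hash"]
    return True

log = [make_receipt("genesis", "agent:etl-07", "db.read", "token:abc")]
log.append(make_receipt(log[-1]["hash"], "agent:etl-07", "db.write", "token:abc"))

print(verify_chain(log))      # True: the record replays cleanly
log[0]["action"] = "db.drop"  # tamper with history
print(verify_chain(log))      # False: the chain breaks at the altered receipt
```

A production system would additionally sign each receipt with the agent's key, but the hash chain alone already makes the audit trail defensible: investigation becomes replay, not reconstruction.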

The Failure State

Without these layers, organizations face:

Agent sprawl: untracked autonomous actors

Token sprawl: shared credentials, no scoping

Privilege escalation: agents exceeding scope

Zero auditability: no proof of who did what

Compliance exposure: EU AI Act Article 50

Applications

DiFilippo's Law in Practice

The framework enables verification at every layer of the AI stack, from consumer-facing detection to enterprise governance.

Content Verification

Detect AI-generated media before it spreads. Apply provenance checks at the point of consumption, not after the damage is done.

HumanMeter (iOS)

Agent Governance

Ensure AI agents operate with cryptographic identity, scoped authorization, and auditable provenance. Answer "who authorized this?" for every autonomous action.

HACAS Framework

Identity Intelligence

Analyze identity system logs to detect shadow AI, unauthorized agents, and credential anomalies. The verification layer for enterprise identity infrastructure.

Identity X-Ray

Regulatory Context

Why This Matters Now

DiFilippo's Law isn't theoretical. It addresses requirements that are becoming law.

June 2026

UNIDIR Global Conference on AI Security and Ethics, Geneva

DiFilippo's Law was submitted for presentation at the United Nations Institute for Disarmament Research (UNIDIR) conference on AI governance, security, and ethics, taking place June 18-19, 2026.

August 2026

EU AI Act Article 50

Mandates that AI-generated content must be labeled and traceable. Organizations without provenance infrastructure face non-compliance.

Active

C2PA / Content Credentials

Signs content at creation, but platforms strip metadata on upload. DiFilippo's Law addresses the gap: what happens between creation and consumption.

Active

SPIFFE / SPIRE

Handles workload identity for microservices, but isn't linked to content provenance. DiFilippo's Law connects identity to output.

The Gap

The Broken Middle Layer

No existing standard unifies content signing, agent identity, and auditable accountability into a single chain. Proof of Lineage fills this gap.

Author

About Michael DiFilippo

Michael DiFilippo

Customer Success Executive, Identity & Security

Based in Brooklyn, NY. 30 years in IT spanning enterprise infrastructure, cybersecurity, and identity management. Currently leading customer success for identity and security solutions, with a focus on AI governance, content provenance, and the intersection of human trust and machine autonomy.

Founder of Flip Ventures LLC. Builder of HumanMeter, Identity X-Ray, and the Proof of Lineage framework.