"Only actions and content that can prove who created them and where they came from can be trusted in the AI era."
DiFilippo's Law has been submitted for presentation at the UNIDIR Global Conference on AI Security and Ethics, Geneva, June 18-19, 2026.
The Crisis
Generative AI can produce text, images, audio, and video indistinguishable from human-created content. Visual inspection is no longer sufficient.
AI agents execute production work, make API calls, send emails, and modify data. Most operate on borrowed credentials with no audit trail.
Organizations cannot answer: Who authorized this action? What data informed it? Did the agent stay within scope? There is no infrastructure for non-human identity.
The Law
Only cryptographically identifiable, policy-enforced actions with verifiable provenance merit trust. Not intent, interface claims, or vendor assurances.
DiFilippo's Law, 2025
Content should not be trusted based on how convincing it looks, but on whether its origin can be verified. A blurry photo with cryptographic provenance is more trustworthy than a perfect deepfake.
Trust requires identity. Without knowing who created something, verification is impossible. Identity management is foundational infrastructure for the AI era, not an afterthought.
The traditional model asks consumers to verify: "Is this real?" DiFilippo's Law shifts the burden to creators: "Can I prove this is real?" Systems must make verification easy and forgery hard.
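The shift from "is this real?" to "can I prove this is real?" can be sketched as sign-at-creation, verify-at-consumption. The following is a minimal illustration, not an implementation of any specific standard: HMAC stands in for a real digital signature (a production system would use an asymmetric scheme such as Ed25519, so verifiers never hold the signing secret), and every name in it is hypothetical.

```python
import hashlib
import hmac
import json

# Placeholder secret; a real creator would hold an asymmetric private key.
CREATOR_KEY = b"creator-signing-key"

def sign_content(content: bytes, creator_id: str) -> dict:
    """Attach a provenance record at creation time."""
    record = {
        "creator": creator_id,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # HMAC simulates the creator's signature over the record.
    record["signature"] = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_content(content: bytes, record: dict) -> bool:
    """At consumption: recompute the hash and signature; any mismatch fails."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("sha256"):
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(CREATOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

A blurry but signed photo passes `verify_content`; a pixel-perfect fake without a valid record fails, which is exactly the inversion of burden the Law describes.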
Reference Architecture
A reference architecture for implementing DiFilippo's Law in production systems. Three layers, each addressing a distinct failure mode.
Execution layer: prompts, agents, tools, data flows, and external actions. Everything that happens at runtime. This is where AI does its work, and where trust breaks without the layers below.
Identity layer: cryptographic identity, scoped tokens, policy evaluation, and delegation budgets. Every action must be traceable to an identity with proven authorization. No shared secrets. No implicit trust.
Provenance layer: audit trails, lineage graphs, receipts, and defensible proof. A cryptographically sealed record of who did what, when, why, and with what authority. Enables replay, investigation, and compliance.
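The identity and provenance layers can be sketched together: every agent action carries an identity, is checked against policy before it runs, and leaves a hash-chained receipt that cannot be silently edited. This is a simplified sketch under assumed names (`POLICY`, `execute`, `audit_ok` are all illustrative, not a real API), not a production design.

```python
import hashlib
import json

# Illustrative policy: which actions each agent identity may perform.
POLICY = {"agent-42": {"send_email", "read_crm"}}

receipts: list[dict] = []  # append-only audit trail

def _seal(entry: dict) -> dict:
    """Chain each receipt to the previous one via its hash."""
    entry["prev_hash"] = receipts[-1]["receipt_hash"] if receipts else "genesis"
    entry["receipt_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    receipts.append(entry)
    return entry

def execute(agent_id: str, action: str) -> bool:
    """Policy evaluation first; both grants and denials are recorded."""
    allowed = action in POLICY.get(agent_id, set())
    _seal({"agent": agent_id, "action": action, "allowed": allowed})
    return allowed

def audit_ok() -> bool:
    """Replay the chain; any tampered receipt breaks the hash links."""
    prev = "genesis"
    for r in receipts:
        body = {k: v for k, v in r.items() if k != "receipt_hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != r["receipt_hash"]:
            return False
        prev = r["receipt_hash"]
    return True
```

Because each receipt embeds the hash of the one before it, editing any single record invalidates every later link, which is what makes the trail replayable and defensible.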
The Failure State
Without these layers, organizations face:
Agent sprawl: untracked autonomous actors
Token sprawl: shared credentials with no scoping
Privilege escalation: agents exceeding their scope
Zero auditability: no proof of who did what
Compliance exposure: EU AI Act Article 50
Applications
The framework enables verification at every layer of the AI stack, from consumer-facing detection to enterprise governance.
HumanMeter (iOS): detect AI-generated media before it spreads. Apply provenance checks at the point of consumption, not after the damage is done.
HACAS Framework: ensure AI agents operate with cryptographic identity, scoped authorization, and auditable provenance. Answer "who authorized this?" for every autonomous action.
Identity X-Ray: analyze identity system logs to detect shadow AI, unauthorized agents, and credential anomalies. The verification layer for enterprise identity infrastructure.
Regulatory Context
DiFilippo's Law isn't theoretical. It addresses requirements that are becoming law.
DiFilippo's Law was submitted for presentation at the United Nations Institute for Disarmament Research (UNIDIR) conference on AI governance, security, and ethics, June 18-19, 2026.
The EU AI Act (Article 50) mandates that AI-generated content must be labeled and traceable. Organizations without provenance infrastructure face non-compliance.
Content credential standards such as C2PA sign content at creation, but platforms strip metadata on upload. DiFilippo's Law addresses the gap: what happens between creation and consumption.
Workload identity frameworks such as SPIFFE handle identity for microservices, but aren't linked to content provenance. DiFilippo's Law connects identity to output.
No existing standard unifies content signing, agent identity, and auditable accountability into a single chain. Proof of Lineage fills this gap.
Author