
Agentic AI, Deepfakes and the End of Passwords
with Jasson Casey, Beyond Identity
Agentic AI, Deepfakes and the End of Passwords
Show Notes
There is a war raging between humans and machines pretending to be human. Passwords are failing. Deepfakes are rising. And identity - the question of who is actually acting - is up for grabs in a way it has never been before.
Jasson Casey is the CEO of Beyond Identity, a cybersecurity company backed by over $200 million whose mission is to make passwords obsolete. He joins Josh from Brand Hospitality Group, who has deployed Beyond Identity's technology across a high-turnover workforce where security has to be invisible enough for a front desk clerk to use it and strong enough to stop enterprise-grade attacks.
This episode covers how AI impersonation is forcing a complete rethink of authentication - not just proving who you are, but proving who is really acting when AI agents and synthetic identities start making decisions on our behalf.
Why Detection Is the Wrong Answer
The first instinct when deepfakes arrive is to build detectors. Jasson's argument is that detection is the wrong solution - for two reasons. First, detection and synthesis are in an arms race: every new detector becomes the training signal for the next, better generator. Detection rates start high and fall quickly. Second, as AI becomes embedded in legitimate workflows (real-time translation, virtual avatars, AI customer service), the question "is this AI-generated?" becomes unanswerable in a useful way. Your customer service rep might legitimately be running real-time translation. Your spokesperson might legitimately be using an AI avatar.
The right question is not "is this AI?" but "where did this data come from and who authorized it?" That question is answerable - through device-bound, hardware-backed identity. Instead of trying to detect the presence of AI, you attest the provenance of content: this data was produced by this device, authorized by this person, at this time. Provenance does not degrade as AI improves. It is the correct long-term architecture.
Frameworks from This Episode
These frameworks have been added to the AI for Founders Frameworks Library. Filter by Jasson Casey (Beyond Identity) to find them.
Device-Bound Identity Model
Credentials stored in a hardware enclave cannot be copied, exported, or used on any other device. Since 80% of security incidents involve credential movement, eliminating movement eliminates most of the attack surface. The payment card industry proved this model works: tap-to-pay is device-bound, hardware-backed, and has been running at global scale for a decade.
Agentic Authorization Chain
Before an AI agent acts, it needs a cryptographic chain that links user identity to agent identity to specific permissions over a bounded time window. Even after the agent terminates - even if it existed for seconds - that chain must be recoverable for audit. Agents are fireflies: they come and go, but the authorization record must outlive them.
Data Provenance Architecture
Instead of detecting whether content is AI-generated, attest where it came from. Device-bound credentials extend from authenticating users to attesting the provenance of data produced by cameras, sensors, and AI systems. The camera on a phone gets a passkey. The microphone gets a passkey. Anything that generates consequential data eventually needs a way to attest its origin.
Tools from This Episode
Eliminates passwords with device-bound cryptographic credentials that can't be phished, stolen, or reused. Protects 10 million+ identities across enterprise customers.
Trust execution layer for AI agents. Removes API tokens from developer machines entirely and replaces them with device-bound keys that cannot be extracted, with full provenance tracking for every agent action.
This Week's Experiment
Audit Your Agent Credentials and Build a Permission Minimization Policy
List every agent, script, and automation in your company that has access to production systems or external APIs. For each one, document where its credentials live and what they can do. The goal is a single document that answers: if this credential was stolen right now, what is the blast radius? Most founders discover credentials in places they forgot about - environment variables on developer machines, hardcoded in old scripts, in shared Notion docs. The audit makes the invisible visible.
How Passkeys Actually Work
When you pay with Apple Pay or tap a credit card, you are not using a password. You are using single-device multifactor authentication based on a hardware-backed, device-bound credential. The merchant sends a bill to your phone. Your phone asks for your face or PIN - that is your second factor. It signs the bill using a key stored in a hardware enclave that never touches general purpose memory and physically cannot be extracted. That signed transaction clears the payment.
Beyond Identity applies the same architecture to enterprise authentication. Every modern laptop, phone, and even drone ships with a hardware enclave - the CPU manufacturers added it for mobile payments and secure boot. Beyond Identity uses that enclave to create a device-bound key representing you on that specific device. That key cannot be stolen digitally. It can never move. And since 80% of security incidents today trace back to credential movement - passwords stolen, tokens copied, session cookies hijacked - eliminating movement eliminates most of the attack surface.
Giving Agents Identity
The agent problem Jasson describes is not about AI being dangerous - it is about auditability. Agents are like fireflies: they come and go. They execute tasks, call APIs, access data, and terminate. When something goes wrong - when data leaks, when a permission is violated, when an injection attack succeeds - you need to be able to reconstruct exactly what happened. Who authorized this agent? On what device did it run? What services did it access? What was the security posture of the environment?
Jasson's architecture: the user authenticates on a device with a proven posture. The agent authenticates on its device with a proven posture. Those two identities are cryptographically linked: this user on this device authorizes this agent on this device with these permissions for this period of time. That link is sealed under an attestation - tamper evident, meaning any attempt to modify the record is visible. Even after the agent is gone, the authorization chain is recoverable.
The adjacent problem: most agents today mix control and data in a way that creates injection vulnerabilities. The template that tells an agent what to do and the data it processes end up in the same context window. Classic computing solved this separation problem decades ago - control planes and data planes, type languages and kind languages. Agent builders need to rediscover that lesson and enforce the separation architecturally, not through guardrails.
Real-World Deployment: Hospitality
Josh's team at Brand Hospitality Group operates across multiple major hotel brands - Marriott, Hilton, IHG - each of which runs its own authentication system. One of his finance users was managing fifteen separate MFAs. Line-level front desk workers, many of them non-technical, were expected to navigate this stack every day on shared workstations.
After deploying Beyond Identity, the enrollment lift is real - getting a passkey onto a device takes effort, especially in high-turnover hospitality. But after enrollment, the experience is nearly invisible: show up at a shared workstation, enter a PIN tied to the device you enrolled, done. No phone, no app push, no code to type. The security posture check happens silently in the background. The GM at their Denver property called it "automagic."
Josh's wishlist for the next two years: federation between brand identity systems and operator identity systems. Right now, the passkey works for their own applications but not for Marriott or Hilton's native systems, which run their own MFA. A cross-brand trust model - where a verified passkey from one system is honored by another - would eliminate the remaining password surface in hospitality.
Q&A
Why is detecting AI-generated content the wrong approach to deepfake defense?
Detection and synthesis are in an arms race: every detector becomes the training signal for the next generator. Detection rates start high and fall quickly as generators adapt. More fundamentally, once AI is embedded in legitimate workflows - real-time translation, virtual avatars, AI customer service - detecting the presence of AI becomes meaningless. The right question is not whether content is AI-generated but whether it was authorized by a verified human on a verified device. Provenance answers that question in a way that does not degrade as AI improves.
What makes device-bound credentials fundamentally different from passwords?
A password lives in your memory, can be written down, can be transmitted, and can be stolen. A device-bound credential lives in a hardware enclave on a specific device, never enters general purpose memory, and physically cannot be copied or extracted. It can only be used on the device it was created on, authenticated with a local biometric or PIN as a second factor. The mobile payments industry has been running this architecture at global scale for a decade - Apple Pay and tap credit cards use the same model. The key insight: credential movement is what makes 80% of security incidents possible. Eliminate movement, eliminate the attack surface.
How should founders think about giving their AI agents identity?
Every agent that accesses production systems, calls external APIs, or handles user data needs a cryptographic identity that is (a) linked to the human who authorized it, (b) scoped to specific permissions, (c) time-bounded, and (d) sealed in an attestation that survives the agent's termination. Agents are fireflies - they come and go - but the authorization record needs to outlive them. If you cannot reconstruct after the fact who authorized an agent, on what device, with what permissions, you have no meaningful audit trail. That gap will matter the moment something goes wrong.
What is a hardware enclave and why does it matter for founders building AI products?
A hardware enclave is a physically isolated memory region built into modern CPUs - every laptop, phone, and most connected devices ship with one. Keys stored in an enclave are never accessible to the device's operating system or applications. They can be used to sign data but cannot be extracted or copied. CPU manufacturers added enclaves for mobile payments and secure boot; Beyond Identity uses the same hardware to create device-bound credentials for authentication. For AI builders: any agent stack that relies on API tokens stored in environment variables or developer machines is missing this layer. Ceros moves those tokens into secure cloud enclaves with device-bound access controls.
What is the control vs. data separation problem in agent context windows?
Classic computer science separates control (what the system does) from data (what it processes). Compilers, networking stacks, and operating systems all enforce this separation. Current agent architectures tend to mix both in the same context window: the template that tells the agent what to do lives alongside the data it processes, which creates two attack surfaces. First, injected data can masquerade as control instructions - prompt injection. Second, data processed in a long-lived context can leak through subsequent queries. The solution is architectural: enforce separation between the instruction layer and the data layer at the infrastructure level, not just through prompt-level guardrails.
Who are the best customers for Beyond Identity right now?
Companies that already have an identity system of record - Beyond Identity plugs into your existing identity stack rather than replacing it. From a business perspective, the target is any organization that wants to eliminate the 80% of SOC incidents that are identity-related. The CrowdStrike, Mandiant, and Verizon DBIR threat reports all converge on that number. If credential theft, phishing, and session hijacking are in your threat model, the device-bound approach stops them at the architecture level rather than the detection level.