
Vibe policy is real and it’s running the tariffs
with Jake Sanders, Existential Deep
Show Notes
This is a different kind of episode. No pitch deck, no product demo - just a roundtable conversation about the existential fork in the road that 2025 represents for AI, privacy, power, and work. Jake Sanders and Vince join Ryan Estes for an unfiltered discussion that moves from vibe coding to Russian disinformation to government data harvesting to Dune, and somehow lands somewhere both terrifying and grounded.
The title is the thesis: "vibe policy" - decision-making at the highest levels of government and industry driven by vibes, speed, and narrative rather than analysis, oversight, or accountability. The tariffs are the example. The data center cancellations are another. The Treasury access is the one that keeps the room up at night.
The Government Data Problem
The conversation's sharpest thread: what happens when a small team of technologists gains access to the IRS's full taxpayer dataset - the most sensitive and comprehensive financial and identity dataset in existence - and has both the motive and the capability to correlate it with every other available dataset?
The IRS database includes names, addresses, financial history, and income data for virtually every American. Crossed with behavioral data, browsing history, health records, and social media activity, it enables psychographic profiling at a level no data broker has previously achieved. The concern raised in the room isn't hypothetical capability - it's the removal of the institutional watchdogs that would normally audit, challenge, or block it.
One application floated: dynamic pricing tied to individual financial profiles. Amazon already prices dynamically. The scenario being described is personal financial data informing pricing across every purchase decision - Netflix, groceries, insurance, mortgages - with no ability to opt out and no mechanism for accountability. The room finds it hard to argue this is science fiction when the underlying data access has reportedly already occurred.
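To make the mechanism concrete, here is a deliberately simplified sketch of correlation-driven pricing. Every dataset, field name, and pricing heuristic below is invented for illustration - none of it comes from the episode or any real system. The core move is a join on a shared identifier, followed by a price scaled to the inferred profile:

```python
# Hypothetical sketch of correlation-driven dynamic pricing.
# All datasets, field names, and the pricing rule are invented
# for illustration; nothing here reflects any real system.

# Dataset A: financial profile keyed on a shared identifier.
financial = {
    "id-001": {"income": 42_000, "debt": 18_000},
    "id-002": {"income": 310_000, "debt": 5_000},
}

# Dataset B: behavioral data keyed on the same identifier.
behavioral = {
    "id-001": {"price_checks_before_buying": 7},  # comparison shopper
    "id-002": {"price_checks_before_buying": 0},  # buys without checking
}

def correlated_profile(person_id: str) -> dict:
    """The 'correlation threat' in one line: a join on a shared key."""
    return {**financial[person_id], **behavioral[person_id]}

def personalized_price(base_price: float, person_id: str) -> float:
    """Scale a base price by a crude inferred willingness to pay."""
    p = correlated_profile(person_id)
    disposable = max(p["income"] - p["debt"], 1)
    price_sensitivity = 1 + p["price_checks_before_buying"]  # higher = more sensitive
    # Invented heuristic: charge more when disposable income is high
    # and the buyer rarely comparison-shops.
    multiplier = 1 + min(disposable / 500_000, 0.5) / price_sensitivity
    return round(base_price * multiplier, 2)

for pid in ("id-001", "id-002"):
    print(pid, personalized_price(100.0, pid))
# id-001 pays ~$100.60; id-002 pays $150.00 for the same item.
```

The point of the toy example is how little machinery is involved: once two datasets share a key, the "profile" is a dictionary merge, and the pricing policy is a few lines of arithmetic. The hard part has never been the code - it's the access, which is exactly what the episode argues has changed.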
Russian Disinformation Is Already in the LLMs
One of the most concrete data points in the episode comes from France 24: researchers identified that Pravda, a Russian state propaganda outlet, has systematically seeded content deep inside the training data of Grok, Perplexity, and ChatGPT. The goal is simple - when Western users query these tools on politically sensitive topics, they encounter narratives shaped by Russian disinformation, without any attribution or warning.
The practical implication: if you're using a publicly-trained LLM and asking it about anything geopolitical, economic, or politically contested, you may be receiving answers that have been deliberately shaped by adversarial actors. You have no visibility into this. The LLM presents contaminated outputs with the same confidence as clean ones.
The countermeasure discussed: keep your AI agents isolated from the open web. Feed them controlled training data. Build tight, bounded tools for specific use cases rather than querying general-purpose LLMs about contested topics. An AI that can't reach the internet can't be poisoned by it.
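As one concrete illustration of that architecture, here is a minimal sketch assuming a locally hosted model served through Ollama's HTTP API on localhost. The endpoint, model tag, and file name are assumptions for illustration, not details from the episode:

```python
# Minimal sketch of a bounded, offline AI tool: the only network
# dependency is a model server on localhost (Ollama's HTTP API is
# assumed here; the model tag and input file are placeholders).
# No web access, so nothing on the open web reaches it at inference time.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # loopback only

def ask_local_model(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Query a locally deployed model; raises if the server is unreachable."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Bounded, task-specific use: summarize a document you control,
    # rather than asking an open-web model about contested topics.
    with open("internal_report.txt") as f:
        print(ask_local_model("Summarize in three bullets:\n" + f.read()))
```

Note that the boundary is architectural rather than behavioral: the tool has no browsing path, so a poisoned web page can't reach it. Residual risk lives in the base model's original training data - which is why the episode pairs isolation with controlled training data rather than treating either alone as sufficient.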
Vibe Coding and the Product Development Renaissance
Vince's contribution brings the conversation back to earth with a practical observation from his design work: he used V0 and Vercel for the first time and built something functional by placing blocks, listing desired behaviors in plain English, and watching the system code it in real time. No formal programming knowledge required.
The framework he proposes for what's happening in product development: the handoff between design and working code is collapsing. The traditional cycle - product managers define features, designers prototype, engineers build, everyone iterates - is being compressed. The closer to code you can get from a natural language description, the cheaper each iteration becomes. Cheaper iterations mean faster market feedback. Faster market feedback means more informed product decisions.
Tools doing this now: Replit, Cursor, Lovable, and Bolt - with Bolt reportedly the second-fastest growing platform of all time by revenue in its first five months. The Chinese contender Manus is also mentioned as a platform to watch. The near-term prediction: within one year, design system frameworks will allow anyone to ship production-quality interfaces by describing what they want, with AI serving as both product manager and engineer.
The Mentat vs. Bene Gesserit Problem
The episode's most memorable framework comes via Dune. In Frank Herbert's universe, Mentats are humans trained to perform pure logical computation - math, data, deduction. The Bene Gesserit are a quasi-mystical order that uses intuition, psychology, and long-game strategy to shape outcomes. The tension between them is the tension between rigorous analysis and vibes.
Applied to AI: vibe coding, vibe strategy, vibe policy - these are everywhere. People are making decisions at speed, guided by approximations, moving fast on pattern recognition rather than thorough analysis. AI enables this because it lowers the friction cost of generating plausible-sounding outputs. But plausible is not correct. The deploy button is real. The rubber eventually meets the road.
The room's consensus: AI screams when it's tight, bounded, and task-specific - with a human who understands the domain checking its work. AI fails when used as an oracle for contested, complex, or adversarially-influenced domains. The smartest users are sparring partners, not passengers.
Job Displacement and the Solo Startup Scenario
The room takes the job displacement question seriously without fully resolving it. The range on the table: current AI tools could displace 20% of the American workforce if fully deployed - and in 24 months, some estimate that figure could approach 80% in certain knowledge work categories. The lag is adoption speed, not capability.
The optimistic scenario offered: senior professionals who get displaced will build their own small AI-native startups rather than seeking new employment. The compute cost is accessible - a $14K Mac Studio running a locally deployed model like DeepSeek is enough to power an AI-augmented one-person operation with sophisticated workflows. The structural cost of a startup drops to approximately the cost of the hardware.
The UBI speculation: if the people making these decisions understand the displacement curve, they may already be modeling some form of universal basic income as the safety valve. Without it, 80% unemployment doesn't produce a smooth transition - it produces pitchforks. Whether this is optimistic planning or wishful thinking is left unresolved.
The more grounded near-term outcome Vince describes: velocity increase, not immediate displacement. Teams of 20 doing what previously required 100, moving to market continuously. The bottleneck shifts from production to judgment - figuring out which of the now-abundant options is actually good.
Consumer Activism as the Available Lever
With institutional watchdogs weakened, regulatory frameworks lagging, and political accountability contested, the room lands on consumer behavior as the most accessible form of resistance and signal. The Tesla boycott is the example: ordinary people deciding not to purchase a product as a direct response to its founder's behavior. It's slow, imperfect, and doesn't require coordination with any institution.
The TikTok observation: political engagement and consumer activism content appears more concentrated and visible on TikTok than on Instagram or Facebook - possibly because its algorithm surfaces it differently, possibly because the platform's user base skews differently. The expressed concern is that acquiring or regulating TikTok into a more controlled media environment would suppress one of the few venues where grassroots sentiment surfaces quickly.
Tools & Resources
- Bolt - Vibe coding platform; reportedly the second-fastest growing platform of all time by revenue in its first five months; alongside Lovable as a primary no-code/low-code AI builder
- Vercel / V0 - Frontend deployment and AI-assisted UI generation platform; Vince's first-time experience with block-based design-to-code workflow
- Replit / Cursor - AI-assisted coding environments; noted as requiring slightly more coding background than the newer design-first platforms
- Manus - Chinese AI vibe-coding platform; still in waitlist stage at time of recording; positioned as a contender to Bolt and Lovable
- DeepSeek - Open-source Chinese LLM; discussed as a locally-deployable model for isolated, web-disconnected AI workflows - a countermeasure to the disinformation contamination in public LLMs
- Mac Studio (Apple M-series) - Referenced as sufficient hardware for running local AI models at small startup scale; price point ~$14K for the configuration discussed
- France 24 - Source cited for reporting on Russian Pravda disinformation being seeded into Grok, Perplexity, and ChatGPT training data
- Palantir - Data analytics company co-founded by Peter Thiel; referenced in context of large-scale government data correlation capabilities
Key Frameworks from This Episode
- Vibe Policy - Decision-making at institutional scale driven by speed, narrative, and pattern-matching rather than analysis, oversight, or accountability. Tariff policy written with AI assistance, data center investments reversed within weeks, workforce policy set without modeling consequences - the defining governance characteristic of the current moment. The problem isn't the vibes; it's removing the Mentats who would check the math.
- The Data Correlation Threat - Individual datasets - financial, health, behavioral, social - are incomplete profiles. The threat is correlation: combining the IRS's comprehensive financial and identity database with commercial behavioral data, health records, and social media activity to create psychographic profiles of unprecedented depth and accuracy. The concern isn't the data existing - it's that the institutional checks on its use have been weakened simultaneously.
- Contaminated Training Data - Publicly-trained LLMs absorb whatever is on the internet, including deliberately seeded disinformation. French reporting identified Russian state media systematically seeding content into the training pipelines of major Western chatbots. The countermeasure: for any domain where adversarial information operations are plausible, use isolated, locally-deployed AI with controlled training data rather than publicly-trained models with web access.
- Mentats vs. Bene Gesserit (Math vs. Vibes) - From Dune: Mentats compute logic rigorously; the Bene Gesserit operate through intuition and long-game strategy. AI enables vibe operation at industrial scale - generating plausible outputs quickly from pattern-matching. The failure mode is mistaking plausibility for correctness. The smartest AI practitioners use the technology as a sparring partner (Mentat-style), not an oracle (Bene Gesserit-style).
- The Velocity Trap - AI tools increase team velocity dramatically - what used to require 100 people can be done by 20. But more output doesn't automatically mean more demand. The bottleneck shifts from production to judgment: which of the now-abundant options is actually good? Velocity without discernment accelerates toward noise as fast as it accelerates toward value.
- Bounded AI as Safe AI - The episode's most actionable security principle: an AI agent that cannot reach the open web cannot be contaminated by what's on it. For high-stakes use cases, the right architecture is tight, bounded, task-specific tools with controlled training data - not general-purpose LLMs with internet access. Specificity is both a performance advantage and a security posture.
FAQ
What does 'vibe policy' mean and why does it matter for founders?
Vibe policy is institutional decision-making by approximation - acting on pattern-matching and narrative rather than rigorous analysis, with the oversight mechanisms that would normally catch errors either removed or ignored. For founders, it's a warning: the regulatory and economic environment you're building in is being shaped by the same move-fast-break-things ethos you might use to ship a side project. The consequences at policy scale are larger and less reversible than a bad product launch.
Should I be worried about Russian disinformation in the LLMs I use?
For non-contested domains (coding, writing, data analysis), the contamination risk is low. For politically sensitive topics, geopolitical questions, or anything where adversarial actors have motive to shape perception, yes - the contamination is documented. The practical response: don't rely on general-purpose publicly-trained LLMs for high-stakes geopolitical or contested factual questions. Use primary sources. Run isolated local models for sensitive workflows.
What's the realistic job displacement timeline?
The room's estimate: current AI tools could displace 20% of knowledge work if fully deployed today - the lag is adoption speed, not capability. On a 24-month horizon, some estimate 80% of white-collar roles become AI-augmentable to the point of significant headcount reduction. The more grounded near-term scenario: velocity increases within existing teams before displacement occurs, with the bottleneck shifting from production to judgment.
What is the solo AI startup scenario?
A senior professional displaced by AI, rather than seeking employment, deploys their own AI-native small startup. Startup cost approaches the cost of hardware: a $14K Mac Studio running a locally-deployed model like DeepSeek, with no cloud dependencies, no data contamination risk, and minimal ongoing costs. The AI serves as the operational workforce; the human provides domain expertise and judgment. The room thinks this becomes common within 24-36 months.
What tools are most relevant for non-technical founders right now?
Bolt and Lovable for vibe coding - building functional apps from natural language descriptions. Vercel/V0 for frontend deployment and AI-assisted UI generation. These platforms are collapsing the design-prototype-engineer cycle, making it possible to go from concept to deployed product without a technical team. Replit and Cursor require slightly more coding familiarity. Manus (Chinese, waitlist) is the emerging contender.
What's the consumer activism angle and does it work?
With institutional accountability mechanisms weakened, consumer purchasing decisions are the most accessible lever available to individuals. The Tesla boycott is cited as a real-world example of coordinated consumer response to founder behavior. Whether it's sufficient to influence outcomes at the scale being discussed is genuinely uncertain - but it's the mechanism that doesn't require institutional coordination or legal infrastructure to initiate.