From the Podcast
Founder Experiments
Every episode ends with a hands-on experiment you can run this week. No theory. Just build.
Build a Speed-to-Lead Bot in a Weekend
1. Set up a simple form using Typeform or Tally that captures name, phone number, and inquiry type.
2. Connect the form to a webhook using Make.com or n8n. When a form is submitted, the webhook fires.
3. Use an AI code tool (Claude, Cursor, or Replit) to write a script that triggers a call via Twilio or a voice AI provider like Thoughtly, ElevenLabs, or Bland AI. Pass the prospect's name and inquiry type into the opening script.
4. Prompt your AI engineer to write a call script with a natural opener, two qualifying questions, and a calendar booking link delivered via SMS at the end.
5. Run 10 test calls on warm leads. Measure hangup rate, conversation length, and booking rate against your current follow-up method.
Stretch goal: Feed three of your best real sales calls into the knowledge base and instruct the agent to mirror your objection handling style. Compare conversion rates between the generic and trained versions.
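To make step 3 concrete, here is a minimal Python sketch of the piece your AI tool would generate: building the personalized opener from the form fields. The function name and wording are illustrative assumptions, and the actual Twilio call is shown only as a comment because it needs real credentials and a hosted TwiML URL.

```python
def opening_script(name: str, inquiry_type: str) -> str:
    """Build the voice agent's opener from the form fields (wording is illustrative)."""
    return (
        f"Hi {name}, thanks for reaching out about {inquiry_type}. "
        "I have two quick questions so we can point you to the right person. "
        "Is now a good time?"
    )

# With the script in hand, placing the call via the Twilio helper library
# would look roughly like this (requires real credentials):
# from twilio.rest import Client
# client = Client(account_sid, auth_token)
# client.calls.create(to=prospect_phone, from_=your_number, url=twiml_url)

if __name__ == "__main__":
    print(opening_script("Sam", "pricing"))
```

Your webhook handler would call `opening_script` with the submitted form values, then hand the result to whichever voice provider you chose.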
Build Your First Internal Tool This Week
1. Identify one internal tool you are currently paying for or evaluating. Give it three non-negotiable requirements.
2. Open Bloom and describe the tool in one or two sentences. Add your three requirements as constraints.
3. Let the agent build. Don't touch the code — just describe what's missing or wrong and iterate.
4. Share the result with one teammate via the App Clip link. Watch them open a native app from a single tap.
5. Take notes: how long did it take? What does this app do that the off-the-shelf version doesn't? How does it feel to own the data?
Stretch goal: Look at your SaaS bill and ask: what else on this list could I build in an afternoon? The answer might surprise you.
Build Your Own Live Pitch Scoring Tool
1. Open Claude or your preferred AI coding tool. Prompt it: "Build me a single-page pitch scoring web app with three sections: Founder, Business, and Gut Check. Each criterion scored 0–3. At the end, show me a summary of weak spots and generate three investor questions I should prepare for based on my lowest scores."
2. Score yourself honestly in the Founder section: storytelling, emotional intelligence, learning agility, domain expertise, conviction, and coachability.
3. Score honestly in the Business section: defensibility, ICP clarity, business model strength, TAM credibility, and tech differentiation.
4. Complete the Gut Check: moat type (community, data, brand, or none) and whether you would invest your own money.
5. Give the tool to a trusted advisor. Have them score you independently. Compare the gap between your self-assessment and theirs.
Stretch goal: Run every co-founder through the same scoring independently. The areas where your assessments diverge most are worth a dedicated conversation before you walk into any investor meeting.
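The core logic the prompted app needs is small enough to sketch directly. Here is a minimal Python version, assuming the 0–3 scale from the steps above; the sample scores and the question template are hypothetical placeholders:

```python
# Hypothetical scores on the 0-3 scale from the Founder and Business sections.
SCORES = {
    "Founder": {"storytelling": 3, "coachability": 1, "conviction": 2},
    "Business": {"defensibility": 1, "ICP clarity": 2, "TAM credibility": 0},
}

def weak_spots(scores: dict, threshold: int = 1) -> list:
    """Return (section, criterion, score) for every criterion at or below the threshold."""
    flat = [
        (section, criterion, score)
        for section, criteria in scores.items()
        for criterion, score in criteria.items()
    ]
    return sorted((s for s in flat if s[2] <= threshold), key=lambda s: s[2])

def investor_questions(spots: list, n: int = 3) -> list:
    """Turn the lowest scores into prep questions (template is an assumption)."""
    return [
        f"How will you address weak {criterion} ({section}, scored {score}/3)?"
        for section, criterion, score in spots[:n]
    ]
```

The web app your AI tool builds is essentially a form on top of these two functions; the summary screen is `weak_spots` and the prep list is `investor_questions`.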
Build a Soft Skills Simulation in a Weekend
1. Pick one high-stakes conversation relevant to your industry: a customer calling to cancel, a job candidate receiving a rejection, a patient asking about a diagnosis. Write a one-paragraph brief: persona, emotional state, what they want.
2. Use Claude as the simulation engine. System prompt: "Play this persona, respond dynamically, track tone and empathy markers, and note whether the user is validating or dismissing the persona's concerns."
3. Set a turn limit of 8–10 exchanges. After the conversation ends, send the transcript back to Claude with a new prompt: score the exchange on emotional validation, clarity of communication, and de-escalation effectiveness, returning the result as JSON with a written explanation.
4. Use Cursor, Replit, or Bloom to wrap it in a basic chat UI with a post-session report screen. The full stack should be runnable in under 48 hours.
5. Run two people through the same scenario. Compare their transcripts and scores. Note the variance — that gap is the training opportunity.
Stretch goal: Add a second scenario and track improvement across sessions. You now have a working behavior change loop — the core of any simulation-based training product.
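The turn-limited loop in step 3 is the skeleton of the whole simulation. A minimal Python sketch, with the Claude API call stubbed out as a plain function (since a real call needs an API key and the persona system prompt):

```python
def run_simulation(get_user_input, call_model, max_turns: int = 10) -> list:
    """Run a turn-limited roleplay and return the transcript.

    `call_model` stands in for the real Claude API call carrying the
    persona system prompt; here it is any function that takes the
    transcript so far and returns the persona's next line.
    """
    transcript = []
    for _ in range(max_turns):
        user_line = get_user_input()
        if not user_line:  # trainee ends the session early
            break
        transcript.append({"role": "user", "content": user_line})
        transcript.append({"role": "persona", "content": call_model(transcript)})
    return transcript
```

Afterward, the whole `transcript` list is what you send back to Claude with the scoring prompt; the chat UI from step 4 just wraps `get_user_input` and renders the returned messages.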
Build an AI-Assisted Supply Vetting Tool
1. Build a script (Node.js or Python) that takes a supplier name and country as inputs. Use a search API — Perplexity, Tavily, or Exa — to pull publicly available information about that supplier.
2. Feed that information into a Claude API call with this system prompt: "You are a supply qualification analyst for a vetted marketplace. Produce a structured brief covering: (1) verifiable credentials, (2) named practitioners and qualifications, (3) red flags including review anomalies, (4) community sentiment, and (5) an overall trust signal score from 1–10 with reasoning."
3. The output becomes a pre-visit research brief any team member can use before committing to an in-person site visit.
4. Add a simple UI with Cursor or Replit — an input form, a loader, and a formatted output card. The entire build should take under 48 hours with AI-assisted coding.
5. Log your actual vetting decisions against the AI brief scores over time. Use that data to refine the scoring prompt and improve calibration.
Stretch goal: This architecture works for any marketplace requiring supply qualification — freelancers, contractors, financial advisors, wellness practitioners. The AI compresses the research phase so human judgment can be applied where it counts.
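The glue between steps 1 and 2 is just prompt assembly. A Python sketch using the system prompt from step 2; the message layout is an assumption, and the search and Claude calls are left as comments since both require API keys:

```python
SYSTEM_PROMPT = (
    "You are a supply qualification analyst for a vetted marketplace. "
    "Produce a structured brief covering: (1) verifiable credentials, "
    "(2) named practitioners and qualifications, (3) red flags including "
    "review anomalies, (4) community sentiment, and (5) an overall trust "
    "signal score from 1-10 with reasoning."
)

def build_user_message(supplier: str, country: str, snippets: list) -> str:
    """Combine search results into the analyst's input (layout is an assumption)."""
    sources = "\n\n".join(f"[Source {i + 1}]\n{s}" for i, s in enumerate(snippets))
    return (
        f"Supplier: {supplier}\nCountry: {country}\n\n"
        f"Public information:\n\n{sources}"
    )

# A real pipeline would fetch `snippets` from Perplexity, Tavily, or Exa,
# then send both strings to the Claude API, roughly:
# client.messages.create(model=..., system=SYSTEM_PROMPT,
#                        messages=[{"role": "user", "content": user_message}])
```

The UI from step 4 wraps this: the form supplies `supplier` and `country`, the loader covers the search and model calls, and the output card renders the returned brief.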
Build Your Own Support Flywheel in 48 Hours
1. Open your AI coding tool of choice (Claude, Cursor, Replit). Prompt it: "Build me a simple Python chatbot using a JSON knowledge base. When a user asks a question the bot cannot answer, log the question with a timestamp to a CSV file called unanswered.csv. Give me a basic web interface using Flask so my team can use it in a browser."
2. Populate the JSON knowledge base with your 20 most common customer or team questions and answers. Your support inbox, Slack history, and FAQ page are gold mines for this.
3. Run the bot for five business days. Have your team use it instead of emailing you or each other for those common questions.
4. Open unanswered.csv. Sort by frequency. The questions that appear most often are your highest-value automation targets and your roadmap for what to build or buy next.
Stretch goal: You are replicating the core insight behind the Capacity flywheel at zero cost — letting real usage tell you exactly where the leverage is. That is the problem behind the problem, surfaced in 48 hours.
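The bot from step 1, minus the Flask UI, fits in a few lines. A minimal sketch, assuming exact-match lookup against the knowledge base (a real build might add fuzzy matching) and the unanswered.csv logging the prompt asks for; the sample entries are placeholders:

```python
import csv
import time

KNOWLEDGE_BASE = {  # loaded from your JSON file in practice; entries are placeholders
    "what is our refund policy": "Refunds within 30 days, no questions asked.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
}

def answer(question: str, kb: dict = KNOWLEDGE_BASE,
           log_path: str = "unanswered.csv") -> str:
    """Return the KB answer, or log the miss with a timestamp to the CSV."""
    key = question.strip().lower().rstrip("?")
    if key in kb:
        return kb[key]
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([time.strftime("%Y-%m-%d %H:%M:%S"), question])
    return "I don't know that one yet - I've logged it for the team."
```

Every miss lands in the CSV, which is exactly the file you sort by frequency in step 4. The Flask wrapper is just a form that posts a question and renders the returned string.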
Build a Repo Consistency Agent
1. Create a GitHub repo and add your marketing copy, product spec, engineering spec, and QA docs as individual markdown files.
2. Open Claude Code and prompt it: "Build a skill that reads all markdown files in this repo and outputs a structured list of contradictions, omissions, and misalignments between them. Run it on a schedule and flag any drift whenever a document changes."
3. Write a constraints file — a markdown doc that tells the agent what consistent, on-brand output looks like. Ask Claude what constraints it needs to produce reliable output.
4. Have the agent reference the constraints file every time it runs. When any document changes, the agent checks for drift before it ships.
5. To go deeper: feed your reading sessions into a JSON blob, then have an agent run entity analysis, topic analysis, and sentiment scoring automatically.
Stretch goal: Git already provides file change tracking, identity attribution, and cryptographic provenance. You are building a security and audit infrastructure on top of something you already own.
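The skill from step 2 reduces to two operations: gather every markdown file and assemble the cross-document check. A minimal Python sketch; the prompt wording mirrors step 2, and the exact layout is an assumption the agent would refine against your constraints file:

```python
from pathlib import Path

def collect_docs(repo_dir: str) -> dict:
    """Read every markdown file in the repo into {relative path: contents}."""
    root = Path(repo_dir)
    return {str(p.relative_to(root)): p.read_text() for p in sorted(root.rglob("*.md"))}

def consistency_prompt(docs: dict) -> str:
    """Assemble the cross-document check the agent runs (layout is an assumption)."""
    body = "\n\n".join(f"## {name}\n{text}" for name, text in docs.items())
    return (
        "Read the documents below and output a structured list of "
        "contradictions, omissions, and misalignments between them.\n\n" + body
    )
```

Wire this into a scheduled run (or a git hook, since git already tells you which documents changed) and you have the drift check from step 4.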