She exited in 18 months, then walked away to find stillness
with Alyssa Eidam
Show Notes
There is a question at the center of this episode that most founders refuse to take seriously: if stillness is the source of your best decisions, why is your calendar built like it is trying to kill it?
Alyssa Eidam built AI agents for healthcare - specifically to staff clinicians and solve nursing bottlenecks - and exited in about eighteen months. Then she did something most founders skip entirely. She stopped. Traveled solo through Italy for three weeks. Let her attention tell her what mattered. And came back with a sharper view of what AI native actually means - and why the chat interface, as a default product paradigm, needs to die.
This episode is about what happens when a builder with a design background and a deep interest in human behavior examines how AI is being deployed - and identifies the gap between what founders are building and what the technology is actually capable of.
The Calendar Is Lying to You
The idea that got the most traction in Alyssa's conversation is deceptively simple: the insight that changes your trajectory almost never happens in a meeting. It happens in the car. On a walk. On a Thursday when you ditched work and went snowboarding. The hour-and-a-half drive up the mountain, alone with your thoughts, is where the ideas that actually move things come from.
That is not a soft observation about work-life balance. It is a cognitive claim. Research on the exploration instinct suggests that exploration - moving, wandering, taking in new information - is where creative pattern-matching thrives. Humans evolved to connect dots while moving through an uncertain environment. A back-to-back calendar is the opposite of that environment.
Alyssa came back from Italy on a deliberately slower ramp. Not because the work stopped mattering, but because she had experienced what unstructured attention actually produces - and could not ignore it anymore. She now treats that reset time as an input, not a cost.
AI Native Is Not a Chat Interface
Alyssa's definition of AI native is precise and worth writing down: a product that intelligently selects the specific moments in the workflow where prediction, pattern matching, and reasoning actually change what is possible - and surfaces what matters before the user even knows to ask.
The contrast case is what she calls a wrapper: an AI chat button bolted onto existing software. That is not AI native. It is the same product with a new input method. The workflow underneath did not change. The intelligence was not woven into the process. It was appended to it.
The more interesting product question is not "where can we add a chat interface?" but "where in this workflow does a user have to do something that reasoning or prediction could do faster and better?" That is a design question as much as an engineering one - and Alyssa's background in interdisciplinary design makes her unusually well positioned to see it. She spent a career designing across mediums - print, interaction, motion, UX - and her superpower is the mindset, not the deliverable. That mindset asks: what actually needs to change?
AI Makes Broken Systems Break Faster
One of the clearest things Alyssa said: if you try to plug AI into a broken system to fix it, you are not fixing anything. You are making it break faster. The dysfunction runs faster now. The bottlenecks get more visible. The failure modes accelerate.
This is not an argument against AI. It is an argument for doing the upstream thinking first. Before asking "how do we use AI here," ask what actually needs to evolve in the entire ecosystem around the problem you are trying to solve. What are the risks embedded in the system? What breaks if it runs at twice the speed? Who gets harmed by the acceleration?
She observed this in healthcare specifically - a space where the external pressure to adopt AI is high, the underlying systems are often genuinely broken, and the consequences of accelerating dysfunction are serious. The lesson applies to any vertical: AI amplifies the system it sits inside. Make sure that system is worth amplifying.
Where Healthcare AI Is Actually Going
The most interesting healthcare AI opportunity Alyssa sees is not diagnosis or scheduling - it is patient data ownership. Historically, health records have been owned by facilities, not patients. That is slowly changing. As wearables generate continuous biofeedback - sleep, HRV, voice pattern analysis for Alzheimer's or Parkinson's, emotional state - patients will accumulate data that no facility owns.
The question is who builds the layer that makes that data actionable for the individual, and how it connects back to the clinician who has thirteen minutes with you. That is a product problem, a data architecture problem, and a trust problem simultaneously - which is why it has not been solved yet.
Alyssa also flagged a few tools worth watching in adjacent spaces: Cal AI for photo-based calorie tracking, Prickly Pair for voice-based emotional health journaling in women's health, and Revel for AI-powered athlete-level care via a digital twin model. Each represents a version of the same pattern: personalized, predictive, longitudinal - rather than reactive and episodic.
Frameworks from This Episode
These frameworks have been added to the AI for Founders Frameworks Library. Filter by Product or Alyssa Eidam to find them.
- The Stillness Dividend - Unscheduled, undirected time generates disproportionate creative and strategic returns. The insight that changes your trajectory rarely happens in a meeting.
- The AI Native Test - Ask where in the workflow prediction, reasoning, or pattern matching can change what is possible - then build there. If the answer is "add a chat box," you are building a wrapper, not a product.
- The Broken System Accelerator - AI applied to a dysfunctional system does not fix it. It makes the dysfunction run faster. Audit the ecosystem before you deploy.
Glossary
Terms from this episode have been added to the AI for Founders Glossary. Filter by Alyssa Eidam to see them all.
- AI Native - A product that intelligently selects moments in the workflow where prediction, pattern matching, and reasoning add genuine value - not one that bolts a chat interface onto an existing experience.
- AI Wrapper - A product built on top of AI APIs without rethinking the underlying workflow. Adds an AI input method to an existing product without changing the process underneath it.
- Predictive Surfacing - The AI capability to surface relevant information, context, or action before the user knows to ask. The difference between a reactive tool and a proactive system.
- Broken System Accelerator - The pattern where AI applied to a dysfunctional system speeds up the dysfunction rather than correcting it. A reminder that AI amplifies the system it sits inside.
- Exploration Gene - An evolutionary predisposition toward wandering, connecting new information, and accepting higher risk. Linked to creative output and dot-connecting - and the reason unstructured time generates disproportionate insight.
- Stillness Dividend - The disproportionate creative and strategic return generated by unstructured, undirected time. The insight that changes your trajectory is more likely to happen on a solo drive than in a sprint review.
- Patient Data Ownership - The emerging shift in healthcare from facility-owned patient records to individual patients controlling and owning their own health data - enabling personalized, longitudinal, AI-powered care.
Q&A: What Founders Ask After This Episode
How do I know if what I'm building is AI native or just an AI wrapper?
Alyssa's test: identify a specific moment in your user's workflow where the system currently requires human judgment, searching, or waiting - and ask whether prediction or pattern matching could eliminate that friction entirely. If the AI addition is a chat box where users type questions, you've wrapped. If the AI watches the process and removes a step the user didn't know they were wasting time on, you've built native.
Is there a way to validate whether my existing system is worth augmenting with AI?
Run the broken system check first. Ask: if this workflow ran at twice the speed, what would break? Who gets hurt? What errors amplify? If the answer reveals significant dysfunction, solve that first. AI applied to a sound system compounds the value. AI applied to a broken one compounds the failure.
How should I practically build more stillness into my schedule as a founder?
Alyssa's approach is not about blocking off meditation time - it is about honoring the signal when your attention drifts to a problem during unstructured time. The drive, the walk, the snowboard day: these are not distractions from work. They are often where the work actually gets done. The practical move is to stop treating every hour of non-meeting time as inefficiency and start treating it as a different kind of output.
What does a genuinely AI native healthcare product look like?
It is longitudinal and predictive, not episodic and reactive. It owns or connects to the patient's continuous biofeedback - wearables, voice analysis, behavioral patterns - and surfaces relevant information to both the patient and the clinician before a symptom is reported. The thirteen minutes a patient gets with a doctor becomes most useful when the AI has already done the pattern-matching work and the human connection is reserved for judgment, nuance, and relationship.
Why does Alyssa say chat interfaces should die?
Not because conversation is the wrong modality - it is not. It is because chat has become a lazy default: add a chat box, call it AI native. The problem is that it puts the burden of knowing what to ask on the user. A truly intelligent system watches the workflow, detects the bottleneck, and removes the need to ask. It is the difference between a search bar and a recommendation. One requires the user to already know what they need; the other surfaces it before they do.