
“Just Use AI” Is the Most Dangerous Advice in Tech
with Geoff Gibbins, Human Machines
Show Notes
Geoff Gibbins has spent 20 years helping some of the world's largest companies - Coca-Cola, Nestlé, MasterCard, Pfizer - figure out new products and new business models. For the last three years, that work has centered on a single question: not how do you use AI, but how do you redesign the way humans and AI work together to do things neither could do alone.
His consultancy, Human Machines, starts from an uncomfortable premise: most enterprise AI investment is FOMO-driven, and most companies are buying the same tools as their competitors and calling it strategy. The real competitive advantage - and it's one of the hardest to copy - is genuinely redesigning how you work.
The Tool Trap: Why Buying AI Isn't a Strategy
Vendors are selling enterprise AI tools built around how most companies already work. Adopt those tools and map them to your existing process, and you get a slightly more efficient version of what you had before - and so does every competitor who bought the same tool. The moat isn't the tool. The moat is the workflow you build around it that nobody else has.
Geoff's frame: ask not how to automate each step in your current process, but what process you would design from scratch if you could do things that weren't previously possible. A content team using AI to move faster through the same waterfall isn't the same as a content team that generates a hundred variants, tests them with synthetic market data, publishes the winners, and uses real-world engagement as a live feedback loop. Same tool. Different architecture. Completely different output.
300 Ideas Before Lunch: Redesigning the Innovation Process
Most product design processes are engineered around human cognitive limits - how many post-it notes you can put on a wall, how many concepts a team can evaluate in a session. That's why processes converge on six to seven customer needs and six to seven product ideas. The constraint isn't the market; it's human working memory.
With AI, that constraint disappears. You can generate 300 customer needs, identify 400 product concepts, and run synthetic testing against all of them before a human team ever convenes. The role of the team changes: instead of generating options, they're selecting and stress-testing the handful of options that survived automated filtering. Fewer steps. Higher-quality inputs at every human decision point. And you use the actual market - not a focus group - as the feedback loop.
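To make the shape of that pipeline concrete, here is a minimal Python sketch of a generate-then-filter process of the kind Geoff describes. Everything in it - the function names, the random stand-in for synthetic testing, the cutoff of seven - is a hypothetical placeholder, not Human Machines' actual tooling.

```python
# Hypothetical sketch of a redesigned ideation pipeline: generate far
# more options than a workshop could hold, filter automatically, and
# hand humans only the survivors.
from dataclasses import dataclass
import random

@dataclass
class Concept:
    idea: str
    synthetic_score: float = 0.0  # stand-in for synthetic market testing

def generate_concepts(needs: list[str], per_need: int = 2) -> list[Concept]:
    """Stand-in for an LLM call expanding each customer need into concepts."""
    return [Concept(f"{need} -> concept #{i}") for need in needs for i in range(per_need)]

def synthetic_test(concept: Concept) -> float:
    """Stand-in for scoring a concept against synthetic market data."""
    return random.random()  # replace with a real model of market response

def shortlist(needs: list[str], keep: int = 7) -> list[Concept]:
    concepts = generate_concepts(needs)  # e.g. 300 needs -> hundreds of concepts
    for c in concepts:
        c.synthetic_score = synthetic_test(c)
    ranked = sorted(concepts, key=lambda c: c.synthetic_score, reverse=True)
    return ranked[:keep]  # humans convene only around the survivors

if __name__ == "__main__":
    needs = [f"customer need {n}" for n in range(300)]
    for c in shortlist(needs):
        print(f"{c.synthetic_score:.2f}  {c.idea}")
```

The point of the sketch is where the human sits: at the end, selecting and stress-testing, not at the start generating.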
The Collaborative Results Index
Most companies measure AI adoption by usage: how many prompts, how many tools deployed, how many hours saved. Geoff measures something harder: the quality of the human-AI collaboration itself. His framework - the Collaborative Results Index - tracks three dimensions.
First, results: what's the actual impact of the collaboration, not just the activity? Second, relationship quality: how well are humans engaging with AI as a collaborator - asking good questions, thinking critically, or just accepting the first output? Third, resilience: is the human developing domain expertise through the collaboration, or outsourcing judgment and getting weaker over time?
That third dimension is the dangerous one. Using AI to think faster is leverage. Using AI to avoid thinking entirely is atrophy. The muscle you don't train, you lose.
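As a thought experiment, the three dimensions could be captured in something as small as the sketch below. The 0-to-1 scales, field names, and flat averaging are illustrative assumptions, not the published CRI scoring model.

```python
# Illustrative sketch of the Collaborative Results Index's three
# dimensions; equal weighting is an assumption, not Geoff's model.
from dataclasses import dataclass

@dataclass
class CRIScore:
    results: float       # impact of the collaboration, not activity
    relationship: float  # question quality, critical engagement with outputs
    resilience: float    # is the human's own expertise growing or atrophying?

    def overall(self) -> float:
        return (self.results + self.relationship + self.resilience) / 3

# A high-results, low-resilience profile flags exactly the atrophy risk
# described above: output is fine today, judgment is eroding.
session = CRIScore(results=0.9, relationship=0.7, resilience=0.3)
print(f"CRI: {session.overall():.2f}")
```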
Metacognition: The AI Skill Nobody Is Teaching
Companies invest heavily in prompting skills. Almost nobody teaches metacognition - the deliberate practice of thinking about how you're thinking. In an AI-native workflow, this is the foundational skill: what should I spend my cognitive capacity on? What questions should I even be asking? What should I outsource, and when does outsourcing something make me worse at it over time?
Geoff's framing: outsourcing thinking to AI is great - as long as it's deliberate. Using AI to spot blind spots, surface biases, stress-test assumptions: all leverage. Reflexively accepting the first answer to avoid thinking: atrophy. The best use of AI isn't to paint for you; it's to give you a block of marble so you can chisel. The human judgment is in the selection and refinement, not the generation.
AI Trust: The New SEO Is Consistency
As buyers increasingly ask AI systems - not search engines - for vendor recommendations, the signals LLMs use to evaluate trust are becoming strategically important. Geoff's key insight: LLMs don't just read your website. They look for consistency of narrative across the entire internet - your site, Reddit, Wikipedia, review platforms, press coverage. A company with great self-authored content and a different story on third-party platforms gets down-weighted, because the model detects the discrepancy. The new SEO is epistemic consistency: your story needs to be the same everywhere, or AI systems will quietly deprioritize you.
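Nobody outside the model labs knows the exact trust heuristics, but the consistency idea can be illustrated with a toy score: represent what each source says about a brand and measure how tightly the claims agree. Everything here is an assumption for illustration - real systems would use semantic embeddings, not the bag-of-words comparison standing in for them below.

```python
# Toy illustration of "epistemic consistency": how similar is the story
# each source tells about a brand? Bag-of-words cosine similarity is a
# crude stand-in for semantic comparison.
from collections import Counter
from itertools import combinations
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def consistency_score(claims_by_source: dict[str, str]) -> float:
    """Average pairwise similarity of what each source says about the brand."""
    vectors = {src: Counter(text.lower().split()) for src, text in claims_by_source.items()}
    pairs = list(combinations(vectors.values(), 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

sources = {
    "own_site": "enterprise ai workflow redesign consultancy",
    "reddit":   "workflow redesign consultancy focused on enterprise ai",
    "reviews":  "generic marketing agency selling ai tools",  # the discrepancy
}
print(f"consistency: {consistency_score(sources):.2f}")
```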
Key Frameworks
- The Collaborative Results Index - Measure human-AI collaboration on three axes: quality of results, quality of dialogue and relationship, and resilience of process (are humans developing expertise or atrophying?).
- Process Redesign vs. Process Automation - Don't automate your existing workflow steps. Ask what process you'd design from scratch if you could do things that weren't previously possible. 300 ideas, not 6–7.
- Metacognitive AI Collaboration - Deliberately decide what cognitive work to keep in your brain vs. outsource to AI. The quality of the questions you ask matters more than the quality of your prompts.
Tools Mentioned
- ChatGPT - OpenAI's AI assistant; one of the three assistants Geoff's Corrx extension monitors for collaboration quality.
- Gemini - Google's AI assistant; monitored alongside ChatGPT and Claude in Geoff's collaboration research.
- Claude - Anthropic's AI assistant; the third of the three assistants Corrx tracks.
- n8n - Workflow automation platform; mentioned as representative of automation tooling in enterprise stacks.
- Corrx.ai - Geoff's in-development Chrome extension that monitors and scores human-AI collaboration quality in real time. Alpha testers can sign up at corrx.ai.
Glossary
- Collaborative Results Index (CRI) - Geoff's framework for measuring human-AI collaboration quality across three dimensions: results delivered, quality of dialogue and relationship, and resilience of the human's domain expertise over time.
- Metacognition - Deliberately thinking about how you're thinking: what to keep in your brain, what to outsource, what questions to ask. The foundational AI collaboration skill that most companies never explicitly teach.
- Process Redesign - Designing workflows from scratch based on what AI makes newly possible, rather than automating existing steps. Results in fundamentally different architectures - not just faster versions of the old process.
- Confabulation - Human psychology term for the way memory reconstructs rather than records events. Parallel to AI hallucinations: both are a system constructing a plausible story from incomplete information, not lying intentionally.
- Personal Data Lake - An individual's curated pipeline of personal context shared with AI tools - the personal-scale equivalent of an enterprise data lake. Raises questions of trust: what data do you share, with which systems, and does it stay on-device?
- AI Trust / LLM Visibility - The signals LLMs use when evaluating which brands and vendors to surface in responses. Consistency of narrative across all internet sources is the primary trust signal - not just owned content.
- Human AI Team - The near-future organizational unit in which AI is a genuine collaborator integrated into daily work - not a tool occasionally consulted - with feedback flowing between humans and AI in both directions.
FAQ
Why isn't buying the same AI tools as your competitors a strategy?
Because everyone's buying the same tools. Every large enterprise is getting sold generically built agents and point solutions that automate how most companies already work. If you're using the same technology, the same tools, and the same data as all your competitors, it's very hard to differentiate. The real competitive advantage comes from redesigning how you work - integrating the best of what humans and AI can do together in ways that are unique to your organization. That's hard to copy, which is precisely why it's a moat.
What's the difference between automating a process and redesigning it?
Automation takes your existing process steps and makes them faster. Redesign asks: given what AI makes possible, what process would we build from scratch? A product design process built for human cognitive limits has six to seven customer needs, six to seven ideas, and a focus group. A redesigned process generates 300 needs, 400 ideas, tests them all synthetically against market data, and only convenes human judgment around the handful of options that survived filtering. The output isn't faster - it's categorically different. Fewer steps, higher-quality inputs at every human decision point, and the actual market as the feedback loop instead of a simulation of it.
What is the Collaborative Results Index and what does it measure?
The CRI is a framework for measuring the quality of human-AI collaboration - not just usage. It tracks three things: results (what's the actual impact?), relationship quality (how well are humans engaging - asking good questions, thinking critically, or just accepting the first answer?), and resilience (is the human developing expertise through the collaboration, or outsourcing judgment and atrophying?). Most companies measure AI adoption by usage metrics. Those tell you almost nothing about whether the collaboration is actually making people better or just faster.
Is outsourcing your thinking to AI actually bad?
Not inherently - the key is being deliberate about it. Using AI to spot blind spots, surface biases, stress-test assumptions, and figure out what you're missing is pure leverage. Reflexively outsourcing everything to avoid the discomfort of thinking is atrophy: the cognitive muscle you don't use, you lose. Metacognition - deliberately deciding what to think yourself and what to hand off - is the skill that matters. And it matters more than any prompting technique. If you can get an answer to any question in two seconds, the scarce skill isn't answering. It's knowing which questions are worth asking.
How do LLMs decide which companies to trust and surface in responses?
They look for consistency of narrative across the entire internet - not just owned content. A company with a polished website and a different story on Reddit, Wikipedia, and third-party review platforms gets down-weighted, because the model detects the discrepancy between what the company says and what independent sources say. The practical implication is that the new SEO is epistemic consistency: your story needs to be accurate and the same everywhere, because AI systems are aggregating all of it simultaneously. Great self-authored content paired with contradictory third-party coverage won't hold up.
What is Corrx.ai and why is Geoff building it?
Corrx.ai is a Chrome extension in alpha development that monitors the quality of your human-AI collaboration across ChatGPT, Gemini, and Claude in real time. Not tracking what you say - tracking how well you're collaborating: whether you're asking good questions, whether you're outsourcing judgment at the wrong moments, whether you're unnecessarily switching between tools when the capability you need is already in front of you. The goal is a measurement system and real-time coaching for collaboration quality, not just usage. Alpha testers can sign up at corrx.ai.