I’ve been to enough tech conferences to develop a finely tuned radar for keynotes that are actually worth your time versus ones that are just vibes and venture capital optimism dressed up in slide decks. Mike Amundsen’s Wednesday afternoon keynote at Arc of AI in Austin — titled Thinking with Machines — landed firmly in the former category. I’m still chewing on it, which is the best possible sign.
An idea from a 147-year-old “movie”
Amundsen kicked things off with Eadweard Muybridge’s famous 1878 “Horse in Motion” experiment, where Muybridge set up a row of cameras, had a horse run past them, and then strung the resulting photos together into what we’d now recognize as the world’s first motion picture.
He used this particular example because there’s art based on “Horse in Motion” hanging in the hotel where Arc of AI is taking place. It’s in the hallway leading to the elevators and first-floor rooms, so it’s almost impossible to miss:
[ picture here ]
The point wasn’t “look, a fun historical curiosity, right here in the hotel!” The point was: our brains are story-completion engines. Show us a fast enough series of still images and it’s no longer a series of photos. It becomes a movie, a continuous stream of reality. It’s innate, and we can’t help it. We’re wired to fill in the gaps and manufacture coherence even when it isn’t there.
This is pretty much what’s happening with AI right now. We’re using words like “feeling,” “thoughtful,” and “trusting” to describe systems that are, at their core, sophisticated pattern-completion engines. Our brains are doing what they always do: making up a pretty good story to explain away something they weren’t prepared for.
That’s both exciting and a little terrifying. Mike was kind enough to call it “both good news and bad news” rather than just setting the room on fire.
The call center problem
Before getting into the historical heroes portion of the program, Mike took a detour through the AI-and-customer-service story that you’ve probably seen play out in the headlines. You know the one: the overreach-and-backtrack cycle that happens whenever new technology meets an industry unprepared for its consequences.
But he wasn’t making the usual point. He wasn’t talking about chatbots giving bad advice or AI agents going rogue. He was talking about the Pareto principle, a.k.a. “The 80-20 Rule.”
Here’s the setup: roughly 80% of customer service calls are easy. They’re things like password resets, store hours, and return policies, all of which a moderately caffeinated human can handle on autopilot. The remaining 20% are brutal: long, complicated, and emotionally charged; these are the kinds of calls that take everything you’ve got.
It turns out that when companies automate the call center, they don’t automate the hard problems. They do what profit-minded entities do and go after the low-hanging fruit. They automate the easy calls first. After all, it’s the rational business decision.
This yields a (supposedly) unintended consequence: the humans who remain now have a 100% hard-call day, every day. No easy wins to catch your breath. No quick “You’re welcome, have a great day!” to reset your mood between the difficult ones. Just wall-to-wall complexity and frustration, shift after shift.
“This is burnout,” Mike said, in summary. “Big time burnout.”
And nobody planned for it, because nobody was thinking past the efficiency gain. These are unintended consequences, and the call center story was just the warmup for a much bigger version of the same problem.
Three visionaries who saw this coming
Mike walked us through three figures from computing history who all had the same essential insight: computers should extend human thinking, not replace it.
Vannevar Bush headed the U.S. Office of Scientific Research and Development during World War II, overseeing the Manhattan Project, which meant his job was making sure buildings full of geniuses had everything they needed to think at maximum capacity. He noticed that the real breakthroughs happened in the hallways and at dinner tables, when scientists made unexpected connections across disciplines.
He called it a “virtual brain,” a gestalt where the sum was greater than the parts. In 1945, he wrote “As We May Think,” which described the memex: a personal workstation (built around microfilm, because it was 1945) that would let you create, store, and share “trails” of linked information. That gave us the intellectual ancestor of the hyperlink, more than four decades before Tim Berners-Lee.
J.C.R. Licklider was the “party animal” of DARPA who essentially funded the creation of the internet. Licklider wrote about “man-computer symbiosis” and had a very specific vision of how the relationship should work: computers were for doing, not deciding. Computing devices should handle the drudgery and the risky mechanical work, and leave the work of judgment, creativity, and choosing to us humans.
That separation between doing and deciding was, for Licklider, the whole point. And Mike’s argument is that we’re currently watching that line get extremely blurry in ways that would have alarmed Licklider considerably.
Douglas Engelbart is known particularly for two things:
- Inventing the mouse (reluctantly, because he needed it for a demo), and
- Giving what’s now known as “The Mother of All Demos” in 1968.
The Mother of All Demos was a 90-minute solo stage performance in which Engelbart debuted, among other things:
- the mouse
- the graphical interface
- hypertext
- collaborative editing
- video conferencing
- screen sharing
- version control
In 1968. Engelbart’s driving obsession was “bootstrapping,” a word he used to describe using computers to make people smarter so they could build smarter computers so they could become smarter still. The idea was to create an upward spiral of human capability, with technology as the lever.
The throughline connecting all three is that they all saw technology as a thinking partner, not a replacement for thought.
De-skilling: The February Anthropic study
This is the part of the talk I expect to be quoting for months.
Anthropic released a study in February looking at how AI tool use affects skill formation, which is a fancy-pants term for what the rest of us might call “learning.” They split developers into two groups: one with AI assistance, one without. Both groups were given a codebase they’d never seen before, in a language they knew, with bugs to fix.
Both groups finished in roughly the same time. The AI-assisted group spent more of that time talking to the AI than actually looking at the code, but the end result was comparable.
Then came the second task: a different, similar codebase with similar bugs.
The non-AI group solved it significantly faster than the first time. They’d learned. The AI group took about the same time as before.
The difference is what Amundsen called embodied knowledge, which is the stuff that gets installed in your brain through struggle and error and figuring things out the hard way. The non-AI group had gone through trial and error on the first task. Those mistakes became capability. The AI group had outsourced the trial-and-error loop to the machine, and when the machine wasn’t holding their hand anymore, they were roughly where they started.
The study went further. It wasn’t just a binary “AI or no AI” exercise, but featured a gradient of engagement styles. They found that the more actively engaged a person was in solving the problem themselves, regardless of whether they used AI, the more they learned and the better they transferred that knowledge to new problems. Engagement is the key variable. The AI is just one factor in whether engagement happens.
The creative loop that AI disrupts
Mike has a framework for this. It’s a three-stage creative loop that he argues is the core differentiator between human and machine cognition.
- Brainstorm: Generate lots of ideas without censorship. Volume is the goal.
- Refine: Evaluate, narrow, follow promising leads, and backtrack when needed.
- Execute: Build the thing you’ve decided to build.
Every creative domain uses some version of this loop, from musicians to athletes to architects to engineers. The loop is how humans make decisions and build things that are genuinely new.
AI, Mike argues, is great at brainstorming. It’ll generate ideas you’d never have thought of, and you’ll always find gems among them. It’s mediocre at refining. You can design interactive experiences that scaffold the refining process, but it doesn’t happen automatically. It’s reasonably good at execution, which is exactly where we’ve focused almost all our energy and tooling.
The biggest problem, though, is one that doesn’t fit neatly into any of those three stages: AI is terrible at stopping.
“Generate an idea, generate an idea, generate an idea. Okay, let me refine these. And then at the bottom it says, ’You know, it would make this really cool if I added an image.’”
AI always wants to do one more thing. It’s a dopamine-delivery machine with no off switch, and the cognitive load of constantly managing the firehose is real. Harvard researchers are apparently already calling it “AI brain fry.” The call center paradox is a microcosm of the larger situation: we’ve outsourced the easy cognitive work to the machine, and now we’re spending all our time on the hard parts.
The Coach Model
So what’s the alternative? Amundsen’s answer is what he calls AI coaches.
The idea: instead of building AI systems that do work, build AI systems that guide you through doing work. Write system prompts (or “skill files,” or whatever your shop calls them) that embody a coaching personality: asking questions, surfacing options, making the human choose, pausing at decision points, and, crucially, stopping when the task is done.
He demoed a simple example: a coach that walks you through building a small API. It explains its scope upfront. It asks you to confirm you’re ready before proceeding. It presents choices with context (“Most APIs in our system use JSON. Are you okay with text?”). It pauses before moving from refinement to execution. It waits for your explicit go-ahead before generating code. And when it’s done, it says “We’re all done here. Stop.”
The human is always in charge of the pace. The machine doesn’t proceed without confirmation. The decisions are yours.
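To make the pattern concrete, here’s a minimal sketch of that confirmation-gated loop. The model call is deliberately left out — in a real coach the stages would be driven by an LLM behind a prompt like the one shown — and all of the names here (`CoachSession`, `COACH_PROMPT`, the stage list) are my own illustrations, not Amundsen’s actual implementation:

```python
# Illustrative sketch of a "coach" that refuses to advance without
# explicit human consent, and stops when the task is done.

COACH_PROMPT = """You are a coach, not a doer.
- State your scope up front, then wait for the user to confirm.
- Present choices with context; never pick for the user.
- Pause before moving from refinement to execution.
- When the task is complete, say so and stop. Do not suggest extras."""

class CoachSession:
    """Walks through the creative-loop stages, gated on confirmation."""

    STAGES = ["brainstorm", "refine", "execute", "done"]

    def __init__(self):
        self.stage_index = 0

    @property
    def stage(self) -> str:
        return self.STAGES[self.stage_index]

    def advance(self, user_reply: str) -> str:
        """Move to the next stage only if the human explicitly says yes."""
        if self.stage == "done":
            return "We're all done here. Stop."
        if user_reply.strip().lower() not in {"yes", "y", "go ahead"}:
            return f"Staying in {self.stage}. Tell me when you're ready."
        self.stage_index += 1
        if self.stage == "done":
            return "We're all done here. Stop."
        return f"Okay, moving to {self.stage}. Confirm before I continue."
```

The design choice worth noticing is that the gate is structural, not a politeness baked into the prompt: the session object literally cannot proceed past a stage, or keep generating after “done,” without an affirmative reply from the human.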
It sounds almost quaint compared to the “give the AI five monitors and let it loose” approach that’s been trending lately. But Mike’s been building these coaches for nine months, and the principle is backed by the research: high interactivity plus genuine engagement plus the human making real decisions equals actual learning. The result is embodied knowledge, the kind you can carry to the next problem.
Engelbart’s bootstraps and ours
Mike closed by coming back to Engelbart’s bootstrapping vision. Engelbart, it turns out, got into computers because he read “As We May Think” (Vannevar Bush’s 1945 article) while stationed on a Pacific island during his postwar military service. Remember, this was just a magazine article, and it changed the direction of his life!
The chain goes: Bush writes the article → a young Engelbart reads it → Engelbart invents tools that help humans think better → those tools help us build more tools → and so on, upward.
That’s the version of AI development Mike is asking us to choose. Not the Terminator version. Not the robots-take-over-and-destroy-humans version. Instead, it’s the Licklider/Engelbart version, where technology makes us smarter, preserves the creative loop, and keeps humans in the deciding seat while offloading the drudgery.
He closed with Alan Kay’s line: “The best way to predict the future is to invent it.” And he made sure we heard the warning embedded in it: other people are also inventing futures, and not all of those futures are ones we’d choose.
This was, for my money, a contender for the most substantive talk at Arc of AI. It’s the kind of talk that gives you not just something to think about on the drive home, but something to actually do differently. I’m already reconsidering some of my own AI tooling habits, which is the highest compliment I can pay to a conference keynote.
If you want to dig deeper into the coaching approach Amundsen has been developing, he mentioned he’s working on a book. I’ll link it here when it’s available.