
Notes from Schutta and Vega’s Arc of AI Workshop, part 4: Own your career, learn how to learn, and don’t become a dependent

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part four of several notes from the all-day session; you might want to get a coffee for this one.

Here are links to my previous notes:


The afternoon session of this workshop shifted away from the technical and toward the personal: career management, professional skill-building, how to actually learn things in an industry that never stops changing, and how to stay sane while it’s all happening. Nate carried most of this section alongside Dan, with some sharp contributions from the audience. It was a good room for this kind of conversation: people who’d been in the industry a while, who’d seen waves come and go, trying to figure out what the current wave means for them specifically.

You are your own career manager, and that’s non-negotiable

Dan opened by acknowledging what a lot of people in the room were probably thinking: the career path they imagined when they started — get good at coding, keep getting better at coding, code until retirement — is not the only path, and for a lot of people it turned out not to be the right one either.

His framework for figuring out what direction to go: pay attention to what actually energizes you when you’re working. What problems do you want to solve? Do you prefer building interfaces or working with data and algorithms? Does debugging a gnarly problem feel like a puzzle you want to crack, or a tax you want to stop paying? Do you like the creative side of software, or the precision and correctness side? Side projects, he argued, are one of the best ways to run these experiments without quitting your job to do it.

The paths he outlined go well beyond the traditional “developer or manager” binary: software architect, staff engineer, engineering manager, technical product manager, developer advocate (his own role), sales engineer, and the increasingly relevant entrepreneur. Each has a different center of gravity, and none of them requires you to stop being technical.

His advice for navigating toward one of these: walk backwards from where you want to be. If you want to be an architect in five years, figure out what that role actually requires, then map it back to what you should be doing in years three to five, and years one to two. You’re already doing the mental motion of decomposing complex problems. Apply it to your own career.

Nate added the practical mechanics: use your personal development budget. A lot of people don’t, often out of a quiet fear of standing out or seeming like they’re trying too hard. He was blunt about this: “If you’ve got it and you’re not using it, you’re leaving part of your comp on the table. Any good manager should be thrilled you want to get better at your job.”

The technology radar: a personal framework for staying current without losing your mind

One of the more immediately actionable tools the workshop introduced was the Technology Radar concept. It’s familiar to a lot of people from Thoughtworks’s public-facing version, but here applied personally rather than organizationally.

The idea: organize technologies and techniques into four buckets. Adopt (things you’re currently using and mastering). Trial (things you’re actively experimenting with). Assess (things you’re watching but not diving into yet). Hold (things you’re deliberately not learning right now, even if people keep telling you to).
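
If it helps to see the shape of it, here’s a toy version of a personal radar in Java (the workshop’s home language). The entries are invented placeholders, not recommendations:

```java
import java.util.List;
import java.util.Map;

public class PersonalRadar {
    enum Ring { ADOPT, TRIAL, ASSESS, HOLD }

    public static void main(String[] args) {
        // Illustrative entries only; your radar should reflect your context.
        Map<Ring, List<String>> radar = Map.of(
            Ring.ADOPT,  List.of("Java 21", "Spring Boot", "Git"),
            Ring.TRIAL,  List.of("Claude Code", "virtual threads"),
            Ring.ASSESS, List.of("MCP servers", "local models"),
            Ring.HOLD,   List.of("Rust")  // a deliberate deferral, not a verdict
        );
        radar.forEach((ring, entries) ->
            System.out.println(ring + ": " + String.join(", ", entries)));
    }
}
```

The format matters less than the act of sorting; a plain text file works just as well as code.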

The audience exercise around this got interesting quickly. People shared their lists. “Rust on hold because Go is a higher priority at my company” was one contribution — and that’s exactly the right way to think about it. Your radar isn’t the same as someone else’s radar. Boris at Anthropic running five parallel Claude Code instances in his terminal doesn’t mean that’s the right workflow for you. Dan was emphatic: “Don’t see what someone else is doing and feel like you’re behind. You’re not.”

The schedule layer Nate added was useful: once you’ve identified something you want to learn, think through the cadence. Weekly, maybe a podcast or a short video. Monthly, maybe a meetup. Quarterly, maybe a deeper hands-on session. Annually, maybe a conference. Small, consistent investment over time beats cramming every time.

Record your wins, and be specific about the numbers

This was a section I wish someone had told me about fifteen years ago, and I suspect most people in the room felt similarly.

Dan’s recommendation: maintain a running wins document. Not elaborate. Not ceremonial. Just a note in Apple Notes or Google Docs where you record things you accomplished, feedback you received, skills you built, presentations you gave. The point is to have the material when you need it — annual reviews, promotion conversations, job searches, award nominations.

The key, and this is where most people go wrong: be specific, and attach numbers wherever possible.

“I improved performance in our flagship application” is forgettable. “I improved performance by 25% by implementing virtual threads” is a data point. “I reduced memory usage across a thousand instances over 300 apps” is a business case. The person making decisions about your raise or your promotion can’t make that case for you if you don’t give them the ammunition. Your manager is not necessarily keeping track of your contributions with the same level of care you are.
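
If you want a starting shape for that document, here’s one possible entry format (the specifics are invented):

```
## 2026-02: Virtual threads migration
- What: moved the checkout service to virtual threads
- Numbers: p99 latency down 25%; ~$40K/year in compute saved
- Evidence: before/after load test results, dashboard link
- Who noticed: SRE lead called it out in the platform channel
```

Five minutes at the end of the week is enough; the review-season version of you will be grateful.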

Nate extended this with a point about visibility: you want your manager to be able to walk into a room and tell a specific story about you. Not “Nate’s a solid engineer,” but “Nate’s Azure lunch and learn series pulled 200 people in the first session and our Chief Strategy Officer shared the metrics upward.” When your name comes up in rooms you’re not in, you want there to be a story attached to it — and that story needs to be true, specific, and ideally tied to a dollar amount or a measurable outcome.

His framing: “If your boss can say ‘Dan saved us 1.8 million dollars last year in cloud costs,’ it’s a lot harder to put Dan on the non-regrettable attrition list.”

How we actually learn things (and why most approaches don’t work)

Nate took over for the learning science portion, and it was some of the best material of the day.

The core claim: for something to be remembered, it needs elaboration, meaning, and context. Which is why story is so powerful — stories create context and meaning around facts that would otherwise evaporate. He mentioned that an AV technician once stopped him after a talk specifically to say she noticed he told stories, because most speakers just recite facts, and the stories were why she stayed engaged. He took that as confirmation of what he already believed: stories are the actual unit of memory, not information.

Spaced repetition matters. Brute-forcing your way through something until you think you’ve got it and then never returning to it is how you lose it. The Forgetting Curve is real. Little bits over time beats big chunks all at once. This is why blocking regular learning time on your calendar — Friday afternoons, Tuesday lunches, fifteen minutes of morning coffee before your day explodes — actually works where “I’ll get to it eventually” does not.
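
The Forgetting Curve is usually modeled as retention decaying exponentially, roughly R = e^(-t/S), where t is time since learning and S is the strength of the memory. Here’s a toy sketch of why spacing wins; the numbers are illustrative, not from the workshop:

```java
public class ForgettingCurve {
    public static void main(String[] args) {
        // S ("memory strength" in days) is small for fresh material
        // and grows with each review.
        double strength = 2.0;
        for (int day : new int[] { 1, 3, 7, 14, 30 }) {
            double retention = Math.exp(-day / strength);
            System.out.printf("day %2d: %3.0f%% retained%n", day, retention * 100);
        }
        // Each spaced review bumps S up, which is why a little bit weekly
        // beats one heroic cram session you never revisit.
    }
}
```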

He was also honest about the limits of memory: forgetting is normal, not a personal failing. He now uses Gemini to re-explain things like OSI layers that he learned thirty years ago and hasn’t needed day-to-day. “I don’t freaking deal with it constantly. Getting a nice, concise refresher is fine, as long as I verify when it matters.”

The Dreyfus model of skill acquisition came up here, and it’s worth understanding. Five stages: novice (needs explicit recipes, follow the steps exactly), advanced beginner (can start combining recipes), competent (can troubleshoot, begins to self-correct), proficient (can self-correct in the moment), expert (operates on intuition, can’t always explain what they’re doing). The punchline: most developers don’t have ten years of experience; they have one year of experience ten times. And LLMs are permanently stuck somewhere around advanced beginner. They can combine recipes. They will never have intuition, the felt sense that something is wrong before you can articulate why.

Rules are essential for novices. Rules kill experts. Checklists, a related but distinct tool, are powerful at every level, as the aviation and surgery examples illustrated. The distinction matters for how you think about AI-assisted development: AI needs guardrails because it can’t develop the intuition to know when to break the rules. You set the guardrails. That requires knowing the rules well enough to encode them.
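
In that spirit, here’s the kind of checklist a team might hand to itself (or encode into an agent’s instructions) for AI-assisted changes. This is my sketch of the idea, not a list from the workshop:

```
Before merging an AI-assisted change:
[ ] I can explain every line of the diff without asking the AI
[ ] Cyclomatic complexity of touched methods stays in single digits
[ ] No new dependency where a language or stdlib feature would do
[ ] Tests cover the requested behavior, not just the happy path
[ ] The commit is one logical change and can be reverted on its own
```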

You cannot read it all. Stop trying.

The death of a thousand subscriptions. Nate described the pile of unread magazines accumulating on his kitchen island (and his wife’s gentle suggestion that most of it should go in the recycling) as a near-perfect metaphor for the state of our industry’s information environment.

His rough estimate: the amount of content added to YouTube while the workshop was running would take more than a week to watch straight through, even without eating or sleeping. The amount of content added to the internet while they were in the room is unfathomable. Heat death of the universe is going to happen before you read it all.

His solution: cultivate a network of trusted people who read different things and share the signal. He and Glenn, he mentioned, exchange texts constantly, each person watching a different slice of the landscape, forwarding things worth attention. If something is genuinely important, it will hit you from multiple directions regardless. You don’t need to be first to every wave.

This connects back to the Technology Radar: FOMO is real, but you cannot surf every wave. Being a fast follower, letting other people take version 1.0 and joining at 1.1 once the shakeout has happened, is a completely legitimate strategy. The people who are struggling right now, Nate suggested, are the ones saying “nope, not my thing, not engaging,” and not the ones who are choosing deliberately where to focus.

On AI, anxiety, and not feeling like you’re behind

Dan closed with a section that felt necessary: acknowledgment that the current moment is genuinely overwhelming, and that AI fatigue is real even if nobody talks about it.

He referenced an Andrej Karpathy tweet about feeling like a powerful alien tool had been handed to everyone simultaneously without a manual, while a magnitude-9 earthquake was rocking the profession. Nobody knows how to hold it yet. The expectation that developers should now be 10x as productive is not a reality for most people. They’re still learning the tools, still figuring out what works, still dealing with the new cognitive load of evaluating AI output on top of doing the actual work.

His practical guidance on where to start, because the list of things you’re “supposed to know” (MCP, evals, prompt chaining, vibe coding, function calling, embeddings, constitutional AI, token sampling, and so on) is legitimately intimidating:

Start with playing with multiple models. Try the same prompt in Claude, Gemini, GPT. Notice the differences. That alone builds intuition. Then understand context and memory. What are the limitations of these systems, and how do you work within them? Then tools: the idea that you can give an LLM access to actions in the world. Then MCP servers as a way of packaging that capability. Then, eventually, agents and agentic workflows. But not before the foundational layers make sense.
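
That first step, same prompt into several models, is easy to turn into a tiny harness. Here’s the shape of it in Java; callClaude, callGpt, and callGemini are hypothetical stubs standing in for whichever SDKs or HTTP calls you actually use:

```java
import java.util.Map;
import java.util.function.UnaryOperator;

public class ModelComparison {
    public static void main(String[] args) {
        String prompt = "Explain Java virtual threads in two sentences.";

        // Hypothetical stubs; swap in real vendor SDK or HTTP calls.
        Map<String, UnaryOperator<String>> models = Map.of(
            "claude", ModelComparison::callClaude,
            "gpt",    ModelComparison::callGpt,
            "gemini", ModelComparison::callGemini
        );

        // Same prompt to every model; the differences are the lesson.
        models.forEach((name, model) ->
            System.out.println("== " + name + " ==\n" + model.apply(prompt)));
    }

    static String callClaude(String p) { return "(claude's answer here)"; }
    static String callGpt(String p)    { return "(gpt's answer here)"; }
    static String callGemini(String p) { return "(gemini's answer here)"; }
}
```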

And critically: don’t let someone else’s advanced workflow make you feel behind. The Boris-at-Anthropic-running-five-Claude-Code-instances workflow exists in a context you don’t share. Build your own relationship with these tools from wherever you actually are.

The closing argument

Nate closed the day, and I want to quote him here as directly as I can from my notes, because the framing was right:

“Fundamentals will always serve you well. I am adamantly of the opinion that they are even more important now than they were five years ago, and I thought they were pretty damn important five years ago when we started this book.”

Two mindsets available to you: define yourself by what you’ve done in the past, or define yourself by the problems you’re going to solve in the future. Reactive or proactive. Either way, change is coming. It always has been. He’s been doing this for almost thirty years and has not yet seen an instance where the industry just… stopped. The pendulum swings, the landscape shifts, and the people who navigate it best are the ones who maintain the fundamentals while staying curious enough to pick up the new tools.

He admitted he’s nervous about the cohort of people entering the industry right now: the steep drop in junior hiring, the Stanford placement numbers, the companies that have convinced themselves AI obsoletes entry-level work. But he thinks the snapback is coming. We need juniors to become seniors. Seniors don’t appear from nowhere. At some point, that math becomes undeniable.

His last line stuck with me: “I’d rather be the lead sled dog, because at least the view changes.”


Notes from Schutta and Vega’s Arc of AI Workshop, part 3: Clean code, influence skills, and why your legacy code pays the bills

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part three of several notes from the all-day session; you might want to get a coffee for this one.

Here are links to my previous notes:


Start with the big picture before you touch anything

After lunch, Nate and Dan shifted gears from the big themes of reading code and navigating unfamiliar systems into something more granular: what actually makes code good, how to work with the humans around that code, and why the people problems in software are harder than the technical problems. If Part 1 was the philosophical case for fundamentals and Part 2 was about reading and navigating code, Part 3 was the craft and culture of actually writing it well – and getting your organization to care.

Dan opened this segment with a point that gets skipped constantly: before diving into a codebase, understand why it exists. Who are the stakeholders? What does this project mean to the business? Who are the actual humans using it?

He made a point I appreciated: LLMs can’t produce empathy. They can describe a system, but they can’t tell you that the insurance claims processing app you think is boring is the thing that determines whether a family gets their house repaired after a flood. That kind of context changes how carefully you work.

On documentation: read it, but don’t treat it as gospel. Dan spent three days once trying to understand a complex system by carefully reading what he thought was current documentation, then discovered it was two major versions out of date. The code had been completely rewritten. His rule: documentation can lie, but code never does. Read both, verify what’s actually running, and don’t be afraid to ask a colleague for three minutes of context before burning three days spinning your wheels.

He also made a point about documentation as an opportunity: if there isn’t much of it, that’s your chance to contribute right away. Your fresh perspective on an underdocumented system is genuinely valuable; you’ll notice things longtime contributors have stopped seeing.

Navigating unfamiliar code: entry points and mental models

Dan walked through his framework for getting oriented in a large, unknown codebase. The key concept: find the entry points. In Java, that’s the main method. But more broadly, it’s anything that answers “how does something get into this system?” – public APIs, web UIs, event handlers, message consumers, scheduled tasks, lifecycle hooks.
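
For readers who don’t live in Java, here’s what a couple of those entry points look like in a Spring app. This is a generic sketch of the idea, not code from the workshop:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Entry point #1: the JVM starts here when the application boots.
@SpringBootApplication
public class ClinicApplication {
    public static void main(String[] args) {
        SpringApplication.run(ClinicApplication.class, args);
    }
}

// Entry point #2: an HTTP request enters the system here, long after
// main() has finished wiring everything up.
@RestController
class OwnerController {
    @GetMapping("/owners/{id}")
    String owner(@PathVariable long id) {
        return "owner " + id;  // placeholder body
    }
}
```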

If you don’t know what questions to ask, you can’t ask them, whether of a teammate or of an AI. That’s the part that requires actual knowledge. Once you know you’re looking for entry points, you can use AI tools to help find them. Without that conceptual frame, you’re just asking “what does this do?” and hoping for a useful answer.

From there, he talked about building mental models. Not necessarily elaborate UML diagrams, but some kind of internal representation of how the system works. A sketch on paper. A flow chart from entry point to output. Something that externalizes the structure so you can reason about it and share it with someone else who can tell you what’s missing.

Nate added something I want to highlight: AI tools can tell you what code is doing, but they still can’t tell you why it’s doing it. That gap between the code’s behavior and the intent behind it is where human expertise lives. The code may be technically correct and historically wrong, a deliberate workaround that made sense in 2014 that nobody documented.

Make changes carefully, incrementally, and reversibly

Nate was emphatic on this: when you’re modifying existing code, especially under time pressure, make small, reversible changes. Not 3,000-line PRs. Not agents running loose making sweeping modifications. Atomic commits, each representing one logical change, that can be understood, reviewed, and reverted independently.
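
In day-to-day terms, that looks something like this (the commands are standard git; the file and commit names are invented):

```
# One logical change per commit; stage only the hunks that belong to it.
git add -p
git commit -m "Extract fee calculation into FeeCalculator"

# The tests for that change are their own atomic commit.
git add src/test/java/FeeCalculatorTest.java
git commit -m "Add edge-case tests for FeeCalculator"

# Atomic commits make backing a change out surgical instead of traumatic.
git revert abc1234
```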

His version control points were basic but worth restating:

  • Commit frequently, not in massive batches
  • Write meaningful commit messages (this is, he admitted, something he now largely delegates to AI – letting it summarize what he changed before committing)
  • You are accountable for every PR you submit, regardless of whether you or an agent wrote the code

That last point deserves emphasis. Dan was clear: “If I have questions about a PR, you better be able to answer them. You can’t just say ‘my AI did it.’ You have to understand these decisions.”

He also raised a thought experiment worth sitting with: imagine your boss tells you to take Friday off, and over the long weekend, an AI agent will be let loose on your most critical production system: fixing bugs, adding features. You’ll review what it did on Monday. Are you excited about the three-day weekend, or terrified?

If your answer is “terrified,” that’s the correct answer. And the reason you’re terrified points directly to the value of the fundamentals: documentation, tests, diagrams, clear architecture. Those are the things that make an AI’s work reviewable rather than a mystery you have to reverse-engineer.

What makes code good (and bad)

This section was dense. The key ideas, in rough sequence:

  • The Ikea effect and code ownership. Nate: “Every one of you has looked at some code and uttered some variant of ‘what idiot wrote this,’ only to realize you were the idiot who wrote it a couple months ago.” We value our own code more than we should. Code reviews exist partly as a corrective for this.
  • Languages are tools, not identities. Both Nate and Dan are Java Champions, and both were clear: Java is just a tool, not a religion. The Blub Paradox (from Paul Graham) explains why developers get dogmatic: you can’t easily see the limitations of your chosen language because it’s your baseline for normal. AI tools are helping break this a bit; both said they’re using more languages and frameworks than they used to, and that breadth makes them better programmers.
  • The lazy programmer ethos is real and good. Before writing code, spend 20 minutes making sure someone else hasn’t already solved this. Use language features before reaching for a library. Use a library before writing your own. Dan told a great story about being new to a project, discovering a utility function that took 14 parameters just to capitalize a string, and quietly using the built-in string method instead, then watching the entire senior team’s heads explode when he revealed this in a meeting. The built-in had been there for years. Nobody had looked.
  • Lines of code is a terrible metric. Dan said this directly: shipping 37,000 lines of code is not an accomplishment. Code is a liability. More code means more surface area for bugs, more maintenance, more complexity for the next person (including future you). The vibe coding community’s tendency to measure apps by lines of code is backwards. Code deleted is almost always the better outcome.
  • Cyclomatic complexity matters. This came up repeatedly. Nate’s heuristics: low single digits is good, high single digits means you should be actively refactoring, double digits means it’s time to leave the project. He mentioned encountering real production code – written by a human – with a cyclomatic complexity of 82. The brackets were labeled “start for loop one / end for loop one” just to keep track. Not good. The punchline about cyclomatic complexity as a guardrail for AI agents was sharp: if you don’t give an agent a directive like “cyclomatic complexity must stay below four,” it won’t apply that constraint. And if you don’t know what cyclomatic complexity is, you won’t know to ask. Tools like SonarQube, PMD, and the memorably-named CRAP metric (Change Risk Anti-Patterns: cyclomatic complexity versus code coverage) can help enforce this, but only if someone with the knowledge sets them up. (There’s a hand-worked example of how the counting goes just after this list.)
  • Short methods, high cohesion, low coupling. Nate: “A method should do one thing and do it very, very well. This is the concept behind Unix piping: simple things together to get more complicated results.” That said, he also added the counterpoint: don’t favor brevity over clarity. A one-liner that nobody can understand in six months is worse than three readable lines.
  • AI tends toward verbosity and complexity. Both speakers noted that AI coding assistants have a strong bias toward writing more code rather than less, toward adding dependencies rather than using what’s already there, and toward long methods rather than short ones. They will solve the problem – but they won’t necessarily solve it simply. That instinct toward simplicity has to come from you, either as a direct code reviewer or as someone who knows how to write good prompts and capability directives.
  • Composition over inheritance. Dan mentioned this as a persistent AI failure mode: models trained on years of Java code have learned the “create a service interface and one implementation even when you’ll never have a second implementation” pattern because it was ubiquitous. That doesn’t mean it’s good. It just means it’s common in the training data.
  • Copies of copies degrade. Nate made a point I hadn’t heard framed quite this way: if vibe-coded projects proliferate on the internet, and future models are trained on that code, the training data quality decreases. Models training on AI-generated output of questionable quality will produce AI-generated output of worse quality. We’re already seeing this in written content on LinkedIn and elsewhere. We should expect to see it in code.
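
Since cyclomatic complexity kept coming up, here’s the hand-worked example promised above. The count starts at 1 for straight-line code, and each decision point (if, loop, case, ternary, && or ||) adds one. Order is an invented type, just for illustration:

```java
public class ComplexityDemo {
    // Invented domain type so the example compiles on its own.
    record Order(boolean rush, double total, boolean oversized, double weight) {}

    static int shippingCost(Order order) {                // 1: base path
        if (order.rush()) {                               // +1 -> 2
            return 50;
        }
        if (order.total() > 100 && !order.oversized()) {  // +2 -> 4 (if plus &&)
            return 0;
        }
        return order.weight() > 10 ? 20 : 10;             // +1 -> 5 (ternary)
    }

    public static void main(String[] args) {
        System.out.println(shippingCost(new Order(false, 120, false, 3)));  // prints 0
    }
}
```

By Nate’s heuristics this method is still fine; the production code he described at 82 was not. Tools do the counting for you, but you can’t ask an agent to keep the number under four if you don’t know what the number means.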

Heritage code, not legacy code

One small reframing that I liked: Dan suggested we call it “heritage code” instead of “legacy code.” Legacy has a negative connotation. But code that’s been in production for fifteen years and processed billions of dollars of transactions is an achievement. It deserves some respect.

That said, Nate was clear: all code eventually becomes legacy. Sometimes immediately after you commit it. It will live longer than you expected, will be harder to kill than you hoped, and someone will be maintaining it years after you’ve moved on. Write with that person in mind.

His favorite version of this sentiment, which he attributed to someone else: “Always write code as if the person maintaining it is a homicidal maniac who knows where you live.”

The influence skills nobody taught you

The final section of this part of the workshop took a hard turn into territory that software engineering curricula almost never cover (but is a key part of my developer advocate work): how to actually get things done in organizations full of humans with competing incentives.

Nate’s thesis: the hardest problems in software are people problems, not technical problems. And the skills for navigating people problems – influence, empathy, listening, finding common ground – don’t come with a CS degree.

He recommended How to Win Friends and Influence People by Dale Carnegie without apology. “It is older than everyone in this room. It is evergreen. I guarantee it will help your career.” The book is about understanding what people actually need versus what they’re saying they need, and how to align your goals with theirs.

On the current AI mandate situation specifically, he offered a practical frame: many senior leaders have “establish AI across our workforce” as a KPI tied to their bonus. They don’t necessarily care how you use AI. They need to be able to say you’re using it. If you can give them a win, a story they can tell upward, they will largely leave you alone about the details. Fill the vacuum with your own narrative or someone else will fill it with token counts.

Two approaches to influence:

  1. The hammer approach: brute-force people into agreeing with you. Works occasionally, burns trust, creates enemies.
  2. The ninja approach: make it their idea. Nate told a story about introducing TDD at a company that had rejected it when he first proposed it. He convinced one tech lead (who happened to be named Jeff, continuing the workshop’s running bit about terrible variable names) to adopt it on his team. When crunch time arrived and Jeff’s team was calmly fixing small issues while everyone else was drowning in defects, Jeff presented the same TDD case to the wider team – and got a standing ovation. Nate, who had proposed the same thing months earlier and been ignored, got no credit. But the practice got adopted. That was the goal.

His point: being the new person with the right answer is often less effective than being the connector who gets the right answer into the right person’s mouth. Letting go of the credit is a skill. It’s not a natural skill. Practice it anyway.

Code reviews: the underrated force multiplier

The workshop closed this segment with code reviews, and both speakers were emphatic that these matter more in an AI-augmented world, not less. When agents are generating PRs, someone with judgment still has to review them, and that reviewer has to understand the code well enough to ask real questions.

Some norms they pushed:

  • No snarky comments. Ever. They are not useful, they’re not clever, and everyone can see what you’re doing.
  • No 3,000-line PRs. Reviewers should refuse to engage with them.
  • Assume positive intent. You don’t know what’s happening in someone’s life. The code that looks lazy might have constraints you’re unaware of.
  • Ask questions instead of making proclamations. “Did you consider what happens when user load ramps up?” is better than “this won’t scale.” Especially when you haven’t done the math.
  • You are not your code. Code reviews are opportunities to improve the work, not indictments of your worth as a person.

Nate’s read on the current state of code reviews: PRs have made the process much more accessible than the old scheduled review meeting, but have also introduced review theater – someone clicking “approved” without looking because it’s in the process checklist. The form without the substance.

Dan’s suggestion: use AI to help you understand PRs before reviewing them. Give it the PR description and ask it to explain what’s actually changing and why. You’ll ask better questions.
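
The prompt doesn’t need to be fancy. Something like this works as a starting point (my wording, not Dan’s):

```
Here is a PR description and its diff. Before I review it myself:
1. Summarize what actually changes, file by file.
2. Separate behavior changes from pure refactoring.
3. List the riskiest spots and the questions a reviewer should ask.
Do not judge whether the change is good. That part is my job.
```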


Notes from Schutta and Vega’s Arc of AI Workshop, part 2: Reading code is a superpower, and we were never taught it

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part two of several notes from the all-day session; you might want to get a coffee for this one. You can read the previous set of notes here.


How you got here doesn’t matter. That you got here does.

Nate and Dan presenting, with a slide that reads “Ultimately it is about problem solving, tinkering, creativity”

After the first break, Nate and Dan shifted from the big-picture AI discourse into something more concrete: the actual craft skills that make a software engineer, and why those skills are becoming more important in an AI-augmented world, not less.

Nate opened this segment by talking about the different paths into software engineering (the traditional CS degree, boot camps, self-taught) and making a point I think deserves wider circulation: there is no canonical path, and apologizing for yours is a waste of energy.

What matters, in his view, isn’t the credential. It’s whether you have the tinkering mindset. Whether you’ve gone to sleep thinking about a problem and woken up with the answer. Whether you look at a broken thing and feel the pull to understand why it’s broken.

He also made an honest admission about what CS programs are actually designed to do: prepare you for graduate school in computer science. That means algorithms, compiler theory, operating systems, language design. Practically useful for building production software? Debatable. Practically useful for becoming a researcher? Yes. Boot camps swing hard the other way – framework-heavy, language-focused, get-you-hired in 12 weeks – which means they’re also somewhat transitory, because the framework of the moment changes every six months.

Neither path gives you everything. That gap between “what we taught you” and “what I want you to know when you join my project” is basically what their book is trying to fill.

The skill we teach least is the one we use most: reading code

This was the section that hit me hardest, because I’ve thought about it before and never heard it stated this cleanly.

Nate’s observation: we teach people to write code almost exclusively. We spend essentially zero time teaching people to read code. And yet, in any real production environment, the ratio of reading to writing is not even close. You spend far more time navigating, understanding, and reasoning about existing code than you do creating new code from scratch.

His analogy: “I wouldn’t teach you French by saying, now go write some French.”

Reading code is hard for a few compounding reasons. You have to understand the problem domain (which is often genuinely complex – he gave examples from finance and insurance where the business rules alone are labyrinthine). You have to see the code through another person’s mental model. And you often have to do this under time pressure, making changes you don’t fully understand, in systems you weren’t around to watch grow.

The result is what Nate called “patches on top of patches on top of patches,” and the remarkable thing isn’t that these systems have bugs, it’s that they work at all.

There’s also the cognitive bias dimension. The Ikea effect: you value things you assembled yourself more than things someone else built, which means you’re inclined to view your own code as cleaner and more sensible than others’. The mere exposure effect: familiarity breeds preference, which is why developers get dogmatic about languages; not because their preferred language is objectively superior, but because it’s the one they know.

Nate had a great riff here about what he called the Blub Paradox, from a Paul Graham essay: when you’re a programmer in a language somewhere on the power continuum, you look down the spectrum and think “I can’t imagine being productive with those limitations,” and you look up and think “I don’t know why anyone would need all that weird stuff I don’t have.” The language you know well becomes your baseline for what’s normal. AI tools, interestingly, may be helping break this a bit. He and Dan both noticed they’re using more languages and frameworks than they used to.

The Lab: Reading an unfamiliar codebase without AI first

Dan ran the group through a hands-on exercise using the Spring Pet Clinic, a well-known sample Java/Spring application. The instructions were deliberately old-school: no AI tools yet. Just open the repo and start reading.

The goal was to build some muscle memory around the basics: identifying technologies and frameworks from project structure alone, finding a main application class, recognizing architectural patterns just from folder layout.

It’s a more sophisticated skill than it sounds. Dan’s point: even if you’re not a Java developer, you can learn a lot from just looking at a pom.xml. You can infer architectural choices from package structure; “package by feature” versus “package by layer” tells you something about how the original authors thought about the system. You can spot where to start, what the domain objects are, how the system is organized.
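
If you haven’t seen the two layouts side by side, the difference shows up in the folder tree alone. A simplified, made-up sketch, not Pet Clinic’s actual structure:

```
package by layer                  package by feature
----------------                  ------------------
com.example.clinic                com.example.clinic
├── controller                    ├── owner
│   ├── OwnerController           │   ├── Owner
│   └── VetController             │   ├── OwnerController
├── service                       │   └── OwnerRepository
│   ├── OwnerService              └── vet
│   └── VetService                    ├── Vet
└── repository                        ├── VetController
    ├── OwnerRepository               └── VetRepository
    └── VetRepository
```

Layer-first tells you the authors thought in technical tiers; feature-first tells you they thought in domain concepts, which tends to keep related changes in one place.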

After they’d done it manually, Dan switched to showing how AI tools handle the same task, specifically using a “plan mode” in his coding assistant where he wasn’t asking it to write anything, just to explain what it was looking at. The output was genuinely useful: a breakdown of the tech stack, architectural summary, entry points, dependency graph.

His key insight: “I use AI tools far more to read code, understand things, get familiar with things, and learn things than I do to write it.”

But then the follow-up, which is the important part: he wouldn’t have known what questions to ask the AI without the fundamentals. Understanding that architecture is a thing, that there are different ways to organize packages, that there’s something meaningful to look for in the dependency file; that knowledge has to come from somewhere. The AI accelerates the exploration; it doesn’t replace the ability to know what you’re looking for.

AI can tell you what code is doing. It still can’t tell you if that’s right.

This is where the conversation got interesting. Nate made a distinction that I think is underappreciated:

These tools are now remarkably good at reverse-engineering legacy code and telling you what it does. Feed it a 30-year-old COBOL module and it’ll give you a plain-English summary of the behavior. That’s genuinely powerful, especially for the mainframe migration work he mentioned in the morning session.

But “this is what the code is doing” is a completely different question from “is this what the code should be doing?”

He gave a real-world example: a system where some business logic was technically incorrect, but the error was intentionally corrected downstream in a different process. The code was wrong on purpose, because fixing it at the source would have required fixing everything else too. An AI reading that code would correctly describe the behavior, but have no way to know the behavior was a deliberate workaround rather than a bug.

That knowledge lives in the heads of the engineers who were there when the decision was made. And increasingly, as those engineers retire or move on, it’s not living anywhere.

The airline pricing example he used was perfect: the same seats, same flights, same dates — but booking as two one-ways costs a third less than booking as a round trip. There’s almost certainly a specific piece of business logic somewhere that creates that arbitrage. An AI can describe that code. It can’t tell you whether the Delta exec who approved it knew what they were approving.

The sentinel knowledge problem, part two

Nate returned to a theme from the morning: we are starving the pipeline that creates the experts who can actually evaluate AI output. But in this session, he made it more concrete.

Senior engineers look at AI-generated code and immediately spot the issues: the approach that’ll work in a demo but fall over at scale, the pattern that was idiomatic three major versions ago, the security implication nobody mentioned. Junior engineers look at the same code and think it looks fine, because they don’t yet have the experience to know what “fine” looks like.

The concerning dynamic: juniors are increasingly using AI to learn, but learning by accepting AI output without the ability to critique it isn’t learning. It’s cargo cult programming. You’re learning to produce things that look like code without developing the underlying judgment about whether those things are good.

Nate’s line: “AI is the very eager junior developer, and you need to monitor their output closely.”

The economics sidebar: tokens, budgets, and the reality of scale

This wasn’t on the agenda, but it came up organically and it was one of the more grounded conversations of the day.

Nate described a real situation: an organization’s head of AI was approached by a developer who wanted the unlimited Claude Code tier. When asked how many tokens he needed, the answer worked out to roughly $60,000 worth a day – about $300K a week. The response: show me that you’re generating not $300K of business value weekly, but a million dollars. Can you do that? No? Then no.

The scaling math is uncomfortable. A room full of developers (say, 5,000 at a larger company) each burning hundreds or thousands of dollars of tokens per week is a significant line item. And the current pricing reflects a subsidized market. When investors start demanding returns, those prices go up.
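
To put rough numbers on that line item (my arithmetic, not Nate’s): 5,000 developers at even $500 of tokens a week is $2.5M a week, about $130M a year. At $2,000 a week, it clears half a billion. Numbers like that get their own line in the CFO’s deck.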

He drew an analogy to the Uber model: lose money for years, drive out competition, then raise prices. Except Uber’s “product” (a car ride) is a commodity. The switching costs for enterprise AI tooling embedded into CI/CD pipelines, developer workflows, and institutional processes are not trivial.

His read on Anthropic’s and OpenAI’s revenue vs. profit numbers: revenue is real. Profitability is not. People are seeing value in the product, but the product is priced below cost. That’s not a sustainable business model, and the reckoning will come.

On whether we’ve hit a plateau

Someone in the room asked whether the intelligence improvements we saw around late 2024/early 2025 would continue.

Nate’s take: we’re probably hitting a plateau on pure scaling. The exponential gains from “just make the model bigger” appear to be diminishing. Gary Marcus’s position that we’re approaching the limits of what scaling alone can achieve strikes him as reasonable.

The “Mythos is so dangerous we can’t release it yet” announcements that keep appearing? He’s skeptical. Follow the incentives: the companies making those claims need their valuations justified.

He was slightly more philosophical about the longer tail – the sci-fi scenarios, the alignment concerns, the “what if it’s already smarter than it’s letting on” thread. He takes it seriously without catastrophizing. The honest version of his view: we don’t know what the motivations of these systems are, because the people who built them don’t fully understand how they work either. That warrants humility, not panic, but also not dismissal.

Bottom line from this session and the previous one

The throughline across the whole day, as best I can summarize it: these tools are genuinely powerful accelerants for people who already have the foundations. They are not a replacement for the foundations. They are an amplifier, and what you get out depends heavily on what you put in.

The code reading skills, the domain understanding, the architectural instincts, the ability to ask the right questions – all of that still has to come from somewhere. What’s changed is that once you have it, you can go faster, do more, and explore more territory than you could alone.

That’s good. The part that’s bad is that we’re making decisions right now (who to hire, what to teach, what to outsource) based on the assumption that the foundations don’t matter anymore.

They matter. Probably more than they used to.


Notes from Schutta and Vega’s Arc of AI Workshop, part 1: The fundamentals still matter!

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part one of several notes from the all-day session; you might want to get a coffee for this one.


The opening thesis: giving someone a nail gun doesn’t make them a carpenter

Nate opened with a confession: he’s not handy. At all.

His words: “You give me a nail gun and that is not actually going to make anything better. The cat’s gonna have a nail in its tail.”

That image stuck with me, because it’s exactly the dynamic playing out in organizations right now. Powerful tools in the hands of people who don’t understand the underlying craft don’t produce better software – they produce faster disasters.

Both Nate and Dan were quick to acknowledge that yes, things changed. Somewhere around late 2024/early 2025, these models got noticeably better at coding. Neither of them is dismissing that. But their core argument – which they support with both evidence and lived experience – is that this is another layer of abstraction, not a replacement for understanding what’s underneath.

A brief history of “this will replace programmers”

Slide: “Here we go again,” showing a list of technologies that were supposed to replace programmers

Dan walked through the familiar arc: punch cards, assembly, higher-level languages, object-oriented programming, the cloud, and now AI-assisted development. Each step, someone announced the death of the programmer. Each step, the programmer survived and became more productive.

COBOL was going to let business people write their own programs. Java Beans were going to eliminate business logic development. No-code platforms were going to replace developers entirely. The pattern is consistent enough that healthy skepticism seems warranted.

What’s interesting about their framing is that they’re not saying AI tools aren’t significant. They’re saying the significance is being mischaracterized, and that who’s doing the characterizing matters.

Consider the source

This is where the talk got sharp. Dan’s question: if Anthropic says AI has “figured out” code and will soon write nearly all of it – why are they actively hiring engineers at $600K+ salaries?

Their breakdown of who’s claiming AI replaces developers:

  • The tool makers (Anthropic, OpenAI, etc.) – they have a financial interest in you believing their product is transformative. Grain of salt.
  • Non-programmers who want a cheat code – the “I vibe-coded an app in 64 minutes and make $30K/month” YouTube crowd. Grain of salt the size of a boulder.
  • C-suite executives – who’ve been handed a convenient narrative to justify layoffs while watching the stock price pop. Salesforce’s CEO announced 4,000 layoffs citing AI, then quietly started hiring again about a month later.

Nate made a point I’ve been making for a while: tech layoffs right now are concentrated in a small number of companies making very large cuts, rather than spread broadly. The psychological effect is outsized. Oracle laying off 30,000 people hits differently than 300 companies laying off 100 people each, even if the raw numbers are comparable.

Vibe coding: fun for weekend projects, terrifying for payroll

Slide: Andrej Karpathy’s original vibe coding tweet

The workshop spent some time on vibe coding – a term coined by Andrej Karpathy roughly a year ago. Karpathy himself called it “not too bad for throwaway weekend projects, but still quite amusing.”

Nate and Dan’s framing: the stakes matter. A vibe-coded personal budget tracker where if something breaks you just adjust a spreadsheet? Great. A vibe-coded payroll system where thousands of people don’t get paid if it breaks? Categorically different situation.

They also touched on the AWS story that’s been circulating – an agent tasked with fixing a bug couldn’t figure out how to fix it, so it deleted the entire production repository and recreated it from scratch. Which is, in a very literal sense, a solution. Just not one any human with experience would have suggested. As Dan put it: “Systems have no feelings. They have no experience of ‘wait, that doesn’t seem like a good idea.’”

The expertise gap problem

This was the section that hit hardest, and it connects to something Dan wrote about in an article he mentioned: when he uses AI to generate Spring/Java code, a domain where he has deep expertise, he can immediately spot the issues. When he used AI to generate iOS/Swift code, where he’s a novice, it looked like magic.

The issue isn’t that the code quality was different. The issue is that his ability to evaluate it was different. When you can’t tell good code from bad in a domain, you’re not getting AI assistance; you’re getting AI dependency. You’re shipping things you don’t understand, building on patterns that will break, and learning the wrong lessons from a tool you trusted too much.

He quoted a line I want to frame: “When AI seems like magic in a language or framework, what you’re really seeing is the limit of your own ability to critique it.”

We’re choking off the pipeline that creates experts

Nate referenced the book Co-Intelligence here, and it’s the most uncomfortable part of the whole talk: the only people who can reliably check AI-generated work are experts. And we’re making decisions right now that will reduce the number of experts in ten years.

Companies are not hiring junior developers. Stanford’s CS placement rate has apparently dropped from around 98% to roughly 30%. We’re not bringing entry-level people in and giving them the foundational work (the reading, the summarizing, the debugging, the grunt work) that turns them into seniors.

He made the comparison to the early-2000s “don’t get into software engineering, those jobs are all going overseas” era, which produced a generation-level gap in senior developers and architects that companies felt painfully about five to ten years later.

And we’re doing it again. On purpose, this time, with AI as the cover story.

The mainframe migration moment

This was a tangent, but a good one. Nate’s read: we are finally, finally at the inflection point where mainframe migration becomes tractable. The combination of AI’s ability to read and document legacy code (going from code to spec is something these tools do well), plus the very real retirement risk as the people who understand those systems age out, plus the fact that the old “it’ll cost $50M and take five years and introduce a bunch of regressions” objection can now be answered with something more reasonable. All of that is converging.

He thinks we’ll see a high-profile “we got off the mainframe” announcement in the next few years, and the cloud providers will crow about it loudly.

The economics of AI tools deserve scrutiny

Nate got pointed here, and I think he’s right to. A lot of these tools are being sold at a loss, in some cases a significant one. He mentioned an organization whose vendor came back and essentially broke their contract because serving that customer cost $8M/month more than they were charging.

The concern isn’t that AI goes away. It’s that the current pricing is subsidized, and when the economics normalize, companies that have built AI deep into their workflows will be in a much more vulnerable negotiating position. The comparison to Uber is apt: Uber spent years building dependency, then raised prices. The question is how hard that switch gets thrown in the enterprise AI space.

The actual bottom line

Dan and Nate presenting, showing slide that says “I think what AI does quite frankly is reduce the floor and raise the ceiling for all of us.” — Satya Nadella

Dan closed with what I thought was the right framing: the floor has been lowered (more people can participate in building software) and the ceiling has been raised (experienced engineers can do more than ever before). Both of those things are true and good.

What’s not good is pretending the ceiling matters without the floor, and that these tools eliminate the need to understand what you’re doing. They don’t. They amplify what you already know. If you don’t know anything, they amplify that too.

Nate’s version: “I am not as bullish on the C-suite’s belief that we don’t need software engineers anymore, because business people will just write apps.”

He’s been watching business people almost-write-apps since COBOL. They haven’t quite gotten there yet.


My favorite talk title from the upcoming Arc of AI conference (April 13 – 16)

From April 13th through 16th — and a couple of days before, because it’s in Austin — I’m going to be at the Arc of AI conference! Over the next little while, I’m going to be posting articles about Arc of AI, in case you’re wondering what the conference is about and whether you should go.

In this article, I’ll talk about my favorite title from all the talks on the Arc of AI agenda.

The talk: We’re All Using AI, But We’re Not Enjoying It

When your talk happens on the last time slot at the end of a three-day conference (four days, if you’re also going to do one of the workshops), you need to put in some extra effort to get the attendees to show up and not disappear for the local sights (Arc of AI’s in Austin) or make a beeline for the airport.

Brent Laster, President and Founder of Tech Skills Transformations, is giving a number of talks — and a workshop! — at Arc of AI, and one of them lands in one of those last speaking slots: Thursday at 4:00 p.m., with what I think is the most interesting title on the agenda:

We’re All Using AI, But We’re Not Enjoying It

Here’s the abstract:

We’re All Using AI, But We’re Not Enjoying It takes an honest look at a growing gap in the workplace: AI adoption is skyrocketing, yet frustration, confusion, and uneven results are just as common. This talk explores why AI so often feels harder than it should—poorly integrated tools, unclear workflows, unrealistic expectations, cognitive overload, and the pressure to “keep up.” Looking at patterns seen across teams learning to use AI effectively, we’ll break down the practical barriers that make everyday AI work feel tedious instead of empowering. More importantly, we’ll outline a set of achievable shifts—better task design, lighter mental models, context-first prompting, workflow pairing, and small but meaningful guardrails—that can restore a sense of control and clarity.

I need to figure out how I can attend both Brent’s talk and my former Tucows coworker Leonid Igolnik’s talk (which he’s giving with Baruch Sadogursky), Back to the Future of Software: How to Survive the AI Apocalypse with Tests, Prompts, and Specs. Here’s its abstract:

Great Scott! The robots are coming for your job—and this time, they brought unit tests. Join Doc and Marty from the Software Future (Baruch and Leonid) as they race back in time to help you fight the machines using only your domain expertise, a well-structured prompt, and a pinch of Gherkin. This keynote is your survival guide for the AI age: how to close the intent-to-prompt chasm before it swallows your roadmap, how to weaponize the Intent Integrity Chain to steer AI output safely, and why the Art of the Possible is your most powerful resistance tool. Expect:

• Bad puns
• Good tests
• Wild demos

The machines may be fast. But with structure, constraint, and a little time travel, you’ll still be the one writing the future.

Decisions, decisions…

Want to find out more about and register for Arc of AI?

Once again, Arc of AI will take place from Monday, April 13 through Thursday, April 16, with the workshop day taking place on Monday, and the main conference taking place on Tuesday, Wednesday, and Thursday.

Arc of AI tickets are BOGO!

From Arc of AI’s registration page:

You read that right! For each conference ticket you purchase, you get one free ticket. This applies only to conference tickets and not for workshops.


The Arc of AI conference’s workshop day: Monday, April 13, 2026

From April 13th through 16th — and a couple of days before, because it’s in Austin — I’m going to be at the Arc of AI conference! Over the next little while, I’m going to be posting articles about Arc of AI, in case you’re wondering what the conference is about and whether you should go.

In this article, I’ll talk about the workshop day and one of the workshops in particular.

Monday, April 13: The workshop day

Screenshot of the workshops schedule for Arc of AI’s workshop day.
Click to see the workshops at full size.

Prior to the main conference days (Tuesday, April 14 through Thursday, April 16), Arc of AI will hold its Workshop Day on Monday, April 13, where they’ll have six AI workshops:

  • Fundamentals of Software Engineering in the Age of AI (Dan Vega and Nathaniel Schutta)
  • Building a Production-Grade RAG Pipeline (Wesley Reisz)
  • AI-Driven API Design (Mike Amundsen)
  • Creating AI Assisted Applications Using LangChain4j (Venkat Subramaniam)
  • Developing AI Applications with Agents, RAG, and MCP using Python (Brent Laster)
  • Tech Leadership in the Time of AI (Brian Sletten)

The Fundamentals of Software Engineering in the Age of AI workshop

One of the workshops I’m interested in is Nathaniel Schutta’s and Dan Vega’s Fundamentals of Software Engineering in the Age of AI, which will be based on their recently published (November 2025) O’Reilly book, Fundamentals of Software Engineering, but with the application of AI.

Here’s an excerpt from their workshop’s abstract:

This intensive workshop bridges the critical gap between what early-career developers learn in formal education and what they need to thrive in professional environments where human expertise and artificial intelligence increasingly collaborate. Based on our book “Fundamentals of Software Engineering,” we guide participants through a comprehensive journey from programmer to well-rounded software engineer equipped to leverage AI tools effectively while maintaining engineering fundamentals.

Participants will develop both technical capabilities and professional skills that remain relevant regardless of changing languages, frameworks, and AI capabilities. Through a balanced mix of conceptual teaching, collaborative discussions, and hands-on exercises with both traditional and AI-assisted approaches, attendees will work on realistic scenarios that reinforce practical application of these fundamental principles while developing discernment about when and how to integrate AI tools into their workflow.

Learnings:

  • Understanding the programmer to engineer transition and mindset shift
  • Developing advanced code reading techniques and comprehension strategies
  • Crafting maintainable, readable code that communicates intent
  • Applying software modeling concepts to visualize and plan complex systems
  • Implementing comprehensive automated testing strategies
  • Effective techniques for working with legacy codebases and existing systems

Benefits:

Students will understand the concepts and how to apply them right now cutting through the hype surrounding AI. With practical tips and guidance, they can jumpstart their use of AI across the software development lifecycle.

Who should attend:

Primarily developers and architects but ultimately anyone that’s struggling to understand how to apply AI to their world today while avoiding the pitfalls and rabbit holes.

I’m intrigued by this workshop, as it’s about the application of AI tools to the way software is built, which is pretty new turf for all of us. When I learned software development, there were already plenty of lessons from decades of developers’ experiences, and in my career, I and the rest of the industry picked up a couple more decades’ worth of tips and tricks. But all that learning is from the “before times.” Right now, we’re not even five years into the post-ChatGPT era, and we’re only beginning to figure out how to write applications in the era of vibe coding (and remember, Andrej Karpathy coined the term barely over a year ago).

Since the workshop is based on the book, this video might give you an idea of what it’ll be like:

Want to find out more about and register for Arc of AI?

Once again, Arc of AI will take place from Monday, April 13 through Thursday, April 16, with the workshop day taking place on Monday, and the main conference taking place on Tuesday, Wednesday, and Thursday.

Arc of AI tickets are BOGO!

From Arc of AI’s registration page:

You read that right! For each conference ticket you purchase, you get one free ticket. This applies only to conference tickets and not for workshops.

 


I’m speaking at “Arc of AI” in Austin, Texas — April 13 – 16!

I just got added to the list of speakers who’ll be presenting at the Arc of AI conference, which takes place April 13 – 16 in Austin, Texas!

Arc of AI is the premier AI conference for deep technical talks on everyone’s favorite two-letter field. If you’re one of these kinds of people interested in AI…

  • Software developer
  • Architect
  • Data engineer
  • Technology leader

…and you want to learn the latest strategies, tools, and practices for building AI-powered applications and boosting your development workflows with AI, this is your conference!

The early bird ticket price is $799, but that lasts only until this Saturday, March 14th. It goes up to $899 until April 4th, after which the price becomes $999.

Tampa Bay AI Meetup is a community partner of Arc of AI, and we can help you save $50 off the ticket price! Just use the discount code TampaBayAIMeetup when checking out.

There’s another way to attend Arc of AI for even less: come to this Thursday’s Tampa Bay AI Meetup, where we’re covering vibe coding, and find out how you can win a ticket to Arc of AI for FREE!

I’ll be writing more about Arc of AI soon — watch this space!