I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).
Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.
Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.
Here’s part three of several notes from the all-day session; you might want to get a coffee for this one.
Here are links to my previous notes:
Start with the big picture before you touch anything
After lunch, Nate and Dan shifted gears from the big themes of reading code and navigating unfamiliar systems into something more granular: what actually makes code good, how to work with the humans around that code, and why the people problems in software are harder than the technical problems. If Part 1 was the philosophical case for fundamentals and Part 2 was about reading and navigating code, Part 3 was the craft and culture of actually writing it well – and getting your organization to care.
Dan opened this segment with a point that gets skipped constantly: before diving into a codebase, understand why it exists. Who are the stakeholders? What does this project mean to the business? Who are the actual humans using it?
He made a point I appreciated: LLMs can’t produce empathy. They can describe a system, but they can’t tell you that the insurance claims processing app you think is boring is the thing that determines whether a family gets their house repaired after a flood. That kind of context changes how carefully you work.
On documentation: read it, but don’t treat it as gospel. Dan spent three days once trying to understand a complex system by carefully reading what he thought was current documentation, then discovered it was two major versions out of date. The code had been completely rewritten. His rule: documentation can lie, but code never does. Read both, verify what’s actually running, and don’t be afraid to ask a colleague for three minutes of context before burning three days spinning your wheels.
He also made a point about documentation as an opportunity: if there isn’t much of it, that’s your chance to contribute right away. Your fresh perspective on an underdocumented system is genuinely valuable; you’ll notice things longtime contributors have stopped seeing.
Navigating unfamiliar code: entry points and mental models
Dan walked through his framework for getting oriented in a large, unknown codebase. The key concept: find the entry points. In Java, that’s the main method. But more broadly, it’s anything that answers “how does something get into this system?” – public APIs, web UIs, event handlers, message consumers, scheduled tasks, lifecycle hooks.
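The entry-point idea is easy to sketch in code. Here's a hypothetical, crude scanner (not from the workshop; the marker strings and class name are my own illustration) that walks a source tree looking for the kinds of markers Dan described – main methods, web endpoints, message listeners, scheduled tasks:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.*;

public class EntryPointScan {
    // Illustrative, not exhaustive: each marker answers
    // "how does something get into this system?"
    static final List<String> MARKERS = List.of(
        "public static void main",   // the JVM's entry point
        "@GetMapping", "@PostMapping", // web endpoints
        "@KafkaListener",            // message consumers
        "@Scheduled");               // timer-driven tasks

    // Returns a map of file name -> markers found in that file.
    static Map<String, List<String>> scan(Path root) throws IOException {
        Map<String, List<String>> hits = new TreeMap<>();
        try (Stream<Path> files = Files.walk(root)) {
            for (Path p : files.filter(f -> f.toString().endsWith(".java")).toList()) {
                String src = Files.readString(p);
                List<String> found = MARKERS.stream().filter(src::contains).toList();
                if (!found.isEmpty()) hits.put(p.getFileName().toString(), found);
            }
        }
        return hits;
    }
}
```

A real project would lean on an IDE or an AI assistant instead of string matching, but the point stands: you have to know the category "entry points" exists before any tool can find them for you.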
If you don’t know what questions to ask, you can’t ask them – whether of a teammate or of an AI. That’s the part that requires actual knowledge. Once you know you’re looking for entry points, you can use AI tools to help find them. Without that conceptual frame, you’re just asking “what does this do?” and hoping for a useful answer.
From there, he talked about building mental models. Not necessarily elaborate UML diagrams, but some kind of internal representation of how the system works. A sketch on paper. A flow chart from entry point to output. Something that externalizes the structure so you can reason about it and share it with someone else who can tell you what’s missing.
Nate added something I want to highlight: AI tools can tell you what code is doing, but they still can’t tell you why it’s doing it. That gap between the code’s behavior and the intent behind it is where human expertise lives. The code may be technically correct and historically wrong, a deliberate workaround that made sense in 2014 that nobody documented.
Make changes carefully, incrementally, and reversibly
Nate was emphatic on this: when you’re modifying existing code, especially under time pressure, make small, reversible changes. Not 3,000-line PRs. Not agents running loose making sweeping modifications. Atomic commits, each representing one logical change, that can be understood, reviewed, and reverted independently.
His version control points were basic but worth restating:
- Commit frequently, not in massive batches
- Write meaningful commit messages (this is, he admitted, something he now largely delegates to AI – letting it summarize what he changed before committing)
- You are accountable for every PR you submit, regardless of whether you or an agent wrote the code
That last point deserves emphasis. Dan was clear: “If I have questions about a PR, you better be able to answer them. You can’t just say ‘my AI did it.’ You have to understand these decisions.”
He also raised a thought experiment worth sitting with: imagine your boss tells you to take Friday off, and over the long weekend, an AI agent will be let loose on your most critical production system: fixing bugs, adding features. You’ll review what it did on Monday. Are you excited about the three-day weekend, or terrified?
If your answer is “terrified,” that’s the correct answer. And the reason you’re terrified points directly to the value of the fundamentals: documentation, tests, diagrams, clear architecture. Those are the things that make an AI’s work reviewable rather than a mystery you have to reverse-engineer.
What makes code good (and bad)
This section was dense. The key ideas, in rough sequence:
- The Ikea effect and code ownership. Nate: “Every one of you has looked at some code and uttered some variant of ‘what idiot wrote this,’ only to realize you were the idiot who wrote it a couple months ago.” We value our own code more than we should. Code reviews exist partly as a corrective for this.
- Languages are tools, not identities. Both Nate and Dan are Java Champions, and both were clear: Java is just a tool, not a religion. The Blub Paradox (from Paul Graham) explains why developers get dogmatic: you can’t easily see the limitations of your chosen language because it’s your baseline for normal. AI tools are helping break this a bit: developers are using more languages and frameworks than they used to, and that breadth makes them better programmers.
- The lazy programmer ethos is real and good. Before writing code, spend 20 minutes making sure someone else hasn’t already solved this. Use language features before reaching for a library. Use a library before writing your own. Dan told a great story about being new to a project, discovering a utility function that took 14 parameters just to capitalize a string, and quietly using the built-in string method instead, then watching the entire senior team’s heads explode when he revealed this in a meeting. The built-in had been there for years. Nobody had looked.
- Lines of code is a terrible metric. Dan said this directly: shipping 37,000 lines of code is not an accomplishment. Code is a liability. More code means more surface area for bugs, more maintenance, more complexity for the next person (including future you). The vibe coding community’s tendency to measure apps by lines of code is backwards. Code deleted is almost always the better outcome.
- Cyclomatic complexity matters. This came up repeatedly. Nate’s heuristics: low single digits is good, high single digits means you should be actively refactoring, double digits means it’s time to leave the project. He mentioned encountering real production code – written by a human – with a cyclomatic complexity of 82. The brackets were labeled “start for loop one / end for loop one” just to keep track. Not good. The punchline about cyclomatic complexity as a guardrail for AI agents was sharp: if you don’t give an agent a directive like “cyclomatic complexity must stay below four,” it won’t apply that constraint. And if you don’t know what cyclomatic complexity is, you won’t know to ask. Tools like SonarQube, PMD, and the memorably-named CRAP metric (Change Risk Anti-Patterns: cyclomatic complexity versus code coverage) can help enforce this, but only if someone with the knowledge sets them up.
- Short methods, high cohesion, low coupling. Nate: “A method should do one thing and do it very, very well. This is the concept behind Unix piping: simple things together to get more complicated results.” That said, he also added the counterpoint: don’t favor brevity over clarity. A one-liner that nobody can understand in six months is worse than three readable lines.
- AI tends toward verbosity and complexity. Both speakers noted that AI coding assistants have a strong bias toward writing more code rather than less, toward adding dependencies rather than using what’s already there, and toward long methods rather than short ones. They will solve the problem – but they won’t necessarily solve it simply. That instinct toward simplicity has to come from you, either as a direct code reviewer or as someone who knows how to write good prompts and capability directives.
- Composition over inheritance. Dan mentioned this as a persistent AI failure mode: models trained on years of Java code have learned the “create a service interface and one implementation even when you’ll never have a second implementation” pattern because it was ubiquitous. That doesn’t mean it’s good. It just means it’s common in the training data.
- Copies of copies degrade. Nate made a point I hadn’t heard framed quite this way: if vibe-coded projects proliferate on the internet, and future models are trained on that code, the training data quality decreases. Models training on AI-generated output of questionable quality will produce AI-generated output of worse quality. We’re already seeing this in written content on LinkedIn and elsewhere. We should expect to see it in code.
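To make the complexity and short-methods points concrete, here's a hypothetical before-and-after sketch (the shipping-fee rules and names are invented for illustration, not from the workshop). Both versions compute the same answer; the second splits the decisions into short methods that each do one thing, keeping every method's cyclomatic complexity in the low single digits:

```java
public class ShippingFees {
    // Before: every branch piled into one method — complexity climbs fast,
    // and each new rule makes it worse.
    static double feeBefore(double orderTotal, boolean isMember, String region) {
        double fee;
        if (region.equals("domestic")) {
            if (orderTotal >= 50) fee = 0;
            else if (isMember) fee = 2.50;
            else fee = 5.00;
        } else {
            if (orderTotal >= 100) fee = 10.00;
            else if (isMember) fee = 15.00;
            else fee = 20.00;
        }
        return fee;
    }

    // After: each method answers exactly one question.
    static double feeAfter(double orderTotal, boolean isMember, String region) {
        return region.equals("domestic")
                ? domesticFee(orderTotal, isMember)
                : internationalFee(orderTotal, isMember);
    }

    static double domesticFee(double total, boolean isMember) {
        if (total >= 50) return 0;          // free shipping threshold
        return isMember ? 2.50 : 5.00;
    }

    static double internationalFee(double total, boolean isMember) {
        if (total >= 100) return 10.00;
        return isMember ? 15.00 : 20.00;
    }
}
```

Neither version is clever, and that's the point: the refactored one is longer by line count but cheaper to read, test, and extend – lines of code down, complexity per method down.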
Heritage code, not legacy code
One small reframing that I liked: Dan suggested we call it “heritage code” instead of “legacy code.” Legacy has a negative connotation. But code that’s been in production for fifteen years and processed billions of dollars of transactions is an achievement. It deserves some respect.
That said, Nate was clear: all code eventually becomes legacy. Sometimes immediately after you commit it. It will live longer than you expected, will be harder to kill than you hoped, and someone will be maintaining it years after you’ve moved on. Write with that person in mind.
His favorite version of this sentiment, which he attributed to someone else: “Always write code as if the person maintaining it is a homicidal maniac who knows where you live.”
The influence skills nobody taught you
The final section of this part of the workshop took a hard turn into territory that software engineering curricula almost never cover (but is a key part of my developer advocate work): how to actually get things done in organizations full of humans with competing incentives.
Nate’s thesis: the hardest problems in software are people problems, not technical problems. And the skills to navigate people problems – influence, empathy, listening, finding common ground – don’t come with a CS degree.
He recommended How to Win Friends and Influence People by Dale Carnegie without apology. “It is older than everyone in this room. It is evergreen. I guarantee it will help your career.” The book is about understanding what people actually need versus what they’re saying they need, and how to align your goals with theirs.
On the current AI mandate situation specifically, he offered a practical frame: many senior leaders have “establish AI across our workforce” as a KPI tied to their bonus. They don’t necessarily care how you use AI. They need to be able to say you’re using it. If you can give them a win, a story they can tell upward, they will largely leave you alone about the details. Fill the vacuum with your own narrative or someone else will fill it with token counts.
Two approaches to influence:
- The hammer approach: brute-force people into agreeing with you. Works occasionally, burns trust, creates enemies.
- The ninja approach: make it their idea. Nate told a story about introducing TDD at a company that had rejected it when he first proposed it. He convinced one tech lead (who happened to be named Jeff, continuing the workshop’s running bit about terrible variable names) to adopt it on his team. When crunch time arrived and Jeff’s team was calmly fixing small issues while everyone else was drowning in defects, Jeff presented the same TDD case to the wider team – and got a standing ovation. Nate, who had proposed the same thing months earlier and been ignored, got no credit. But the practice got adopted. That was the goal.
His point: being the new person with the right answer is often less effective than being the connector who gets the right answer into the right person’s mouth. Letting go of the credit is a skill. It’s not a natural skill. Practice it anyway.
Code reviews: the underrated force multiplier
The workshop closed this segment with code reviews, and both speakers were emphatic that these matter more in an AI-augmented world, not less. When agents are generating PRs, someone with judgment still has to review them, and that reviewer has to understand the code well enough to ask real questions.
Some norms they pushed:
- No snarky comments. Ever. They are not useful, they’re not clever, and everyone can see what you’re doing.
- No 3,000-line PRs. Reviewers should refuse to engage with them.
- Assume positive intent. You don’t know what’s happening in someone’s life. The code that looks lazy might have constraints you’re unaware of.
- Ask questions instead of making proclamations. “Did you consider what happens when user load ramps up?” is better than “this won’t scale.” Especially when you haven’t done the math.
- You are not your code. Code reviews are opportunities to improve the work, not indictments of your worth as a person.
Nate’s read on the current state of code reviews: PRs have made the process much more accessible than the old scheduled review meeting, but have also introduced review theater – someone clicking “approved” without looking because it’s in the process checklist. The form without the substance.
Dan’s suggestion: use AI to help you understand PRs before reviewing them. Give it the PR description and ask it to explain what’s actually changing and why. You’ll ask better questions.