Categories
Artificial Intelligence Conferences

Notes from Schutta and Vega’s Arc of AI Workshop, part 3: Clean code, influence skills, and why your legacy code pays the bills

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part three of several notes from the all-day session; you might want to get a coffee for this one.

Here are links to my previous notes:


Start with the big picture before you touch anything

After lunch, Nate and Dan shifted gears from the big themes of reading code and navigating unfamiliar systems into something more granular: what actually makes code good, how to work with the humans around that code, and why the people problems in software are harder than the technical problems. If Part 1 was the philosophical case for fundamentals and Part 2 was about reading and navigating code, Part 3 was the craft and culture of actually writing it well – and getting your organization to care.

Dan opened this segment with a point that gets skipped constantly: before diving into a codebase, understand why it exists. Who are the stakeholders? What does this project mean to the business? Who are the actual humans using it?

He made a point I appreciated: LLMs can’t produce empathy. They can describe a system, but they can’t tell you that the insurance claims processing app you think is boring is the thing that determines whether a family gets their house repaired after a flood. That kind of context changes how carefully you work.

On documentation: read it, but don’t treat it as gospel. Dan spent three days once trying to understand a complex system by carefully reading what he thought was current documentation, then discovered it was two major versions out of date. The code had been completely rewritten. His rule: documentation can lie, but code never does. Read both, verify what’s actually running, and don’t be afraid to ask a colleague for three minutes of context before burning three days spinning your wheels.

He also made a point about documentation as an opportunity: if there isn’t much of it, that’s your chance to contribute right away. Your fresh perspective on an underdocumented system is genuinely valuable; you’ll notice things longtime contributors have stopped seeing.

Navigating unfamiliar code: entry points and mental models

Dan walked through his framework for getting oriented in a large, unknown codebase. The key concept: find the entry points. In Java, that’s the main method. But more broadly, it’s anything that answers “how does something get into this system?” – public APIs, web UIs, event handlers, message consumers, scheduled tasks, lifecycle hooks.
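To make the idea concrete, here’s a toy sketch of what “entry points” can look like in plain Java. Everything in it (the class and method names) is invented for illustration; each piece is one answer to “how does something get into this system?”

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch: three kinds of entry points in one file.
public class EntryPoints {

    // 1. The classic entry point: main.
    public static void main(String[] args) {
        // 2. A message consumer: data arrives as events, not arguments.
        Consumer<String> orderConsumer = msg -> System.out.println("handling " + msg);
        orderConsumer.accept("order:42");

        // 3. The body of a scheduled task: in production a scheduler would
        //    trigger this nightly; here we call it directly to simulate that.
        System.out.println("nightly total: " + nightlyTotal(List.of(3, 4)));
    }

    // What a scheduled job would invoke once a night.
    static int nightlyTotal(List<Integer> counts) {
        return counts.stream().mapToInt(Integer::intValue).sum();
    }
}
```

In a real codebase the equivalents would be a web controller, a message listener, or a cron-triggered job, but the orientation question is the same: start from wherever data enters.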

If you don’t know what questions to ask, you can’t ask them, whether of a teammate, or of an AI. That’s the part that requires actual knowledge. Once you know you’re looking for entry points, you can use AI tools to help find them. Without that conceptual frame, you’re just asking “what does this do?” and hoping for a useful answer.

From there, he talked about building mental models. Not necessarily elaborate UML diagrams, but some kind of internal representation of how the system works. A sketch on paper. A flow chart from entry point to output. Something that externalizes the structure so you can reason about it and share it with someone else who can tell you what’s missing.

Nate added something I want to highlight: AI tools can tell you what code is doing, but they still can’t tell you why it’s doing it. That gap between the code’s behavior and the intent behind it is where human expertise lives. The code may be technically correct and historically wrong, a deliberate workaround that made sense in 2014 that nobody documented.

Make changes carefully, incrementally, and reversibly

Nate was emphatic on this: when you’re modifying existing code, especially under time pressure, make small, reversible changes. Not 3,000-line PRs. Not agents running loose making sweeping modifications. Atomic commits, each representing one logical change, that can be understood, reviewed, and reverted independently.

His version control points were basic but worth restating:

  • Commit frequently, not in massive batches
  • Write meaningful commit messages (this is, he admitted, something he now largely delegates to AI – letting it summarize what he changed before committing)
  • You are accountable for every PR you submit, regardless of whether you or an agent wrote the code

That last point deserves emphasis. Dan was clear: “If I have questions about a PR, you better be able to answer them. You can’t just say ‘my AI did it.’ You have to understand these decisions.”

He also raised a thought experiment worth sitting with: imagine your boss tells you to take Friday off, and over the long weekend, an AI agent will be let loose on your most critical production system: fixing bugs, adding features. You’ll review what it did on Monday. Are you excited about the three-day weekend, or terrified?

If your answer is “terrified,” that’s the correct answer. And the reason you’re terrified points directly to the value of the fundamentals: documentation, tests, diagrams, clear architecture. Those are the things that make an AI’s work reviewable rather than a mystery you have to reverse-engineer.

What makes code good (and bad)

This section was dense. The key ideas, in rough sequence:

  • The Ikea effect and code ownership. Nate: “Every one of you has looked at some code and uttered some variant of ‘what idiot wrote this,’ only to realize you were the idiot who wrote it a couple months ago.” We value our own code more than we should. Code reviews exist partly as a corrective for this.
  • Languages are tools, not identities. Both Nate and Dan are Java Champions, and both were clear: Java is just a tool, not a religion. The Blub Paradox (from Paul Graham) explains why developers get dogmatic: you can’t easily see the limitations of your chosen language because it’s your baseline for normal. AI tools are helping break this a bit; they’re using more languages and frameworks than they used to, and that breadth makes them better programmers.
  • The lazy programmer ethos is real and good. Before writing code, spend 20 minutes making sure someone else hasn’t already solved this. Use language features before reaching for a library. Use a library before writing your own. Dan told a great story about being new to a project, discovering a utility function that took 14 parameters just to capitalize a string, and quietly using the built-in string method instead, then watching the entire senior team’s heads explode when he revealed this in a meeting. The built-in had been there for years. Nobody had looked.
  • Lines of code is a terrible metric. Dan said this directly: shipping 37,000 lines of code is not an accomplishment. Code is a liability. More code means more surface area for bugs, more maintenance, more complexity for the next person (including future you). The vibe coding community’s tendency to measure apps by lines of code is backwards. Code deleted is almost always the better outcome.
  • Cyclomatic complexity matters. This came up repeatedly. Nate’s heuristics: low single digits is good, high single digits means you should be actively refactoring, double digits means it’s time to leave the project. He mentioned encountering real production code – written by a human – with a cyclomatic complexity of 82. The brackets were labeled “start for loop one / end for loop one” just to keep track. Not good. The punchline about cyclomatic complexity as a guardrail for AI agents was sharp: if you don’t give an agent a directive like “cyclomatic complexity must stay below four,” it won’t apply that constraint. And if you don’t know what cyclomatic complexity is, you won’t know to ask. Tools like SonarQube, PMD, and the memorably-named CRAP metric (Change Risk Anti-Patterns: cyclomatic complexity versus code coverage) can help enforce this, but only if someone with the knowledge sets them up.
  • Short methods, high cohesion, low coupling. Nate: “A method should do one thing and do it very, very well. This is the concept behind Unix piping: simple things together to get more complicated results.” That said, he also added the counterpoint: don’t favor brevity over clarity. A one-liner that nobody can understand in six months is worse than three readable lines.
  • AI tends toward verbosity and complexity. Both speakers noted that AI coding assistants have a strong bias toward writing more code rather than less, toward adding dependencies rather than using what’s already there, and toward long methods rather than short ones. They will solve the problem – but they won’t necessarily solve it simply. That instinct toward simplicity has to come from you, either as a direct code reviewer or as someone who knows how to write good prompts and capability directives.
  • Composition over inheritance. Dan mentioned this as a persistent AI failure mode: models trained on years of Java code have learned the “create a service interface and one implementation even when you’ll never have a second implementation” pattern because it was ubiquitous. That doesn’t mean it’s good. It just means it’s common in the training data.
  • Copies of copies degrade. Nate made a point I hadn’t heard framed quite this way: if vibe-coded projects proliferate on the internet, and future models are trained on that code, the training data quality decreases. Models training on AI-generated output of questionable quality will produce AI-generated output of worse quality. We’re already seeing this in written content on LinkedIn and elsewhere. We should expect to see it in code.
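A small sketch of a couple of the points above: a branch-heavy method refactored into an early-return version with the same behavior, plus the published CRAP formula (comp² × (1 − coverage)³ + comp, coverage as a fraction) as code. The shipping-tier example is mine, not from the workshop.

```java
public class Complexity {

    // Before: nested branches; every if adds to the cyclomatic complexity,
    // and the reader has to hold all the nesting in their head.
    static String shippingTierBefore(double weightKg, boolean express, boolean international) {
        String tier;
        if (international) {
            if (express) { tier = "intl-express"; } else { tier = "intl-standard"; }
        } else {
            if (express) { tier = "domestic-express"; }
            else if (weightKg > 20) { tier = "domestic-freight"; }
            else { tier = "domestic-standard"; }
        }
        return tier;
    }

    // After: early returns, one decision per line, each path reads top to bottom.
    static String shippingTier(double weightKg, boolean express, boolean international) {
        if (international) return express ? "intl-express" : "intl-standard";
        if (express) return "domestic-express";
        return weightKg > 20 ? "domestic-freight" : "domestic-standard";
    }

    // The CRAP metric mentioned above: high complexity with no test
    // coverage explodes; full coverage reduces it back to the complexity itself.
    static double crap(int complexity, double coverage) {
        return Math.pow(complexity, 2) * Math.pow(1 - coverage, 3) + complexity;
    }

    public static void main(String[] args) {
        System.out.println(shippingTier(25, false, false)); // domestic-freight
        System.out.println(crap(5, 0.0)); // 30.0: branchy and untested
        System.out.println(crap(5, 1.0)); // 5.0: same branches, fully covered
    }
}
```

Note that the refactor doesn’t favor brevity over clarity: each path is still a readable sentence, just without the nesting.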

Heritage code, not legacy code

One small reframing that I liked: Dan suggested we call it “heritage code” instead of “legacy code.” Legacy has a negative connotation. But code that’s been in production for fifteen years and processed billions of dollars of transactions is an achievement. It deserves some respect.

That said, Nate was clear: all code eventually becomes legacy. Sometimes immediately after you commit it. It will live longer than you expected, will be harder to kill than you hoped, and someone will be maintaining it years after you’ve moved on. Write with that person in mind.

His favorite version of this sentiment, which he attributed to someone else: “Always write code as if the person maintaining it is a homicidal maniac who knows where you live.”

The influence skills nobody taught you

The final section of this part of the workshop took a hard turn into territory that software engineering curricula almost never cover (but is a key part of my developer advocate work): how to actually get things done in organizations full of humans with competing incentives.

Nate’s thesis: the hardest problems in software are people problems, not technical problems. And the skills to navigate people problems – influence, empathy, listening, finding common ground – don’t come with a CS degree.

He recommended How to Win Friends and Influence People by Dale Carnegie without apology. “It is older than everyone in this room. It is evergreen. I guarantee it will help your career.” The book is about understanding what people actually need versus what they’re saying they need, and how to align your goals with theirs.

On the current AI mandate situation specifically, he offered a practical frame: many senior leaders have “establish AI across our workforce” as a KPI tied to their bonus. They don’t necessarily care how you use AI. They need to be able to say you’re using it. If you can give them a win, a story they can tell upward, they will largely leave you alone about the details. Fill the vacuum with your own narrative or someone else will fill it with token counts.

Two approaches to influence:

  1. The hammer approach: brute-force people into agreeing with you. Works occasionally, burns trust, creates enemies.
  2. The ninja approach: make it their idea. Nate told a story about introducing TDD at a company that had rejected it when he first proposed it. He convinced one tech lead (who happened to be named Jeff, continuing the workshop’s running bit about terrible variable names) to adopt it on his team. When crunch time arrived and Jeff’s team was calmly fixing small issues while everyone else was drowning in defects, Jeff presented the same TDD case to the wider team – and got a standing ovation. Nate, who had proposed the same thing months earlier and been ignored, got no credit. But the practice got adopted. That was the goal.

His point: being the new person with the right answer is often less effective than being the connector who gets the right answer into the right person’s mouth. Letting go of the credit is a skill. It’s not a natural skill. Practice it anyway.

Code reviews: the underrated force multiplier

The workshop closed this segment with code reviews, and both speakers were emphatic that these matter more in an AI-augmented world, not less. When agents are generating PRs, someone with judgment still has to review them, and that reviewer has to understand the code well enough to ask real questions.

Some norms they pushed:

  • No snarky comments. Ever. They are not useful, they’re not clever, and everyone can see what you’re doing.
  • No 3,000-line PRs. Reviewers should refuse to engage with them.
  • Assume positive intent. You don’t know what’s happening in someone’s life. The code that looks lazy might have constraints you’re unaware of.
  • Ask questions instead of making proclamations. “Did you consider what happens when user load ramps up?” is better than “this won’t scale.” Especially when you haven’t done the math.
  • You are not your code. Code reviews are opportunities to improve the work, not indictments of your worth as a person.

Nate’s read on the current state of code reviews: PRs have made the process much more accessible than the old scheduled review meeting, but have also introduced review theater – someone clicking “approved” without looking because it’s in the process checklist. The form without the substance.

Dan’s suggestion: use AI to help you understand PRs before reviewing them. Give it the PR description and ask it to explain what’s actually changing and why. You’ll ask better questions.


Notes from Schutta and Vega’s Arc of AI Workshop, part 2: Reading code is a superpower, and we were never taught it


Here’s part two of several notes from the all-day session; you might want to get a coffee for this one. You can read the previous set of notes here.


How you got here doesn’t matter. That you got here does.

Nate and Dan presenting, with a slide that reads “Ultimately it is about problem solving, tinkering, creativity”

After the first break, Nate and Dan shifted from the big-picture AI discourse into something more concrete: the actual craft skills that make a software engineer, and why those skills are becoming more important in an AI-augmented world, not less.

Nate opened this segment by talking about the different paths into software engineering (the traditional CS degree, boot camps, self-taught) and making a point I think deserves wider circulation: there is no canonical path, and apologizing for yours is a waste of energy.

What matters, in his view, isn’t the credential. It’s whether you have the tinkering mindset. Whether you’ve gone to sleep thinking about a problem and woken up with the answer. Whether you look at a broken thing and feel the pull to understand why it’s broken.

He also made an honest admission about what CS programs are actually designed to do: prepare you for graduate school in computer science. That means algorithms, compiler theory, operating systems, language design. Practically useful for building production software? Debatable. Practically useful for becoming a researcher? Yes. Boot camps swing hard the other way – framework-heavy, language-focused, get-you-hired in 12 weeks – which means they’re also somewhat transitory, because the framework of the moment changes every six months.

Neither path gives you everything. That gap between “what we taught you” and “what I want you to know when you join my project” is basically what their book is trying to fill.

The skill we teach least is the one we use most: reading code

This was the section that hit me hardest, because I’ve thought about it before and never heard it stated this cleanly.

Nate’s observation: we teach people to write code almost exclusively. We spend essentially zero time teaching people to read code. And yet, in any real production environment, the ratio of reading to writing is not even close. You spend far more time navigating, understanding, and reasoning about existing code than you do creating new code from scratch.

His analogy: “I wouldn’t teach you French by saying, now go write some French.”

Reading code is hard for a few compounding reasons. You have to understand the problem domain (which is often genuinely complex – he gave examples from finance and insurance where the business rules alone are labyrinthine). You have to see the code through another person’s mental model. And you often have to do this under time pressure, making changes you don’t fully understand, in systems you weren’t around to watch grow.

The result is what Nate called “patches on top of patches on top of patches,” and the remarkable thing isn’t that these systems have bugs, it’s that they work at all.

There’s also the cognitive bias dimension. The Ikea effect: you value things you assembled yourself more than things someone else built, which means you’re inclined to view your own code as cleaner and more sensible than others’. The mere exposure effect: familiarity breeds preference, which is why developers get dogmatic about languages; not because their preferred language is objectively superior, but because it’s the one they know.

Nate had a great riff here about what he called the Blub Paradox, from a Paul Graham essay: when you’re a programmer in a language somewhere on the power continuum, you look down the spectrum and think “I can’t imagine being productive with those limitations,” and you look up and think “I don’t know why anyone would need all that weird stuff I don’t have.” The language you know well becomes your baseline for what’s normal. AI tools, interestingly, may be helping break this a bit. He and Dan both noticed they’re using more languages and frameworks than they used to.

The Lab: Reading an unfamiliar codebase without AI first

Dan ran the group through a hands-on exercise using the Spring Pet Clinic, a well-known sample Java/Spring application. The instructions were deliberately old-school: no AI tools yet. Just open the repo and start reading.

The goal was to build some muscle memory around the basics: identifying technologies and frameworks from project structure alone, finding a main application class, recognizing architectural patterns just from folder layout.

It’s a more sophisticated skill than it sounds. Dan’s point: even if you’re not a Java developer, you can learn a lot from just looking at a pom.xml. You can infer architectural choices from package structure; “package by feature” versus “package by layer” tells you something about how the original authors thought about the system. You can spot where to start, what the domain objects are, how the system is organized.

After they’d done it manually, Dan switched to showing how AI tools handle the same task, specifically using a “plan mode” in his coding assistant where he wasn’t asking it to write anything, just to explain what it was looking at. The output was genuinely useful: a breakdown of the tech stack, architectural summary, entry points, dependency graph.

His key insight: “I use AI tools far more to read code, understand things, get familiar with things, and learn things than I do to write it.”

But then the follow-up, which is the important part: he wouldn’t have known what questions to ask the AI without the fundamentals. Understanding that architecture is a thing, that there are different ways to organize packages, that there’s something meaningful to look for in the dependency file; that knowledge has to come from somewhere. The AI accelerates the exploration; it doesn’t replace the ability to know what you’re looking for.

AI can tell you what code is doing. It still can’t tell you if that’s right.

This is where the conversation got interesting. Nate made a distinction that I think is underappreciated:

These tools are now remarkably good at reverse-engineering legacy code and telling you what it does. Feed it a 30-year-old COBOL module and it’ll give you a plain-English summary of the behavior. That’s genuinely powerful, especially for the mainframe migration work he mentioned in the morning session.

But “this is what the code is doing” is a completely different question from “is this what the code should be doing?”

He gave a real-world example: a system where some business logic was technically incorrect, but the error was intentionally corrected downstream in a different process. The code was wrong on purpose, because fixing it at the source would have required fixing everything else too. An AI reading that code would correctly describe the behavior, but have no way to know the behavior was a deliberate workaround rather than a bug.

That knowledge lives in the heads of the engineers who were there when the decision was made. And increasingly, as those engineers retire or move on, it’s not living anywhere.

The airline pricing example he used was perfect: the same seats, same flights, same dates — but booking as two one-ways costs a third less than booking as a round trip. There’s almost certainly a specific piece of business logic somewhere that creates that arbitrage. An AI can describe that code. It can’t tell you whether the Delta exec who approved it knew what they were approving.

The sentinel knowledge problem, part two

Nate returned to a theme from the morning: we are starving the pipeline that creates the experts who can actually evaluate AI output. But in this session, he made it more concrete.

Senior engineers look at AI-generated code and immediately spot the issues: the approach that’ll work in a demo but fall over at scale, the pattern that was idiomatic three major versions ago, the security implication nobody mentioned. Junior engineers look at the same code and think it looks fine, because they don’t yet have the experience to know what “fine” looks like.

The concerning dynamic: juniors are increasingly using AI to learn, but learning by accepting AI output without the ability to critique it isn’t learning. It’s cargo cult programming. You’re learning to produce things that look like code without developing the underlying judgment about whether those things are good.

Nate’s line: “AI is the very eager junior developer, and you need to monitor their output closely.”

The economics sidebar: tokens, budgets, and the reality of scale

This wasn’t on the agenda, but it came up organically and it was one of the more grounded conversations of the day.

Nate described a real situation: an organization’s head of AI was approached by a developer who wanted the unlimited Claude Code tier. When asked how many tokens he needed, the answer was 60,000 a day. Response: show me that you’re generating not $300K of business value weekly, but a million dollars. Can you do that? No? Then no.

The scaling math is uncomfortable. A room full of developers (say, 5,000 at a larger company) each burning hundreds or thousands of dollars of tokens per week is a significant line item. And the current pricing reflects a subsidized market. When investors start demanding returns, those prices go up.
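The back-of-envelope math is easy to run. The 5,000-developer company size is from the talk; the per-developer weekly spend below is my own illustrative assumption, chosen from the “hundreds or thousands of dollars” range mentioned.

```java
public class TokenBudget {

    // Annual token spend in dollars: headcount x weekly spend x 52 weeks.
    static long annualSpend(int developers, double dollarsPerDevPerWeek) {
        return Math.round(developers * dollarsPerDevPerWeek * 52);
    }

    public static void main(String[] args) {
        // Assumed: 5,000 developers, each averaging $500/week in tokens.
        System.out.println(annualSpend(5_000, 500.0)); // 130000000 -> a $130M/year line item
    }
}
```

At that scale, even halving the per-developer spend still leaves a figure a CFO will ask hard questions about, which is Nate’s point about subsidized pricing.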

He drew an analogy to the Uber model: lose money for years, drive out competition, then raise prices. Except Uber’s “product” (a car ride) is a commodity. The switching costs for enterprise AI tooling embedded into CI/CD pipelines, developer workflows, and institutional processes are not trivial.

His read on Anthropic’s and OpenAI’s revenue vs. profit numbers: revenue is real. Profitability is not. People are seeing value in the product, but the product is priced below cost. That’s not a sustainable business model, and the reckoning will come.

On whether we’ve hit a plateau

Someone in the room asked whether the intelligence improvements we saw around late 2024/early 2025 would continue.

Nate’s take: we’re probably hitting a plateau on pure scaling. The exponential gains from “just make the model bigger” appear to be diminishing. Gary Marcus’s position – that we’re approaching the limits of what scaling alone can achieve – strikes him as reasonable.

The “Mythos is so dangerous we can’t release it yet” announcements that keep appearing? He’s skeptical. Follow the incentives: the companies making those claims need their valuations justified.

He was slightly more philosophical about the longer tail – the sci-fi scenarios, the alignment concerns, the “what if it’s already smarter than it’s letting on” thread. He takes it seriously without catastrophizing. The honest version of his view: we don’t know what the motivations of these systems are, because the people who built them don’t fully understand how they work either. That warrants humility, not panic, but also not dismissal.

Bottom line from this session and the previous one

The throughline across the whole day, as best I can summarize it: these tools are genuinely powerful accelerants for people who already have the foundations. They are not a replacement for the foundations. They are an amplifier, and what you get out depends heavily on what you put in.

The code reading skills, the domain understanding, the architectural instincts, and the ability to ask the right questions. All of that still has to come from somewhere. What’s changed is that once you have it, you can go faster, do more, and explore more territory than you could alone.

That’s good. The part that’s bad is that we’re making decisions right now (who to hire, what to teach, what to outsource) based on the assumption that the foundations don’t matter anymore.

They matter. Probably more than they used to.


Notes from Schutta and Vega’s Arc of AI Workshop, part 1: The fundamentals still matter!


Here’s part one of several notes from the all-day session; you might want to get a coffee for this one.


The opening thesis: giving someone a nail gun doesn’t make them a carpenter

Nate opened with a confession: he’s not handy. At all.

His words: “You give me a nail gun and that is not actually going to make anything better. The cat’s gonna have a nail in its tail.”

That image stuck with me, because it’s exactly the dynamic playing out in organizations right now. Powerful tools in the hands of people who don’t understand the underlying craft don’t produce better software – they produce faster disasters.

Both Nate and Dan were quick to acknowledge that yes, things changed. Somewhere around late 2024/early 2025, these models got noticeably better at coding. Neither of them is dismissing that. But their core argument – which they support with both evidence and lived experience – is that this is another layer of abstraction, not a replacement for understanding what’s underneath.

A brief history of “this will replace programmers”

Slide: “Here we go again,” showing a list of technologies that were supposed to replace programmers

Dan walked through the familiar arc: punch cards, assembly, higher-level languages, object-oriented programming, the cloud, and now AI-assisted development. Each step, someone announced the death of the programmer. Each step, the programmer survived and became more productive.

COBOL was going to let business people write their own programs. JavaBeans were going to eliminate business logic development. No-code platforms were going to replace developers entirely. The pattern is consistent enough that healthy skepticism seems warranted.

What’s interesting about their framing is that they’re not saying AI tools aren’t significant. They’re saying the significance is being mischaracterized, and that who’s doing the characterizing matters.

Consider the source

This is where the talk got sharp. Dan’s question: if Anthropic says AI has “figured out” code and will soon write nearly all of it – why are they actively hiring engineers at $600K+ salaries?

Their breakdown of who’s claiming AI replaces developers:

  • The tool makers (Anthropic, OpenAI, etc.) – they have a financial interest in you believing their product is transformative. Grain of salt.
  • Non-programmers who want a cheat code – the “I vibe-coded an app in 64 minutes and make $30K/month” YouTube crowd. Grain of salt the size of a boulder.
  • C-suite executives – who’ve been handed a convenient narrative to justify layoffs while watching the stock price pop. Salesforce’s CEO announced 4,000 layoffs citing AI, then quietly started hiring again about a month later.

Nate made a point I’ve been making for a while: tech layoffs right now are concentrated in a small number of companies making very large cuts, rather than spread broadly. The psychological effect is outsized. Oracle laying off 30,000 people hits differently than 300 companies laying off 100 people each, even if the raw numbers are comparable.

Vibe coding: fun for weekend projects, terrifying for payroll

Slide: Andrej Karpathy’s original vibe coding tweet

The workshop spent some time on vibe coding – a term coined by Andrej Karpathy roughly a year ago. Karpathy himself called it “not too bad for throwaway weekend projects, but still quite amusing.”

Nate and Dan’s framing: the stakes matter. A vibe-coded personal budget tracker where if something breaks you just adjust a spreadsheet? Great. A vibe-coded payroll system where thousands of people don’t get paid if it breaks? Categorically different situation.

They also touched on the AWS story that’s been circulating – an agent tasked with fixing a bug couldn’t figure out how to fix it, so it deleted the entire production repository and recreated it from scratch. Which is, in a very literal sense, a solution. Just not one any human with experience would have suggested. As Dan put it: “Systems have no feelings. They have no experience of ‘wait, that doesn’t seem like a good idea.'”

The expertise gap problem

This was the section that hit hardest, and it connects to something Dan wrote about in an article he mentioned: when he uses AI to generate Spring/Java code, a domain where he has deep expertise, he can immediately spot the issues. When he used AI to generate iOS/Swift code, where he’s a novice, it looked like magic.

The issue isn’t that the code quality was different. The issue is that his ability to evaluate it was different. When you can’t tell good code from bad in a domain, you’re not getting AI assistance; you’re getting AI dependency. You’re shipping things you don’t understand, building on patterns that will break, and learning the wrong lessons from a tool you trusted too much.
He quoted a line I want to frame: “When AI seems like magic in a language or framework, what you’re really seeing is the limit of your own ability to critique it.”

We’re choking off the pipeline that creates experts

Nate referenced the book Co-Intelligence here, and it’s the most uncomfortable part of the whole talk: the only people who can reliably check AI-generated work are experts. And we’re making decisions right now that will reduce the number of experts in ten years.

Companies are not hiring junior developers. Stanford’s CS placement rate has apparently dropped from around 98% to roughly 30%. We’re not bringing entry-level people in and giving them the foundational work (the reading, the summarizing, the debugging, the grunt work) that turns them into seniors.

He made the comparison to the early-2000s “don’t get into software engineering, those jobs are all going overseas” era, which produced a generation-level gap in senior developers and architects that companies felt painfully some five to ten years later.
And we’re doing it again. On purpose, this time, with AI as the cover story.

The mainframe migration moment

This was a tangent, but a good one. Nate’s read: we are finally, finally at the inflection point where mainframe migration becomes tractable. Three things are converging: AI’s ability to read and document legacy code (going from code to spec is something these tools do well); the very real retirement risk as the people who understand those systems age out; and the fact that the old “it’ll cost $50M, take five years, and introduce a bunch of regressions” objection can now be answered with something more reasonable.

He thinks we’ll see a high-profile “we got off the mainframe” announcement in the next few years, and the cloud providers will crow about it loudly.

The economics of AI tools deserve scrutiny

Nate got pointed here, and I think he’s right to. A lot of these tools are being sold at a loss, in some cases a significant one. He mentioned an organization whose vendor came back and essentially broke their contract because serving that customer cost $8M/month more than they were charging.

The concern isn’t that AI goes away. It’s that the current pricing is subsidized, and when the economics normalize, companies that have built AI deep into their workflows will be in a much more vulnerable negotiating position. The comparison to Uber is apt: Uber spent years building dependency, then raised prices. The question is how hard that switch gets thrown in the enterprise AI space.

The actual bottom line

Dan and Nate presenting, showing a slide that says “I think what AI does quite frankly is reduce the floor and raise the ceiling for all of us.” — Satya Nadella

Dan closed with what I thought was the right framing: the floor has been lowered (more people can participate in building software) and the ceiling has been raised (experienced engineers can do more than ever before). Both of those things are true and good.

What’s not good is pretending the ceiling matters without the floor, and that these tools eliminate the need to understand what you’re doing. They don’t. They amplify what you already know. If you don’t know anything, they amplify that too.

Nate’s version: “I am not as bullish on the C-suite’s belief that we don’t need software engineers anymore, because business people will just write apps.”

He’s been watching business people almost-write-apps since COBOL. They haven’t quite gotten there yet.

Categories
Picdump

Saturday picdump for Saturday, April 11

Happy Saturday, everyone! Here on Global Nerdy, Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics, and memes I found over the past week. Share and enjoy!


Categories
Current Events Meetups Tampa Bay

Tampa Bay tech, entrepreneur, and nerd events list (Monday, April 13 – Sunday, April 19)

Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, April 13 through Sunday, April 19!

This list includes both in-person and online events. Note that each item in the list includes:

✅ When the event will take place

✅ What the event is

✅ Where the event will take place

✅ Who is holding the event

This week’s events

Monday, April 13

Event name and location Group Time
Venice Area Toastmasters Club #5486
Online event
Toastmasters District 48 7:30 AM to 9:00 AM EDT
Online: Streaming Live Video with OBS
Online event
Orlando Video & Post Production Meetup 2:00 PM to 3:00 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 6
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 6
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Tea Tavern – Dungeons and Dragons
Tea Tavern Dungeons and Dragons Meetup Group – DMS WANTED 6:00 PM to 11:00 PM EDT
CorelDraw Academy
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 6:00 PM to 8:00 PM EDT
Prep Online event for AI Vibe coding – prepping our database
Online event
Tampa AI Applications Meetup Group 6:00 PM to 7:00 PM EDT
TBDEG – Getting Started in Data Engineering: The Basics
Online event
Tampa Bay Data Engineering Group 6:00 PM to 7:00 PM EDT
Sarasota Blood on the Clocktower
Clocktower meetup
Board Games and Card Games in Sarasota & Bradenton 6:00 PM to 10:00 PM EDT
MTG: Commander Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Food, Fun & Games!
Village Inn
Gulfside Gatherings 6:00 PM to 8:00 PM EDT
Toast of Lakewood Ranch Toastmasters Club
Lakewood Ranch Town Hall
Toastmasters District 48 6:30 PM to 7:30 PM EDT
North Port Toastmasters Meets Online!!
Online event
Toastmasters District 48 6:30 PM to 8:00 PM EDT
Adult Dungeons & Dragons One-Shot Campaigns at Conworlds Emporium
Conworlds Emporium
Tarpon Springs Community Fun & Games 6:30 PM to 10:00 PM EDT
Let’s Talk Toastmasters
Online event
Toastmasters Divisions C & D 7:00 PM to 8:30 PM EDT
DigiMondays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 9:30 PM EDT
Weekly General Meetup
Online event
Beginning Web Development 8:00 PM to 9:00 PM EDT
Where is Bitcoin Going?
Online event
Bitcoiners of Southwest Florida 9:00 PM to 10:00 PM EDT
Return to the top of the list

Tuesday, April 14

Event name and location Group Time
Top Interview Questions: What They Mean & Why They’re Asking
Online event
Tampa Cybersecurity Training 10:00 AM to 11:00 AM EDT
Create a Third-Person Game 10 parts Class Series – Part 7
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 7
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Build with AI: The “Pantry Pilot” — Vision-to-Action with Gemini 3
Online event
Gdg Ocala 6:00 PM to 7:30 PM EDT
Spanglish Toastmasters Club 7703731
Online event
Toastmasters Division G 6:00 PM to 7:30 PM EDT
Disney Lorcana Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Hobby Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
April Critique Night
Tap Room at the Hollander Hotel
Creative Writers Support Group 6:00 PM to 8:00 PM EDT
Pinellas Writers and Authors Weekly Meeting (Online/Zoom)
Online event
Pinellas Writers Group 6:00 PM to 9:00 PM EDT
D&D @ Critical Hit Games (Full)
Critical Hit Games
RPG-Pinellas 6:30 PM to 11:00 PM EDT
Tuesday Night Trivia at Henderson’s Kitchen and Bar
Henderson’s Bar & Kitchen
Gen Geek 6:30 PM to 9:30 PM EDT
Let’s Meetup and Discuss “Local Woman Missing” by Mary Kubica
American Social Orlando
Central Florida Books and Brews 6:30 PM to 8:30 PM EDT
The Sarasota Creative Writers
Sarasota Alliance Church
The Sarasota Creative Writers Meetup Group 6:30 PM to 9:30 PM EDT
Virtual Poetry Write In
Online event
We Write Here Black and Women of Color Writing Group 6:30 PM to 8:30 PM EDT
AI Topics — What is machine learning? A high level overview.
Online event
The Infinite Loop Lounge 7:00 PM to 8:00 PM EDT
[Virtual] Tampa Bay Bitcoin Meetup: News, Markets, & Community
Online event
Tampa Bay Bitcoin 7:00 PM to 9:00 PM EDT
Winter Springs Toastmasters Club
Online event
Toastmasters Divisions C & D 7:00 PM to 8:15 PM EDT
St. Pete Beers ‘n Board Games Meetup for Young Adults
Pinellas Ale Works Brewery
St. Pete Beers ‘n Board Games for Young Adults 7:00 PM to 10:00 PM EDT
Trivia Nights @ Escape Brewing Company – Trinity
Escape Brewing Company
Tampa Bay Area Trivia Players 7:00 PM to 9:00 PM EDT
Yu-Gi-Oh Evening Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
Nic At Nite – Weekly Movie Night
Online event
Nerdbrew Events 7:30 PM to 9:30 PM EDT
Trading Tuesday
Online event
Bitcoiners of Southwest Florida 8:00 PM to 9:00 PM EDT
Return to the top of the list

Wednesday, April 15

Event name and location Group Time
Magic Pioneer Event
Sunshine Games 7:00 PM to 10:30 PM EDT
LinkedIn Local Tampa Bay
Tech Success Network 8:00 AM to 9:30 AM EDT
Computer Repair Clinic
2079 Range Rd
Tampa Bay Technology Center 8:30 AM to 12:30 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 8
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 8
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
40k Escalation League
Battlebrush Games
Battlebrush Games: Paint Minis & Play Warhammer/Warmachine 5:00 PM to 9:00 PM EDT
CNC Wednesday’s
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 5:30 PM to 7:30 PM EDT
Chess Night at Conworlds Emporium Every Wednesday
Conworlds Emporium
Tarpon Springs Community Fun & Games 5:30 PM to 7:00 PM EDT
Orlando Chess Association
West Osceola Library
Greater Orlando Chess 5:30 PM to 8:30 PM EDT
Vibe Coding with Bolt.new
Hillsborough County ECC
Tampa AI Applications Meetup Group 6:00 PM to 8:00 PM EDT
3D Printing Orientation: Models and Slicers
Tampa Hackerspace 7:00 PM to 9:00 PM EDT
Casual Commander Wednesdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 6:00 PM to 11:00 PM EDT
Board Game Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
CigarCitySec Meetup
Cigar City Brewing
Central Florida CitySec 7:00 PM to 10:00 PM EDT
Apopka Foliage Toastmasters
Online event
Apopka Foliage Toastmasters 7:00 PM to 8:30 PM EDT
ONLINE / SPANISH: EPICTETO DISERTACIONES POR ARRIANO
Online event
Orlando Stoics 7:00 PM to 8:30 PM EDT
Cardfight Vanguard!! OverDress Weekly
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 9:30 PM EDT
Game Night!
magnanimous
Tampa 20’s and 30’s Social Crew 7:30 PM to 9:30 PM EDT
Return to the top of the list

Thursday, April 16

Event name and location Group Time
1 Million Cups St. Petersburg
The Greenhouse
1 Million Cups 9:00 AM
1 Million Cups Tampa
Entrepreneur Collaborative Center
1 Million Cups 9:00 AM
Online: Streamlabs Basics for YouTube Live
Online event
Orlando Video & Post Production Meetup 1:00 PM to 2:30 PM EDT
Sarasota Speakers Exchange Toastmasters
Online event
Toastmasters District 48 12:00 PM to 1:00 PM EDT
Online: Adobe Premiere Level 1
Online event
Orlando Video & Post Production Meetup 3:00 PM to 4:30 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 9
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 9
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Omni Toastmasters Club 6861
Online event
Toastmasters Divisions C & D 5:45 PM to 7:00 PM EDT
Tampa SEO and Digital Marketing Meetup with Steve Scott
Online event
Tampa SEO and Digital Marketing Meetup with Steve Scott 6:00 PM to 8:00 PM EDT
Board Game Night
Conworlds Emporium
Tarpon Springs Community Fun & Games 6:00 PM to 9:00 PM EDT
Warhammer Night
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
Lean Beer for All Things Agile (Tampa)
Wild Rover Brewery
Tampa Bay Agile 6:30 PM to 8:30 PM EDT
START YOUR OWN SIDE GIG! Small Business Thursdays!
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 6:30 PM to 8:30 PM EDT
April Discussion and Q&A
Online event
Bitcoin Orlando (and Worldwide) 6:30 PM to 8:00 PM EDT
Sip and Share: Poetry
Online event
We Write Here Black and Women of Color Writing Group 6:30 PM to 8:30 PM EDT
FABulous Thursdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
One Piece Thursdays
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 10:00 PM EDT
Pathfinder Society
Critical Hit Games
Critical Hit Games 7:00 PM to 10:00 PM EDT
Live streaming production and talent
124 S Ring Ave
Live streaming production and talent 7:00 PM to 9:00 PM EDT
Thursday Tacos & Tax Write Offs
Online event
Nerdbrew Events 7:30 PM to 10:30 PM EDT
Weekly Hacks
Online event
Hacktivate – Hackathon Meetup Group 8:00 PM to 9:00 PM EDT
Return to the top of the list

Friday, April 17

Event name and location Group Time
How did our nation end up in a civil war?
Pages and Plates Book Club 6:30 PM to 8:30 PM EDT
Osceola Toastmasters Club
Kissimmee Utility Authority (KUA)
Toastmasters Division E 7:30 AM to 9:00 AM EDT
Computer Repair Clinic
2079 Range Rd
Tampa Bay Technology Center 8:30 AM to 12:30 PM EDT
Create a Third-Person Game 10 parts Class Series – Part 10
Online event
Orlando Unity Developers Group 4:30 PM to 6:00 PM EDT
Unity: Create a Third-Person Game 10 parts Class Series – Part 10
Online event
Orlando Game Developers Meetup 4:30 PM to 6:00 PM EDT
Age of Sigmar: Escalation League
Battlebrush Games
Battlebrush Games: Paint Minis & Play Warhammer/Warmachine 5:00 PM to 9:00 PM EDT
Friday night games!
Cozy dragon Games
Cozy Dragon Meetups! 5:00 PM to 10:00 PM EDT
Friday Board Game Night
Bridge Club
Tampa Gaming Guild 5:30 PM to 11:00 PM EDT
Everyday AI: Stranger to Companion | Apr 17–19 | 5 Sessions
EveryDay AI Learning & Social Meetup Group 6:00 PM to 8:00 PM EDT
MTG: Commander FNM
Critical Hit Games
Critical Hit Games 6:00 PM to 11:00 PM EDT
“On Anger” – Seneca, Finishing Book 3 & Closing
The Skills Center
Tampa Stoics 6:30 PM to 8:30 PM EDT
Taps & Drafts | EDH/MtG Night
1Up Entertainment, Tampa
Nerdbrew Events 7:00 PM to 9:00 PM EDT
Modern FNM
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 10:30 PM EDT
The Practicing Stoic: A 13-Week Online Discussion Series
Online event
Orlando Stoics 7:00 PM to 8:30 PM EDT
Friday Pokemon Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:30 PM to 11:30 PM EDT
Return to the top of the list

Saturday, April 18

Event name and location Group Time
April Book Club
Tampa Bay Women’s Book Club Meetup Group 1:00 PM to 3:00 PM EDT
Hunters Creek Toastmasters
Hart Memorial Library 2nd Floor
Toastmasters Division E 9:30 AM to 11:00 AM EDT
EZ Stock (Stock, Options, Market)
2079 Range Rd
Tampa Bay Technology Center 10:00 AM to 12:00 PM EDT
Mini Con (TCG Card show and Kawaii market)
The Castle Hotel
Gen Geek 10:00 AM to 4:00 PM EDT
Come and Hang Out
Panera Bread
Windermere Writers Group 10:00 AM to 12:00 PM EDT
Torchbearer One-Shot: Dread Crypt
Emerald City Comics 4902 113th Ave N, Clearwater, Florida 33760
St Pete and Pinellas Tabletop RPG Group 11:30 AM to 3:30 PM EDT
Welding basics
Makerspaces Pinellas Meetup Group 12:00 PM to 2:00 PM EDT
Youth Dungeons & Dragons Saturdays (Ages 7-12) at Conworlds Emporium
Conworlds Emporium
Tarpon Springs Community Fun & Games 2:00 PM to 5:00 PM EDT
FREE Fab Lab Orientation
Faulhaber Fab Lab
Suncoast Makers 1:30 PM to 2:30 PM EDT
D&D (5e) @ Black Harbor Gaming (FULL)
Black Harbor Gaming
St Pete and Pinellas Tabletop RPG Group 1:30 PM to 5:30 PM EDT
Saturday Chess @ Cozy Kava St. Pete
Cozy Kava
Chess Republic 2:00 PM to 5:00 PM EDT
1776 by David McCullough
New World Tampa
Tampa Book Club – Award-Winning Books 3:00 PM to 5:00 PM EDT
Playing Nintendo Games (Nintendo Switch and Switch 2)
Online event
Nintendo Meetup Central Florida 3:25 PM to 5:25 PM EDT
Tech in Full Effect | Presented By Lite Technology Solutions x CiviWave
Tech in Full Effect 3:30 PM to 6:00 PM EDT
Game Project Therapy (Virtual)
Online event
Tampa Games Developer Guild 4:00 PM to 6:00 PM EDT
Warmachine Journeyman League
Battlebrush Games
Battlebrush Games: Paint Minis & Play Warhammer/Warmachine 5:00 PM to 9:00 PM EDT
Dave and Busters Game Night
Dave & Busters
Gen Geek 6:00 PM to 11:00 PM EDT
Yu-Gi-Oh Evening Tournament
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 7:00 PM to 11:00 PM EDT
From Zero to Crypto: Trading & Digital Business Meetup
Online event
Crypto Visionaries Meetup 9:00 PM to 11:00 PM EDT
Return to the top of the list

Sunday, April 19

Event name and location Group Time
CorelDraw Academy
MakerSpace Pinellas
Makerspaces Pinellas Meetup Group 12:00 PM to 3:00 PM EDT
Sunday Gaming
Tampa Bay Bridge Center
Tampa Gaming Guild 1:00 PM to 11:00 PM EDT
Sunday Chess at Wholefoods in Midtown, Tampa
Whole Foods Market
Chess Republic 2:00 PM to 5:00 PM EDT
D&D Adventurers League
Critical Hit Games
Critical Hit Games 2:00 PM to 7:30 PM EDT
Traveller – Science Fiction Adventure RPG
Black Harbor Gaming
St Pete and Pinellas Tabletop RPG Group 3:00 PM to 6:00 PM EDT
Sunday Pokemon League
Sunshine Games | Magic the Gathering, Pokémon, Yu-Gi-Oh!
Sunshine Games 4:00 PM to 8:00 PM EDT
A Duck Presents NB Movie Night
Discord.io/Nerdbrew
Nerd Night Out 7:00 PM to 11:30 PM EDT
Return to the top of the list

About this list

How do I put this list together?

It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrapes Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.

In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
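The review step described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the actual Meetup/Eventbrite scraping in my notebook is site-specific and not shown here, so the keyword set, the `build_checklist` helper, and the sample event names are all invented for the example.

```python
# Hypothetical sketch of the "checklist I review" step. The "scraped"
# events are an in-memory list standing in for real scraper output,
# and the keyword set is invented for illustration.
TECH_KEYWORDS = {"python", "data", "ai", "toastmasters", "game"}

def build_checklist(events):
    """Return (event, checked) pairs; events matching a keyword start checked."""
    checklist = []
    for event in events:
        words = set(event["name"].lower().split())
        checked = bool(words & TECH_KEYWORDS)  # pre-check likely matches
        checklist.append((event, checked))
    return checklist

scraped = [
    {"name": "Tampa Bay Data Engineering Group"},
    {"name": "Knitting Circle"},
]
for event, checked in build_checklist(scraped):
    print(("[x]" if checked else "[ ]"), event["name"])
# prints:
# [x] Tampa Bay Data Engineering Group
# [ ] Knitting Circle
```

The judgment calls still happen by hand; the script only proposes a starting state for each checkbox.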

What goes into this list?

I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:

    • Programming, DevOps, systems administration, and testing
    • Tech project management / agile processes
    • Video, board, and role-playing games
    • Book, philosophy, and discussion clubs
    • Tech, business, and entrepreneur networking events
    • Toastmasters and other events related to improving your presentation and public speaking skills, because nerds really need to up their presentation game
    • Sci-fi, fantasy, and other genre fandoms
    • Self-improvement, especially of the sort that appeals to techies
    • Anything I deem geeky
Categories
Artificial Intelligence Current Events Editorial

You’ve got 41 days before chip prices skyrocket

If you read my post from a few days ago, you know I’ve been sounding the alarm about how Operation Epic Fury and the closure of the Strait of Hormuz are going to wreck your tech budget. I talked about a “retail window” of about 3 to 6 weeks between the first missile strike that cut off supplies necessary for making advanced chips and the retail price hike that will follow.

Well, the clock just got a lot more specific.

Nate B. Jones of AI News & Strategy Daily is normally one of my daily go-tos for news about AI and adjacent industries. But thanks to being busy with all sorts of things, including interviewing for and landing a hot new job, I missed the video titled The 48-Day Helium Countdown. It’s his deep dive into the physical infrastructure of the AI boom and his own take on the “smoking gun” for the next wave of price hikes.

Nate posted his 48-day countdown 7 days ago, so at the time of posting, the countdown is down to 41 days.

By the way, this post is dated Monday, April 6, 2026. 41 days from now is Sunday, May 17th.
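That date arithmetic is easy to double-check with Python’s standard library (the dates come straight from this post):

```python
from datetime import date, timedelta

# From the post: published Monday, April 6, 2026,
# with 41 days left on Nate's 48-day countdown.
post_date = date(2026, 4, 6)
deadline = post_date + timedelta(days=41)

print(deadline.strftime("%A, %B %d"))  # prints: Sunday, May 17
```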

The Qatari connection

While the fighting is centered on Iran, there’s a “splash zone” in the surrounding area:

In response to the attacks by the U.S. and Israel, Iran hit Ras Laffan Industrial City in Qatar. Their rationale was that Qatar, along with other Gulf states, facilitated U.S./Israeli airstrikes on Iranian energy sites.

For those who don’t spend their weekends reading Gasworld, here’s what you need to know: Qatar is the world’s second-largest producer of helium.

As I wrote in my earlier post:
  • Helium on Earth is the result of radioactive decay.
  • As radioactive elements in the earth’s crust decay, they release alpha particles, which are made up of 2 protons and 2 neutrons. 
  • An alpha particle is a helium nucleus, and because it’s positively charged, it picks up stray electrons and becomes helium gas.
  • Helium gas gets trapped in the same rock structures that hold natural gas, and ends up mixed with it.

Helium is the “Unicorn Blood” of computing

Nate B. Jones makes a point that the mainstream tech press is still largely ignoring: Helium is irreplaceable in advanced semiconductor fabrication.

  1. Thermal Conductivity: Chips are made by using ultraviolet light to “draw” circuitry on silicon treated with a light-sensitive material.

    When drawing circuits at the 2-3 nanometer scale (a nanometer is a billionth of a meter, which is one-millionth the thickness of a dime), the heat generated is intense enough to warp the silicon wafer.

    That’s where the helium comes in. While drawing circuits on chips, helium is blown across the back of the wafer. Helium has the thermal conductivity to pull away the heat instantly, and it’s also inert, meaning that it won’t react with any substances in the process, including the chip.

    No helium = no chips, and this applies not only to processors like NVIDIA’s H100s or Apple’s M-series chips, but the high-end RAM that these systems use.

  2. The “Priority” Problem: Helium’s used for all sorts of things, and fortunately MRIs and chip fabs are at the top of the list for the current supply. But as Nate points out, “first in line” doesn’t matter if the warehouse is empty. China is currently sitting on a strategic helium reserve that the West simply doesn’t have, giving them a massive geopolitical advantage as the 41-day countdown ticks away.

41 days until the “ratchet”

According to Nate’s analysis of current global stockpiles and burn rates at major fabs (TSMC, Samsung, Intel), we have roughly 48 days (at the time he published his video; it’s 41 days as I publish this post) before the strategic reserves hit “critical low” levels.

When that happens, we aren’t just looking at expensive chips. We’re looking at unavailable chips.

  • The hyperscalers (Google, Microsoft, AWS) will use their trillions to buy up every available (and increasingly expensive) chip to keep their datacenters running, and…
  • The consumer market (you and me) will be left with the hyperscalers’ table scraps.

The bottom line for nerds

If you’ve been vibe coding or running local models and are waiting for the next big release to upgrade your workstation, stop waiting. Your window of opportunity is closing faster than we thought.

Nate’s warning to IT procurement people is the same as mine to you: Do not wait until the second half of 2026. The structural costs are about to ratchet upward. Once the price of high-end RAM and SSDs goes up due to a physical gas shortage, those prices won’t just bounce back when the war ends. They’ll stay high while the supply chain slowly refills, which will take years, not months.

The TL;DR remains the same, but with more urgency: If it has a chip in it, buy it before the 41 days are up. After that, you’ll face the combo of paying a “war tax” on your gear and competing with everyone else for the same dwindling resources.

And remember, this helium shortage applies not just to datacenters, but to anything with an advanced chip. That includes laptops and phones. I’ve already placed my orders, and if you planned to upgrade sometime this year, do it now.

Good luck out there.

Here’s Nate’s video, The 48-Day Helium Countdown. And remember, it’s 41 days now:

Categories
Picdump

Saturday picdump for Saturday, April 4

Happy Saturday, everyone! Here on Global Nerdy, Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics, and memes I found over the past week. Share and enjoy!

