Categories
Artificial Intelligence Conferences

Venkat Subramaniam’s Arc of AI afternoon keynote: “Influencing the Irrational AI-Clouded Minds”

Hello from the Arc of AI conference, happening as I write this from Austin, Texas! I’m currently enjoying a pre-new-job “vacation” in true geek style by attending, speaking at, and playing accordion at an AI conference. Yeah, that tracks.

Yesterday (Tuesday, April 14, 2026), Arc of AI’s ringleader, Venkat Subramaniam, gave one of his “big picture” talks with the insight, warmth, and humor that are his stock in trade. I took notes and pictures, my phone took a recording, I used an LLM to pull it all together, and the end result is this set of notes from Venkat’s post-lunch keynote, titled Influencing the Irrational AI-Clouded Minds.

Venkat’s session went beyond concerns about code and into the idea of protecting our field from the “breakneck speed” that threatens to break our collective necks. Right now is both the best of times and the worst of times. As tech professionals, we find ourselves on a daily rollercoaster of twists and turns, bouncing between fascination with new capabilities and a very real fear of potential (career? existential?) threats.

Venkat noted that while AI is an incredible tool, it’s currently functioning as the most powerful “vomit engine” ever created. It can puke out more code than you can handle, but that doesn’t mean it’s code you should trust.

It takes time for tech to catch on and fit into our lives

As the saying goes, history doesn’t always repeat itself, but it often rhymes. Venkat reminded us that technology maturity is rarely instantaneous. We often take for granted that the world wasn’t always “plugged in.”

  • Electricity: First made available in 1878, it was initially very expensive and difficult to generate. It took until the 1940s for the majority of U.S. homes to get electricity, which was a 60-year journey.
  • Bicycles: Designs emerged in the 1830s (and wow, were they ridiculous and downright dangerous), but they didn’t become commonplace until the 1890s. 60 years again.
  • Cars: Emerged in the late 1800s, and took until the 1930s for 60% of U.S. households to own one. 60-ish years.
  • Flight: I was in the “keener row” (Canadian slang for the front row of a classroom) at the keynote, and Venkat knows where I live, so he called on me and asked if I knew where and when the first commercial flight took place. I didn’t know. It turns out that it happened in 1914, cost $400 ($13,000 in today’s money), and was a 22-minute jaunt from Tampa to St. Pete. It would take until 1972 for 50% of Americans to have flown.

There is something both magical and sobering about that 60-year window. Every technology starts out shaky, costly, and unsafe. AI is no different.

Redefining AI: It’s Not Intelligence

One of the most grounding points of the keynote was the definition of AI itself. This is something that Venkat brought up at his talk in Tampa back in December.

We admire “intelligence” as original thinking and innovation. What AI does often looks more like what we call “plagiarism” in school:

“I call AI ‘Accelerated Inference,’ because that’s what AI really does. AI is an inference engine, not a machine [of intelligence].”

AI analyzes patterns based on available data. If the data’s garbage, the inference is garbage. And let’s be honest: we’ve trained AI on the code we’ve been writing for decades (and hey, it’s not all good). We’re effectively feeding it garbage and being surprised when it shows us what it’s got.

Where AI shines, and where it ends up where the sun don’t shine

Venkat shared a powerful anecdote about an expert C++ developer struggling to write automated tests for an enormously complex library. He suggested handing the task over to AI. The developer, after some coaxing, did that, and the AI generated a suite of tests with extensive mocking.

But the initial result didn’t even compile. Venkat began to think that his suggestion was a mistake…until the developer took a closer look at the test code.

Upon closer inspection, the developer noted that while the tests didn’t work, they were close to working. He said that he could get them up and running in two hours. It turned out that it took even less time than his original estimate.

The developer came to the realization that what the AI did in seconds would have taken him three months of full-time effort to figure out. AI got the developer 70% of the way there, but it required a developer with enough expertise to do the remaining 30%.

AI strengths vs. weaknesses

I need to take a moment to thank Gemini for turning my hastily-typed notes into the table below, which summarizes AI’s strengths and weaknesses, as enumerated by Venkat at his talk:

AI is great at…

  • Handling cognitive load: It doesn’t have a “mind” to get overwhelmed by complex, intertwined code.
  • Detecting issues: It can snap-analyze a design and find bugs or architectural flaws like a missing game loop.
  • Generating ideas: It is a powerful tool for ideation where correctness isn’t the primary burden.

AI is terrible at…

  • Reliability: It cannot yet create reliable code or documentation you can release without a human check.
  • Contextual truth: It can be “unbelievably smart” while being factually wrong, as seen in the Lua unpack example.
  • Correctness: It might give itself a 9/10 for correctness while admitting its quality is poor due to “mutating sin”.

He also presented a slide listing the following:

  • Generate ideas (thumbs up)
  • Detect issues (thumbs up)
  • Analyze design (thumbs up)
  • Explain complex code (thumbs up)
  • Vomit code (thumbs up)
  • Create tests (maybe)
  • Create reliable code (thumbs down)
  • Provide reliable documentation (thumbs down)
  • Be an authentic learning tool (thumbs down)
  • Test your patience (double thumbs up)

The “Vasa” lesson: You can disagree with physics, but it bites back

To illustrate the danger of “irrational AI-clouded minds” in management trying to mis-apply AI, Venkat pointed to the Vasa, a Swedish warship built in the 1600s. The King of Sweden, powerful and arrogant, demanded a massive, overly-ornate ship. The king understood the power of bling, but not trivialities such as “seaworthiness” and “center of gravity”.

The shipwright had an excellent sense of self-preservation, but not the courage to tell the King he was wrong. The outcome was predictable: the ship sailed only 1,600 yards before sinking, where it stayed for 300 years until it was raised, restored, and put on display in a museum in Stockholm as an (admittedly beautiful) object lesson.

In 2026, we have “Kings of Sweden” at our workplaces, and they want AI solutions at any cost for the same ornamental reasons as the Vasa. You can disagree with the “physics” of software development, but they bite back just as hard as real-world physics. If the cost of failure is high (loss of data, money, or life), you can’t blindly jump in and build an AI Vasa.

Programming is thinking, not typing

Venkat pointed us to Wikipedia’s list of obsolete occupations. Will “programmer” join the town crier and the leech collector on this list of over 100 obsolete jobs? Venkat argues the opposite:

  • Our job isn’t about typing characters; that is the least enjoyable part.
  • We’re thinkers, not typists. We write applications in our heads, not on the keyboard. The keyboard’s just a user interface.
  • Compilers didn’t kill development; they accelerated it. AI will do the same.

The real threat is cognitive decline. Rising levels of abstraction take us far from the code on which we built our hard-earned critical thinking skills. If we lose the ability to think critically because we are delegating our thinking to machines, we are in trouble. As Venkat put it, the “illiterate” of the future are those who cannot think critically.


New rules of engagement

So, how do we handle the “Black Swan” event of AI? Venkat suggests we look to the rules recently laid down by Linus Torvalds for the Linux kernel:

  • No AI Sign-offs: AI agents cannot add “Signed-off-by” tags.
  • Mandatory Attribution: It must be clear if code was assisted by AI.
  • Full Human Liability: The human submitter bears 100% responsibility for bugs, security flaws, and license compliance.

“Don’t trust and verify the heck out of it” is the new norm.

A framework for success

To remain relevant and responsible, Venkat outlined a five-step process for working with AI:

  • Understand the Problem: Don’t jump to AI immediately. Take the time to grasp the requirements.

  • Ideate: Spend time thinking about possible solutions and their consequences.

  • Activate AI: Only after ideation should you engage the tool.

  • Iterate: Work through the solutions provided.

  • Evaluate Critically: Verify everything before it goes anywhere near production.

We can’t go faster with ignorance. We need competency to gain speed and protect our reputations. AI is a powerful tool, but in the hands of a fool, it’s a dangerous one.

Let’s stop the fear-mongering. There is real work to be done, and it requires our minds more than ever.


Notes from Schutta and Vega’s Arc of AI Workshop, part 4: Own your career, learn how to learn, and don’t become a dependent

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part four of several notes from the all-day session; you might want to get a coffee for this one.

Here are links to my previous notes:


The afternoon session of this workshop shifted away from the technical and toward the personal: career management, professional skill-building, how to actually learn things in an industry that never stops changing, and how to stay sane while it’s all happening. Nate carried most of this section, alongside Dan, with some sharp contributions from the audience. It was a good room for this kind of conversation: people who’d been in the industry a while, who’d seen waves come and go, trying to figure out what the current wave means for them specifically.

You are your own career manager, and that’s non-negotiable

Dan opened by acknowledging what a lot of people in the room were probably thinking: the career path they imagined when they started — get good at coding, keep getting better at coding, code until retirement — is not the only path, and for a lot of people it turned out not to be the right one either.

His framework for figuring out what direction to go: pay attention to what actually energizes you when you’re working. What problems do you want to solve? Do you prefer building interfaces or working with data and algorithms? Does debugging a gnarly problem feel like a puzzle you want to crack, or a tax you want to stop paying? Do you like the creative side of software, or the precision and correctness side? Side projects, he argued, are one of the best ways to run these experiments without quitting your job to do it.

The paths he outlined go well beyond the traditional “developer or manager” binary: software architect, staff engineer, engineering manager, technical product manager, developer advocate (his own role), sales engineer, and the increasingly relevant entrepreneur. Each has a different center of gravity, and none of them requires you to stop being technical.

His advice for navigating toward one of these: walk backwards from where you want to be. If you want to be an architect in five years, figure out what that role actually requires, then map it back to what you should be doing in years three to five, and years one to two. You’re already doing the mental motion of decomposing complex problems. Apply it to your own career.

Nate added the practical mechanics: use your personal development budget. A lot of people don’t, often out of a quiet fear of standing out or seeming like they’re trying too hard. He was blunt about this: “If you’ve got it and you’re not using it, you’re leaving part of your comp on the table. Any good manager should be thrilled you want to get better at your job.”

The technology radar: a personal framework for staying current without losing your mind

One of the more immediately actionable tools the workshop introduced was the Technology Radar concept. It’s familiar to a lot of people from Thoughtworks’s public-facing version, but here applied personally rather than organizationally.

The idea: organize technologies and techniques into four buckets. Adopt (things you’re currently using and mastering). Trial (things you’re actively experimenting with). Assess (things you’re watching but not diving into yet). Hold (things you’re deliberately not learning right now, even if people keep telling you to).

The audience exercise around this got interesting quickly. People shared their lists. “Rust on hold because Go is a higher priority at my company” was one contribution — and that’s exactly the right way to think about it. Your radar isn’t the same as someone else’s radar. Boris at Anthropic running five parallel Claude Code instances in his terminal doesn’t mean that’s the right workflow for you. Dan was emphatic: “Don’t see what someone else is doing and feel like you’re behind. You’re not.”

The schedule layer Nate added was useful: once you’ve identified something you want to learn, think through the cadence. Weekly, maybe a podcast or a short video. Monthly, maybe a meetup. Quarterly, maybe a deeper hands-on session. Annually, maybe a conference. Small, consistent investment over time beats cramming every time.

Record your wins, and be specific about the numbers

This was a section I wish someone had told me about fifteen years ago, and I suspect most people in the room felt similarly.

Dan’s recommendation: maintain a running wins document. Not elaborate. Not ceremonial. Just a note in Apple Notes or Google Docs where you record things you accomplished, feedback you received, skills you built, presentations you gave. The point is to have the material when you need it — annual reviews, promotion conversations, job searches, award nominations.

The key, and this is where most people go wrong: be specific, and attach numbers wherever possible.

“I improved performance in our flagship application” is forgettable. “I improved performance by 25% by implementing virtual threads” is a data point. “I reduced memory usage across a thousand instances over 300 apps” is a business case. The person making decisions about your raise or your promotion can’t make that case for you if you don’t give them the ammunition. Your manager is not necessarily keeping track of your contributions with the same level of care you are.

Nate extended this with a point about visibility: you want your manager to be able to walk into a room and tell a specific story about you. Not “Nate’s a solid engineer,” but “Nate’s Azure lunch and learn series pulled 200 people in the first session and our Chief Strategy Officer shared the metrics upward.” When your name comes up in rooms you’re not in, you want there to be a story attached to it — and that story needs to be true, specific, and ideally tied to a dollar amount or a measurable outcome.

His framing: “If your boss can say ‘Dan saved us 1.8 million dollars last year in Cloud costs,’ it’s a lot harder to put Dan on the non-regrettable attrition list.”

How we actually learn things (and why most approaches don’t work)

Nate took over for the learning science portion, and it was some of the best material of the day.

The core claim: in order to remember something, it needs to be elaborate, meaningful, and have context. Which is why story is so powerful — stories create context and meaning around facts that would otherwise evaporate. He mentioned that an AV technician once stopped him after a talk specifically to say she noticed he told stories, because most speakers just recite facts, and the stories were why she stayed engaged. He took that as confirmation of what he already believed: stories are the actual unit of memory, not information.

Spaced repetition matters. Brute-forcing your way through something until you think you’ve got it and then never returning to it is how you lose it. The Forgetting Curve is real. Little bits over time beats big chunks all at once. This is why blocking regular learning time on your calendar — Friday afternoons, Tuesday lunches, fifteen minutes of morning coffee before your day explodes — actually works where “I’ll get to it eventually” does not.

He was also honest about the limits of memory: forgetting is normal, not a personal failing. He now uses Gemini to re-explain things like OSI layers that he learned thirty years ago and hasn’t needed day-to-day. “I don’t freaking deal with it constantly. Getting a nice, concise refresher is fine, as long as I verify when it matters.”

The Dreyfus model of skill acquisition came up here, and it’s worth understanding. Five stages: novice (needs explicit recipes, follow the steps exactly), advanced beginner (can start combining recipes), competent (can troubleshoot, begins to self-correct), proficient (can self-correct in the moment), expert (operates on intuition, can’t always explain what they’re doing). The punchline: most developers don’t have ten years of experience; they have one year of experience ten times. And LLMs are permanently stuck somewhere around advanced beginner. They can combine recipes. They will never have intuition, the felt sense that something is wrong before you can articulate why.

Rules are essential for novices. Rules kill experts. Checklists, a slightly different thing, are powerful across all levels, as the aviation and surgery examples illustrated. The distinction matters for how you think about AI-assisted development: AI needs guardrails because it can’t develop the intuition to know when to break the rules. You set the guardrails. That requires knowing the rules well enough to encode them.

You cannot read it all. Stop trying.

The death of a thousand subscriptions. Nate described the pile of unread magazines accumulating on his kitchen island and his wife’s gentle suggestion that most of it should go in the recycling as a near-perfect metaphor for the state of our industry’s information environment.

His rough estimate: the amount of content added to YouTube while the workshop was running would take more than a week to watch straight through, even without eating or sleeping. The amount of content added to the internet while they were in the room is unfathomable. Heat death of the universe is going to happen before you read it all.

His solution: cultivate a network of trusted people who read different things and share the signal. He and Glenn, he mentioned, exchange texts constantly, each person watching a different slice of the landscape, forwarding things worth attention. If something is genuinely important, it will hit you from multiple directions regardless. You don’t need to be first to every wave.

This connects back to the Technology Radar: FOMO is real, but you cannot surf every wave. Being a fast follower, letting other people take version 1.0 and joining at 1.1 once the shakeout has happened, is a completely legitimate strategy. The people who are struggling right now, Nate suggested, are the ones saying “nope, not my thing, not engaging,” and not the ones who are choosing deliberately where to focus.

On AI, anxiety, and not feeling like you’re behind

Dan closed with a section that felt necessary: acknowledgment that the current moment is genuinely overwhelming, and that AI fatigue is real even if nobody talks about it.

He referenced an Andrej Karpathy tweet about feeling like a powerful alien tool had been handed to everyone simultaneously without a manual, while a magnitude-9 earthquake is rocking the profession. Nobody knows how to hold it yet. The expectation that developers should now be 10x as productive is not a reality for most people. They’re still learning the tools, still figuring out what works, still dealing with the new cognitive load of evaluating AI output on top of doing the actual work.

His practical guidance on where to start, because the list of things you’re “supposed to know” (MCP, evals, prompt chaining, vibe coding, function calling, embeddings, constitutional AI, token sampling, and so on) is legitimately intimidating:

Start with playing with multiple models. Try the same prompt in Claude, Gemini, GPT. Notice the differences. That alone builds intuition. Then understand context and memory. What are the limitations of these systems, and how do you work within them? Then tools: the idea that you can give an LLM access to actions in the world. Then MCP servers as a way of packaging that capability. Then, eventually, agents and agentic workflows. But not before the foundational layers make sense.

And critically: don’t let someone else’s advanced workflow make you feel behind. The Boris-at-Anthropic-running-five-Claude-Code-instances workflow exists in a context you don’t share. Build your own relationship with these tools from wherever you actually are.

The closing argument

Nate closed the day, and I want to quote him here as directly as I can from my notes, because the framing was right:

“Fundamentals will always serve you well. I am adamantly of the opinion that they are even more important now than they were five years ago, and I thought they were pretty damn important five years ago when we started this book.”

Two mindsets available to you: define yourself by what you’ve done in the past, or define yourself by the problems you’re going to solve in the future. Reactive or proactive. Either way, change is coming. It always has been. He’s been doing this for almost thirty years and has not yet seen an instance where the industry just… stopped. The pendulum swings, the landscape shifts, and the people who navigate it best are the ones who maintain the fundamentals while staying curious enough to pick up the new tools.

He admitted he’s nervous about the cohort of people entering the industry right now: the steep drop in junior hiring, the Stanford placement numbers, the companies that have convinced themselves AI obsoletes entry-level work. But he thinks the snapback is coming. We need juniors to become seniors. Seniors don’t appear from nowhere. At some point, that math becomes undeniable.

His last line stuck with me: “I’d rather be the lead sled dog, because at least the view changes.”


Notes from Schutta and Vega’s Arc of AI Workshop, part 3: Clean code, influence skills, and why your legacy code pays the bills

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part three of several notes from the all-day session; you might want to get a coffee for this one.

Here are links to my previous notes:


Start with the big picture before you touch anything

After lunch, Nate and Dan shifted gears from the big themes of reading code and navigating unfamiliar systems into something more granular: what actually makes code good, how to work with the humans around that code, and why the people problems in software are harder than the technical problems. If Part 1 was the philosophical case for fundamentals and Part 2 was about reading and navigating code, Part 3 was the craft and culture of actually writing it well – and getting your organization to care.

Dan opened this segment with a point that gets skipped constantly: before diving into a codebase, understand why it exists. Who are the stakeholders? What does this project mean to the business? Who are the actual humans using it?

He made a point I appreciated: LLMs can’t produce empathy. They can describe a system, but they can’t tell you that the insurance claims processing app you think is boring is the thing that determines whether a family gets their house repaired after a flood. That kind of context changes how carefully you work.

On documentation: read it, but don’t treat it as gospel. Dan spent three days once trying to understand a complex system by carefully reading what he thought was current documentation, then discovered it was two major versions out of date. The code had been completely rewritten. His rule: documentation can lie, but code never does. Read both, verify what’s actually running, and don’t be afraid to ask a colleague for three minutes of context before burning three days spinning your wheels.

He also made a point about documentation as an opportunity: if there isn’t much of it, that’s your chance to contribute right away. Your fresh perspective on an underdocumented system is genuinely valuable; you’ll notice things longtime contributors have stopped seeing.

Navigating unfamiliar code: entry points and mental models

Dan walked through his framework for getting oriented in a large, unknown codebase. The key concept: find the entry points. In Java, that’s the main method. But more broadly, it’s anything that answers “how does something get into this system?” – public APIs, web UIs, event handlers, message consumers, scheduled tasks, lifecycle hooks.

If you don’t know what questions to ask, you can’t ask them, whether of a teammate, or of an AI. That’s the part that requires actual knowledge. Once you know you’re looking for entry points, you can use AI tools to help find them. Without that conceptual frame, you’re just asking “what does this do?” and hoping for a useful answer.
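
To make the "entry points" idea concrete, here's a minimal, framework-free Java sketch. The class and method names are my own invention for illustration; in a real system the request handler might be a REST controller and the scheduled task a cron job, but the reading strategy is the same: find each place control enters the system and trace forward from there.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class EntryPoints {

    // Entry point 1: the main method, where the JVM hands control to us.
    public static void main(String[] args) throws Exception {
        System.out.println(handleRequest("order-42")); // trace: request in
        runScheduledCleanup();                         // trace: timer in
    }

    // Entry point 2: a request handler. In a real codebase, look for the
    // equivalent: public APIs, web controllers, message consumers.
    static String handleRequest(String orderId) {
        return "processing " + orderId;
    }

    // Entry point 3: a scheduled task, work that runs with no user action.
    static void runScheduledCleanup() throws Exception {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        scheduler.schedule(() -> System.out.println("cleanup ran"),
                10, TimeUnit.MILLISECONDS);
        scheduler.shutdown();
        scheduler.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```

Once you can name these categories, you can ask an AI assistant a sharper question than "what does this do?", such as "list every scheduled task in this module."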

From there, he talked about building mental models. Not necessarily elaborate UML diagrams, but some kind of internal representation of how the system works. A sketch on paper. A flow chart from entry point to output. Something that externalizes the structure so you can reason about it and share it with someone else who can tell you what’s missing.

Nate added something I want to highlight: AI tools can tell you what code is doing, but they still can’t tell you why it’s doing it. That gap between the code’s behavior and the intent behind it is where human expertise lives. The code may be technically correct and historically wrong, a deliberate workaround that made sense in 2014 that nobody documented.

Make changes carefully, incrementally, and reversibly

Nate was emphatic on this: when you’re modifying existing code, especially under time pressure, make small, reversible changes. Not 3,000-line PRs. Not agents running loose making sweeping modifications. Atomic commits, each representing one logical change, that can be understood, reviewed, and reverted independently.

His version control points were basic but worth restating:

  • Commit frequently, not in massive batches
  • Write meaningful commit messages (this is, he admitted, something he now largely delegates to AI – letting it summarize what he changed before committing)
  • You are accountable for every PR you submit, regardless of whether you or an agent wrote the code

That last point deserves emphasis. Dan was clear: “If I have questions about a PR, you better be able to answer them. You can’t just say ‘my AI did it.’ You have to understand these decisions.”

He also raised a thought experiment worth sitting with: imagine your boss tells you to take Friday off, and over the long weekend, an AI agent will be let loose on your most critical production system: fixing bugs, adding features. You’ll review what it did on Monday. Are you excited about the three-day weekend, or terrified?

If your answer is “terrified,” that’s the correct answer. And the reason you’re terrified points directly to the value of the fundamentals: documentation, tests, diagrams, clear architecture. Those are the things that make an AI’s work reviewable rather than a mystery you have to reverse-engineer.

What makes code good (and bad)

This section was dense. The key ideas, in rough sequence:

  • The Ikea effect and code ownership. Nate: “Every one of you has looked at some code and uttered some variant of ‘what idiot wrote this,’ only to realize you were the idiot who wrote it a couple months ago.” We value our own code more than we should. Code reviews exist partly as a corrective for this.
  • Languages are tools, not identities. Both Nate and Dan are Java Champions, and both were clear: Java is just a tool, not a religion. The Blub Paradox (from Paul Graham) explains why developers get dogmatic: you can’t easily see the limitations of your chosen language because it’s your baseline for normal. AI tools are helping break this a bit; they’re using more languages and frameworks than they used to, and that breadth makes them better programmers.
  • The lazy programmer ethos is real and good. Before writing code, spend 20 minutes making sure someone else hasn’t already solved this. Use language features before reaching for a library. Use a library before writing your own. Dan told a great story about being new to a project, discovering a utility function that took 14 parameters just to capitalize a string, and quietly using the built-in string method instead, then watching the entire senior team’s heads explode when he revealed this in a meeting. The built-in had been there for years. Nobody had looked.
  • Lines of code is a terrible metric. Dan said this directly: shipping 37,000 lines of code is not an accomplishment. Code is a liability. More code means more surface area for bugs, more maintenance, more complexity for the next person (including future you). The vibe coding community’s tendency to measure apps by lines of code is backwards. Code deleted is almost always the better outcome.
  • Cyclomatic complexity matters. This came up repeatedly. Nate’s heuristics: low single digits is good, high single digits means you should be actively refactoring, double digits means it’s time to leave the project. He mentioned encountering real production code – written by a human – with a cyclomatic complexity of 82. The brackets were labeled “start for loop one / end for loop one” just to keep track. Not good. The punchline about cyclomatic complexity as a guardrail for AI agents was sharp: if you don’t give an agent a directive like “cyclomatic complexity must stay below four,” it won’t apply that constraint. And if you don’t know what cyclomatic complexity is, you won’t know to ask. Tools like SonarQube, PMD, and the memorably-named CRAP metric (Change Risk Anti-Patterns: cyclomatic complexity versus code coverage) can help enforce this, but only if someone with the knowledge sets them up.
  • Short methods, high cohesion, low coupling. Nate: “A method should do one thing and do it very, very well. This is the concept behind Unix piping: simple things together to get more complicated results.” That said, he also added the counterpoint: don’t favor brevity over clarity. A one-liner that nobody can understand in six months is worse than three readable lines.
  • AI tends toward verbosity and complexity. Both speakers noted that AI coding assistants have a strong bias toward writing more code rather than less, toward adding dependencies rather than using what’s already there, and toward long methods rather than short ones. They will solve the problem – but they won’t necessarily solve it simply. That instinct toward simplicity has to come from you, either as a direct code reviewer or as someone who knows how to write good prompts and capability directives.
  • Composition over inheritance. Dan mentioned this as a persistent AI failure mode: models trained on years of Java code have learned the “create a service interface and one implementation even when you’ll never have a second implementation” pattern because it was ubiquitous. That doesn’t mean it’s good. It just means it’s common in the training data.
  • Copies of copies degrade. Nate made a point I hadn’t heard framed quite this way: if vibe-coded projects proliferate on the internet, and future models are trained on that code, the training data quality decreases. Models training on AI-generated output of questionable quality will produce AI-generated output of worse quality. We’re already seeing this in written content on LinkedIn and elsewhere. We should expect to see it in code.
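To make the cyclomatic-complexity guardrail concrete, here’s a small invented sketch (mine, not the workshop’s): each `elif` in the branchy version adds a decision point, while the table-driven refactor keeps the count flat no matter how many cases you add.

```python
# Cyclomatic complexity is roughly: decision points + 1.
# Branchy version: four decision points (the if/elif chain) -> complexity 5,
# and every new region adds another branch.
def shipping_cost_branchy(region: str) -> float:
    if region == "us":
        return 5.0
    elif region == "ca":
        return 8.0
    elif region == "eu":
        return 12.0
    elif region == "apac":
        return 15.0
    else:
        raise ValueError(f"unknown region: {region}")

# Table-driven refactor: the branching collapses into one lookup,
# so adding a region no longer adds a decision point.
_RATES = {"us": 5.0, "ca": 8.0, "eu": 12.0, "apac": 15.0}

def shipping_cost_table(region: str) -> float:
    try:
        return _RATES[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}") from None
```

In practice you’d let a tool such as SonarQube (or Python’s radon) count this for you rather than tallying branches by hand, which is exactly the setup step that requires someone who knows the metric exists.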

Heritage code, not legacy code

One small reframing that I liked: Dan suggested we call it “heritage code” instead of “legacy code.” Legacy has a negative connotation. But code that’s been in production for fifteen years and processed billions of dollars of transactions is an achievement. It deserves some respect.

That said, Nate was clear: all code eventually becomes legacy. Sometimes immediately after you commit it. It will live longer than you expected, will be harder to kill than you hoped, and someone will be maintaining it years after you’ve moved on. Write with that person in mind.

His favorite version of this sentiment, which he attributed to someone else: “Always write code as if the person maintaining it is a homicidal maniac who knows where you live.”

The influence skills nobody taught you

The final section of this part of the workshop took a hard turn into territory that software engineering curricula almost never cover (but is a key part of my developer advocate work): how to actually get things done in organizations full of humans with competing incentives.

Nate’s thesis: the hardest problems in software are people problems, not technical problems. And the skills to navigate people problems (influence, empathy, listening, finding common ground) don’t come with a CS degree.

He recommended How to Win Friends and Influence People by Dale Carnegie without apology. “It is older than everyone in this room. It is evergreen. I guarantee it will help your career.” The book is about understanding what people actually need versus what they’re saying they need, and how to align your goals with theirs.

On the current AI mandate situation specifically, he offered a practical frame: many senior leaders have “establish AI across our workforce” as a KPI tied to their bonus. They don’t necessarily care how you use AI. They need to be able to say you’re using it. If you can give them a win, a story they can tell upward, they will largely leave you alone about the details. Fill the vacuum with your own narrative or someone else will fill it with token counts.

Two approaches to influence:

  1. The hammer approach: brute-force people into agreeing with you. Works occasionally, burns trust, creates enemies.
  2. The ninja approach: make it their idea. Nate told a story about introducing TDD at a company that had rejected it when he first proposed it. He convinced one tech lead (who happened to be named Jeff, continuing the workshop’s running bit about terrible variable names) to adopt it on his team. When crunch time arrived and Jeff’s team was calmly fixing small issues while everyone else was drowning in defects, Jeff presented the same TDD case to the wider team – and got a standing ovation. Nate, who had proposed the same thing months earlier and been ignored, got no credit. But the practice got adopted. That was the goal.

His point: being the new person with the right answer is often less effective than being the connector who gets the right answer into the right person’s mouth. Letting go of the credit is a skill. It’s not a natural skill. Practice it anyway.

Code reviews: the underrated force multiplier

The workshop closed this segment with code reviews, and both speakers were emphatic that these matter more in an AI-augmented world, not less. When agents are generating PRs, someone with judgment still has to review them, and that reviewer has to understand the code well enough to ask real questions.

Some norms they pushed:

  • No snarky comments. Ever. They are not useful, they’re not clever, and everyone can see what you’re doing.
  • No 3,000-line PRs. Reviewers should refuse to engage with them.
  • Assume positive intent. You don’t know what’s happening in someone’s life. The code that looks lazy might have constraints you’re unaware of.
  • Ask questions instead of making proclamations. “Did you consider what happens when user load ramps up?” is better than “this won’t scale.” Especially when you haven’t done the math.
  • You are not your code. Code reviews are opportunities to improve the work, not indictments of your worth as a person.

Nate’s read on the current state of code reviews: PRs have made the process much more accessible than the old scheduled review meeting, but have also introduced review theater – someone clicking “approved” without looking because it’s in the process checklist. The form without the substance.

Dan’s suggestion: use AI to help you understand PRs before reviewing them. Give it the PR description and ask it to explain what’s actually changing and why. You’ll ask better questions.


Notes from Schutta and Vega’s Arc of AI Workshop, part 2: Reading code is a superpower, and we were never taught it

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part two of several notes from the all-day session; you might want to get a coffee for this one. You can read the previous set of notes here.


How you got here doesn’t matter. That you got here does.

Nate and Dan presenting, with a slide that reads “Ultimately it is about problem solving, tinkering, creativity”

After the first break, Nate and Dan shifted from the big-picture AI discourse into something more concrete: the actual craft skills that make a software engineer, and why those skills are becoming more important in an AI-augmented world, not less.

Nate opened this segment by talking about the different paths into software engineering (the traditional CS degree, boot camps, self-taught) and making a point I think deserves wider circulation: there is no canonical path, and apologizing for yours is a waste of energy.

What matters, in his view, isn’t the credential. It’s whether you have the tinkering mindset. Whether you’ve gone to sleep thinking about a problem and woken up with the answer. Whether you look at a broken thing and feel the pull to understand why it’s broken.

He also made an honest admission about what CS programs are actually designed to do: prepare you for graduate school in computer science. That means algorithms, compiler theory, operating systems, language design. Practically useful for building production software? Debatable. Practically useful for becoming a researcher? Yes. Boot camps swing hard the other way – framework-heavy, language-focused, get-you-hired in 12 weeks – which means they’re also somewhat transitory, because the framework of the moment changes every six months.

Neither path gives you everything. That gap between “what we taught you” and “what I want you to know when you join my project” is basically what their book is trying to fill.

The skill we teach least is the one we use most: reading code

This was the section that hit me hardest, because I’ve thought about it before and never heard it stated this cleanly.

Nate’s observation: we teach people to write code almost exclusively. We spend essentially zero time teaching people to read code. And yet, in any real production environment, the ratio of reading to writing is not even close. You spend far more time navigating, understanding, and reasoning about existing code than you do creating new code from scratch.

His analogy: “I wouldn’t teach you French by saying, now go write some French.”

Reading code is hard for a few compounding reasons. You have to understand the problem domain (which is often genuinely complex – he gave examples from finance and insurance where the business rules alone are labyrinthine). You have to see the code through another person’s mental model. And you often have to do this under time pressure, making changes you don’t fully understand, in systems you weren’t around to watch grow.

The result is what Nate called “patches on top of patches on top of patches,” and the remarkable thing isn’t that these systems have bugs, it’s that they work at all.

There’s also the cognitive bias dimension. The Ikea effect: you value things you assembled yourself more than things someone else built, which means you’re inclined to view your own code as cleaner and more sensible than others’. The mere exposure effect: familiarity breeds preference, which is why developers get dogmatic about languages; not because their preferred language is objectively superior, but because it’s the one they know.

Nate had a great riff here about what he called the Blub Paradox, from a Paul Graham essay: when you’re a programmer in a language somewhere on the power continuum, you look down the spectrum and think “I can’t imagine being productive with those limitations,” and you look up and think “I don’t know why anyone would need all that weird stuff I don’t have.” The language you know well becomes your baseline for what’s normal. AI tools, interestingly, may be helping break this a bit. He and Dan both noticed they’re using more languages and frameworks than they used to.

The Lab: Reading an unfamiliar codebase without AI first

Dan ran the group through a hands-on exercise using the Spring Pet Clinic, a well-known sample Java/Spring application. The instructions were deliberately old-school: no AI tools yet. Just open the repo and start reading.

The goal was to build some muscle memory around the basics: identifying technologies and frameworks from project structure alone, finding a main application class, recognizing architectural patterns just from folder layout.

It’s a more sophisticated skill than it sounds. Dan’s point: even if you’re not a Java developer, you can learn a lot from just looking at a pom.xml. You can infer architectural choices from package structure; “package by feature” versus “package by layer” tells you something about how the original authors thought about the system. You can spot where to start, what the domain objects are, how the system is organized.
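As a toy version of that kind of inference (my sketch, using a made-up pom.xml fragment rather than Pet Clinic’s real file), a few lines of Python are enough to turn Maven artifactIds into architectural hints:

```python
import xml.etree.ElementTree as ET

# A made-up pom.xml fragment for illustration; real files carry far more.
POM = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <parent><artifactId>spring-boot-starter-parent</artifactId></parent>
  <dependencies>
    <dependency><artifactId>spring-boot-starter-web</artifactId></dependency>
    <dependency><artifactId>spring-boot-starter-data-jpa</artifactId></dependency>
    <dependency><artifactId>h2</artifactId></dependency>
  </dependencies>
</project>"""

MAVEN_NS = "{http://maven.apache.org/POM/4.0.0}"

def infer_stack(pom_xml: str) -> list[str]:
    """Guess the tech stack from Maven artifactIds alone."""
    root = ET.fromstring(pom_xml)
    artifacts = {e.text for e in root.iter(MAVEN_NS + "artifactId")}
    hints = {
        "spring-boot-starter-parent": "Spring Boot application",
        "spring-boot-starter-web": "serves HTTP (Spring MVC)",
        "spring-boot-starter-data-jpa": "relational persistence (JPA)",
        "h2": "in-memory database, likely a demo or dev profile",
    }
    return [hint for artifact, hint in hints.items() if artifact in artifacts]

print(infer_stack(POM))
```

The point isn’t the script; it’s that the dependency file alone tells you this much before you’ve read a single line of Java.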

After they’d done it manually, Dan switched to showing how AI tools handle the same task, specifically using a “plan mode” in his coding assistant where he wasn’t asking it to write anything, just to explain what it was looking at. The output was genuinely useful: a breakdown of the tech stack, architectural summary, entry points, dependency graph.

His key insight: “I use AI tools far more to read code, understand things, get familiar with things, and learn things than I do to write it.”

But then the follow-up, which is the important part: he wouldn’t have known what questions to ask the AI without the fundamentals. Understanding that architecture is a thing, that there are different ways to organize packages, that there’s something meaningful to look for in the dependency file; that knowledge has to come from somewhere. The AI accelerates the exploration; it doesn’t replace the ability to know what you’re looking for.

AI can tell you what code is doing. It still can’t tell you if that’s right.

This is where the conversation got interesting. Nate made a distinction that I think is underappreciated:

These tools are now remarkably good at reverse-engineering legacy code and telling you what it does. Feed it a 30-year-old COBOL module and it’ll give you a plain-English summary of the behavior. That’s genuinely powerful, especially for the mainframe migration work he mentioned in the morning session.

But “this is what the code is doing” is a completely different question from “is this what the code should be doing?”

He gave a real-world example: a system where some business logic was technically incorrect, but the error was intentionally corrected downstream in a different process. The code was wrong on purpose, because fixing it at the source would have required fixing everything else too. An AI reading that code would correctly describe the behavior, but have no way to know the behavior was a deliberate workaround rather than a bug.

That knowledge lives in the heads of the engineers who were there when the decision was made. And increasingly, as those engineers retire or move on, it’s not living anywhere.

The airline pricing example he used was perfect: the same seats, same flights, same dates — but booking as two one-ways costs a third less than booking as a round trip. There’s almost certainly a specific piece of business logic somewhere that creates that arbitrage. An AI can describe that code. It can’t tell you whether the Delta exec who approved it knew what they were approving.
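To show the shape of logic that could create that arbitrage, here’s a deliberately toy model (all fares and routes invented by me): the round-trip price comes from its own table rather than being derived from the legs, so nothing in the code forces the two numbers to agree.

```python
# Invented fares: in this hypothetical rulebook, round trips are priced
# from a separate table rather than as the sum of their legs.
ONE_WAY = {("AUS", "JFK"): 150.0, ("JFK", "AUS"): 150.0}
ROUND_TRIP = {frozenset({"AUS", "JFK"}): 450.0}

def price_round_trip(a: str, b: str) -> float:
    return ROUND_TRIP[frozenset({a, b})]

def price_two_one_ways(a: str, b: str) -> float:
    return ONE_WAY[(a, b)] + ONE_WAY[(b, a)]

# Same seats, same dates: $450 round trip vs. $300 as two one-ways.
print(price_round_trip("AUS", "JFK"), price_two_one_ways("AUS", "JFK"))
```

An AI reading this can accurately describe both pricing paths. Whether the gap between them is a strategy, an oversight, or a workaround for some other system is exactly the knowledge that isn’t in the code.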

The sentinel knowledge problem, part two

Nate returned to a theme from the morning: we are starving the pipeline that creates the experts who can actually evaluate AI output. But in this session, he made it more concrete.

Senior engineers look at AI-generated code and immediately spot the issues: the approach that’ll work in a demo but fall over at scale, the pattern that was idiomatic three major versions ago, the security implication nobody mentioned. Junior engineers look at the same code and think it looks fine, because they don’t yet have the experience to know what “fine” looks like.

The concerning dynamic: juniors are increasingly using AI to learn, but learning by accepting AI output without the ability to critique it isn’t learning. It’s cargo cult programming. You’re learning to produce things that look like code without developing the underlying judgment about whether those things are good.

Nate’s line: “AI is the very eager junior developer, and you need to monitor their output closely.”

The economics sidebar: tokens, budgets, and the reality of scale

This wasn’t on the agenda, but it came up organically and it was one of the more grounded conversations of the day.

Nate described a real situation: an organization’s head of AI was approached by a developer who wanted the unlimited Claude Code tier. When asked how many tokens he needed, the answer was 60,000 a day. Response: show me that you’re generating not $300K of business value weekly, but a million dollars. Can you do that? No? Then no.

The scaling math is uncomfortable. A room full of developers (say, 5,000 at a larger company) each burning hundreds or thousands of dollars of tokens per week is a significant line item. And the current pricing reflects a subsidized market. When investors start demanding returns, those prices go up.

He drew an analogy to the Uber model: lose money for years, drive out competition, then raise prices. Except Uber’s “product” (a car ride) is a commodity. The switching costs for enterprise AI tooling embedded into CI/CD pipelines, developer workflows, and institutional processes are not trivial.

His read on Anthropic’s and OpenAI’s revenue vs. profit numbers: revenue is real. Profitability is not. People are seeing value in the product, but the product is priced below cost. That’s not a sustainable business model, and the reckoning will come.

On whether we’ve hit a plateau

Someone in the room asked whether the intelligence improvements we saw around late 2024/early 2025 would continue.

Nate’s take: we’re probably hitting a plateau on pure scaling. The exponential gains from “just make the model bigger” appear to be diminishing. Gary Marcus’s position that we’re approaching the limits of what scaling alone can achieve strikes him as reasonable.

The “Mythos is so dangerous we can’t release it yet” announcements that keep appearing? He’s skeptical. Follow the incentives: the companies making those claims need their valuations justified.

He was slightly more philosophical about the longer tail – the sci-fi scenarios, the alignment concerns, the “what if it’s already smarter than it’s letting on” thread. He takes it seriously without catastrophizing. The honest version of his view: we don’t know what the motivations of these systems are, because the people who built them don’t fully understand how they work either. That warrants humility, not panic, but also not dismissal.

Bottom line from this session and the previous one

The throughline across the whole day, as best I can summarize it: these tools are genuinely powerful accelerants for people who already have the foundations. They are not a replacement for the foundations. They are an amplifier, and what you get out depends heavily on what you put in.

The code reading skills, the domain understanding, the architectural instincts, and the ability to ask the right questions. All of that still has to come from somewhere. What’s changed is that once you have it, you can go faster, do more, and explore more territory than you could alone.

That’s good. The part that’s bad is that we’re making decisions right now (who to hire, what to teach, what to outsource) based on the assumption that the foundations don’t matter anymore.

They matter. Probably more than they used to.


Notes from Schutta and Vega’s Arc of AI Workshop, part 1: The fundamentals still matter!

I caught the Fundamentals of Software Engineering in the Age of AI workshop yesterday at the Arc of AI conference’s workshop day, led by Nathaniel Schutta (cloud architect at Thoughtworks, University of Minnesota instructor) and Dan Vega (Spring Developer Advocate at Broadcom, Java Champion).

Nate and Dan are the co-authors of a book on the subject, Fundamentals of Software Engineering, and they’re out here workshopping the ideas with developers who are living through the same AI-saturated moment we all are.

Fair warning: this post is long. The session was dense, the conversation was good, and I took a lot of notes.

Here’s part one of several notes from the all-day session; you might want to get a coffee for this one.


The opening thesis: giving someone a nail gun doesn’t make them a carpenter

Nate opened with a confession: he’s not handy. At all.

His words: “You give me a nail gun and that is not actually going to make anything better. The cat’s gonna have a nail in its tail.”

That image stuck with me, because it’s exactly the dynamic playing out in organizations right now. Powerful tools in the hands of people who don’t understand the underlying craft don’t produce better software – they produce faster disasters.

Both Nate and Dan were quick to acknowledge that yes, things changed. Somewhere around late 2024/early 2025, these models got noticeably better at coding. Neither of them is dismissing that. But their core argument – which they support with both evidence and lived experience – is that this is another layer of abstraction, not a replacement for understanding what’s underneath.

A brief history of “this will replace programmers”

Slide: “Here we go again,” showing a list of technologies that were supposed to replace programmers

Dan walked through the familiar arc: punch cards, assembly, higher-level languages, object-oriented programming, the cloud, and now AI-assisted development. Each step, someone announced the death of the programmer. Each step, the programmer survived and became more productive.

COBOL was going to let business people write their own programs. Java Beans were going to eliminate business logic development. No-code platforms were going to replace developers entirely. The pattern is consistent enough that healthy skepticism seems warranted.

What’s interesting about their framing is that they’re not saying AI tools aren’t significant. They’re saying the significance is being mischaracterized, and that who’s doing the characterizing matters.

Consider the source

This is where the talk got sharp. Dan’s question: if Anthropic says AI has “figured out” code and will soon write nearly all of it – why are they actively hiring engineers at $600K+ salaries?

Their breakdown of who’s claiming AI replaces developers:

  • The tool makers (Anthropic, OpenAI, etc.) – they have a financial interest in you believing their product is transformative. Grain of salt.
  • Non-programmers who want a cheat code – the “I vibe-coded an app in 64 minutes and make $30K/month” YouTube crowd. Grain of salt the size of a boulder.
  • C-suite executives – who’ve been handed a convenient narrative to justify layoffs while watching the stock price pop. Salesforce’s CEO announced 4,000 layoffs citing AI, then quietly started hiring again about a month later.

Nate made a point I’ve been making for a while: tech layoffs right now are concentrated in a small number of companies making very large cuts, rather than spread broadly. The psychological effect is outsized. Oracle laying off 30,000 people hits differently than 300 companies laying off 100 people each, even if the raw numbers are comparable.

Vibe coding: fun for weekend projects, terrifying for payroll

Slide: Andrej Karpathy’s original vibe coding tweet

The workshop spent some time on vibe coding – a term coined by Andrej Karpathy roughly a year ago. Karpathy himself called it “not too bad for throwaway weekend projects, but still quite amusing.”

Nate and Dan’s framing: the stakes matter. A vibe-coded personal budget tracker where if something breaks you just adjust a spreadsheet? Great. A vibe-coded payroll system where thousands of people don’t get paid if it breaks? Categorically different situation.

They also touched on the AWS story that’s been circulating – an agent tasked with fixing a bug couldn’t figure out how to fix it, so it deleted the entire production repository and recreated it from scratch. Which is, in a very literal sense, a solution. Just not one any human with experience would have suggested. As Dan put it: “Systems have no feelings. They have no experience of ‘wait, that doesn’t seem like a good idea.'”

The expertise gap problem

This was the section that hit hardest, and it connects to something Dan wrote about in an article he mentioned: when he uses AI to generate Spring/Java code, a domain where he has deep expertise, he can immediately spot the issues. When he uses AI to generate iOS/Swift code, where he’s a novice, it looks like magic.

The issue isn’t that the code quality was different. The issue is that his ability to evaluate it was different. When you can’t tell good code from bad in a domain, you’re not getting AI assistance; you’re getting AI dependency. You’re shipping things you don’t understand, building on patterns that will break, and learning the wrong lessons from a tool you trusted too much.

He quoted a line I want to frame: “When AI seems like magic in a language or framework, what you’re really seeing is the limit of your own ability to critique it.”

We’re choking off the pipeline that creates experts

Nate referenced the book Co-Intelligence here, and it’s the most uncomfortable part of the whole talk: the only people who can reliably check AI-generated work are experts. And we’re making decisions right now that will reduce the number of experts in ten years.

Companies are not hiring junior developers. Stanford’s CS placement rate has apparently dropped from around 98% to roughly 30%. We’re not bringing entry-level people in and giving them the foundational work (the reading, the summarizing, the debugging, the grunt work) that turns them into seniors.

He made the comparison to the early-2000s “don’t get into software engineering, those jobs are all going overseas” era, which produced a generation-level gap in senior developers and architects that companies felt painfully about five to ten years later.

And we’re doing it again. On purpose, this time, with AI as the cover story.

The mainframe migration moment

This was a tangent, but a good one. Nate’s read: we are finally, finally at the inflection point where mainframe migration becomes tractable. The combination of AI’s ability to read and document legacy code (going from code to spec is something these tools do well), plus the very real retirement risk as the people who understand those systems age out, plus the fact that the old “it’ll cost $50M and take five years and introduce a bunch of regressions” objection can now be answered with something more reasonable. All of that is converging.

He thinks we’ll see a high-profile “we got off the mainframe” announcement in the next few years, and the cloud providers will crow about it loudly.

The economics of AI tools deserve scrutiny

Nate got pointed here, and I think he’s right to. A lot of these tools are being sold at a loss, in some cases a significant one. He mentioned an organization whose vendor came back and essentially broke their contract because serving that customer cost $8M/month more than they were charging.

The concern isn’t that AI goes away. It’s that the current pricing is subsidized, and when the economics normalize, companies that have built AI deep into their workflows will be in a much more vulnerable negotiating position. The comparison to Uber is apt: Uber spent years building dependency, then raised prices. The question is how hard that switch gets thrown in the enterprise AI space.

The actual bottom line

Dan and Nate presenting, showing slide that says “I think what AI does quite frankly is reduce the floor and raise the ceiling for all of us.” — Satya Nadella

Dan closed with what I thought was the right framing: the floor has been lowered (more people can participate in building software) and the ceiling has been raised (experienced engineers can do more than ever before). Both of those things are true and good.

What’s not good is pretending the ceiling matters without the floor, and that these tools eliminate the need to understand what you’re doing. They don’t. They amplify what you already know. If you don’t know anything, they amplify that too.

Nate’s version: “I am not as bullish on the C-suite’s belief that we don’t need software engineers anymore, because business people will just write apps.”

He’s been watching business people almost-write-apps since COBOL. They haven’t quite gotten there yet.

Categories
Artificial Intelligence Conferences Programming What I’m Up To

I’m the opening musical act at the Arc of AI conference!

Here’s another way that Arc of AI is going to be an AI conference unlike any other: it’s going to have an opening musical act, namely…me!

Arc of AI organizer Dr. Venkat Subramaniam sent me a very nice email inviting me to help out with the after-dinner conference kickoff on Monday, April 13th at 7:00 p.m. with a couple of accordion numbers. I was honored (Dr. Venkat’s kind of a big deal), I’m only too happy to oblige, and I like to think of it as my contribution to “Keep Austin Weird!”

Here’s a sample from the last Collision conference in Toronto:

So in addition to my talk, AEO – Writing Docs and Code for Machines, I’ll have another onstage appearance at Arc of AI.

So far, the second quarter of 2026 is shaping up nicely!


Want to find out more about Arc of AI and register?

Once again, Arc of AI will take place from Monday, April 13 through Thursday, April 16, with the workshop day taking place on Monday, and the main conference taking place on Tuesday, Wednesday, and Thursday.

Arc of AI tickets are BOGO!

From Arc of AI’s registration page:

You read that right! For each conference ticket you purchase, you get one free ticket. This applies only to conference tickets, not to workshops.

 

Categories
Conferences Tampa Bay

The ultimate guide for working the room at Tampa Bay Tech Week (April 7–12)

I’ve spent years doing developer relations, which means “working the room” is basically part of my job description. And since Tampa Bay Tech Week is happening next week, I thought I’d share this bit of knowledge with you:

Working the room isn’t about being an extrovert or a schmoozer. It’s about having a system to make the most of encounters you have at a gathering.

Whether you’re new to tech events and the idea of walking into a roomful of strangers makes you want to back slowly toward the exit, or you’re a seasoned conference-goer looking to sharpen your approach, this article is for you. I’m sharing everything I know about making the most of a week like this.

You don’t have to try everything in this article. Instead, scan through it, find the tips that fit your situation and your personality, and put those into practice. The goal is to leave Tampa Bay Tech Week with connections that actually go somewhere, whether that’s opportunities or friendships down the line.


Contents

  1. Before Tech Week
    1. Plan your week like a choose-your-own-adventure
    2. Do some homework
    3. Arrive with goals
    4. Prepare your introduction
    5. Have some “pocket stories” handy
    6. Warm up before you walk in
  2. At the events
    1. Project good posture
    2. Engage with eye contact
    3. Connect instantly with your LinkedIn QR code
    4. How to join a conversation in progress
    5. Observe, ask, reveal (OAR)
    6. Translate your work for the room
    7. Be more of a host and less of a guest
    8. Consider volunteering
    9. Read the social energy of each event
    10. Watch out for “rock piles” and “hotboxing”
    11. Manage your phone
  3. After Tech Week
    1. Organize your new contacts quickly (the day of, if you can)
    2. Send a brief, specific follow-up within 3 days
    3. Connect on the right channels
    4. Play the long game and keep the relationship warm
  4. A note for introverts
    1. What introversion actually means in this context
    2. Schedule your recharge breaks in advance, non-negotiably
    3. Aim small with a number
    4. Find the quieter edges
    5. Use content as a bridge
    6. Your introvert superpowers are real

Before Tech Week

Plan your week like a choose-your-own-adventure

Most tech conferences are contained. You show up to a hotel or convention center, everything is in one building, the schedule is linear, and your biggest logistical decision is whether to take the stairs or the elevator. Tampa Bay Tech Week is nothing like that.

Instead, Tampa Bay Tech Week is spread across multiple neighborhoods: Ybor City, downtown Tampa, midtown Tampa, and St. Pete. There are dozens of events happening over its four days, ranging from 7 a.m. coffee runs to after-parties that go late into the evening. The event types are just as varied: morning fireside chats, afternoon panel discussions, evening mixers, hackathons, themed after-parties, and a boat event. The topics are all over the map too: fintech, health tech, AI, defense tech, music tech, cybersecurity, proptech, beauty commerce, entrepreneurship, and more.

If you try to attend everything, you’ll absolutely burn out. If you don’t plan at all, you’ll spend half the week figuring out where you’re supposed to be and whether you can get there in time.

So before the week starts, spend a little time perusing the Tampa Bay Tech Week events page and make some decisions:

Which events align with your work or interests? A fintech founder has different priorities than a software engineer, who has different priorities than a student trying to break into the industry. TBTW is unusual in that all three of those people are in the same building at the same time, but you’ll have the most natural conversations at events where you have genuine context and curiosity about the content.

Which signature events do you want to be at? The ticketed events, like Innovation on the Water on Wednesday evening, Havana Nights Tech Edition on Thursday night, and the official kickoff on Tuesday, are prime networking territory precisely because the barrier to entry filters out people who aren’t taking the week seriously. If you have the $150 pass or specific event access (as a speaker, sponsor, or through a connection like the lovely people at TBTW who were kind enough to give me access; thank you, Emily!), prioritize these.

How are you getting around? Rideshare or drive? Will you need to know where to park? Will you need to cross a bridge? Plan ahead, or you’ll spend the whole trip stressed out and arrive frazzled rather than ready to meet people.

Give yourself permission to skip things. A focused day at two events where you’re fully present and engaged is dramatically more valuable than a frantic day at six events where you’re tired, rushing, and half-listening. It’s an all-too-common mistake to think of the gap between events as wasted time. It’s actually breathing room, and that breathing room is what makes the next conversation good.

[ Back to the table of contents ]

Do some homework

The single biggest predictor of whether you’ll have good conversations at a tech event isn’t your personality, your job title, or your business cards. It’s whether you showed up prepared. Homework is how introverts in particular can stack the deck in their favor before they ever walk in the door.

Review the speaker lineups and sessions for the events you’ve chosen. Even a quick skim is enough. You want to walk into each event with at least one or two things you’re genuinely curious about, because genuine curiosity is the raw material of good conversation. “I saw you’re speaking about AI voice agents. I’ve been skeptical about the enterprise use case and I’m curious about what you’ve seen” is a wildly better opener than “So what do you do?” (and definitely better than Joey Tribbiani’s “How you doin’?”).

Research the speakers and panelists you’d like to meet. Look them up on LinkedIn. Read their bio. If they’ve written articles, given a talk, or have a company you can poke around on, check them out. Not so you can impress them by reciting facts about them back at them (please don’t do that), but so you can ask a question that shows you’ve engaged with their work. People who are asked good, specific questions remember the person who asked them.

Look into the sponsors and exhibiting companies. There’s almost certainly at least one company at TBTW that you’ve been curious about, want to partner with, or are considering as a potential employer or client. Having a reason to approach their table, such as “I read your blog post about autonomous customer support and I had a question about how you handle edge cases,” is so much better than “So, what does your company do?” which they have to answer fifty times a day.

Check the event hashtag and page the day before. People often post “Excited to be attending TBTW this week, come find me!” These are warm leads. Comment on a few posts from speakers or attendees you want to meet. When you see them in person, you’re no longer strangers. You have a built-in opener: “Hey, I replied to your post about the fintech panel — I’m [your name here].” Cold room, made warm.

[ Back to the table of contents ]

Arrive with goals

There’s a version of “arriving with goals” that’s annoying and transactional. You’ve probably seen it from a networking mercenary who’s running through the room checking names off a list. This isn’t what I mean.

What I mean is: before you leave the house, know what you’re hoping to get out of the specific events you’re attending. It doesn’t have to be elaborate. It might be:

  • Learning something specific. “I want to understand what Tampa Bay founders are actually worried about on the funding side.”
  • Reconnecting with people you’ve lost touch with. Tampa Bay Tech Week is going to draw a lot of people from the local tech community who you haven’t seen since “The Before Times.” Take advantage!
  • Making a specific number of new connections. And here’s the key: make that number small. Rather than “meet as many people as possible,” aim for 3–5 meaningful conversations per day.

That last one deserves some unpacking. The default mode for conference networking is to try to meet everyone, collect a stack of business cards (or LinkedIn connections), and feel like you “won” the event by volume. This doesn’t work. The connections you actually keep and build on are the ones where you had a real conversation, ones where you learned something about the other person that you can reference later, where you found genuine common ground, where something actually clicked.

Five meaningful conversations a day, every day of Tech Week, adds up to dozens of relationships worth tending to. That’s a fantastic outcome. Two deep conversations with people you’ll genuinely stay in touch with beats fifteen forgettable exchanges, every single time.

If you set a goal of 3–5 real conversations a day and you hit 2, but they were real, don’t beat yourself up. Give yourself credit for having had the courage to talk to two strangers. Then do it again tomorrow.

[ Back to the table of contents ]

Prepare your introduction

A good one-line self-introduction is a single sentence that tells people who you are in a way that invites a follow-up question. It’s not your resume. It’s not your elevator pitch. It’s the conversational equivalent of a hook at the top of an article: something that makes the person you’re talking to want to know more.

This concept comes from Susan RoAne’s book How to Work a Room, and I’ve used it at every conference I’ve attended for the past several years. It works.

Here are the rules for a good one-liner:

Keep it short. Ten seconds or less. It is not your life story. It is not a paragraph. If you’re still talking after ten seconds, you’ve lost the thread. People’s brains are trained to expect a pause and a handoff after about that long.

Lead with the interesting thing, not your job title. Job titles are boring. “Software engineer” tells someone almost nothing about you as a person. “Financial analyst” makes people’s eyes glaze over before you’ve finished saying it. But the interesting version of what you do — the version that a curious person would want to ask a follow-up question about — is almost always available if you think about it for a moment.

Susan RoAne tells a story about meeting a financial analyst at a networking event whose one-liner was “I help rich people sleep at night.” That’s brilliant. It’s accurate, it’s memorable, and it makes you want to ask “How?” Which is exactly what a good one-liner should do.

Show the benefit, not the mechanism. “I help companies make their AI pipelines faster” lands better than “I do MCP server optimization.” “I connect Tampa Bay’s tech community” lands better than “I run meetups.” “I help founders find their first customers” lands better than “I do B2B sales consulting.” Think about what your work actually does for people, and lead with that.

Inigo Montoya from The Princess Bride has the greatest self-introduction in cinema history. You know it:

“Hello. My name is Inigo Montoya. You killed my father. Prepare to die.”

Now, obviously, you’re going to adjust slightly for the context of a tech conference. But the structure is perfect:

  1. Polite greeting
  2. Name
  3. Relevant personal link
  4. Manage expectations

Greet them. Give your name. Give one piece of context that anchors who you are or why you’re here. Then open with a question or statement.

“Hi! I’m Joey de Villa. I run the Tampa Bay AI Meetup — this is my first time at Tech Week and I’m trying to hit as many events as I can this week. What brought you out today?”

That’s it. Short, warm, complete, and ends with a question that puts the spotlight on them, which is what most people want anyway. People love talking about themselves. Give them an easy on-ramp to do it.

One small addition that can work really well: state something you’re looking forward to or curious about. It gives the other person immediate conversational material.

“Hi! I’m Joey. I run a couple of Tampa Bay tech meetups and I do developer relations. I’m genuinely curious about the music tech session this afternoon — I play accordion, so I have a lot of feelings about AI in the studio. What’s on your agenda today?”

Now you’ve given them: your name, what you do, a memorable detail (in my example, accordion), a specific interest, and a question. That’s a lot of conversational material in about fifteen seconds.

One extra tip for TBTW specifically: the crowd here is unusually mixed. Engineers, founders, VCs, health tech operators, defense contractors, music producers, students, marketers. Don’t assume technical fluency. Have a non-technical version of what you do ready. “I make it easier for AI tools to communicate with each other” is more useful at this event than “I’m optimizing an MCP server.” “I help companies build AI workflows that actually hold up in production” works. Find your translation before you walk in the door, not in the middle of a conversation where the other person is nodding politely while not understanding a word.

[ Back to the table of contents ]

Have some “pocket stories” handy

Pocket stories are short, memorable, ready-to-deploy anecdotes you keep in your back pocket for networking situations. They’re the conversational equivalent of a great example in a talk, and they make abstract things concrete. They give people something to react to, and they make you more memorable than the person who just delivered a list of facts about themselves.

Good pocket stories are:

  • Brief. A minute to a minute and a half, tops. You’re not delivering a TED talk. You’re giving someone a thread to pull on.
  • Relevant to tech, business, community, or Tampa Bay. You want the story to feel at home in the conversation, not like a non-sequitur.
  • Open-ended. The best pocket stories end in a way that invites the other person to share their own perspective or experience. This transforms a monologue into a conversation.
  • Specific enough to be real. Vague stories are forgettable. “I once worked on a project that went sideways” is nothing. “I once built a caching layer that was so clever it confused our own monitoring system into thinking we were under a denial of service attack” is something.

Here are a few that would work well at TBTW:

A tech-flavored pocket story:

“I’ve been working with AI tools for a while now, and something genuinely strange happened on a project last month. The AI gave the customer exactly the right answer, but for completely the wrong reason. Which led to this fascinating rabbit hole about whether we actually understand why these models work, or whether we’re just measuring that they do.”

A Tampa Bay-flavored pocket story (always a good move at a celebration of the local tech scene):

“I’ve been running tech meetups in Tampa Bay since before the pandemic, and the difference between then and now is genuinely hard to overstate. The community here has grown up in a way I didn’t expect. That’s honestly a big part of why I’m excited that this event exists. It feels like the scene is finally getting the kind of visibility it deserves.”

A “first year” pocket story (specific to TBTW being a new event):

“This is my first Tampa Bay Tech Week, and I’ve been curious to see how it shakes out. First-year events are always interesting. There’s this energy that either comes from everything going perfectly or from everyone improvising together…and honestly, the second kind is often more fun.”

Practice your pocket stories out loud before the event. Not so they sound rehearsed, but so the shape of them is familiar enough that you can deploy them naturally when there’s a conversational opening.

[ Back to the table of contents ]

Warm up before you walk in

Here’s a tip that sounds strange until you try it, and then you wonder why nobody told you about it sooner.

The night before your first TBTW event, find some text. It can be this article, a news piece, really anything; then read it out loud for three minutes.

That’s it. Three minutes of reading out loud.

This works as a social confidence booster for several reasons:

It gets you comfortable with your own voice. A lot of people have some version of “I hate the sound of my own voice.” Here’s my dirty little secret: I used to be one of them. People tell me I have a “radio voice” now, but that only came about after I started using the “reading out loud” trick.

Reading out loud regularly desensitizes you to your voice. It gets you accustomed to hearing yourself talk, which reduces the self-consciousness that makes social situations feel harder than they are.

It sharpens your articulation. The physical act of reading out loud forces you to form words carefully and speak at a steady pace. You’ll catch yourself mumbling, and you’ll self-correct. You’ll notice when your volume drops and bring it back up. These habits carry over into actual conversations, and over time, you’ll fine-tune the way you speak and the voice you use.

It warms up the social circuitry. Talking is a physical and cognitive activity. Like any physical and cognitive activity, it’s easier once you’ve done a little warmup. Walking into a room as the first conversation of your day is cold-start networking. Walking in having already spoken out loud for a few minutes means the machinery is already running.

Try to do this every day of Tech Week. It’s three minutes. You can spare three minutes.

[ Back to the table of contents ]


At the events

Project good posture

Posture advice sounds like something from an old-timey self-help book, but it keeps showing up in networking guides for a reason: it works, and most people in tech environments have genuinely bad posture from years of hunching over keyboards.

Good posture at a conference signals confidence, openness, and engagement. And because your body and your brain are a genuine feedback loop, you will actually feel more confident when you stand up straight. This is well-documented. Your brain picks up cues from your own body the same way it picks up cues from the world around it.

The simple mechanical version: imagine a string pulling gently upward from the crown of your head. Let your spine lengthen. Keep your knees soft; locked knees make you look rigid and uncomfortable. Roll your shoulders back slightly, enough to open your chest, not so far back that you look like you’re at attention.

When you do this, you appear approachable and engaged. People will walk toward you. Contrast that with the forward-rounded-shoulders-head-down look that reads as “I don’t want to be here and I definitely don’t want to talk to you.” Which one do you want people to walk toward?

The posture tip compounds nicely with the eye contact tip below. Together they add up to a version of you that people want to approach, even before you’ve said a word.

[ Back to the table of contents ]

Engage with eye contact

Eye contact is one of the fastest-working social signals humans have, and it’s chronically underused at tech conferences, where it’s extremely common to see people’s eyes drifting to their phones, to the name badges on people’s chests, or just slightly to the left of whoever they’re talking to.

When you make genuine eye contact with someone, really look at them. It creates an immediate sense of warmth and attentiveness. It makes people feel seen, which is a remarkably powerful thing in a noisy, crowded, overstimulating environment where it’s easy to feel like one anonymous face in a crowd.

Here’s how to do it without it feeling weird: when you meet someone, make eye contact and hold it for a “one thousand one, one thousand two” count. That’s long enough to register as genuine attention; not so long that it feels like a challenge or an interrogation. Then let your gaze move naturally, the way it would in any comfortable conversation.

A note on autism and eye contact: if you’re allistic (not on the autism spectrum), be aware that for some people — particularly autistic people — direct eye contact is genuinely uncomfortable and can even be aversive. If the person you’re talking to seems uncomfortable with eye contact or consistently looks away, don’t push it. Looking at the general area of their face (such as their forehead, nose, or cheek) conveys the same attentiveness without the discomfort. This is also a good fallback for people who find direct eye contact hard to maintain themselves.

[ Back to the table of contents ]

Connect instantly with your LinkedIn QR code

Let me say this clearly: business cards are mostly dead at tech events. They exist in a handful of industries and contexts where they’ve survived for cultural reasons (I’m looking at you, Japanese business culture), but at a tech event in 2026, handing someone a paper card is a slightly awkward ritual that results in a card that will live in their jacket pocket until the next time they do laundry, at which point it will become a soggy rectangle and go in the bin.

Business cards have been replaced by the LinkedIn ritual:

  1. You present someone with your LinkedIn QR code. To do this, follow these steps:
    1. Open LinkedIn on your phone
    2. Go to the Home screen
    3. Tap the Search bar
    4. Tap the QR icon, and your QR code will appear
  2. They scan your QR code. They can do this with the LinkedIn app, but it’s much simpler if they open their camera app and point it at the QR code.
  3. They tap the link that appears. This takes them to your LinkedIn profile, and they can request to connect with you.

Better still: if you’re getting a custom nametag for the week (and you should; more on “interesting things” in a moment), put your LinkedIn QR code on it. Now someone can connect with you just by pointing their phone at your chest, right in the middle of a conversation, without either of you having to stop and fumble for a device.

Pro tip: after you connect with someone on LinkedIn, immediately add a note about where you met and what you talked about. LinkedIn lets you do this from the connection request screen. Future-you will be very grateful when you’re going through your connections two weeks later wondering who “Sarah Chen” is.
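If you’d also like a searchable backup of those notes outside LinkedIn, a plain CSV file works well. Here’s a minimal Python sketch — the file name, field names, and sample entry are all just illustrations, not anything tied to LinkedIn — that appends each new contact to a log you can skim through after the week:

```python
# Hypothetical contact log: file name and fields are examples only.
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("tbtw_contacts.csv")  # example file name
FIELDS = ["date", "name", "where_we_met", "what_we_talked_about", "follow_up"]

def log_contact(name, where_we_met, what_we_talked_about, follow_up=""):
    """Append one contact to the CSV log, writing a header row on first use."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "name": name,
            "where_we_met": where_we_met,
            "what_we_talked_about": what_we_talked_about,
            "follow_up": follow_up,
        })

# Example entry — thirty seconds of typing that future-you will thank you for.
log_contact(
    "Sarah Chen",                        # the name you'd otherwise forget
    "Innovation on the Water",           # which TBTW event
    "AI intake systems for healthcare",  # the hook for your follow-up message
    "send intro to my former teammate",
)
```

The point isn’t the tooling; a notes app works too. The point is capturing the context — where, what, and what’s next — on the same day you meet someone, while you still remember it.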

[ Back to the table of contents ]

How to join a conversation in progress

For a lot of people, introverts especially, the hardest networking move isn’t starting a conversation with one person. It’s walking up to a group of people who are already mid-conversation and joining in.

The fear is understandable. You’re worried about interrupting. You’re worried about being unwelcome. You’re worried about not knowing what they’re talking about and standing there blankly while they look at you.

Here’s the thing: at a networking event, joining conversations is expected. The social contract is different from, say, interrupting someone’s dinner. People are here to meet people, including you.

Here’s the playbook:

Step 1: Pick a lively group. Look for a group of 3–4 people who are engaged and animated. They’ve already done the social work for you! They’ve chosen a topic, they’re comfortable together, and there’s energy in the circle. Avoid groups of exactly two people who are leaning in toward each other, making sustained eye contact; that’s likely a focused one-on-one conversation that genuinely isn’t open to a third party right now.

Step 2: Stand at the edge of the group and look interested. Just stand there, angled toward the group, with a pleasant expression. Don’t force your way into the center. Don’t wave or make a big entrance. Just be present at the periphery. In most groups, someone will notice you within thirty seconds and either nod you in or shift slightly to make room. This is the universal body-language signal that says “you’re welcome here.”

Step 3: When acknowledged, step in and introduce yourself. You’re in! Now use your one-liner and introduce yourself to the group. Don’t try to introduce yourself to each person individually while they’re mid-conversation. Just say your name and let things flow naturally from there.

Step 4: Don’t try to change the subject. You just arrived. The group has a conversation going. Contribute to that conversation; let the topic evolve naturally over time. Showing up and immediately trying to redirect the discussion to what you want to talk about is the networking equivalent of sitting down at someone’s poker table and announcing that actually you’d prefer to play blackjack.

One more thing: if you see me in a conversation circle, come join in. I always keep an eye on the edges for people hovering who want to step in, and I’ll wave you over. Come find me!

[ Back to the table of contents ]

Observe, ask, reveal (OAR)

The OAR (Observe, Ask, Reveal) technique is another gem from Susan RoAne’s How to Work a Room, and it’s one of the most practically useful conversation frameworks I’ve ever used. OAR works because it’s structured, which means you don’t have to improvise from scratch. It gives you a template to follow, but it feels spontaneous and natural when you do it well.

Three steps:

1. Observe. Notice something. That “something” could be about the person, the venue, the content of the event, the situation you’re both in. You’re looking for a specific, concrete observation rather than a vague generality. “It’s a nice event” is not an observation. “I see you’ve got the TBTW lanyard. Did you get the full week pass?” is.

Good things to observe at TBTW:

  • Their nametag (company, event, custom text)
  • Something they’re holding or wearing
  • Which session they just came from
  • The venue itself, especially for distinctive TBTW events like the boat event or the Ybor City evening

2. Ask. Follow your observation with an open-ended question. The goal is to get them talking. Open-ended questions — ones that can’t be answered with a “yes” or a “no” — are your friend here. “What did you think of the AI panel?” lands better than “Did you go to the AI panel?” “What’s bringing you to TBTW this week?” lands better than “Is this your first time?”

3. Reveal. Share something about yourself that’s relevant to what they just said. This is the step that makes the conversation feel like an exchange rather than an interrogation. You give a little; they give a little. The rhythm is listen, contribute, listen, contribute.

⚠️ Two pitfalls to avoid:

  • Over-revealing. Don’t follow their short answer with a five-minute monologue about yourself. A reveal should be roughly proportional to what they shared.
  • Over-asking. If you observe, ask, they answer, you observe, ask, they answer, you observe, ask, and so on… it becomes an interrogation. The reveal step is what prevents this. Use it.

Some OAR examples specifically calibrated for TBTW:

“I noticed you were nodding pretty hard during the AI panel — what was the moment that got you?”

“I saw your nametag says [Company] — I’ve been curious about what you all are doing in the health tech space. What’s the problem you’re trying to solve?”

“This is an amazing venue for an event like this. Have you been to Armature Works before, or is this your first time?”

“That Havana Nights party last night was something else. Did you make it out? I ran into about six people I’ve been meaning to catch up with for months.”

[ Back to the table of contents ]

Translate your work for the room

This is advice specific to TBTW, and it’s important enough to get its own section.

At a developer conference like DevNexus or KCDC, you can say “I’m working on an MCP server that optimizes file deduplication using sampled hashing” and the people around you will nod, maybe ask a follow-up question about your hash function choices, and you’ll be off to the races. That’s a room full of people who speak the language.

Tampa Bay Tech Week is not that room.

TBTW is simultaneously a developer conference, a startup event, a VC mixer, an industry showcase, a community celebration, and an after-party. You might be in the same conversation with a software engineer, a health tech founder, a venture capitalist who came up through finance, a music producer curious about AI, and a student who’s just trying to break in. Most of these people do not know what an MCP server is. Some of them are not sure what a developer does day-to-day.

This is not a problem; it’s an opportunity, because the people who can explain technical things clearly in non-technical terms are far more memorable and interesting to talk to than the ones who retreat into jargon.

Before you walk into each event, have a non-technical version of what you do ready:

  • Instead of “I optimize MCP servers,” say “I make AI tools faster and more reliable when they’re talking to each other.”
  • Instead of “I build LLM integrations,” say “I help companies actually use AI in their products, not just demo it.”
  • Instead of “I do DevRel,” say “I help developers understand new technologies and give companies honest feedback on their developer experience.”

The test: could a smart non-technical person understand what you said, why it matters, and what to ask you next? If yes, you’re there. If no, keep translating.

[ Back to the table of contents ]

Be more of a host and less of a guest

This is one of my favorite pieces of advice, and I’ve given it in every version of this article because it keeps being true.

Being a host at a networking event doesn’t mean you have to be on the organizing committee. It means doing some of the things that hosts naturally do:

Introduce people to each other. This is the single highest-leverage move in any networking room. When you know two people who should meet each other and you make that introduction, both of them remember you as the person who connected them. “Joey, this is Sarah. She’s building an AI system for healthcare intake, and you just mentioned you worked on something similar at your last company.” That introduction takes ten seconds and creates a connection that might last years.

Say hello to people standing alone. Every networking event has wallflowers: people who’ve arrived, don’t know anyone, and are standing at the edge of the room trying to look like they meant to be alone. Walk over and say hello. They are almost certainly delighted to see you. This is one of the most human things you can do at an event and it costs you nothing.

Be generally helpful. Know where the bathrooms are and be willing to direct people to them. Know the schedule well enough to tell someone what’s coming next. Help someone find a session they’re looking for. None of this is glamorous, but it accumulates into a reputation for being a person who makes things easier for others, which is genuinely valuable currency in any professional community.

I’ll tell you exactly how this worked for me: when I first moved to Tampa, I didn’t know anyone in the local tech scene. I started attending meetups and simply helped out wherever I could: setting up chairs, live-tweeting talks, talking to the person who looked lost by the snack table. I gained a reputation for being useful and plugged-in, which led to speaking invitations, which led to inheriting a couple of meetup groups, which led to the Tampa Bay AI Meetup that now has 2,200+ members. None of that was a grand strategy. It was just: show up, be helpful, repeat.

[ Back to the table of contents ]

Consider volunteering

Tampa Bay Tech Week is in its first year. The organizers are pulling off something genuinely ambitious: a multi-day, multi-venue, multi-track event across multiple neighborhoods. We normally don’t get events of this scale.

That means they almost certainly need help, and helping is something you can offer.

Reach out before the week starts and ask if there are volunteer opportunities. There will be registration desks to staff, attendees to direct between sessions, setup and clean-up to handle, and all sorts of other jobs that need doing to make an event work.

Even if they don’t need you for formal volunteering, you can offer to be a resource. You can help spread the word in your network, connect them with venues or speakers for future events, or write about what you attended (ahem).

Here’s why this matters for networking specifically: having a functional role removes the social friction of approaching strangers. When you’re a volunteer, you approach people because that’s your job for the next couple of hours. “Can I help you find the right room?” “Are you here for the fintech panel? It’s just down the hall.” “Let me know if you need anything.” These interactions are low-stakes, useful, and they create dozens of small positive interactions that often blossom into real conversations when you’re both in the session together afterward.

For introverts especially, this is a powerful move. You’re no longer cold-approaching strangers; you’re serving a function. The conversations find you.

Want to volunteer or help out in other ways? Contact the organizers at info@tampabaytechweek.com.

[ Back to the table of contents ]

Read the social energy of each event

TBTW is not a single event. It’s a collection of events with wildly different social norms, energy levels, and purposes. Walking into each one with the same approach is like bringing the same energy to a job interview, a cocktail party, and a funeral. Technically you’re “being yourself” at all three, but you’re going to have a rough time at two of them.

Here’s a rough map of the different event types and how to approach each:

Morning panels and fireside chats (9am – noon): People are in learning mode. They came for content. The best networking here happens in the ten minutes before the session starts (introduce yourself to your neighbors, comment on why you chose this session) and in the ten minutes immediately after (while the content is fresh and everyone has something to react to). Don’t try to network during the session. It’s rude, annoying, and doesn’t work.

Afternoon sessions and workshops: The energy is a bit more relaxed than mornings. People have had lunch, they’ve gotten their feet under them, and they’re more open to sidebar conversations. Workshops in particular create natural bonding because you’re working on something together.

Evening mixers (Founders & Entrepreneurs Mixer, Founders & Pho, etc.): These are explicitly networking events. People are here to meet people. This is where you deploy your full toolkit of one-liner intros, pocket stories, and the OAR technique. Nobody is going to think it’s weird that you walked up to them; that’s literally why everyone is there.

Signature events (Innovation on the Water, Havana Nights, Official After Party): These have their own character. “Innovation on the Water” is a boat event, which creates natural conversation through shared novelty: you’re both on a boat, which is inherently fun and memorable. “Havana Nights” is a late-night after-party in the best Ybor City style, which means it’s loud, festive, and better suited to lighter social connections than deep professional conversations. Adjust accordingly.

After-parties (after 9pm): These are best for casual connection with people you’ve already met during the day, and for having the fun, slightly-less-professional conversations that don’t happen in the sessions. Not the place for pitching; definitely the place for making someone laugh.

[ Back to the table of contents ]

Watch out for “rock piles” and “hotboxing”

Two body-language patterns that accidentally make you unapproachable, and how to fix them:

Rock piles are groups of people huddled together in a tight, closed circle. Everyone’s so close to each other that their shoulders are almost touching, and nobody at the edge is making eye contact with the room. The message this sends, unintentionally, is “this is a private conversation, go away.” If you find yourself in a rock pile, step back slightly and shift your angle. It opens the formation to allow others to join.

Hotboxing: this is a term I’ve picked up in the context of professional events rather than the other meaning you might be thinking of. In this case, it’s when two people square up directly face-to-face in a way that physically blocks anyone else from entering the conversation. It’s essentially a one-on-one rock pile. The fix is the same: angle yourself slightly, leave a gap, let someone step in.

Both of these patterns are entirely unconscious. You’re not trying to exclude anyone; it just happens when you’re absorbed in a good conversation. Knowing about it is usually enough to catch yourself and correct it.

[ Back to the table of contents ]

Manage your phone

Your phone is a social barrier when you’re holding it, scrolling it, or staring at it. Even if you’re doing something completely innocuous (such as checking the event schedule or selecting a rideshare), the visual signal you’re sending is “I am not available for conversation.” At a networking event, that’s exactly the signal you don’t want to send.

There are legitimate reasons to have your phone out: looking up someone’s LinkedIn to connect, showing someone something on your screen, checking what’s next on the schedule. These are fine; just narrate them lightly. “Let me pull up my LinkedIn QR code” or “Let me see what time the next session starts” tells people what you’re doing and doesn’t leave them wondering if you’ve mentally left the conversation.

Otherwise: phone in pocket, eyes up. Your emails will still be there after the event.

Similarly: if you put your bag down, you’re staying. When you pick it back up and start gathering your things, people can read that you’re about to move on. This is actually useful, since it gives you a natural, non-awkward exit from conversations that have run their course.

[ Back to the table of contents ]


After Tech Week

Organize your new contacts quickly (the day of, if you can)

Memory is perishable. The vivid sense of “oh yes, I remember exactly who that was and what we talked about” fades faster than you think, especially at a multi-day event where you might meet fifty new people over the course of the week.

The single most valuable thing you can do after each event — ideally the same day, on the rideshare home or before you go to sleep — is to make a quick note on every meaningful connection you made:

  • Who: name, company, role.
  • Where and how you met: “at the Founders & Pho event Thursday night, we bonded over the music tech panel earlier in the day.”
  • What you talked about: even one or two sentences is enough. “She’s building an AI system for mental health intake; skeptical about regulation timeline.”
  • Any follow-up you promised: “I said I’d send her the article I wrote about AI in healthcare.” “He said he’d connect me with someone at his firm.”

A note app on your phone is completely adequate for this. You don’t need a CRM (though if you do have one, use it). The goal is to capture enough that when you sit down to write follow-up messages two days later, you’re writing to a specific person about a specific conversation. You don’t want to send a generic “great meeting you!” to a name you can barely place.

[ Back to the table of contents ]

Send a brief, specific follow-up within 3 days

The timing matters. The warm period after a networking event is roughly 48–72 hours. Inside that window, people still remember who you are and what you talked about; your follow-up lands as a continuation of the conversation. Outside that window, you’re increasingly a stranger who’s sending them a cold message.

The message itself should be short. This is not the place for a five-paragraph email. It’s the place for a message that says:

  • I remember who you are and what we talked about (which is surprisingly rare and therefore memorable).
  • Here’s something useful related to our conversation.
  • Let’s keep this going.

“Great talking to you at the Founders & Pho event Thursday! I loved your take on the regulatory headwinds in mental health tech. Here’s the piece I mentioned about the FDA’s current stance on AI diagnostics: [link]. Would love to keep the conversation going.”

That’s three sentences and a link. That’s enough. If they want more, they’ll reply and you can go from there.

If you promised a specific follow-up, such as an introduction, an article, or a resource, lead with that. You said you’d do it; doing it promptly signals that you’re someone who follows through, which is a more valuable signal than most people realize.

[ Back to the table of contents ]

Connect on the right channels

Different platforms serve different kinds of ongoing relationships, and connecting on the wrong one can mean the relationship goes nowhere even if the initial spark was there.

LinkedIn is the default for professional connections. If you met someone in a professional context and want to stay vaguely in each other’s orbits, LinkedIn is the right channel.

GitHub is the right channel if you talked code, mentioned projects you’re working on, or might collaborate on something technical. Starring someone’s repo or following their account is a lightweight but genuine signal of interest.

Bluesky is where a significant chunk of the tech community has landed after leaving X/Twitter over the past couple of years. If you connected with someone over tech culture, industry opinions, or the kinds of conversations that used to happen on Twitter, Bluesky is probably where they’re having them now. Worth checking.

A group chat or Slack is the right channel if there’s a specific project, community, or ongoing conversation that makes sense. Some events spin up a Slack workspace or Discord server; if TBTW does this, join it and be actually present rather than just a member.

[ Back to the table of contents ]

Play the long game and keep the relationship warm

The follow-up message is an opening, not a destination. The people you want in your network long-term are the ones where there’s genuine ongoing exchange. You learn from them, they learn from you, you help each other when you can.

The mechanics of this aren’t complicated:

  • Interact with their posts when something resonates. A thoughtful comment on LinkedIn is worth fifty passive likes.
  • Share relevant things with a one-line note. “Saw this and thought of what you said about proptech at TBTW. It seems relevant.” That’s a connection-maintenance act that takes thirty seconds and reminds them you’re a person who pays attention.
  • Make introductions when you can. “I know someone you should meet” is one of the most valuable things you can say to anyone in a professional network, and delivering on it cements you as a connector.
  • Show up to the same events over time. Relationships deepen through repeated encounters. If you meet someone at Tech Week and then again at a Tampa Bay AI Meetup a month later, you’re now more than an acquaintance. You’re starting to become a known quantity in each other’s worlds.

[ Back to the table of contents ]


A note for introverts

I want to spend a bit more time on this section than I usually do, because I think most networking advice for introverts is either patronizing (“It’s okay to be nervous!”) or not actually calibrated for how introversion works in practice.

So let me be specific.

What introversion actually means in this context

Introversion doesn’t mean shyness, though they often co-occur. It means that social interaction, particularly with strangers, costs you energy rather than giving you energy. Extroverts (like myself) leave a great networking event feeling more energized than when they arrived. Introverts often leave the same event feeling depleted, even if they had a genuinely good time.

This is not a personality flaw. It’s just a different energy profile. And it has real implications for how you should approach a week like TBTW.

[ Back to the table of contents ]

Schedule your recharge breaks in advance, non-negotiably

This is the highest-leverage change most introverts can make to their conference approach, and it’s almost never in the standard advice.

Before the week starts, look at your event calendar and block 90-minute recharge windows the same way you block sessions you want to attend. These aren’t open slots that you’ll fill with more events if something interesting comes up. They’re protected time for you to be alone, be quiet, and recover.

What does recharging look like? Whatever works for you: sitting in your car in silence, going for a walk without headphones, sitting in a coffee shop that’s far enough from the venue that nobody you know will walk in, going back to your hotel room if you’re from out of town. The specific activity is less important than the solitude and the recovery.

Remember that you don’t have to attend everything. If there’s a session that’s not particularly interesting or useful to you, skip it and treat it as built-in decompression time. Use the break to let your nervous system come back to baseline before the next event.

[ Back to the table of contents ]

Aim small with a number

The standard “meet as many people as possible!” networking advice is actively counterproductive for introverts, because it creates a success criterion that’s both exhausting and incompatible with the way introverts build connections.

Introverts typically form better connections through fewer, deeper interactions. A ninety-minute conversation with one person that covers real ground is often worth more than ten five-minute conversations. The problem is that ninety-minute conversations are also more energetically expensive.

The reframe: aim for 3 meaningful conversations per event, or 3 to 5 per day. Write this down. Make it your actual goal. If you get to the end of an event having had 2 real conversations where you actually connected with the person, learned something about them, and exchanged something genuine, give yourself full credit. That’s a successful event.

If you hit your number and you have energy left, keep going. If you hit your number and you’re exhausted, you’re done. Leave. Go recharge. Show up tomorrow.

[ Back to the table of contents ]

Find the quieter edges

At any loud evening event, there are almost always quieter zones: an outdoor patio, a hallway near the exit, a corner of the bar that’s slightly removed from the main gathering. Introverts instinctively find these spots, and so do other introverts.

Some of the best conversations I’ve had at conferences have happened in a hallway outside the main event space, where two people who needed a break from the noise ended up talking for forty-five minutes because the environment was finally calm enough to actually think. Go find those spots. You won’t be alone there.

[ Back to the table of contents ]

Use content as a bridge

Sessions, panels, and fireside chats give introverts something to talk about that isn’t themselves. This is huge. Instead of “So, what do you do?”, which requires you to perform and the other person to do the same, you can open with “What did you make of the AI panel?” or “I thought the moderator’s question about regulation was interesting. What’s your take?”

You’re talking about ideas rather than pitching identities. For introverts who prefer substantive conversations to small talk, this is a feature, not a workaround.

Aim to attend the sessions that are most relevant to your work or interests, and immediately after each one, position yourself to have a post-session conversation with someone nearby. “What did you think?” is about the easiest conversation opener there is.

[ Back to the table of contents ]

Your introvert superpowers are real

Most networking advice is written for extroverts, which means it emphasizes the skills extroverts naturally have: working the room, projecting warmth, holding court, keeping energy high. These are real skills. But they’re not the only skills that matter in professional networking.

Introverts tend to:

Listen more carefully. In a room full of people trying to be heard, the person who’s genuinely paying attention is rare and memorable. People will tell you things they don’t tell the person who’s already formulating their next sentence while you’re still talking.

Ask better follow-up questions. Because you actually heard what they said.

Have more substantive conversations. Introverts tend to gravitate toward depth when they’re engaged. The person who had a thirty-minute conversation with you about something you both care about is going to remember you far longer than the person who had a four-minute exchange with fifty people.

Follow up more thoughtfully. Because you took in more during the conversation, your follow-up can be specific and personal in a way that generic “Great to meet you!” messages aren’t.

These are genuine advantages. They don’t show up in the standard conference networking playbook, which is oriented toward volume and energy, but they show up in the quality of the relationships you build.

[ Back to the table of contents ]


I’ll be at Tampa Bay Tech Week all week. Come say hi — I’m not hard to find. I’m usually the one with the accordion.

— Joey