Categories
Artificial Intelligence Programming

Grind coding, vibe coding, and everything in between

If we have a term like “vibe coding,” where you build an application by describing what you want in natural language (such as English) and letting an LLM generate the code, we should probably have an equal and opposite term, one catchier than “traditional coding,” for building an application by using a programming language to define its algorithms and data structures.

I propose the term grind coding, which is short, catchy, and has the same linguistic “feel” as vibe coding.

Having these two terms also makes it clear that there’s a spectrum between the two styles. For instance, I’ve done some “mostly grind with a little vibe” coding, where I wrote most of the code and had an LLM write some small part that I couldn’t be bothered to write, such as a regular expression or a single function. There have also been some “mostly vibe with a little grind” cases, where I had an LLM or Claude Code do most of the coding, and then I did a little manual adjustment afterwards.
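To make “mostly grind with a little vibe” concrete, here’s a hypothetical example of the kind of small, fiddly thing I’d happily hand off to an LLM while writing the rest myself. (This is my own illustrative sketch, not code from any of those projects.)

```python
import re

# The kind of fiddly piece I'd delegate to an LLM mid-grind:
# a regex that validates a semantic version string such as
# "1.4.2" or "2.0.0-beta.1".
SEMVER_PATTERN = re.compile(
    r"^(\d+)\.(\d+)\.(\d+)"        # major.minor.patch
    r"(?:-([0-9A-Za-z.-]+))?$"     # optional pre-release tag
)

def parse_semver(version: str):
    """Return (major, minor, patch, prerelease) or None if invalid."""
    match = SEMVER_PATTERN.match(version)
    if not match:
        return None
    major, minor, patch, prerelease = match.groups()
    return int(major), int(minor), int(patch), prerelease
```

The grind part is everything around it: deciding you need version parsing at all, and checking that the generated pattern actually handles your edge cases.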

Categories
Artificial Intelligence Meetups Tampa Bay

Notes from CTO School Tampa Bay’s fireside chat with Leon Kuperman, CTO of CAST AI

Yesterday evening, I headed to spARK Labs to attend the CTO School Tampa Bay meetup to catch their session, Lessons in Scaling Engineering Teams with Leon Kuperman of CAST AI, where organizer Marvin Scaff conducted a “fireside chat” with CAST AI’s CTO.

Leon Kuperman has over 20 years of experience, having been Vice President of Security Products for Oracle Cloud Infrastructure (OCI). This role followed Oracle’s 2018 acquisition of Zenedge, a cybersecurity and Web Application Firewall (WAF) company where Kuperman was Co-Founder and CTO. Prior to that, he had leadership roles at IBM and Truition. He’s widely recognized for his expertise in cloud computing, web application security, and engineering leadership.

CAST AI is a Miami-based cloud automation platform that optimizes Kubernetes clusters using AI, founded back in the pre-GPT (and pre-COVID) era, in 2019. They recently launched OMNI Compute, a unified control plane that allows organizations to access compute resources (specifically GPUs for AI workloads) across different cloud providers and regions seamlessly. Just this month, they joined the “three comma club” and hit a valuation of over $1 billion.

My original plan was to arrive early, but a combination of last-minute calls and making the cross-bay journey led to my missing the first half hour of the talk.

Still, I took some notes, and I’m sharing them here. I hope you find them useful!

The fireside chat with Leon Kuperman

Marvin and Leon’s chat was a compressed master class in managing a software engineering organization. They walked through what it takes to scale engineering without losing velocity, drawing on lessons from building CAST AI (now at around 160 engineers) and Leon’s earlier BigCo experience, including at IBM and Oracle.

I caught about three-quarters of the talk; my notes on it are below.

I also have a quick summary at the end.


Engineering leadership and team structure

Scaling needs structure, especially for distributed teams. Leon framed “scale” as a move from informal coordination to explicit systems. Rather than adding bureaucracy for its own sake, his approach is to add just enough structure to prevent chaos when the team is remote and distributed.

The “two-pizza team” model: Amazon popularized the “two-pizza rule,” a general guideline that teams work best when they’re small enough that two pizzas will feed them. This typically means a team size of 10 people or fewer. CAST AI teams are “two-pizza” teams, and most teams are dedicated to a specific scope.

A deliberately flat hierarchy: Leon described a simple reporting chain leading up to him: directors → VP Engineering → Leon. Despite scale, he aims to stay close to reality by interacting with every team at least every two weeks, and often weekly.

Avoiding the Peter Principle: Leon asked who in the room knew what the Peter Principle was (the younger ones didn’t, probably because it’s an idea that was popular in the 1970s and ’80s). He talked about how people get promoted into roles they’re not suited for, and then get stuck because nobody ever goes back to being an individual contributor (IC).

CAST AI’s answer is a “manager candidate” program, where a prospective manager is assigned a small pod and gets the chance to “do the job before you get the job” for about six months. If the candidate is a fit, they retain the manager role; otherwise, they return to an IC role with “zero repercussions” and no stigma.

Common leadership failure modes: Leon highlighted the usual suspects, including micromanaging, weak delegation, and not building motivation through mentoring. He also stressed that trust is built through honesty and vulnerability, as people won’t fully commit to a leader who presents as a “robotic individual.”


Unifying product and engineering

“If I fail, I fail.”

Leon argued that CTOs must be both “product people” and “customer people.” Even for introverts, this is non-negotiable: a CTO needs the customer context to make good product/engineering tradeoffs.

Planning: vision is long-term; execution is short-loop. He rejected long-range roadmaps as fantasy (“estimates are always wrong”) and described a system of:

  • Quarterly OKRs
  • Frequent priority reviews (about every two weeks) to stay aligned with customer needs
  • An overall bias toward time-to-market as the top validation lever

Hiring, culture, and performance management

Performance management is a must at scale: Leon observed that once an organization grows past the point where everyone knows everyone (roughly Dunbar’s Number), you need explicit performance management to avoid letting mediocrity hide in the system.

He stressed giving people direct, early feedback about the behaviors you don’t want, while also being willing to move on from sustained underperformance.

Hiring rigor; culture “deep dive.” CAST AI’s hiring loop includes:

  • Five interviewers
  • A technical exercise (preferably collaborative/live)
  • A culture check where each interviewer probes one value deeply

He gave a concrete example using “customer obsession” as a trait: asking for times someone pushed back internally to fight for the customer, and treating “not really” as a signal of poor fit.

Foundational CS > language trivia. In the Q&A session, Leon emphasized hiring for fundamentals, such as distributed systems and concurrency, because those are hard to fake (and languages can be learned fairly quickly).


DevX and DevOps

DevEx is a scaling strategy, not a perk. Leon explicitly dismissed vanity metrics and refocused on the developer experience factors that matter: friction in onboarding, docs, local dev, and pipelines is what slows teams down. CAST AI has a dedicated DevEx team of four focused on removing that friction.

Measure friction with DevEx and DORA-style signals. (By the way, DORA is short for “DevOps Research and Assessment.”) He described using GetDX to produce quarterly “heat maps” of where developers are least happy, then prioritizing platform work to make those pain points “not suck.”

“The antidote to change risk is more change.” A highlight of the evening: Leon pushed back hard on enterprise “change approval board” thinking. Enterprises lock down change because change causes incidents; his view is that the remedy is smaller changes released faster, backed by automation so that rollback is quick and boring.

Automated quality, modern delivery, and canaries. At their release cadence, manual QA doesn’t scale. Leon said they do zero manual testing (no QA team), rely on automated checks (including AI “first-pass” checks), and called out Argo CD for Kubernetes delivery plus canary testing as the next level of release management.

Blameless root cause analysis and “5 Whys”: When things break, Leon described a blameless postmortem discipline, where they enforce psychological safety, run an honest “5 Whys,” produce near-term action items, and ensure everyone is heard. No finger-pointing!

He reinforced the mindset later: “It’s the process that broke, not the guy.”


AI tools and the future of coding

Tool adoption is becoming a performance separator. Leon’s framing: engineers who don’t adopt strong tools and absorb best practices will get outpaced, even by “average” engineers who do.

Claude Code, expense, and velocity: In the founder funding discussion, Marvin noted that tools like Claude Code are “expensive… a couple hundred bucks per person per month,” but enable teams to ship quickly.

Leon mentioned GSD (short for “Get Shit Done”) as a workflow aid and a “context manager”-style approach: breaking work into phases to reduce context-window pain and keep momentum.

Writing skill as a proxy for critical thinking. One of the spiciest takes: Leon said he “over-indexes” on writing. Bullet points aren’t enough; if you’re writing something for him, he wants narrative because it reveals whether someone can truly reason and communicate. He also suggested using LLMs to critique your own documents (first-principles critique, “strawman” the argument) to find logic holes before presenting.
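The “use an LLM to critique your own document” tip is easy to turn into a reusable habit. Here’s a small, hypothetical helper (the function name and wording are my own sketch, not Leon’s or Karpathy’s) that builds a first-principles critique prompt you can paste into whatever LLM you use:

```python
# Hypothetical helper for the "have an LLM critique your document" tip:
# wrap a draft in instructions asking for a first-principles critique
# and a strawman of the argument. The prompt wording is my own sketch.

def build_critique_prompt(document: str) -> str:
    """Return a critique prompt wrapping the given draft document."""
    return (
        "Critique the following document from first principles.\n"
        "1. Restate its core argument in one sentence.\n"
        "2. List the assumptions it depends on.\n"
        "3. Strawman it: make the strongest case against it.\n"
        "4. Point out any logic holes or unsupported claims.\n\n"
        f"--- DOCUMENT ---\n{document}\n--- END DOCUMENT ---"
    )

prompt = build_critique_prompt("We should rewrite our backend in Rust.")
print(prompt.splitlines()[0])   # → Critique the following document from first principles.
```

Run your draft through something like this before a review, and you’ll often find the logic holes before your reader does.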


Advice for founders and startups: moats, funding, and being lean

Competing isn’t about coding faster; it’s about differentiation. Leon argued that competitors are a symptom; the real challenge is building differentiation that holds even if someone else has more resources.

He cited Peter Thiel’s (ugh, ugh, ugh) book, Zero to One (still worth a read, with some caveats), and described three defensibility paths:

  1. Network effects
  2. Economies of scale
  3. Data moat

He then explained CAST AI’s “data moat” concretely: a read-only agent collects cluster state/events continuously, creating a unique multi-cloud vantage point used to train algorithms. It’s something that individual hyperscalers can’t replicate as easily.

Raise funds to scale a working flywheel, not to “find it.” Leon advocated staying lean and bootstrapping where possible, warning against raising money “to compete.” Instead, raise when you’ve hit an inflection point and want to scale what’s already working.

He also stressed paying attention to first customer signal and getting validation before building for scale.


Summary: The “scaling engineering teams” playbook Leon kept returning to

As I said at the beginning, this talk was a compressed master class in managing a software engineering organization. Here’s the evening’s “tl;dr” that condenses Leon’s approach:

  • Small, service-owned teams, with a deliberately flat hierarchy
  • Safe leadership experimentation, with manager trials and no stigma for returning to IC
  • Product/engineering alignment by design (one escalation path, one accountability point)
  • Performance management as culture, not an HR afterthought
  • DevEx investment (measure friction, fix pipelines, improve onboarding/docs)
  • Ship faster to reduce risk (automation + small changes + rollback discipline; avoid CABs)
  • Modern delivery mechanics (Argo CD, canaries, automated checks, no manual QA bottleneck)
  • Tooling + writing discipline as force multipliers (LLMs + narrative thinking)

CTO School Tampa Bay

Marvin Scaff.

This was my first experience with CTO School Tampa Bay, which bills itself as “a group of CTOs, VP of Engineering, Tech Leads, and technologists who would like to become leaders.” It’s organized by Marvin Scaff, Cassandra Bernard, and Daniel James Scott.

This meetup group is a little more exclusive: membership is by approval and organizer referral, and it’s limited to technical people only. If you’re a senior tech leader or on the path to becoming one (say, a tech lead or senior developer), you’re eligible.

Their goal? To provide a forum where tech leaders can exchange ideas and have discussions about tech, process, management, or whatever issues affect them during this highly accelerated period in the industry.

Categories
Artificial Intelligence Programming Reading Material

More notes

Because some people asked, and because I’m going to be busy for the next day (I’ll explain later), here are more shots of recently added pages from my notebook. These are notes on RAG and LangChain, taken and condensed from a couple of books, a couple of online sources, and my own experimenting with code. Enjoy!

Categories
Artificial Intelligence What I’m Up To

Coming soon to a Tampa Bay AI Meetup near you

I’m working on both securing venues and planning topics for Tampa Bay Artificial Intelligence Meetup’s sessions for 2026. My notes above should give you an idea of at least one of the topics we’ll cover soon!

Categories
Artificial Intelligence Humor Programming

“Star Trek: Voyager” predicted vibe coding…and it’s cringe!

I remember cringing at this one line from an episode of the 1990s TV series, Star Trek: Voyager:

Computer, install a recursive algorithm!

I always thought that you would never program a computer that way…until now.

Categories
Artificial Intelligence Podcasts What I’m Up To

I was on the first “This Week in Tech” episode of 2026!

Here’s a promising start to the new year: thanks to a successful appearance on the Intelligent Machines podcast back in October, I was a guest on episode 1065 of Leo Laporte’s main podcast, This Week in Tech.

Leo, Blackbird.AI’s Dan Patterson, and I spent just under three hours on Sunday talking about the week’s tech news and having fun while doing so. The episode takes its title, AI Action Park, from Action Park, an insanely dangerous theme park that I mentioned while we were talking about DeepSeek’s Manifold-Constrained Hyper-Connections architecture.

Categories
Artificial Intelligence Editorial

Don’t feel bad; even the inventor of the term “vibe coding” is overwhelmed by all the AI-driven changes

You’ve probably seen this tweet, written by none other than Andrej Karpathy, founding member of OpenAI, former director of AI at Tesla, and creator of the Zero to Hero video tutorial series on AI development from first principles:

I’ve never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year and a failure to claim the boost feels decidedly like skill issue. There’s a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering. Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.

It would be perfectly natural to react to this tweet like so:

After all, this “falling behind” statement isn’t coming from just any programmer, but from a programmer who’s been so far ahead of most of us for so long that he’s the one who coined the term vibe coding in the first place. And the term’s first anniversary isn’t until next month:

Karpathy’s post came at the end of 2025, so I thought I’d share my thoughts on it — in the form of a “battle plan” for how I’m going to approach AI in 2026.

It has eight parts, listed below:

  1. Accept “falling behind” as the new normal. Learn to live with it and work around it.
  2. Understand that our sense of time has been altered by recent events.
  3. Forget “mastery.” Go for continuous, lightweight experimentation instead.
  4. Less coding, more developing.
  5. Yes, AI is an “alien tool,” but what if that alien tool is a droid instead of a probe?
  6. Other ideas I’m still working out
  7. This may be the “new normal” for Karpathy, but it’s just the “same old normal” for my dumb ass.
  8. The difference between an adventure and an ordeal is attitude.

1. Accept “falling behind” as the new normal. Learn to live with it and work around it.

Even before the current age of AI, tech was already moving at a pretty frantic pace, and it was pretty hard to keep up. That’s why we make jokes like the slide pictured above, or the Pokemon or Big Data? quiz.

As a result, many people make the choice between “going deep” and specializing in a few things or “going wide” and being a generalist with a little knowledge over many areas. While the approaches are quite different, they have one thing in common: they’re built on the acceptance that you can’t be skilled at everything.

With that in mind, let me present you with an idea that might seem uncomfortable to some of you: Accept “falling behind” as the new normal. Learn to live with it and work around it.

A lot of developers, myself included, accepted being “behind” as our normal state of affairs for years. We’re still in the stone ages of our field — the definition of “computable” won’t even be 100 years old until the next decade — so we should expect changes to continue to come at a fast and furious pace during our lifetimes.

Don’t think of “being behind” as being a personal failing, but as a sensible, sanity-preserving way of looking at the tech world.

It’s a sensible approach to the world that Karpathy describes, which is a firehose of agents, prompts, and stochastic systems, all of which lack established best practices, mature frameworks, or even documentation. If you’re feeling “current” and “on top of things” in the current AI era, it means you don’t understand the situation.

That feeling of playing perpetual “catch-up?” That’s proof you are actively playing on the new frontier, where the map is getting redrawn every day. It means you’ve got a mindset suited for the current Age of AI.

2. Understand that our sense of time has been altered by recent events.

I think some of Karpathy’s feeling comes from how the pandemic and lockdowns messed up our sense of time. Events from years ago feel like they just happened, and events from months in the past feel like a lifetime ago.

AI — and once again, I’m talking about AI after ChatGPT — sometimes feels like it’s been around for a long time, but it’s still a recent development.

“How recent?” you might ask.

Season 4 of Stranger Things was released in two parts — May/July of 2022 — while ChatGPT came out later, debuting on November 30 of that year.

Think of it this way: the non-D&D playing world was introduced to Vecna through season 4 of Stranger Things months before it was introduced to ChatGPT.

Need more perspective? Here are more things that “just happened” that also predate the present AI age:

(Just recalling “the slap” made me think “Wow, that was a while back.” In fact, I get the feeling that the only person who remembers it as if it happened yesterday is Chris Rock.)

All these examples are pretty recent news, and they all happened before that fateful day — November 30, 2022 — when ChatGPT was unleashed on an unsuspecting world.

3. Forget “mastery.” Go for continuous, lightweight experimentation instead.

Archimedes’ discoveries come from small experiments. Comic by Thomas Leclercq. Click to see more!

Accepting “behind” as the new normal turns any anxiety you may be feeling into a strategic advantage. It changes your mindset to one where you embrace continuous, lightweight experimentation rather than mastery. You know, that “growth mindset” thing that Carol Dweck keeps going on about.

Put your energy into the skill of learning and critically evaluating new tools and techniques that are subject to change, rather than into gathering static domain knowledge that you assume will be timeless.

We’re emerging from an older era where code was scarce and expensive. It used to take time and effort (and as a result, money) to produce code, which is why a lot of software engineering is based on the concept of code reuse and why older-school OOP developers are hung up on the concept of inheritance. Now that AI can generate screens of code in a snap, we’re going to need to change the way we do development.

My 2026 developer strategy will roughly follow these steps:

  • Embracing ephemeral code: I’m adopting the mindset of “post code-scarcity” or “code abundance.” I’ll happily fire up $TOOL_OF_THE_MOMENT and have it generate lots of lines of code that I won’t mind deleting after I’ve gotten what I need out of it. The idea is to drive the “cost” of experimentation down to zero, which means I’ll do more experimenting.
  • Try new things, constantly, but not all at once: My plan is to dedicate a week or two to one thing and experiment with it. Examples:
    • First week or so: Prompts, especially going beyond basic instructions. Play with few-shot prompting, chain-of-thought, and providing context. Look at r/ChatGPTPromptGenius/ and similar places for ideas.
    • Following week or so: Agents. Build a simple agent using a guide or framework. Understand its core components: reasoning, tools, and memory.
    • Week or so after that: Tools and integrations. Give an agent the ability to search the web, call an API, or write to a file.
  • Learn by teaching and building: Active learning is the most efficient learning! Building something and then showing others how to build it is my go-to trick for getting good at some aspect of tech, and it’s why this blog exists, why the Tampa Bay Tech Events List exists, why the Global Nerdy YouTube channel exists, and why I co-organize the Tampa Bay AI Meetup and Tampa Bay Python.
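To make the “agents” week in the plan above concrete, here’s a minimal sketch of the core loop, with the LLM stubbed out by simple rules (all names here are my own; real frameworks are much fancier). It shows the three components mentioned above: reasoning (deciding which tool to call), tools, and memory.

```python
# Toy agent loop: reasoning, tools, and memory. The "LLM" is a stub
# that picks a tool with simple rules; a real agent would call a model.

def search_web(query: str) -> str:
    # Stub tool: a real agent would call a search API here.
    return f"search results for {query!r}"

def calculator(expression: str) -> str:
    # Stub tool: evaluate simple arithmetic like "2+3".
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search": search_web, "calc": calculator}

def fake_llm(task: str, memory: list) -> tuple:
    """Stand-in for a model: pick a tool on step one, then finish."""
    if not memory:
        if any(ch.isdigit() for ch in task):
            return ("call", "calc", task)
        return ("call", "search", task)
    return ("finish", memory[-1][1], None)  # answer from last observation

def run_agent(task: str, max_steps: int = 5) -> str:
    memory = []  # list of (tool_name, observation) pairs
    for _ in range(max_steps):
        action = fake_llm(task, memory)
        if action[0] == "finish":
            return action[1]
        _, tool_name, tool_input = action
        observation = TOOLS[tool_name](tool_input)
        memory.append((tool_name, observation))
    return "gave up"

print(run_agent("2+3"))   # → 5
```

Swap the stub for a real model call and the loop barely changes, which is the point of the exercise: the scary “agent” concept is mostly a while loop with a dictionary of functions.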

4. Less coding, more developing.

I used to laugh at this scene from Star Trek: Voyager, but damn, it’s pretty close to what we can do now…

Your enduring value as a developer or techie is going to move “up the stack,” away from remembering minutiae like API call parameter order and syntax, and towards “big picture” things like system design, architectural judgment, and the critical oversight and quality control required to pull together AI components that are stochastic and fundamentally unpredictable.

That “behind” feeling? That’s the necessary friction of keeping your footing while climbing the much taller ladder that development has become. Your expertise is less about for loops and design patterns and more about system design, problem decomposition, and even taste. You’re more focused on solving the users’ problems (and therefore, operating closer to the user) and providing what the AI can’t.

Worry less about specific tools, and more about principles. Specific tools — LangChain and LlamaIndex, I’m lookin’ right at you — will change rapidly. They may be drastically different or even replaced by something else this time next year! (Maybe next month!) Focus on understanding the underlying principles of agentic reasoning, prompt engineering, and (ugh, this term drives me crazy, and I can’t put my finger on why) — “workflow orchestration.”

Programming isn’t being replaced. It’s being refactored. The developer’s role is becoming one of a conductor or integrator, writing sparse “glue code” to orchestrate powerful, alien AI components. (Come to think of it, that’s not all too different from what we were doing before; there’s just an additional layer of abstraction now.)

5. Yes, AI is an “alien tool,” but what if that alien tool is a droid instead of a probe?

Let’s take a closer look at the last two lines of Karpathy’s tweet:

Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.

First, let me say that I think Karpathy’s choice of “alien tool” as a metaphor is at least a little bit colored by the Silicon Valley mentality of disruption: the idea that tech is for doing things to fields, industries, or people instead of for them. There’s also the popular culture portrayal of alien tools, and I think he’s been getting the “alien tools doing things to me” feeling lately:

But what if that “alien tool” is something we can converse with? What if we reframe “alien tool” as… “droid?”

Unlike an interpreter or compiler that gives one of those classic cryptic error messages or a framework or library with fixed documentation, you can actively interrogate AI, just like you can interrogate a Star Wars droid (or even the Millennium Falcon’s computer, which is a collection of droids). When an AI generates code that you can’t make sense of, you’re not left to reverse-engineer its logic unassisted. You can demand an explanation in plain language. You ask it to walk through its solution step by step or challenge its choices (and better yet, do it Samuel L. Jackson style):

This transforms debugging and learning from a solitary puzzle into a dialogue. This is the real-life version of droids in Star Wars and computers in Star Trek! Ask whatever AI tool you’re using to refactor its code for readability, make it give you the “explain it as if I’m a junior dev” walkthrough of a complex algorithm, or debate the trade-offs between two architectural approaches. Turn AI into a tireless, on-demand pair programmer and tutor!

By reframing AI not as an alien tool, but as a Star Wars droid (or if you prefer, Star Trek computer), you can change the pace at which you can understand and manage systems. Unfamiliar libraries and cryptic errors are no longer major show-stoppers, but speed bumps that you can overcome with a Socratic dialogue to build your understanding. The AI-as-droid approach allows you to rapidly decompose and reconstruct the AI’s own output, turning its stochastic suggestions into knowledge and understanding that you can carry forward.

In the end, you’re moving from merely accepting or rejecting its code to using conversation to get clarity, both in the AI’s output and in your own mental model. By treating AI not as an alien probe but as a droid, the “alien tool” becomes less alien through dialogue. The terra incognita of this new age won’t be navigated by a map, but by directed exploration with the assistance of a local guide you can question at every turn.

6. Other ideas I’m still working out

Like “Todd” from BoJack Horseman, I’m still working out some ideas. I’ve listed them here so that you can get an advance look at them; I expect to cover them in upcoming videos on the Global Nerdy YouTube channel.

These ideas, taken together, are a call not to blindly climb the AI hype curve, but a call to develop a sophisticated, expert-level understanding of its limits. I hope to make them the basis of a structured way to conduct that research without falling into the “Mount Dumbass” of overconfidence.

  1. Your tech / developer expertise is the guardrail. I believe that knowledge is the antidote to AI’s Dunning-Kruger effect. What you bring to the table in the Age of AI is critical evaluation, debugging, and architectural oversight. AI generates candidates; you approve or reject them, or, to quote Nick Fury…

  2. Adopt a skeptical, experimental stance. Let’s follow Karpathy’s own method: try the new tools on non-critical projects. When they fail (as he said they did for nanochat), analyze why they failed. This hands-on experience with failure builds the accurate mental model he described as lacking.

  3. Focus on understanding AI “psychology.” Understand the new stochastic layer provided by AI not to worship it, but to debug it (please stop treating AI like a god). Learn about prompts, context windows, and agent frameworks so you can diagnose why an AI produces bad code or an agent gets stuck. This turns a weakness into a diagnosable system.

  4. Prioritize team and talent dynamics: You’ll hear and read lots of stories and articles warning of talent leaving as a result of AI. If you’re in a leadership or decision-making role, focus on creating an environment where critical thinking about tools is valued over blind adoption. Trust your team, and protect their “state of flow” and their deep work.

7. This may be the “new normal” for Karpathy, but it’s just the “same old normal” for my dumb ass.

Maybe because he’s made some really amazing stuff, he’s surprised that the wave of change it brought about has come back to bite him. For most of the rest of us (once again, that includes me), we’ve always been trying our level best to keep up, and doing what we can to manage our tiny corners of the tech world.

The take-away here is that if the guy who helped make vibe coding a reality and coined the term “vibe coding” is feeling a bit overwhelmed, we can take comfort that we’re not alone. Welcome to our club, Andrej!

8. The difference between an adventure and an ordeal is attitude.

Yes, it’s a lot. Yes, I’m overwhelmed. Yes, I’m trying to catch up.

But it’s also exciting. It’s a whole new world. It’s full of possibilities.

I’m outside my comfort zone, but that’s where the magic happens.