Vibe coding, a term coined by Andrej Karpathy, is where developers use natural language prompts to have LLMs or LLM-based tools generate, debug, and iterate on code. Vibe coding is declarative, because you describe what you want.
Grind coding, my term for traditional programming, is where you specify how a program performs its tasks using a programming language. Grind coding is imperative, because you specify how the thing you want works.
I’ve been writing code for different purposes that fall on different parts of this spectrum (see the diagram at the top of this article for where they land):
The Tampa Bay Tech Events utility: This is the Jupyter Notebook I use to gather event info from online listings and build the tables that make up the event listings I post every week here on Global Nerdy. I wrote the original code myself, but I’ve called on Claude to handle the tedious stuff, including analyzing the obfuscated HTML in Meetup’s event pages to find the tags and classes containing event information.
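To give a flavor of that tedious part, here’s a minimal sketch of extracting text from tags by class using only Python’s standard library. The class name `ds-2xk1` is made up for illustration; the real obfuscated names are exactly the sort of thing I had Claude dig out.

```python
# Minimal sketch: pull the text of tags with a given (obfuscated) class
# out of event-page HTML. The class name "ds-2xk1" is a made-up example.
from html.parser import HTMLParser

class EventExtractor(HTMLParser):
    """Naive extractor: collects the text of tags whose class matches.
    (It stops capturing at the first end tag, so it doesn't handle
    nested markup -- fine for a sketch, not for production.)"""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.capturing = False
        self.results = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.target_class in classes:
            self.capturing = True
            self.results.append("")

    def handle_data(self, data):
        if self.capturing:
            self.results[-1] += data

    def handle_endtag(self, tag):
        self.capturing = False

sample = '<div class="ds-2xk1">Tampa Bay AI Meetup</div><div class="other">x</div>'
parser = EventExtractor("ds-2xk1")
parser.feed(sample)
print(parser.results)  # ['Tampa Bay AI Meetup']
```

In the actual notebook I lean on a proper HTML library, but the principle is the same: once you know which class names hold the event data, extraction is mechanical.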
MCP server for my current client: This is a project that started before I joined, and was written using a code generation tool. The client is a big platform connected to some big organizations; my job is to be the human programmer in the loop.
Picdump poster: Every week, I post “picdump” articles on the Global Nerdy and Accordion Guy blogs. Over the week, I save interesting or relevant images to specific folders, and the picdump poster utility builds a blog post using those images. It’s a low-effort way for me to assemble some of my most-read blog posts, and it’s more vibe-coded than not, especially since I don’t specialize in building WordPress integrations.
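For illustration, here’s a stripped-down sketch of the core step a picdump poster might perform: gathering the week’s images from a folder and emitting the HTML body of a post. The folder handling and markup here are my simplification, not the actual utility, which also has to talk to WordPress.

```python
# Sketch of a picdump poster's core step: collect the images saved to a
# folder and emit the HTML body of a post. Simplified for illustration.
from pathlib import Path
import tempfile

def build_picdump_html(folder):
    """Return post-body HTML for every image file in `folder`, sorted by name."""
    image_exts = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
    paths = sorted(p for p in Path(folder).iterdir()
                   if p.suffix.lower() in image_exts)
    blocks = [f'<figure><img src="{p.name}" alt=""></figure>' for p in paths]
    return "\n".join(blocks)

# Quick demo with a throwaway folder holding two images and a stray text file.
demo = Path(tempfile.mkdtemp())
for name in ("b.png", "a.jpg", "notes.txt"):
    (demo / name).touch()
html = build_picdump_html(demo)
print(html)
```

The real utility would then push the images and post through the WordPress REST API; that integration is the part I was happiest to vibe-code.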
Copy as Markdown: Here’s an example of using vibe coding as a way to have custom software built on demand. I wanted a way to copy text from a web page, and then convert that copied text into Markdown format. This one was purely vibe-coded; I simply told Gemini what I wanted, and it not only generated the code for me, but also gave me instructions on how to install it.
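To show the core transformation involved, here’s a toy HTML-to-Markdown converter. This is not the code Gemini generated (that was a full tool with install steps); it’s a regex-based sketch that only handles a few tags, just to make the idea concrete.

```python
# Toy version of the "Copy as Markdown" idea: convert a fragment of
# copied HTML into Markdown. Handles only a few tags; a sketch, not
# the actual Gemini-generated tool.
import re

def html_to_markdown(html):
    md = html
    md = re.sub(r'<a href="(.*?)">(.*?)</a>', r'[\2](\1)', md)      # links
    md = re.sub(r'<(?:b|strong)>(.*?)</(?:b|strong)>', r'**\1**', md)  # bold
    md = re.sub(r'<(?:i|em)>(.*?)</(?:i|em)>', r'*\1*', md)            # italics
    md = re.sub(r'<h1>(.*?)</h1>', r'# \1', md)                        # headings
    md = re.sub(r'</?p>', '', md)                                      # paragraphs
    return md.strip()

print(html_to_markdown(
    '<p>See <a href="https://globalnerdy.com">the blog</a> for <strong>more</strong>.</p>'
))
# See [the blog](https://globalnerdy.com) for **more**.
```

A production version would use a real parser (regexes on HTML are famously fragile), but this is the heart of what I asked for in plain English.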
I’ve often been asked “How do you keep up with what’s going on in the AI world?”
One of my answers is that I watch Nate B. Jones’ YouTube channel almost daily. He cranks them out at a rate that I envy, and they’re full of valuable information, interesting ideas, and perspectives I might not otherwise consider.
The conventional path for white-collar career advancement that’s been around since the end of World War II is being dismantled. It used to be that you’d land an entry-level role, learn through work that starts as simple tasks but gets more complex as you go, and gradually climb the corporate ladder. That’s not the case anymore. If you’ve been working for five or more years, you’ve seen it; if you’re newer to the working world, you might have lived it.
Entry-level hiring at major tech companies has dropped by over 50% since 2019
Job postings across the US economy have declined by 29%
The unemployment rate for recent college grads is now greater than the general unemployment rate
This isn’t a temporary freeze but a structural shift where the “training rung” of the ladder is being removed. Those repetitive, easier tasks that you assign to juniors (summarizing meetings, cleaning data, drafting low-stakes documents) are exactly what generative AI now handles, and it’s getting better at it all the time.
As a result, the “ladder” is being disassembled while people are still trying to stand on it. Entry-level roles now require experience that entry-level jobs no longer provide because AI has cannibalized the work that used to serve as the learning ground [00:55]. Jones argues that in a world where the passive route of “doing your time” to get promoted is vanishing, the only viable strategy left for career survival and growth is cultivating extreme high agency.
High agency and locus of control
High agency might sound like a feeling of confidence, self-assuredness, or empowerment, but it’s best understood through the theory of Locus of Control, which psychologist Julian Rotter developed in the 1950s.
Jones proposes a mental exercise [1:55]: draw a circle and list all major life elements (promotions, skills, family, economy). For low-agency individuals, significant factors like promotions or learning requirements fall outside the circle, perceived as things determined by managers or the market. For high-agency individuals, absolutely everything falls inside the circle.
The high agency mindset dictates that while you cannot control external events, you can control the way you respond, and by extension, your trajectory (sounds like the modern stoicism that’s popular in Silicon Valley circles, as well as at my former company Auth0).
When a high-agency person encounters a barrier that seems outside their control, they reframe it with a four-word Gen Z expression: “That’s a skill issue” [03:23]. Whether it’s lacking a technical skill or not knowing how to navigate office politics, they view the obstacle not as an immovable wall, but as a gap in their own abilities that can be bridged through learning and adaptation.
While no one literally controls whether they get laid off, the high-agency mindset focuses on controlling the response: where to direct energy, what to learn next, and how to pivot.
A critical consequence of the AI era is the acceleration of the gap between high and low-agency individuals. Jones notes that while this difference used to play out over decades, AI now makes the separation visible in months [7:33]. High-agency people leveraging AI can accomplish 10 to 100 times more than their passive counterparts, compressing career trajectories that used to take twenty years into a fraction of the time (supposedly; consider the myth of the 10x developer). Conversely, career stagnation that once took a decade to notice (you sometimes see this in “company lifers”) now becomes apparent almost immediately.
Jones talks about what he calls the “Say/Do Ratio” as a measure of high agency. It’s the gap between saying you will do something and actually doing it.
Most people have a poor ratio, letting weeks or months pass between intention (“I’m going to learn this skill!” or “I’m going to hit the gym daily!”) and action. They’re either hit by “analysis paralysis” or waiting for perfection [12:37]. High-agency individuals shrink the distance between “say” and “do.” They start immediately, even when they feel unprepared or uncomfortable.
AI serves as a powerful accelerator for improving this ratio by helping users “ship halfway-done” work (think “Minimum Viable Product”) or get past the “blank page” problem instantly.
This orientation prioritizes contribution over extraction; instead of asking “What can I get?”, high-agency people ask “What can I create?”. Simply put, you get what you give.
This perspective shifts the focus from waiting for opportunities to making them. If you approach AI as a tool to expand your locus of control, you can systematically knock down barriers between you and your goals. Jones concludes that the future belongs to those who don’t wait for the old structures to return but instead use their agency to build, ship, and learn now, viewing the current disruption not as a threat, but as an unprecedented opportunity for growth [21:44].
If we have a term like “vibe coding,” where you build an application by describing what you want it to do using natural language (like English) and an LLM generates the code, we probably should have an equal and opposite term that’s catchier than “traditional coding,” where you build an application using a programming language to define the application’s algorithms and data structures.
I propose the term grind coding, which is short, catchy, and has the same linguistic “feel” as vibe coding.
Having these two terms also makes it clear that there’s a spectrum between these two styles. For instance, I’ve done some “mostly grind with a little vibe” coding, where I’ve written most of the code and had an LLM write some small part that I couldn’t be bothered to write, such as a regular expression or function. There’ve also been some “mostly vibe with a little grind” cases where I’ve had an LLM or Claude Code do most of the coding, and then I did a little manual adjustment afterwards.
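Here’s an example of the kind of “little vibe” piece I’d hand to an LLM: a regular expression I couldn’t be bothered to write myself. This particular pattern is an illustration, not one of my actual ones; it pulls dates like “Tue, Jan 6, 7:00 PM” out of event-listing text.

```python
# An illustrative regex of the sort I'd ask an LLM to write: extract
# day-of-week, month, day, and time from event-listing text.
import re

DATE_PATTERN = re.compile(
    r"(Mon|Tue|Wed|Thu|Fri|Sat|Sun),\s+"   # day of week
    r"([A-Z][a-z]{2})\s+(\d{1,2}),\s+"     # month abbreviation and day
    r"(\d{1,2}:\d{2})\s*(AM|PM)"           # time of day
)

m = DATE_PATTERN.search("Next up: Tue, Jan 6, 7:00 PM at Embarc Collective")
print(m.groups())  # ('Tue', 'Jan', '6', '7:00', 'PM')
```

The point isn’t that regexes are hard; it’s that this is exactly the tedious, easily-verified kind of code where a little vibe goes a long way.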
Yesterday evening, I headed to spARK Labs to attend the CTO School Tampa Bay meetup to catch their session, Lessons in Scaling Engineering Teams with Leon Kuperman of CAST AI, where organizer Marvin Scaff conducted a “fireside chat” with CAST AI’s CTO.
Leon Kuperman has over 20 years of experience, having been Vice President of Security Products for Oracle Cloud Infrastructure (OCI). This role followed Oracle’s 2018 acquisition of Zenedge, a cybersecurity and Web Application Firewall (WAF) company where Kuperman was Co-Founder and CTO. Prior to that, he had leadership roles at IBM and Truition. He’s widely recognized for his expertise in cloud computing, web application security, and engineering leadership.
CAST AI is a Miami-based cloud automation platform that optimizes Kubernetes clusters using AI, founded back in the pre-GPT (and pre-COVID) era, in 2019. They recently launched OMNI Compute, a unified control plane that allows organizations to access compute resources (specifically GPUs for AI workloads) across different cloud providers and regions seamlessly. Just this month, they joined the “three comma club” and hit a valuation of over $1 billion.
My original plan was to arrive early, but a combination of last-minute calls and making the cross-bay journey led to my missing the first half hour of the talk.
Still, I took some notes, and I’m sharing them here. I hope you find them useful!
The fireside chat with Leon Kuperman
Marvin and Leon’s chat was a compressed master class in managing a software engineering organization. They walked through what it takes to scale engineering without losing velocity, drawing on lessons from building CAST AI (now at around 160 engineers) and Leon’s earlier BigCo experience, including at IBM and Oracle.
I caught three-quarters of the talk, which included:
Scaling needs structure, especially for distributed teams. Leon framed “scale” as a move from informal coordination to explicit systems. Rather than adding bureaucracy for its own sake, his approach is to add just enough structure to prevent chaos when the team is remote and distributed.
The “two-pizza team” model: Amazon popularized the “two-pizza rule,” a general guideline that teams work best when they’re small enough that two pizzas will feed them. This typically means a team size of 10 people or fewer. CAST AI teams are “two-pizza” teams, and most teams are dedicated to a specific scope.
A deliberately flat hierarchy: Leon described a simple reporting chain leading up to him: directors → VP Engineering → Leon. Despite scale, he aims to stay close to reality by interacting with every team at least every two weeks, and often weekly.
Leon asked who in the room knew what the Peter Principle was (the younger ones in the room didn’t, probably because it’s an idea that was popular in the 1970s and ’80s). He talked about how people get promoted into roles they’re not suited for, and then get stuck because nobody ever goes back to being an individual contributor (IC).
CAST AI’s answer is a “manager candidate” program, where a prospective manager is assigned a small pod and gets the chance to “do the job before you get the job” for about six months. If the candidate is a fit, they retain the manager role; otherwise, they return to an IC role with “zero repercussions” and no stigma.
Common leadership failure modes: Leon highlighted the usual suspects, including micromanaging, weak delegation, and not building motivation through mentoring. He also stressed that trust is built through honesty and vulnerability, as people won’t fully commit to a leader who presents as a “robotic individual.”
Unifying product and engineering
“If I fail, I fail.”
Leon said that CTOs must be both “product people” and “customer people.” Even for introverts, he argued, this is non-negotiable: a CTO needs the customer context to make good product/engineering tradeoffs.
Planning: vision is long-term; execution is short-loop. He rejected long-range roadmaps as fantasy (“estimates are always wrong”) and described a system of:
Quarterly OKRs
Frequent priority reviews (about every two weeks) to stay aligned with customer needs
An overall bias toward time-to-market as the top validation lever
Hiring, culture, and performance management
Once an organization grows past the point where everyone can know everyone (think Dunbar’s Number), you need explicit performance management to avoid letting mediocrity hide in the system.
He stressed being clear about the behaviors you don’t want, while also being willing to move on from sustained underperformance.
Their interview loop includes:
A technical exercise (preferably collaborative/live)
A culture check where each interviewer probes one value deeply
He gave a concrete example using “customer obsession” as a trait: asking for times someone pushed back internally to fight for the customer, and treating “not really” as a signal of poor fit.
Foundational CS > language trivia. In the Q&A session, Leon emphasized hiring for fundamentals, such as distributed systems and concurrency, because those are hard to fake (and languages can be learned fairly quickly).
DevEx and DevOps
DevEx is a scaling strategy, not a perk. Leon explicitly dismissed vanity metrics and refocused on the developer experience factors that actually matter: friction in onboarding, docs, local dev, and pipelines is what slows teams down. CAST AI has a dedicated DevEx team of four focused on removing that friction.
Measure friction with DevEx and DORA-style signals (by the way, DORA is short for “DevOps Research and Assessment”). He described using GetDX to produce quarterly “heat maps” of where developers are least happy, then prioritizing platform work to make those pain points “not suck.”
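DORA metrics were name-dropped without much detail, so here’s a rough sketch of my own (not CAST AI’s actual tooling) showing two DORA-style metrics you could compute from a simple log of deploys: deployment frequency and change failure rate. The sample data is made up.

```python
# My own rough sketch of two DORA-style metrics, computed from a
# made-up log of deploys: (date, whether the deploy caused an incident).
from datetime import date

deploys = [
    (date(2025, 11, 3), False),
    (date(2025, 11, 4), True),
    (date(2025, 11, 7), False),
    (date(2025, 11, 10), False),
]

# Deployment frequency: deploys per day over the covered window.
days_covered = (deploys[-1][0] - deploys[0][0]).days + 1
deploy_frequency = len(deploys) / days_covered

# Change failure rate: fraction of deploys that caused an incident.
change_failure_rate = sum(1 for _, bad in deploys if bad) / len(deploys)

print(f"{deploy_frequency:.2f} deploys/day, "
      f"{change_failure_rate:.0%} change failure rate")
# 0.50 deploys/day, 25% change failure rate
```

Tools like GetDX layer surveys and heat maps on top of signals like these; the underlying arithmetic is this simple.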
“The antidote to change risk is more change.” A highlight of the evening: Leon pushed back hard on enterprise “change approval board” thinking. Enterprises lock down change because change causes incidents; his view is that the remedy is smaller changes released faster, backed by automation so that rollback is quick and boring.
Automated quality, modern delivery, and canaries. At their release cadence, manual QA doesn’t scale. Leon said they do zero manual testing (no QA team), rely on automated checks (including AI “first-pass” checks), and called out Argo CD for Kubernetes delivery plus canary testing as the next level of release management.
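Canary testing came up only in passing, so here’s a bare-bones sketch of the decision at its core: route a slice of traffic to the new version, then promote or roll back based on its error rate versus the baseline. The tolerance value is an arbitrary illustrative number, not anything Leon described.

```python
# Bare-bones sketch of a canary release decision: promote the new
# version unless its error rate exceeds the baseline's by more than
# an absolute tolerance. Numbers are illustrative only.
def canary_verdict(baseline_errors, baseline_total,
                   canary_errors, canary_total,
                   tolerance=0.01):
    """Return "promote" or "rollback" by comparing error rates."""
    baseline_rate = baseline_errors / baseline_total
    canary_rate = canary_errors / canary_total
    return "promote" if canary_rate <= baseline_rate + tolerance else "rollback"

print(canary_verdict(20, 10_000, 30, 1_000))  # rollback (3% vs 0.2% baseline)
print(canary_verdict(20, 10_000, 3, 1_000))   # promote (0.3% is within tolerance)
```

Real canary analysis (the kind Argo Rollouts automates) uses statistical tests and multiple metrics, but the promote-or-rollback shape of the decision is the same.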
Blameless root cause analysis and “5 Whys”: When things break, Leon described a blameless postmortem discipline, where they enforce psychological safety, run an honest “5 Whys,” produce near-term action items, and ensure everyone is heard. No finger-pointing!
He reinforced the mindset later: “It’s the process that broke, not the guy.”
AI tools and the future of coding
Tool adoption is becoming a performance separator. Leon’s framing: engineers who don’t adopt strong tools and absorb best practices will get outpaced, even by “average” engineers who do.
Claude Code, expense, and velocity: In the founder funding discussion, Marvin referenced that tools like Claude Code are “expensive… a couple hundred bucks per person per month,” but enable teams to ship quickly.
He also mentioned GSD (short for “Get Shit Done”) as a workflow aid and “context manager” style approach: breaking work into phases to reduce context-window pain and keep momentum.
Writing skill as a proxy for critical thinking. One of the spiciest takes: Leon said he “over-indexes” on writing. Bullet points aren’t enough; if you’re writing something for him, he wants narrative because it reveals whether someone can truly reason and communicate. He also suggested using LLMs to critique your own documents (first-principles critique, “strawman” the argument) to find logic holes before presenting.
Advice for founders and startups: moats, funding, and being lean
Competing isn’t about coding faster; it’s about differentiation. Leon argued that competitors are a symptom; the real challenge is building differentiation that holds even if someone else has more resources.
He then explained CAST AI’s “data moat” concretely: a read-only agent collects cluster state/events continuously, creating a unique multi-cloud vantage point used to train algorithms. It’s something that individual hyperscalers can’t replicate as easily.
Raise funds to scale a working flywheel, not to “find it.” Leon advocated staying lean and bootstrapping where possible, warning against raising money “to compete.” Instead, raise when you’ve hit an inflection point and want to scale what’s already working.
He also stressed listening to first customer signal and getting validation before building for scale.
Summary: The “scaling engineering teams” playbook Leon kept returning to
As I said at the beginning, this talk was a compressed master class in managing a software engineering organization. Here’s the evening’s “tl;dr” that condenses Leon’s approach:
Small, service-owned teams, with a deliberately flat hierarchy
Safe leadership experimentation, with manager trials and no stigma for returning to IC
Product/engineering alignment by design (one escalation path, one accountability point)
Performance management as culture, not an HR afterthought
This meetup group is a little more exclusive: membership is by approval and organizer referral, and it’s limited to technical people only. If you’re a senior tech leader or on the path to becoming one (say, a tech lead or senior developer), you’re eligible.
Their goal? To provide a forum where tech leaders can exchange ideas and have discussions about tech, process, management, or whatever issues affect them during this highly accelerated period in the industry.
Because some people asked, and because I’m going to be busy for the next day (I’ll explain later), here are more shots from recently-added pages to my notebook. These are notes on RAG and LangChain, taken and condensed from a couple of books, a couple of online sources, and my own experimenting with code. Enjoy!
I’m working on both securing venues and planning topics for Tampa Bay Artificial Intelligence Meetup’s sessions for 2026. My notes above should give you an idea of at least one of the topics we’ll cover soon!