Categories
Artificial Intelligence Humor

Remember to occasionally sass back at LLMs

The screenshot above is another regular reminder from Yours Truly that the LLM isn’t always right, but the final decision is always yours. Sometimes, you need to sass back — not necessarily to get better results, but to remind yourself not to abdicate completely to AI.

In case the first line in my prompt sounds familiar, but you can’t place it, here’s the source:

Here’s the video version:

Categories
Artificial Intelligence Current Events Humor

Claude’s Super Bowl ads are so funny that Sam Altman’s crashing out over them

How upset is Sam Altman about Anthropic’s Super Bowl ads for Claude, which poke fun at ChatGPT and its inclusion of ads? Upset enough to call them “authoritarian,” in the same way a tween would call their parents “fascist” because they wouldn’t give them permission to go to a slumber party.

But daaaaamn, are they memorable and funny.

There are four such ads, each one featuring two actors, with one playing the part of the user, and the other playing the part of ChatGPT. The acting is perfect, with the user clearly in need of answers, and ChatGPT with slightly delayed responses delivered in a saccharine tone and a creepy smile at the end (“Give me your creepiest fake smile!” must’ve been part of the audition process). All the ads end with a snippet of the rap version of Blu Cantrell’s 2003 number, Breathe, which features Sean Paul and one of the best beats from that era.

I’ve posted the four ads below, from my least to most favorite. Each one features a common LLM use case.

Here’s Treachery, where a student is asking ChatGPT to evaluate her essay:

Deception features ChatGPT providing advice on the user’s business idea:

Violation’s user wants a six-pack — the muscle kind, not the beer kind — and is about to regret telling ChatGPT his height:

And my favorite, Betrayal, starts with the user trying to get closer to his mom, and ends on a cougar-riffic note:

OpenAI CEO and owner of the world’s most punchable voice Sam Altman is, as the kids say, crashing out over these ads, calling them “dishonest” (they’re more hyperbolic) and “authoritarian” (which is Altman himself being hyperbolic):

…and the most blunt headline of the bunch:

Categories
Artificial Intelligence Business Career Work

The key to thriving in the AI age is beating the bottlenecks

One of Nate B. Jones’ recent videos has the title Why the Smartest AI Bet Right Now Has Nothing to Do With AI (It’s Not What You Think). While the title is technically correct, I think it should be changed to In the Age of AI, You Have to Beat the Bottlenecks.

Bottleneck: a definition

Many Global Nerdy readers aren’t native English speakers, so here’s a definition of “bottleneck”:

A bottleneck is a specific point where a process slows down or stops because there is too much work and not enough capacity to handle it. It is the one thing that limits the speed of everything else.

Imagine a literal bottle of water.

  • The body of the bottle is wide and holds a lot of water.

  • The neck (the top part) is very narrow.

  • When you try to pour the water out quickly, it cannot all come out at once. It has to wait to pass through the narrow neck.

In business or technology, the “bottleneck” is that narrow neck. No matter how fast you work elsewhere, everything must wait for this one slow part.
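The same idea can be put in code: a pipeline’s overall throughput is capped by its slowest stage, no matter how fast the other stages run. Here’s a minimal Python sketch (the stage names and rates are invented for illustration):

```python
# Throughput of a multi-stage pipeline is limited by its slowest stage --
# the "neck of the bottle." Rates are items processed per hour.
stage_rates = {
    "generate drafts (AI)": 1000,  # fast: the wide body of the bottle
    "human review": 20,            # slow: the narrow neck
    "publish": 500,
}

def pipeline_throughput(rates):
    """The whole pipeline can only move as fast as its slowest stage."""
    return min(rates.values())

# Which stage is the bottleneck?
bottleneck = min(stage_rates, key=stage_rates.get)

print(pipeline_throughput(stage_rates))  # 20
print(bottleneck)                        # human review
```

Note that doubling the speed of drafting or publishing changes nothing; only widening the “human review” neck raises overall throughput. That’s the whole argument of this post in four lines of arithmetic.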

Elon is often wrong, but you can learn from his wrongness

My personal rule is that when Elon Musk says something, and especially when it’s about AI, turn it at least 90 degrees. At the most recent World Economic Forum gathering in Davos, he talked a great “abundance” game, with sci-fi claims that AI would create unlimited economic expansion and plenitude for all:

Nate Jones watched the talk with Musk, but came to the conclusion that Musk’s take is the wrong frame for the immediate future. The current AI era will be one of bottlenecks, not abundance. I agree, as I’ve come to that conclusion about any grandiose statement that Musk makes; after all, he is Mr. “we’ll have colonies on Mars real soon now.”

Here are my notes from Jones’ video…

Notes

Instead of abundance, Nate suggests that what we are entering is a “bottleneck economy.” While AI capability is growing, the actual value it produces won’t automatically flow everywhere and benefit everyone. Instead, it will concentrate around specific areas based on AI’s constraints and limitations [00:00].

Research from Cognizant claims AI could unlock $4.5 trillion in U.S. labor productivity (and yes, you need to take that figure with a huge grain of salt), and it comes with a massive caveat: businesses must implement AI effectively. Currently, there’s a wide gap between AI models and the hard work of integrating them into business workflows. This “value gap” means that the trillion-dollar impact won’t materialize until organizations figure out how to bridge the distance between what models can do in general and what they can specifically do for a company’s operations [01:01].

Physical infrastructure is the first bottleneck. AI capability is increasingly constrained by things it needs from the physical world, specifically land, power, and skilled trade workers. Building the data centers required to train and run models takes years, and not just for the construction itself, but also for permitting and connections to the power grid. This creates a wedge between the speed of software development and the pace of infrastructure building [03:56].

Beyond just buildings and power, the hardware supply chain is the second bottleneck. Access to compute, high-bandwidth memory, and advanced chip manufacturing (controlled largely by TSMC) determines who gets a seat at the table. Companies that understand this are securing resources years in advance and treating regions with stable power and friendly permitting as strategic assets. This creates a market where value is captured by those navigating physical constraints in addition to building better algorithms [06:02].

The third bottleneck is one you might not have thought of: the cost of trust. As the cost of generating content collapses to near zero, the cost of trust is skyrocketing. Jones highlights what he calls a “trust deficit,” calling it a major coordination bottleneck. When any content can be fabricated, the ability to verify and authenticate information becomes expensive and crucial. Value will shift to institutions, platforms, or individuals who can mediate trust and provide a reliable signal in a world rapidly filling with synthetic media slop [07:36].

For organizations, there’s the bottleneck of applying general AI to specific contexts. A general AI model won’t know a company’s private code base, board politics, or competitive dynamics. The bridge between “AI can do this” and “AI does this usefully here” requires tacit knowledge; that is, the practices and relationships that aren’t written down but live in the heads of the company’s employees. Companies that solve this integration problem will unlock productivity, while those that don’t will spend lots of money on tools they never use [09:55].

The fifth bottleneck is another one you might not have thought of: the increasing value of taste. For individuals, and especially for those in tech, the bottlenecks are shifting from acquiring skills to getting good at making judgment calls. As AI commoditizes hard skills like programming (it’s cutting the time to proficiency from years to months), the really valuable skills are going to be taste and curation. The ability to distinguish between AI output that’s “good enough” and AI output that’s extraordinary will become the differentiator. Developing taste takes experience, time, and observation. This is going to create a dangerous race for early-career professionals, whose entry-level work is being devalued [14:52].

The combination of problem-finding and execution is the sixth bottleneck. When problem-solving becomes automated, finding the problem and executing on the solution become the new moats. The market will reward those who can frame the right questions and navigate the ambiguity of implementing appropriate solutions. Jones emphasizes that while AI can generate a strategy or a plan, it can’t execute the “grinding work” of follow-through, holding people accountable, and navigating organizational politics. Success depends on identifying these new personal bottlenecks rather than optimizing for old skills that AI is turning into commodities [16:50].


Tips for techies and developers to beat the bottlenecks

  • Cultivate a sense for taste in addition to a skill for syntax. As coding moves from purely “grind” to at least partially “vibe” (see my vibe code vs. grind code post), your value shifts from writing code to reviewing AI-generated code. You need to refine your sense of what makes code good to differentiate yourself from the flood of AI output, which tends towards the average. [15:06]
  • Specialize! To beat the “good enough” standard of AI, pick a niche, and specialize in it. The window for being a generalist is closing, and extraordinary depth allows you to spot quality that AI (which once again, tends towards the average) misses. [16:16]
  • Pivot to problem finding. AI makes a lot of problem solving cheap, which makes problem finding the rare and precious thing. Stop defining yourself solely as a problem solver. Focus on defining the right problems to solve, framing the architecture, and determining direction. This management-level skill is harder for AI to replicate than execution. [16:50]
  • Value tacit knowledge and context. Tacit knowledge is the “soft” knowledge of how an organization works, and it’s almost never documented (at least directly), but lives in the heads of the people working there. Knowing why a legacy codebase exists or understanding specific stakeholder needs is a “context moat” that general AI models can’t easily infer. [17:36]
  • Focus on execution and follow-through. AI can generate the plan/code, but it can’t navigate the friction of deployment. The “grinding work” of implementation, such as convincing teams, fixing integration bugs, and finalizing products, is where the real value now lies. [18:47]
  • Build your tolerance for ambiguity. This has always been good for real life, but now it’s also good for tech work, which used to live in rigid, well-defined, unambiguous spaces… but not anymore! The tech landscape is shifting rapidly, and the ability to remain functional and productive while “metabolizing change” and dealing with uncertainty is a critical soft skill that separates leaders from people who freeze when things become ambiguous. [20:01]
  • Audit your personal bottlenecks: Be honest about what is actually constraining your career right now. It might not be learning a new framework (the old bottleneck). Instead, it might be your ability to integrate AI tools into your workflow or your ability to communicate complex ideas. Find those bottlenecks and come up with strategies to overcome them! [21:25]
Categories
Artificial Intelligence

A quick intro to OpenClaw (formerly MoltBot, formerly ClawdBot) and 7 tips for getting started

When I was asked about what AI tools I was trying out in my recent interview on the Enlightened Fractionals podcast, one of the tools I named was Clawdbot. But I was already out-of-date enough to have used the incorrect name, because it had been changed to Moltbot. Or maybe it had been re-renamed to its current name (at least at the time of writing), OpenClaw.

Clawdbot, Moltbot, OpenClaw: What is this thing?

OpenClaw is an open-source AI assistant that went from launch to viral sensation to full-on crisis management mode in just five days. It originally went by the name I used, Clawdbot, but then rebranded twice:

  1. From ClawdBot to Moltbot after Anthropic raised trademark concerns about the name’s similarity to Claude. Let’s face it, the name “ClawdBot” was a reference to Claude, and the misspelling was meant to head off exactly the kind of IP concern the project ended up running into. “Moltbot” is a reference to molting, which is when a lobster sheds its outer shell and emerges with a new, soft one.
  2. From Moltbot to OpenClaw after creator Peter Steinberger simply decided he didn’t like the interim name.

Throughout the chaos, the project now known as OpenClaw has attracted over 144,000 GitHub stars, along with crypto scammers, handle-sniping bots, and a lot of cybersecurity practitioners’ attention.

What makes OpenClaw different?

  1. Unlike traditional AI chatbots that live on dedicated websites, OpenClaw integrates directly into a number of messaging apps, and it’s pretty likely you already use at least one of them. You can interact with it using WhatsApp, Telegram, iMessage, Slack, Discord, or Signal. Using OpenClaw is like texting or messaging a friend, and it routes your messages to whichever LLM you choose while handling task automation locally.
  2. OpenClaw runs on a computer (real or virtual) that you control and gives the LLM access, allowing it to take action on your behalf.

The promise of a real AI assistant

OpenClaw offers three standout capabilities:

  1. Persistent memory: OpenClaw remembers from session to session and doesn’t forget everything when you close the app. It learns your preferences, tracks ongoing projects and actually remembers conversations you had and what you tell it.
  2. Proactive notifications: OpenClaw notifies you about important things, such as daily briefings, deadline reminders and email triage summaries. You can wake up to a text saying, “Here are your three priorities today,” without having to ask the AI first — it does so proactively.
  3. Real automation: Because you can grant OpenClaw read and write access to your local filesystem and browser, it has been described as “an LLM with hands.” It can schedule tasks, read and reorganize your files, fill out forms, search and reply to your email, generate reports, and control smart home devices. It’s been used for everything from achieving “inbox zero” to handling research threads that run for days, habit tracking, and providing automated weekly recaps of what you shipped.
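To make the “proactive notifications” idea concrete, here’s a rough sketch of the underlying pattern: a scheduled job assembles a briefing and pushes it to a messaging channel. This is not OpenClaw’s actual API; the send_message function and the task list below are placeholders I made up for illustration:

```python
from datetime import date

def send_message(channel, text):
    # Placeholder: in a real setup, this would call a messaging API
    # (Telegram, Discord, etc.). Here, we just print.
    print(f"[{channel}] {text}")

def morning_briefing(tasks):
    """Assemble a short 'here are your priorities' message from a task list."""
    top_three = tasks[:3]
    lines = [f"Priorities for {date.today().isoformat()}:"]
    lines += [f"{i}. {task}" for i, task in enumerate(top_three, start=1)]
    return "\n".join(lines)

# Hypothetical task list; a real assistant would pull this from your
# email, calendar, or notes.
tasks = [
    "Reply to client email",
    "Review the event listings",
    "Draft blog post",
    "Water plants",
]
briefing = morning_briefing(tasks)
send_message("whatsapp", briefing)
```

Wire something like this to a scheduler (cron, for instance) and you get the “wake up to a text with your three priorities” experience without being asked first. The hard part, and the part OpenClaw actually sells, is gathering those tasks from your real data.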

Real talk: Should you try OpenClaw or wait?

At this point, I feel the need to remind you that Clawdbot/Moltbot/OpenClaw is an open source project moving at AI speed that’s been in use by early adopters for only a week. In that time, the project has faced the threat of cancellation via trademark lawyers, some of its user base have fallen prey to crypto scammers, and others have failed to grasp its security implications and exposed their private information to the ’net at large.

If you need something that “just works” and has something like a one-click install, I suggest waiting. The things OpenClaw does are too cool and convenient to be ignored. If the OpenClaw people don’t make a safer, simpler version, someone else most definitely will (and get rich in the process).

Serious security considerations

Just Google “security” and “openclaw” (or “clawdbot” or “moltbot”) and you’ll see articles written by all manner of security experts who’ve flagged significant risks with OpenClaw’s architecture. It runs on your local computer and can interact with emails, files, and credentials on that computer. If you configure it the wrong way, you can unintentionally expose private data such as API keys.

Researchers have already discovered numerous publicly accessible OpenClaw instances that have little or no authentication. OpenClaw also creates what one security analyst called a “hybrid identity” problem, where it operates as you, using your credentials after you’ve logged off. This kind of “digital twinning” was largely in the realm of science fiction until last week, and most security systems aren’t designed to handle it.

The current OpenClaw situation (which is subject to change very, very quickly)

Despite the initial hiccups (and there will be more), OpenClaw continues to grow. It’s got an active Discord community, it keeps collecting GitHub stars, and the team appears to have learned some lessons about viral success and security practices. Expect to see more posts and stories about it over the next few weeks.

7 tips for getting started with OpenClaw

  1. If you’re feeling confident about trying it out, go to openclaw.ai and review the documentation thoroughly. Before installing anything, read through the official guides to understand the architecture, requirements, and how the message routing to LLM providers works. This will help you make informed decisions about your setup.
  2. Complete the security checklist before deployment. This is new software in a new field where we learn new things every day. Given the documented vulnerabilities in early deployments, prioritize authentication configuration, ensure your instance isn’t publicly accessible, and never expose API keys. Consider using a dedicated machine or virtual environment rather than your primary computer. (I’m currently using a Raspberry Pi 500 for this purpose.)
  3. Beware of Mac Mini scams. Speaking of dedicated machines, the Mac Mini, thanks to its fast Apple Silicon processors and fantastic memory bandwidth, has become the preferred AI development machine and the preferred OpenClaw platform. Enterprising con artists have found out how in-demand Mac Minis are and have been posting scam ads on places like Facebook Marketplace. I’ll write an article about my own experiences with such scammers soon.
  4. Choose and configure your LLM backend. Decide if you want to use one of the bigger paid services like Claude, ChatGPT, or Gemini, and understand the associated costs before connecting them to OpenClaw (you might want to consider DeepSeek). You can also go with a local model, which is what I’m doing.
  5. Start with a single messaging integration. Don’t go nuts. Pick one messaging platform to use with OpenClaw to test the waters (I suggest Discord). This limits your exposure while you learn how OpenClaw behaves and what permissions it actually needs.
  6. Limit its destructive capability by giving OpenClaw read-only automation at first. Let OpenClaw summarize emails or provide briefings before giving it “write” access to send messages, modify files, or execute commands on your behalf. Begin slowly and safely, then gradually expand its permissions as you become more certain about your security configuration and how OpenClaw behaves.
  7. As a reminder of the dangers of letting an AI agent run wild on your behalf, I strongly recommend you watch the Sorcerer’s Apprentice segment of the Walt Disney animated film Fantasia. In case you don’t have a Disney+ account, I’ve posted it in the YouTube embeds below:
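On tip 2’s point about not being publicly accessible: a lot of the exposure risk comes down to whether a service binds to 127.0.0.1 (loopback only, reachable just from your own machine) or 0.0.0.0 (every network interface, reachable from the network). Here’s a standard-library Python sketch of the difference; the helper names are mine, not OpenClaw’s:

```python
import socket

def start_server(bind_address, port=0):
    """Bind a TCP socket; port=0 lets the OS pick any free port."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((bind_address, port))
    server.listen(1)
    return server

def is_loopback_only(server):
    """True if the server is reachable only from this machine."""
    host, _ = server.getsockname()
    return host == "127.0.0.1"

safe = start_server("127.0.0.1")   # loopback: only this machine can connect
risky = start_server("0.0.0.0")    # all interfaces: avoid this for agents!

loopback_check = is_loopback_only(safe)   # True
exposed_check = is_loopback_only(risky)   # False

print(loopback_check, exposed_check)
safe.close()
risky.close()
```

The publicly exposed OpenClaw instances that researchers keep finding are, in essence, the “0.0.0.0 with no authentication” configuration. If you must reach your instance remotely, put it behind a VPN or an authenticated tunnel instead of binding it to the open internet.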

Categories
Artificial Intelligence Process Programming

Projects I’m vibe coding, projects I’m grind coding, and projects in-between

A couple of weeks back, I wrote about how coding happens on a spectrum whose opposite ends are:

  • Vibe coding, a term coined by Andrej Karpathy, is where developers use natural language prompts to have LLMs or LLM-based tools generate, debug, and iterate on code. Vibe coding is declarative, because you describe what you want.
  • Grind coding, my term for traditional programming, where you specify how a program performs its tasks using a programming language. Grind coding is imperative, because you specify how the thing you want works.
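The declarative/imperative split shows up even within a single language. Here’s the same task done both ways in Python; the relationship between the two versions is a rough miniature of vibe coding versus grind coding:

```python
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# Declarative ("what I want"): the even numbers, squared.
declarative = [n * n for n in numbers if n % 2 == 0]

# Imperative ("how to get it"): loop, test, append, step by step.
imperative = []
for n in numbers:
    if n % 2 == 0:
        imperative.append(n * n)

print(declarative)  # [16, 4, 36]
print(imperative)   # [16, 4, 36]
```

Same result either way; the difference is whether you describe the outcome or spell out the procedure. Vibe coding pushes that “describe the outcome” style all the way up to plain English.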

I myself have been writing code for different purposes, on different parts of this spectrum (see the diagram at the top of this article for where they land on the spectrum):

  • The Tampa Bay Tech Events utility: This is the Jupyter Notebook I use to gather event info from online listings and build the tables that make up the event listings I post every week here on Global Nerdy. I wrote the original code myself, but I’ve called on Claude to handle the tedious stuff, including analyzing the obfuscated HTML in Meetup’s event pages to find the tags and classes containing event information.
  • MCP server for my current client: This is a project that started before I joined, and was written using a code generation tool. The client is a big platform connected to some big organizations; my job is to be the human programmer in the loop.
  • Picdump poster: Every week, I post “picdump” articles on the Global Nerdy and Accordion Guy blogs. Over the week, I save interesting or relevant images to specific folders, and the picdump poster utility builds a blog post using those images. It’s a low-effort way for me to assemble some of my most-read blog posts, and it’s more vibe-coded than not, especially since I don’t specialize in building WordPress integrations.
  • Copy as Markdown: Here’s an example of using vibe coding as a way to have custom software built on demand. I wanted a way to copy text from a web page and then convert that copied text into Markdown format. This one was purely vibe-coded; I simply told Gemini what I wanted, and it not only generated the code for me, but also gave me instructions on how to install it.
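For the curious, here’s the general shape of an HTML-to-Markdown converter using only Python’s standard library. This is a toy sketch that handles just bold text and links, not the code Gemini generated for me:

```python
from html.parser import HTMLParser

class TinyMarkdownConverter(HTMLParser):
    """Convert a small subset of HTML (bold, links) to Markdown."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.out.append("[")          # open the Markdown link text
        elif tag in ("b", "strong"):
            self.out.append("**")

    def handle_endtag(self, tag):
        if tag == "a":
            self.out.append(f"]({self.href})")  # close link, add URL
            self.href = None
        elif tag in ("b", "strong"):
            self.out.append("**")

    def handle_data(self, data):
        self.out.append(data)             # plain text passes through

def to_markdown(html):
    converter = TinyMarkdownConverter()
    converter.feed(html)
    return "".join(converter.out)

print(to_markdown('Read <b>this</b> <a href="https://example.com">post</a>.'))
# Read **this** [post](https://example.com).
```

A real version would also handle headings, lists, images, and nested tags, but the parse-and-emit structure stays the same.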
Categories
Artificial Intelligence Career

Notes from Nate B. Jones’ video, “The People Getting Promoted All Have This One Thing in Common (AI Is Supercharging this Mindset)”

I’ve often been asked “How do you keep up with what’s going on in the AI world?”

One of my answers is that I watch Nate B. Jones’ YouTube channel almost daily. He cranks them out at a rate that I envy, and they’re full of valuable information, interesting ideas, and perspectives I might not otherwise consider.

If you haven’t seen this channel before, he recently published a great “starter video” titled The People Getting Promoted All Have This One Thing in Common (AI Is Supercharging this Mindset). It covers a topic that should be interesting to a lot of you: What to do when the traditional career ladder is getting dismantled, and yes, the answer involves AI.

Here’s the video, and below it are my notes. Enjoy!

Notes

Kiss the traditional career ladder goodbye

The conventional path for white-collar career advancement that’s been around since the end of World War II is being dismantled. It used to be that you’d land an entry-level role, learn through work that starts as simple tasks but gets more complex as you go, and gradually climb the corporate ladder. That’s not the case anymore. If you’ve been working for five or more years, you’ve seen it; if you’re newer to the working world, you might have lived it.

Jones opens the video with these worrying stats:

  • Entry-level hiring at major tech companies has dropped by over 50% since 2019
  • Job postings across the US economy have declined by 29%
  • The unemployment rate for recent college grads is now greater than the general unemployment rate

This isn’t a temporary freeze but a structural shift where the “training rung” of the ladder is being removed. Those repetitive, easier tasks that you assign to juniors (summarizing meetings, cleaning data, drafting low-stakes documents) are exactly what generative AI now handles, and it’s getting better at it all the time.

As a result, the “ladder” is being disassembled while people are still trying to stand on it. Entry-level roles now require experience that entry-level jobs no longer provide because AI has cannibalized the work that used to serve as the learning ground [00:55]. Jones argues that in a world where the passive route of “doing your time” to get promoted is vanishing, the only viable strategy left for career survival and growth is cultivating extreme high agency.

High agency and locus of control

High agency sounds like a feeling of confidence, self-assuredness, or empowerment. It’s best understood through the theory of Locus of Control, which psychologist Julian Rotter developed in the 1950s.

Jones proposes a mental exercise [1:55]: draw a circle and list all major life elements (promotions, skills, family, economy). For low-agency individuals, significant factors like promotions or learning requirements fall outside the circle, perceived as things determined by managers or the market. For high-agency individuals, absolutely everything falls inside the circle.

The high agency mindset dictates that while you cannot control external events, you can control the way you respond, and by extension, your trajectory (sounds like the modern stoicism that’s popular in Silicon Valley circles, as well as at my former company Auth0).

When a high-agency person encounters a barrier that seems outside their control, they reframe it with a four-word Gen Z expression: “That’s a skill issue” [03:23]. Whether it’s lacking a technical skill or not knowing how to navigate office politics, they view the obstacle not as an immovable wall, but as a gap in their own abilities that can be bridged through learning and adaptation.

High agency vs. systemic barriers

Jones took the time to address the valid criticism that this mindset ignores systemic unfairness, amounting to the “bootstrap mentality” that dismisses structural problems. He argued that high agency is actually most critical for those with the least privilege. He observes that people from disadvantaged backgrounds often display higher agency because they lack the safety nets that more advantaged people have, nets that can make the advantaged more passive [4:48]. When failure isn’t an option, you put in the effort not to fail.

While no one literally controls whether they get laid off, the high-agency mindset focuses on controlling the response: where to direct energy, what to learn next, and how to pivot.

However, Jones warns that an internal locus of control can be taken too far, leading to the tendency to blame yourself for everything that goes wrong. The goal isn’t to beat yourself up for every setback. Instead, it’s to channel that internal orientation into a “challenge” mindset. Instead of thinking “I failed because I’m inadequate,” the high-agency approach is “I haven’t found the right angle of attack yet, but I can figure it out” [5:41]. This distinction, which looks a lot like “growth mindset,” turns potential anxiety into a strategic focus on solving problems.

AI as the “jet engine” for agency

Jones’ thesis is that AI is the “greatest equalizer for agency that has ever existed” because it acts as a force multiplier for anyone willing to act [5:59]. Barriers that previously required years of expensive education or access to elite networks, such as coding a website, analyzing complex data, or launching a marketing campaign, can now be overcome by a single individual with a laptop and determination. AI doesn’t care about your pedigree; it simply responds to questions and executes commands.

This technological shift allows high-agency individuals to bypass traditional gatekeepers. Jones shares examples of people (including the creator of Base44) moving from dead-end situations to running scaling businesses not because of luck, but because they used AI to relentlessly patch their skill gaps [6:12]. In this new era, if you don’t know a programming language or a business concept, AI allows you to learn and implement it simultaneously, effectively turning “skill issues” into temporary speed bumps rather than dead ends.

Speed becomes what sets you apart

A critical consequence of the AI era is the acceleration of the gap between high and low-agency individuals. Jones notes that while this difference used to play out over decades, AI now makes the separation visible in months [7:33]. High-agency people leveraging AI can accomplish 10 to 100 times more than their passive counterparts, compressing career trajectories that used to take twenty years into a fraction of the time (supposedly; consider the myth of the 10x developer). Conversely, career stagnation that once took a decade to notice (you sometimes see this in “company lifers”) now becomes apparent almost immediately.

This acceleration means that waiting for permission or the next rung of the ladder to appear is a strategy for failure. The people currently being tapped for leadership are those who combine high agency with “AI-native” thinking, leading them to redefine roles instead of just filling them [8:11]. In an organizational structure that is inherently malleable and constantly disrupted by scaling intelligence, titles don’t matter. Instead, what really matters is generating value and outcomes.

The “Say/Do Ratio” and execution

Jones talks about what he calls the “Say/Do Ratio” as a measure of high agency. It’s the gap between saying you will do something and actually doing it.

Most people have a poor ratio, letting weeks or months pass between intention (“I’m going to learn this skill!” or “I’m going to hit the gym daily!”) and action. They’re either hit by “analysis paralysis” or waiting for perfection [12:37]. High-agency individuals shrink the distance between “say” and “do.” They start immediately, even when they feel unprepared or uncomfortable.

AI serves as a powerful accelerator for improving this ratio by helping users “ship halfway-done” work (think “Minimum Viable Product”) or get past the “blank page” problem instantly.

Jones cites Kobe Bryant as a prime example of this mindset. Bryant viewed nervousness not as an emotion to be managed, but as an information signal that he hadn’t prepared enough, which is a variable that he could control [11:38]. Similarly, in the AI age, preparation and execution are more accessible than ever, allowing those with high agency to move from idea to prototype without getting stuck in the “planning” phase.

Solo founders and lean unicorns

The combination of high agency and AI is reshaping the business landscape, as seen in the surge of solo founders and “lean” billion-dollar companies. Jones points out that the share of startups with solo founders has nearly doubled since 2015, and we’re approaching the era of the one-person billion-dollar company [15:13]. He cites the example of solo founder Maor Shlomo, who built Base44 from a side project to an $80 million exit in six months without a full-time team or venture capital, simply by pushing code to production 13 times a day [16:20].

This trend proves that AI allows individuals to operate with the output capacity of entire teams. Founders and operators can now “speedrun” through obstacles that used to require hiring specialists, whether it’s understanding server-side architecture or generating marketing materials. The constraint on building a massive business is no longer headcount or capital, but the agency of the founder to utilize AI to extend their own capabilities and solve problems [16:47].

Don’t wait; generate!

In the end, the high-agency mindset is grounded in an obsession with pushing value into the world. Jones describes this as a belief that the world is “bendable”: if you generate enough value and contribute enough, the world will eventually respond in your favor [18:15].

This orientation prioritizes contribution over extraction; instead of asking “What can I get?”, high-agency people ask “What can I create?”. Simply put, you get what you give.

This perspective shifts the focus from waiting for opportunities to making them. If you approach AI as a tool to expand your locus of control, you can systematically knock down barriers between you and your goals. Jones concludes that the future belongs to those who don’t wait for the old structures to return but instead use their agency to build, ship, and learn now, viewing the current disruption not as a threat, but as an unprecedented opportunity for growth [21:44].

Categories
Artificial Intelligence Programming

Grind coding, vibe coding, and everything in between

If we have a term like “vibe coding,” where you build an application by describing what you want it to do using natural language (like English) and an LLM generates the code, we probably should have an equal and opposite term that’s catchier than “traditional coding,” where you build an application using a programming language to define the application’s algorithms and data structures.

I propose the term grind coding, which is short, catchy, and has the same linguistic “feel” as vibe coding.

Having these two terms also makes it clear that there’s a spectrum between these two styles. For instance, I’ve done some “mostly grind with a little vibe” coding where I’ve written most of the code and had an LLM write some small part that I couldn’t be bothered to write, such as a regular expression or a function. There’ve also been some “mostly vibe with a little grind” cases where I’ve had an LLM or Claude Code do most of the coding, and then I did a little manual adjustment afterwards.