How upset is Sam Altman about Anthropic’s Super Bowl ads for Claude? Upset enough to call them “authoritarian,” in the same way a tween would call their parents “fascist” for refusing to let them go to a slumber party.
But daaaaamn, are they memorable and funny.
There are four such ads, each one featuring two actors, with one playing the part of the user, and the other playing the part of ChatGPT. The acting is perfect, with the user clearly in need of answers, and ChatGPT with slightly delayed responses delivered in a saccharine tone and a creepy smile at the end (“Give me your creepiest fake smile!” must’ve been part of the audition process). All the ads end with a snippet of the rap version of Blu Cantrell’s 2003 number, Breathe, which features one of the best beats from that era.
I’ve posted the four ads below, from my least to most favorite. Each one features a common LLM use case.
Here’s Treachery, where a student is asking ChatGPT to evaluate her essay:
Deception features ChatGPT providing advice on the user’s business idea:
Violation’s user wants a six-pack — the muscle kind, not the beer kind — and is about to regret telling ChatGPT his height:
And my favorite, Betrayal, starts with the user trying to get closer to his mom, and ends on a cougar-riffic note:
OpenAI CEO and owner of the world’s most punchable voice Sam Altman is, as the kids say, crashing out over these ads, calling them “dishonest” (they’re more hyperbolic) and “authoritarian” (which is Altman himself being hyperbolic):
One of Nate B. Jones’ recent videos has the title Why the Smartest AI Bet Right Now Has Nothing to Do With AI (It’s Not What You Think). While the title is technically correct, I think it should be changed to In the Age of AI, You Have to Beat the Bottlenecks.
Bottleneck: a definition
Many Global Nerdy readers aren’t native English speakers, so here’s a definition of “bottleneck”:
A bottleneck is a specific point where a process slows down or stops because there is too much work and not enough capacity to handle it. It is the one thing that limits the speed of everything else.
Imagine a literal bottle of water.
The body of the bottle is wide and holds a lot of water.
The neck (the top part) is very narrow.
When you try to pour the water out quickly, it cannot all come out at once. It has to wait to pass through the narrow neck.
In business or technology, the “bottleneck” is that narrow neck. No matter how fast you work elsewhere, everything must wait for this one slow part.
Elon is often wrong, but you can learn from his wrongness
My personal rule is that when Elon Musk says something, and especially when it’s about AI, turn it at least 90 degrees. At the most recent World Economic Forum gathering in Davos, he talked a great “abundance” game, with sci-fi claims that AI would create unlimited economic expansion and plenitude for all:
Nate Jones watched Musk’s talk, but came to the conclusion that Musk’s take is the wrong frame for the immediate future. The current AI era will be one of bottlenecks, not abundance. I agree, as I’ve come to that conclusion about any grandiose statement that Musk makes; after all, he is Mr. “we’ll have colonies on Mars real soon now.”
Here are my notes from Jones’ video…
Notes
Instead of abundance, Nate suggests that what we are entering is a “bottleneck economy.” While AI capability is growing, the actual value it produces won’t automatically flow everywhere and benefit everyone. Instead, it will concentrate around specific areas based on AI’s constraints and limitations [00:00].
Research from Cognizant claims AI could unlock $4.5 trillion in U.S. labor productivity (and yes, you need to take that figure with a huge grain of salt), and it comes with a massive caveat: businesses must implement AI effectively. Currently, there’s a wide gap between AI models and the hard work of integrating them into business workflows. This “value gap” means that the trillion-dollar impact won’t materialize until organizations figure out how to bridge the distance between what models can do in general and what they can specifically do for a company’s operations [01:01].
Physical infrastructure is the first bottleneck. AI capability is increasingly constrained by things it needs from the physical world, specifically land, power, and skilled trade workers. Building the data centers required to train and run models takes years, and not just for the building process, but also permitting and connections to the power grid. This creates a wedge between the speed at which software develops and the much slower pace at which infrastructure gets built [03:56].
Beyond just buildings and power, the hardware supply chain is the second bottleneck. Access to compute, high-bandwidth memory, and advanced chip manufacturing (controlled largely by TSMC) determines who gets a seat at the table. Companies that understand this are securing resources years in advance and treating regions with stable power and friendly permitting as strategic assets. This creates a market where value is captured by those navigating physical constraints in addition to building better algorithms [06:02].
The third bottleneck is one you might not have thought of: the cost of trust. As the cost of generating content collapses to near zero, the cost of trust is skyrocketing. Jones highlights what he calls a “trust deficit,” a major coordination bottleneck. When any content can be fabricated, the ability to verify and authenticate information becomes expensive and crucial. Value will shift to institutions, platforms, or individuals who can mediate trust and provide a reliable signal in a world rapidly filling with synthetic media slop [07:36].
For organizations, there’s the bottleneck of applying general AI to specific contexts. A general AI model won’t know a company’s private code base, board politics, or competitive dynamics. The bridge between “AI can do this” and “AI does this usefully here” requires tacit knowledge; that is, the practices and relationships that aren’t written down but live in the heads of the company’s employees. Companies that solve this integration problem will unlock productivity, while those that don’t will spend lots of money on tools they never use [09:55].
The fifth bottleneck is another one you might not have thought of: the increasing value of taste. For individuals, and especially for those in tech, the bottlenecks are shifting from acquiring skills to making good judgment calls. As AI commoditizes hard skills like programming (cutting the time to proficiency from years to months), the really valuable skills are going to be taste and curation. The ability to distinguish between AI output that’s “good enough” and AI output that’s extraordinary will become the differentiator. Developing taste takes experience, time, and observation. This is going to create a dangerous race for early-career professionals, whose entry-level work is being devalued [14:52].
The combination of problem-finding and execution is the sixth bottleneck. When problem-solving becomes automated, finding the problem and executing on the solution become the new moats. The market will reward those who can frame the right questions and navigate the ambiguity of implementing appropriate solutions. Jones emphasizes that while AI can generate a strategy or a plan, it can’t execute the “grinding work” of follow-through, holding people accountable, and navigating organizational politics. Success depends on identifying these new personal bottlenecks rather than optimizing for old skills that AI is turning into commodities [16:50].
Tips for techies and developers to beat the bottlenecks
Cultivate a sense for taste in addition to a skill for syntax. As coding moves from purely “grind” to at least partially “vibe” (see my vibe code vs. grind code post), your value shifts from writing code to reviewing AI-generated code. You need to refine your sense of what makes code good to differentiate yourself from the flood of AI output, which tends towards the average. [15:06]
Specialize! To beat the “good enough” standard of AI, pick a niche, and specialize in it. The window for being a generalist is closing, and extraordinary depth allows you to spot quality that AI (which once again, tends towards the average) misses. [16:16]
Pivot to problem finding. AI makes a lot of problem solving cheap, which makes problem finding the rare and precious thing. Stop defining yourself solely as a problem solver. Focus on defining the right problems to solve, framing the architecture, and determining direction. This management-level skill is harder for AI to replicate than execution. [16:50]
Value tacit knowledge and context. Tacit knowledge is the “soft” knowledge of how an organization works, and it’s almost never documented (at least directly), but lives in the heads of the people working there. Knowing why a legacy codebase exists or understanding specific stakeholder needs is a “context moat” that general AI models can’t easily infer. [17:36]
Focus on execution and follow-through. AI can generate the plan/code, but it can’t navigate the friction of deployment. The “grinding work” of implementation, such as convincing teams, fixing integration bugs, and finalizing products, is where the real value now lies. [18:47]
Build your tolerance for ambiguity. This has always been good for real life, but now it’s also good for tech work, which used to live in rigid, well-defined, unambiguous spaces… but not anymore! The tech landscape is shifting rapidly, and the ability to remain functional and productive while “metabolizing change” and dealing with uncertainty is a critical soft skill that separates leaders from people who freeze when things become ambiguous. [20:01]
Audit your personal bottlenecks: Be honest about what is actually constraining your career right now. It might not be learning a new framework (the old bottleneck). Instead, it might be your ability to integrate AI tools into your workflow or your ability to communicate complex ideas. Find those bottlenecks and come up with strategies to overcome them! [21:25]
When I was asked about what AI tools I was trying out in my recent interview on the Enlightened Fractionals podcast, one of the tools I named was Clawdbot. But I was already out-of-date enough to have used the incorrect name, because it had been changed to Moltbot. Or maybe it had been re-renamed to its current name (at least at the time of writing), OpenClaw.
Clawdbot, Moltbot, OpenClaw: What is this thing?
OpenClaw is an open-source AI assistant that went from launch to viral sensation to full-on crisis management mode in just five days. It originally went by the name I used, Clawdbot, but then rebranded twice:
From Clawdbot to Moltbot after Anthropic raised trademark concerns about the name’s similarity to Claude. Let’s face it: the name “Clawdbot” was a reference to Claude, and the misspelling was meant to head off exactly the kind of IP concern the project ended up running into. “Moltbot” is a reference to molting, in which a lobster sheds its outer shell and emerges with a new, soft exoskeleton.
From Moltbot to OpenClaw after creator Peter Steinberger simply decided he didn’t like the interim name.
Throughout the chaos, the project now known as OpenClaw has attracted over 144,000 GitHub stars, along with crypto scammers, handle-sniping bots, and a lot of cybersecurity practitioners’ attention.
What makes OpenClaw different?
Unlike traditional AI chatbots that live on dedicated websites, OpenClaw integrates directly into a number of messaging apps, and it’s pretty likely you already use at least one of them. You can interact with it using WhatsApp, Telegram, iMessage, Slack, Discord, or Signal. Using OpenClaw is like texting or messaging a friend, and it routes your messages to whichever LLM you choose while handling task automation locally.
OpenClaw runs on a computer (real or virtual) that you control and gives the LLM access, allowing it to take action on your behalf.
The promise of a real AI assistant
OpenClaw offers three standout capabilities:
Persistent memory: OpenClaw retains information from session to session instead of forgetting everything when you close the app. It learns your preferences, tracks ongoing projects, and remembers your past conversations and what you tell it.
Proactive notifications: OpenClaw notifies you about important things, such as daily briefings, deadline reminders and email triage summaries. You can wake up to a text saying, “Here are your three priorities today,” without having to ask the AI first — it does so proactively.
Real automation: Because you can grant OpenClaw read and write access to your local filesystem and give it browser access, it has been described as “an LLM with hands.” It can schedule tasks, read and re-organize your files, fill out forms, search and reply to your email, generate reports, and control smart home devices. It’s been used for everything from achieving “inbox zero” to running research threads that last for days, tracking habits, and generating automated weekly recaps of what its users shipped.
Real talk: Should you try OpenClaw or wait?
At this point, I feel the need to remind you that Clawdbot/Moltbot/OpenClaw is an open source project moving at AI speed that’s been in use by early adopters for only a week. And in that time, the project has faced the threat of cancellation via trademark lawyers, and some of its user base have fallen prey to crypto scammers while others have failed to grasp its security implications and have exposed their private information to the ’net at large.
If you need something that “just works” and has something like a one-click install, I suggest waiting. The things OpenClaw does are too cool and convenient to be ignored. If the OpenClaw people don’t make a safer, simpler version, someone else most definitely will (and get rich in the process).
Serious security considerations
Just Google “security” and “openclaw” (or “clawdbot” or “moltbot”) and you’ll see articles written by all manner of security experts who’ve flagged significant risks in OpenClaw’s architecture. It runs on your local computer and can interact with the emails, files, and credentials on that computer. If you configure it the wrong way, you can unintentionally expose private data such as API keys.
Researchers have already discovered numerous publicly accessible OpenClaw instances that have little or no authentication. OpenClaw also creates what one security analyst called a “hybrid identity” problem, where it operates as you, using your credentials after you’ve logged off. This kind of “digital twinning” was largely in the realm of science fiction until last week, and most security systems aren’t designed to handle it.
The current OpenClaw situation (which is subject to change very, very quickly)
Despite the initial hiccups (and there will be more), OpenClaw continues to grow. It’s got an active Discord community, it keeps collecting GitHub stars, and the team appears to have learned some lessons about viral success and security practices. Expect to see more posts and stories about it over the next few weeks.
7 tips for getting started with OpenClaw
If you’re feeling confident about trying it out, go to openclaw.ai and review the documentation thoroughly. Before installing anything, read through the official guides to understand the architecture, requirements, and how the message routing to LLM providers works. This will help you make informed decisions about your setup.
Complete the security checklist before deployment. This is new software in a new field where we learn new things every day. Given the documented vulnerabilities in early deployments, prioritize authentication configuration, ensure your instance isn’t publicly accessible, and never expose API keys. Consider using a dedicated machine or virtual environment rather than your primary computer. (I’m currently using a Raspberry Pi 500 for this purpose.)
Beware of Mac Mini scams. Speaking of dedicated machines, the Mac Mini, thanks to its fast Apple Silicon processors and fantastic memory bandwidth, has become the preferred AI development machine and the preferred OpenClaw platform. Enterprising con artists have found out how in-demand Mac Minis are and have been posting scam ads on places like Facebook Marketplace. I’ll write an article about my own experiences with such scammers soon.
Choose and configure your LLM backend. Decide if you want to use one of the bigger paid services like Claude, ChatGPT, or Gemini, and understand the associated costs before connecting them to OpenClaw (you might want to consider DeepSeek). You can also go with a local model, which is what I’m doing.
Start with a single messaging integration. Don’t go nuts. Pick one messaging platform to use with OpenClaw to test the waters (I suggest Discord). This limits your exposure while you learn how OpenClaw behaves and what permissions it actually needs.
Limit its destructive capability and start by giving OpenClaw only read-only automation. Start by letting OpenClaw summarize emails or provide briefings before giving it “write” access to send messages, modify files, or execute commands on your behalf. Begin slowly and safely, then gradually expand its permissions as you become more certain about your security configuration and how OpenClaw behaves.
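OpenClaw’s actual permission model is whatever its documentation says it is; the sketch below is a hypothetical illustration of the “read-only first” principle, not OpenClaw’s real API. The idea: every agent action passes through an explicit allowlist that starts out containing only read operations, and you widen it one action at a time.

```python
# Hypothetical sketch -- NOT OpenClaw's real API. It illustrates the
# "read-only first" principle from the tip above: gate every action
# behind an allowlist that starts with read-only operations.
class ActionGate:
    def __init__(self, allowed=("read_email", "summarize")):
        # Start with read-only actions; nothing destructive is allowed yet.
        self.allowed = set(allowed)

    def grant(self, action):
        """Deliberately expand permissions, one action at a time."""
        self.allowed.add(action)

    def run(self, action, fn, *args):
        """Execute fn only if its action label has been granted."""
        if action not in self.allowed:
            raise PermissionError(f"{action!r} is not allowed yet")
        return fn(*args)

gate = ActionGate()
gate.run("summarize", lambda text: text[:10], "a long email body")  # allowed
try:
    gate.run("send_email", print, "hi")  # blocked until explicitly granted
except PermissionError as e:
    print(e)
gate.grant("send_email")  # later, once you trust your configuration
```

The names (`read_email`, `send_email`) are made up for the example; the design point is that “write” capability is something you opt into after observing behavior, not something on by default.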
As a reminder of the dangers of letting an AI agent run wild on your behalf, I strongly recommend watching the Sorcerer’s Apprentice segment of the Walt Disney animated film Fantasia. In case you don’t have a Disney+ account, I’ve posted it in the YouTube embeds below:
Happy Saturday, everyone! Here on Global Nerdy, Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics, and memes I found over the past week. Share and enjoy!
Vibe coding, a term coined by Andrej Karpathy, is where developers use natural language prompts to have LLMs or LLM-based tools generate, debug, and iterate on code. Vibe coding is declarative, because you describe what you want.
Grind coding is my term for traditional programming, where you specify how a program performs its tasks using a programming language. Grind coding is imperative, because you specify how the thing you want works.
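The declarative/imperative split shows up in miniature even inside a single language. Here’s a toy Python contrast: the “grind” version spells out how to build the result step by step, while the comprehension (standing in for “describe what you want” — it’s not an LLM prompt, just the same shift from procedure to description) states the desired outcome.

```python
# "Grind" style: imperative -- spell out *how* to build the result.
def evens_squared_imperative(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:            # test each number...
            result.append(n * n)  # ...then transform and collect it
    return result

# Declarative style: state *what* you want and let the language
# figure out the mechanics. (A stand-in for the vibe-coding mindset.)
def evens_squared_declarative(numbers):
    return [n * n for n in numbers if n % 2 == 0]

print(evens_squared_imperative([1, 2, 3, 4]))   # [4, 16]
print(evens_squared_declarative([1, 2, 3, 4]))  # [4, 16]
```

Same output, different mental model: one tells the machine what to do, the other describes the result you’re after.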
I myself have been writing code for different purposes, on different parts of this spectrum (see the diagram at the top of this article for where they land on the spectrum):
The Tampa Bay Tech Events utility: This is the Jupyter Notebook I use to gather event info from online listings and build the tables that make up the event listings I post every week here on Global Nerdy. I wrote the original code myself, but I’ve called on Claude to handle the tedious stuff, including analyzing the obfuscated HTML in Meetup’s event pages to find the tags and classes containing event information.
MCP server for my current client: This is a project that started before I joined, and was written using a code generation tool. The client is a big platform connected to some big organizations; my job is to be the human programmer in the loop.
Picdump poster: Every week, I post “picdump” articles on the Global Nerdy and Accordion Guy blogs. Over the week, I save interesting or relevant images to specific folders, and the picdump poster utility builds a blog post using those images. It’s a low-effort way for me to assemble some of my most-read blog posts, and it’s more vibe-coded than not, especially since I don’t specialize in building WordPress integrations.
Copy as Markdown: Here’s an example of using vibe coding as a way to have custom software built on demand. I wanted a way to copy text from a web page and then convert that copied text into Markdown format. This one was purely vibe-coded; I simply told Gemini what I wanted, and it not only generated the code for me, but also gave me instructions on how to install it.
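For the curious, the core of an HTML-to-Markdown converter is small enough to sketch. This is not the tool Gemini generated for me — just a deliberately tiny, standard-library-only illustration that handles bold, italics, links, and paragraphs, so you can see what a vibe-coded version is doing under the hood.

```python
from html.parser import HTMLParser

class MiniMarkdown(HTMLParser):
    """A deliberately tiny HTML-to-Markdown converter.

    Illustration only -- covers just <b>/<strong>, <i>/<em>,
    <a>, and <p>. Real converters handle far more.
    """
    def __init__(self):
        super().__init__()
        self.out = []     # accumulated Markdown fragments
        self.href = None  # link target of the currently open <a>

    def handle_starttag(self, tag, attrs):
        if tag in ("b", "strong"):
            self.out.append("**")
        elif tag in ("i", "em"):
            self.out.append("*")
        elif tag == "a":
            self.href = dict(attrs).get("href", "")
            self.out.append("[")
        elif tag == "p":
            self.out.append("\n\n")

    def handle_endtag(self, tag):
        if tag in ("b", "strong"):
            self.out.append("**")
        elif tag in ("i", "em"):
            self.out.append("*")
        elif tag == "a":
            self.out.append(f"]({self.href})")

    def handle_data(self, data):
        self.out.append(data)  # plain text passes through unchanged

def html_to_markdown(html):
    parser = MiniMarkdown()
    parser.feed(html)
    return "".join(parser.out).strip()

print(html_to_markdown('Read <b>this</b> <a href="https://example.com">post</a>.'))
# Read **this** [post](https://example.com).
```

A production version would also handle headings, lists, nested tags, and escaping, which is exactly the kind of fiddly completeness an LLM is happy to grind through for you.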
Monday at 7 p.m. at Embarc Collective (Tampa): Tampa Devs presents Deploying at the Tactical Edge: Overcoming Air-Gap Constraints in Kubernetes, where certified cloud native associate Nathan Thrasher will break down the tools and workflows that make secure, repeatable Kubernetes deployments possible in isolated environments.
Tuesday at 10 a.m., online: Computer Coach presents The Job Seeker Journey Framework, a proprietary approach developed by Computer Coach and refined through 26 years of real-world job placement experience.
Instead of treating the job search like a collection of disconnected tasks, this framework maps out each phase job seekers move through, from preparation and positioning to interviewing and offer decisions. The session explains how employers think at each stage, where job seekers often get stuck, and how to move forward with intention using a clear, structured process.
Wednesday at 5:30 p.m. at Entrepreneur Collaborative Center (Tampa): Tampa Bay QA and Testing Meetup (along with Tampa Bay AI Meetup) presents Power Testmation (PT8N): AI-Enhanced Test Management.
Power Testmation (PT8N) is an Azure DevOps (ADO) Test Manager plugin built to cover core test management needs—now enhanced with an AI assistant named Alicia. Alicia helps teams create, execute, and improve test cases faster by generating tests from requirements and documents, analyzing existing coverage, updating tests when features change, and even running test cases through an MCP server.
This session will feature a walkthrough of a realistic, day-to-day workflow: how a QA engineer uses PT8N in a normal workday—from turning requirements into tests, to keeping suites clean and up to date, to executing and refining coverage.
Wednesday at 5:30 p.m. at spARK Labs (St. Pete): AI Salon St. Pete / Tampa presents a fireside chat with Nithesh Gudipuri (Associate Director of Technology, Raymond James) and John Adams (SVP of AI Architecture, VideoAmp) on the topic “How do you build — and lead — teams in the age of AI?”
They’ll talk about how AI is reshaping engineering, product, and data teams, what skills actually matter as AI moves from hype to infrastructure, how leaders are balancing humans + automation at scale, and what builders should be preparing for next as AI changes how teams work.
Wednesday at 6 p.m. at Hays (Tampa): Tampa Bay Product Group presents The Sharkitect Pitch Lab: Master Your Message. Multiply Your Results.
This experience brings real-world, battle-tested pitchology into a hands-on lab where you’ll build and refine your actual message in real time. It will be led by Tony Greene, Co-Founder of The Sharkitect Group, the CMO and strategy team trusted by Shark Tank stars like Mark Cuban and Kevin Harrington.
The lab is designed to help entrepreneurs, founders, and business leaders engineer a clear, compelling, and confidence-driven pitch using the same strategic frameworks that power high-growth brands behind the scenes.
Thursday at 6:30 p.m. at Cigar City Brewing (Tampa): Tampa Bay New-In-Tech is holding Thursday Connect & Cheers. Whether you’re new to the tech industry, transitioning into tech, or just looking to expand your network, this event is designed to help you make genuine connections in a relaxed, low-pressure setting.
Thursday at 7 p.m. at Neon Temple (Tampa): Come join Neon Temple’s own Binary_badg3r as he walks us through an OSINT journey with a ton of Flair!
Starting with just a single image, this presentation demonstrates the power and danger of open-source intelligence (OSINT) by tracking down wrestling legend Ric Flair’s location using freely available and commercial tools. Through this practical example, attendees will see firsthand how OSINT analysts piece together clues from a variety of sources to pinpoint exactly where someone is. This isn’t just about finding a celebrity; it’s about understanding how these same techniques can be used against anyone, from executives to everyday individuals who unknowingly broadcast their locations through digital breadcrumbs. Whether you’re stylin’ and profilin’ or just posting vacation photos, this session reveals the gap between perceived online privacy and actual exposure, while providing defensive strategies to protect yourself and your organization from location-based tracking.
It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrape Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.
In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
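I won’t reproduce the actual notebook here, but the pre-checking step works roughly like this sketch. The event titles, the keyword list, and the function names are all illustrative stand-ins, not the real code: the scripts pre-check anything that looks on-topic, and I do the final human pass.

```python
# Illustrative sketch only: the real notebook scrapes Meetup and
# Eventbrite; here the events and the keyword list are made up.
TECH_KEYWORDS = {"python", "kubernetes", "ai", "devops", "startup", "osint"}

def looks_techie(event_title):
    """Pre-check an event if its title contains a 'tech' keyword."""
    words = {w.strip(".,!?:").lower() for w in event_title.split()}
    return bool(words & TECH_KEYWORDS)

def build_checklist(events):
    """Return (checked, title) pairs for a human to review and uncheck."""
    return [(looks_techie(title), title) for title in events]

checklist = build_checklist([
    "Intro to Kubernetes at the Edge",
    "Sunset Yoga on the Pier",
    "OSINT Night at Neon Temple",
])
for checked, title in checklist:
    print(("[x] " if checked else "[ ] ") + title)
```

The important design choice is that the automation only proposes; the human disposes. Keyword matching is cheap and wrong often enough that the review-and-uncheck step stays in the loop.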
What goes into this list?
I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:
Programming, DevOps, systems administration, and testing
Tech project management / agile processes
Video, board, and role-playing games
Book, philosophy, and discussion clubs
Tech, business, and entrepreneur networking events
Toastmasters and other events related to improving your presentation and public speaking skills, because nerds really need to up their presentation game
Sci-fi, fantasy, and other genre fandoms
Self-improvement, especially of the sort that appeals to techies