Here’s your start-of-the-year reminder that you don’t have to accept your LLM’s answers as gospel. In fact, you can do what I do: talk back to them, Samuel L. Jackson-as-Nick Fury style, straight out of The Avengers.
The screenshot above comes from an exchange I had with Gemini earlier today. I like using LLMs as a sounding board for ideas, because as I like to say, “the one thing you can’t do, no matter how creative or clever you are, is come up with ideas you’d never think of.”
Gemini suggested a course of action that I completely disagreed with, so I decided to respond with one of my favorite lines from the first Avengers film, and it responded with “touché.” Keep thinking, and don’t completely outsource your brain to AI!
And just as a treat, here’s that scene from The Avengers:
A lot of the drudgery behind assembling the “Tampa Bay Tech Events” list I post on this blog every week is done by a Jupyter Notebook that I started a few years ago and which I tweak every couple of months. I built it to turn a manual task that once took the better part of my Saturday afternoons into a (largely) automated exercise that takes no more than half an hour.
The latest improvement was the addition of AI to help with the process of deciding whether or not to include an event in the list.
In the Notebook, there’s one script that creates a new post in Global Nerdy’s WordPress, complete with title and “boilerplate” content that appears in every edition of the Tech Events list.
Then I run the script that scrapes Meetup.com for tech events that are scheduled for a specific day. That script generates a checklist like the one pictured below. I review the list and check any event that I think belongs in the list and uncheck any event that I think doesn’t belong:
Click to view the screenshot at full size.
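I haven’t shown the scraper’s code here, but the day-filtering step at its heart is simple enough to sketch. The field names below are my guesses for illustration, not the Notebook’s actual ones:

```python
# A minimal sketch of filtering scraped events down to a single day.
# The "starts" and "name" field names are assumptions, not the
# Notebook's actual data structure.
from datetime import date, datetime

def events_for_day(events, day):
    """Keep only the events whose start time falls on the given day."""
    return [e for e in events if e["starts"].date() == day]

scraped = [
    {"name": "Tampa Bay Python Meetup",
     "starts": datetime(2025, 12, 15, 18, 30)},
    {"name": "Orlando Devs Social",
     "starts": datetime(2025, 12, 16, 19, 0)},
]

# Only the December 15 event survives the filter.
monday = events_for_day(scraped, date(2025, 12, 15))
```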
In the previous version of the Notebook, all events in the checklist were checked by default. I would uncheck any event that I thought didn’t belong in the list, such as one for real estate developers instead of software developers, as well as events that seemed more like lead generation disguised as a meetup.
The new AI-assisted version of the Notebook uses an LLM to review each event’s description and assign it a relevance score from 0 to 100, along with a rationale for that score. Any event with a score of 50 or higher is checked by default, and anything below 50 is unchecked. The Notebook displays the score in the checklist, and I can click the “disclosure triangle” beside that score to see the rationale, as well as a link to the event’s Meetup page.
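Here’s a minimal sketch of that threshold logic. The field names and this helper are illustrative, not the Notebook’s actual code; the score and rationale would come from the LLM call:

```python
# A sketch of how a scored checklist entry might be assembled.
# Field names are assumptions, not the Notebook's actual ones.
THRESHOLD = 50  # events scoring 50 or higher start out checked

def to_checklist_entry(event, score, rationale):
    """Pair an event with its score, rationale, and default checkbox state."""
    return {
        "name": event["name"],
        "url": event["url"],
        "score": score,
        "rationale": rationale,
        "checked": score >= THRESHOLD,
    }

entry = to_checklist_entry(
    {"name": "Tea Tavern Dungeons and Dragons Meetup Group",
     "url": "https://www.meetup.com/example"},
    85,
    "Tabletop gaming draws a strong tech crowd.",
)
# entry["checked"] is True, so this event starts out checked.
```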
In the screenshot below, I’ve clicked on the disclosure triangle for the Toastmasters District 48 meetup score (75) to see what the rationale for that score was:
Click to view the screenshot at full size.
For contrast, consider the screenshot below, where I’ve clicked on the disclosure triangle for Tampa LevelUp Events: Breakthrough emotional eating with Hypnotherapy. Its score is 0, and clicking on the triangle displays the rationale for that score:
Click to view the screenshot at full size.
One more example! Here’s Tea Tavern Dungeons and Dragons Meetup Group, whose score is 85, along with that score’s rationale:
Click to view the screenshot at full size.
I don’t always accept the judgement of the LLM. For example, it assigned a relevance score of 40 to Bitcoiners of Southern Florida:
Click the screenshot to view it at full size.
Those of you who know me know how I feel about cryptocurrency…
…but there are a lot of techies who are into it, so I check the less-scammy Bitcoin meetups despite their low scores (there are questionable ones that I leave unchecked). I’ll have to update the LLM’s prompt so that it scores legitimate Bitcoin events higher.
Speaking of prompts, here’s the cell in the Notebook where I define the function that calls the LLM to rate events based on their descriptions. You’ll see the prompt that gets sent to the LLM, along with the specific LLM I’m using: DeepSeek!
Click to view the screenshot at full size.
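For those who’d rather read code than a screenshot, here’s a rough sketch of a rating function along these lines. DeepSeek exposes an OpenAI-compatible chat completions endpoint; the prompt wording and the “score | rationale” reply format below are my assumptions, not the Notebook’s actual prompt:

```python
# A sketch of the rating call, not the Notebook's actual code.
# DeepSeek's chat completions endpoint is OpenAI-compatible; the
# system prompt and reply format here are assumptions.
import json
import urllib.request

DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"

SYSTEM_PROMPT = (
    "You rate Meetup events for a weekly Tampa Bay tech events list. "
    "Reply with a relevance score from 0 to 100, a '|', then a "
    "one-sentence rationale."
)

def parse_rating(reply: str) -> tuple[int, str]:
    """Split a 'score | rationale' reply into its two parts."""
    score_text, rationale = reply.split("|", 1)
    return int(score_text.strip()), rationale.strip()

def rate_event(description: str, api_key: str) -> tuple[int, str]:
    """Ask DeepSeek for a relevance score and rationale for one event."""
    payload = {
        "model": "deepseek-chat",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": description},
        ],
    }
    request = urllib.request.Request(
        DEEPSEEK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return parse_rating(body["choices"][0]["message"]["content"])
```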
So far, I’m getting good results from DeepSeek. I’m also getting good savings by using it as opposed to OpenAI or Claude. To rate a week’s worth of events, it costs me a couple of pennies with DeepSeek, as opposed to a couple of dollars with OpenAI or Claude. Since I don’t make any money from publishing the list, I’ve got to go with the least expensive option.
If you watch just one AI video before Christmas, make it lecture 9 from AI pioneer Andrew Ng’s CS230 class at Stanford, which is a brutally honest playbook for navigating a career in Artificial Intelligence.
Worth reading: AI and ML for Coders in PyTorch, Laurence Moroney’s latest book, with a foreword by Andrew Ng. I’m working my way through it right now.
The class starts with Ng sharing some of his thoughts about the AI job market before handing the reins over to guest speaker Laurence Moroney, Director of AI at Arm, who offered the students a grounded, strategic view of the shifting landscape, the commoditization of coding, and the bifurcation of the AI industry.
Here are my notes from the video. They’re a good guide, but the video is so packed with info that you really should watch it to get the most from it!
The golden age of the “product engineer”
Ng opened the session with optimism, saying that this current moment is the “best time ever” to build with AI. He cited research suggesting that every 7 months, the complexity of tasks AI can handle doubles. He also argued that the barrier to entry for building powerful software has collapsed.
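That doubling claim compounds quickly. Here’s a quick back-of-the-envelope calculation, using Ng’s 7-month figure but my own arithmetic:

```python
# If the complexity of tasks AI can handle doubles every 7 months,
# the implied growth over three and a half years:
months = 42           # 3.5 years
doublings = months / 7
growth = 2 ** doublings  # 2^6 = 64x more complex tasks
```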
Speed is the new currency! The velocity at which software can be written has changed largely due to AI coding assistants. Ng admitted that keeping up with these tools is exhausting (his “favorite tool” changes every three to six months), but it’s non-negotiable. He noted that being even “half a generation behind” on these tools results in a significant productivity drop. The modern AI developer needs to be hyper-adaptive, constantly relearning their workflow to maintain speed.
The bottleneck has shifted to what to build. As writing code becomes cheaper and faster, the bottleneck in software development shifts from implementation to specification.
Ng highlighted a rising trend in Silicon Valley: the merging of the Engineer and Product Manager (PM) roles. Traditionally, companies operated with a ratio of one PM to every 4–8 engineers. Now, Ng sees teams trending toward 1:1, or even collapsing the two roles into one. Engineers who can talk to users, empathize with their needs, and decide what to build are becoming the most valuable assets in the industry. The ability to write code is no longer enough; you must also possess the product instinct to direct that code toward solving real problems.
The company you keep: Ng’s final piece of advice focused on network effects. He argued that your rate of learning is predicted heavily by the five people you interact with most. He warned against the allure of “hot logos” and joining a “company of the moment” just for the brand name and prestige-by-association. He shared a cautionary tale of a top student who joined a “hot AI brand” only to be assigned to a backend Java payment processing team for a year. Instead, Ng advised optimizing for the team rather than the company. A smaller, less famous company with a brilliant, supportive team will often accelerate your career faster than being a cog in a prestigious machine.
Surviving the market correction
Ng handed over the stage to Moroney, who started by presenting the harsh realities of the job market. He characterized the current era (2024–2025) as “The Great Adjustment,” following the over-hiring frenzy of the post-pandemic boom.
The three pillars of success: To survive in a market where “entry-level positions feel scarce,” Moroney outlined three non-negotiable pillars for candidates:
Understanding in depth: You can’t just rely on high-level APIs. You need academic depth combined with a “finger on the pulse” of what is actually working in the industry versus what is hype.
Business focus: This is the most critical shift. The era of “coolness for coolness’ sake” is over; companies are ruthlessly focused on the bottom line. Moroney put a spin on the classic advice “Dress for the job you want, not the job you have,” suggesting that job-seekers “not let your output be for the job you have, but for the job you want.” He based this on his own experience of landing a role at Google not by preparing to answer brain teasers, but by building a stock prediction app on their cloud infrastructure before the interview.
Bias towards delivery: Ideas are cheap; execution is everything. In a world of “vibe coding” (a term he doesn’t like; he prefers something more like “prompting code into existence” or “prompt coding”), what will set you apart is the ability to actually ship reliable, production-grade software.
The trap of “vibe coding” and technical debt: Moroney addressed the phenomenon of using LLMs to generate entire applications. The approach may be powerful, but he warned that it also creates massive “technical debt.”
The four realities of modern AI work
Moroney outlined four harsh realities that define the current workplace, warning that the “coolness for coolness’ sake” era is over. These realities represent a shift in what companies now demand from engineers.
1. Business focus is non-negotiable. Moroney noted a significant cultural “pendulum swing” in Silicon Valley. For years, companies over-indexed on allowing employees to bring their “whole selves” to work, which often prioritized internal activism over business goals. That era is ending. Today, the focus is strictly on the bottom line. He warned that while supporting causes is important, in the professional sphere, “business focus has become non-negotiable.” Engineers must align their output directly with business value to survive.
2. Risk mitigation is the job. When interviewing, the number one skill to demonstrate is not just coding, but the ability to identify and manage the risks of deploying AI. Moroney described the transition from heuristic computing (traditional code) to intelligent computing (AI) as inherently risky. Companies are looking for “Trusted Advisors” who can articulate the dangers of a model (hallucinations, security flaws, or brand damage) and offer concrete strategies to mitigate them.
3. Responsibility is evolving. “Responsible AI” has moved from abstract social ideals to hardline brand protection. Moroney shared a candid behind-the-scenes look at the Google Gemini image generation controversy (where the model refused to generate images of Caucasian people due to over-tuned safety filters). He argued that responsibility is no longer just about “fairness” in a fluffy sense; it is about preventing catastrophic reputational damage. A “responsible” engineer now ensures the model doesn’t just avoid bias, but actually works as intended without embarrassing the company.
4. Learning from mistakes is constant. Because the industry is moving so fast, mistakes are inevitable. Moroney emphasized that the ability to “learn from mistakes” and, crucially, to “give grace” to colleagues when they fail is a requirement. In an environment where even the biggest tech giants stumble publicly (as seen with the Gemini launch), the ability to iterate quickly after a failure is more valuable than trying to be perfect on the first try.
Technical debt
Moroney compared technical debt to a mortgage: debt isn’t inherently bad, but you must be able to service it. He defined the new role of the senior engineer as a “trusted advisor.” If a VP prompts an app into existence over a weekend, it’s the senior engineer’s job to understand the security risks, maintainability issues, and hidden bugs within that spaghetti code. You must be the one who understands the implications of the generated code, not just the one who generated it.
The dot-com parallel: Moroney drew a sharp parallel between the current AI frenzy and the dot-com bubble of the late 1990s. While he acknowledged that we are undoubtedly in a financial bubble, with venture capital pouring billions into startups with zero revenue, he emphasized that this doesn’t mean the technology itself is a sham.
Just as the internet fundamentally changed the world despite the 2000 crash wiping out “tourist” companies, AI is a genuine technological shift that is here to stay. He warned students to distinguish between the valuation bubble (which will burst) and the utility curve (which will keep rising), advising them to ignore the stock prices and focus entirely on the tangible value the technology provides.
The bursting of this bubble, which Moroney terms “The Great Adjustment,” marks the end of the “growth at all costs” era. He argues that the days of raising millions on a “cool demo” or “vibes” are over. The market is violently correcting toward unit economics, meaning AI companies must now prove they can make more money than they burn on compute costs. For engineers, this signals a critical shift in career strategy: job security no longer comes from working on the flashiest new model, but from building unglamorous, profitable applications that survive the coming purge of unprofitable startups.
The bifurcation of the industry: Moroney argued that the industry is splitting into two distinct paths:
“Big AI”: The AI made by massive, centralized players such as OpenAI, Google, and Anthropic, who are chasing after AGI. This relies on ever-larger models hosted in the cloud.
“Small AI”: AI systems based on open-weight models (he prefers “open-weight” to “open source” when describing AI models) that are self-hosted or run on-device. Moroney also calls this “self-hosted AI.”
Moroney urged the class to diversify their skills. Don’t just learn how to call an API; learn how to optimize a 7-billion parameter model to run on a laptop CPU. That is where the uncrowded opportunity lies.
Moroney also broke agentic workflows down into four components:
Intent: Understanding exactly what the user wants.
Planning: Breaking that intent down into steps.
Tools: Giving the model access to specific capabilities (search, code execution).
Reflection: Checking if the result met the intent.
He shared a demo of a movie-making tool where simply adding this agentic loop transformed a hallucinated, glitchy video into a coherent scene with emotional depth.
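Here’s a toy sketch of those four components wired into a loop. The plan, tools, and reflect stand-ins are made up for illustration and don’t correspond to any real framework:

```python
# A toy agentic loop: plan from the intent, run steps with tools,
# and retry until reflection says the result satisfies the intent.
def agentic_loop(intent, plan, tools, reflect, max_rounds=3):
    result = None
    for _ in range(max_rounds):
        steps = plan(intent)                                  # Planning
        result = [tools[s["tool"]](s["arg"]) for s in steps]  # Tools
        if reflect(intent, result):                           # Reflection
            break
    return result

# Stand-in implementations, just to show the shape of the loop.
plan = lambda intent: [{"tool": "search", "arg": intent}]
tools = {"search": lambda q: f"results for {q!r}"}
reflect = lambda intent, result: all(intent in r for r in result)

output = agentic_loop("tampa tech events", plan, tools, reflect)
```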
Conclusion: Work hard
I’ll conclude this set of notes with what Ng said at the end of his introduction to the lecture, advice that he described as “politically incorrect”: work hard.
While he acknowledged that not everyone is in a situation where they can do so, he pointed out that among his most successful PhD students, the common denominator was an incredible work ethic: nights, weekends, and the “2 AM hyperparameter tuning.”
In a world drowning in hype, Ng’s and Moroney’s “brutally honest” playbook is actually quite simple:
Use the best tools to move fast.
Understand the business problem you’re trying to solve, and understand it deeply.
Ignore the noise of social media and the trends being hyped there. Build things that actually work.
And finally, to quote Ng: “Between watching some dumb TV show versus finding your agentic coder on a weekend to try something… I’m going to choose the latter almost every time.”
When I saw a screenshot of GPT 5.2’s answer to the question “How many ‘r’s in garlic?” I thought it was a joke…
…until I tried it out for myself, and it turned out to be true!
At the time of writing (2:24 p.m. UTC-4, Friday, December 12, 2025), if you ask ChatGPT “How many ‘r’s in garlic?” using GPT 5.2 in “auto” mode, you’ll get this response:
There are 0 “r”s in “garlic”.
(The screenshot above is my own, taken from my computer.)
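Counting letters is exactly the kind of task that trips up LLMs, which see multi-character tokens rather than individual letters. Three lines of Python settle the question directly:

```python
# Count the letter "r" in "garlic" the old-fashioned, deterministic way.
word = "garlic"
r_count = word.count("r")  # g-a-R-l-i-c → 1
print(f'There is {r_count} "r" in "{word}".')
```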
When I asked it the infamous “strawberry” question, though, it responded with the correct answer. I suspect that’s the result of fine-tuning aimed specifically at that question.
Long story short: If you think prices are bad today, wait a few months. Buy RAM now if you can, because Consumer RAM Winter is coming.
In case you need some context for the meme above, here’s the scene it came from — probably the best scene in Iron Man 2, apart from “Sir, I’m gonna have to ask you to exit the donut.”
Lately, a lot of friends have been telling me that they were listening to an interview with Cory Doctorow about his latest book, Enshittification, and heard him attribute this quip to me:
“When life gives you SARS, you make *sarsaparilla*.”
The YouTube short above tells the story behind the quote (which also appears in this old blog post of mine). It also includes a tip on using AI to find specific moments or quotes in videos, plus a “This DevRel for hire” pitch, in case you’d like to hire an awesome developer advocate.