Pictured above: a screenshot from a conversation I had with Claude last night. I was using it to craft an email in response to an interview that could be better described as an ambush.
I use LLMs as a double-check for when I’m trying to remain professional when greatly annoyed and for when I want to spend the minimum amount of time on something or someone. The party I was communicating with met both criteria.
I’m pretty sure that the way I sass back at LLMs doesn’t affect the quality of their revision when I harshly tell them that I disagree with their original answer. But as a catharsis, creative outlet, and excuse for an amusing screenshot and blog post, it’s oh-so-good.
From April 13th through 16th — and a couple of days before, because it’s in Austin — I’m going to be at the Arc of AI conference! Over the next little while, I’m going to be posting articles about Arc of AI, in case you’re wondering what the conference is about and whether you should go.
In this article, I’ll talk about my favorite title from all the talks on the Arc of AI agenda.
The talk: We’re All Using AI, But We’re Not Enjoying It
When your talk happens in the last time slot at the end of a three-day conference (four days, if you’re also doing one of the workshops), you need to put in some extra effort to get the attendees to show up instead of disappearing for the local sights (Arc of AI’s in Austin) or making a beeline for the airport.
Brent Laster, President and Founder of Tech Skills Transformations, is giving a number of talks — and a workshop! — at Arc of AI. His closing talk, in one of those last speaking slots on Thursday at 4:00 p.m., has what I think is the most interesting title on the agenda:
We’re All Using AI, But We’re Not Enjoying It
Here’s the abstract:
We’re All Using AI, But We’re Not Enjoying It takes an honest look at a growing gap in the workplace: AI adoption is skyrocketing, yet frustration, confusion, and uneven results are just as common. This talk explores why AI so often feels harder than it should—poorly integrated tools, unclear workflows, unrealistic expectations, cognitive overload, and the pressure to “keep up.” Looking at patterns seen across teams learning to use AI effectively, we’ll break down the practical barriers that make everyday AI work feel tedious instead of empowering. More importantly, we’ll outline a set of achievable shifts—better task design, lighter mental models, context-first prompting, workflow pairing, and small but meaningful guardrails—that can restore a sense of control and clarity.
I need to figure out how I can attend both Brent’s talk and my former Tucows coworker Leonid Igolnik’s talk (which he’s giving with Baruch Sadogursky), Back to the Future of Software: How to Survive the AI Apocalypse with Tests, Prompts, and Specs…
Great Scott! The robots are coming for your job—and this time, they brought unit tests. Join Doc and Marty from the Software Future (Baruch and Leonid) as they race back in time to help you fight the machines using only your domain expertise, a well-structured prompt, and a pinch of Gherkin. This keynote is your survival guide for the AI age: how to close the intent-to-prompt chasm before it swallows your roadmap, how to weaponize the Intent Integrity Chain to steer AI output safely, and why the Art of the Possible is your most powerful resistance tool. Expect:
• Bad puns
• Good tests
• Wild demos
The machines may be fast. But with structure, constraint, and a little time travel, you’ll still be the one writing the future.
Decisions, decisions…
Want to find out more about and register for Arc of AI?
Once again, Arc of AI will take place from Monday, April 13 through Thursday, April 16, with the workshop day taking place on Monday, and the main conference taking place on Tuesday, Wednesday, and Thursday.
Feeling nostalgic for the 2000s? Need a little amusement? I’ve got the agentically-coded thing you need: Eternal Grind!
Experience it now! Point your browser at accordionguy.github.io/eternal-grind/, then sit back and enjoy the adventure as the game plays itself for you. No effort required, and no time lost to the grind that other online role-playing games bring.
A screenshot of Eternal Grind, later on in the game. Click to view at full size.
Eternal Grind is my version of Progress Quest, a parody of the popular 2000s game (and devourer of nerd lives) EverQuest. Unlike EverQuest, which was a multiplayer, Dungeons and Dragons-inspired role-playing game with a cluttered dashboard that required your full attention…
A screenshot from EverQuest. Click to view at full size.
…Progress Quest was a zero-player Dungeons and Dragons-inspired role-playing game that required no attention at all. It did keep one key aspect of EverQuest: the cluttered dashboard. Here’s a screenshot of the game in all its Windows XP glory:
Progress Quest! Click to go to the official Progress Quest site.
Eternal Grind is my homage to Progress Quest. Like Progress Quest, it aims to be the ultimate “zero-player” RPG experience, providing all the dopamine of a legendary quest, but with absolutely none of the effort.
In the spirit of today’s best workflows, Eternal Grind automates the entire heroic journey, from slaying fantastical creatures like Literal Metaphors to hoarding fabulous artifacts such as the Scissors of Regret.
The game automatically creates characters like Kevin from Accounting (a Low-Carb Orc and Spreadsheet Warrior by trade), after which your only job is to sit back and watch the progress bars fill. It’s a witty, Windows XP-styled commentary on the nature of the “grind,” where the numbers always go up, the loot is perpetually absurd, and your lack of agency is the greatest feature of all.
There are two notable differences between Eternal Grind and Progress Quest, the game to which it pays homage:
While Progress Quest was a Windows-only desktop game, Eternal Grind is a single-page web game that runs on any device with a browser. Feel free to play it on your internet fridge!
Progress Quest was written the old-school way: using a programming language — namely, Delphi (Borland’s version of Pascal). Eternal Grind was written the new-school way: agentically, using Zencoder’s Zenflow AI coding tool.
That second point is an important one. Progress Quest was the product of traditional coding: the manual, instruction-based process where the developer acts as both architect and builder, meticulously and painstakingly writing instructions that specify how the program should do its work. Success depends on that developer’s ability to translate complex ideas into perfect syntax.
Eternal Grind is a different beast, since it’s the result of agentic coding, where the approach is intent instead of instruction. Instead of dictating the “how,” I provided a high-level specification — the “what” — to Zenflow, which can autonomously plan, write, and even self-correct the code.
(I’ll include the aforementioned specification at the end of this article.)
When using Zenflow to build Eternal Grind, I was no longer the contractor laying every brick. I was now the supervisor, providing the blueprints and overseeing an AI crew that did the bricklaying.
I plan to keep tweaking Eternal Grind using Zenflow. Be sure to visit its page often!
Eternal Grind started with a specification that I wrote into a file named spec.md. This file served as the definitive “source of truth” describing the kind of application I wanted created. While traditional specs are often treated as a “nice-to-have” for human developers, AI agents need such a spec to act as a “North Star” and to keep them from developing the wrong thing.
By clearly defining the application’s logic, layout, and data in a structured format, I provided Zenflow with the basic context for building Eternal Grind. A spec turns a vague, hand-wavy request into a structured mission, ensuring that the generated code not only works, but also produces the application I expected, working the way I expected.
Here’s the complete specification file I initially wrote:
# Functional Specification: Eternal Grind (ZPRPG)
## 1. Project Overview
"Eternal Grind" is a "Zero-Player RPG" (ZPRPG) inspired by the classic parody *Progress Quest*. The game automates all traditional RPG elements—questing, combat, looting, and leveling. The user's role is purely observational.
---
## 2. UI Layout (Three-Column Dashboard)
The application shall use a fixed-height, full-width dashboard layout using Flexbox or Grid.
### A. Character Sheet (Left Column - 25% Width)
* **Identity:** Displays Character Name (from `NAMES`), Level, Race, and Class.
* **Stats Table:** A vertical list of numerical values for the 10 core stats (e.g., Strength, Existential Dread).
* **Equipment:** A list of 6-10 equipment slots showing absurd gear.
* **Spells/Abilities:** A scrolling list of learned "skills" that grows upon leveling up.
### B. The Engine of Progress (Center Column - 50% Width)
* **Location Header:** Displays the current location from the `LOCATIONS` list.
* **Primary Task Bar:** A large progress bar indicating the current action (e.g., "Contemplating the void").
* **Plot Bar:** A slower-moving bar tracking progress toward the next "Act."
* **Experience Bar:** A bar tracking progress toward the next Level.
* **Portrait:** A central area for a static character icon or simple CSS animation.
### C. Data Feed (Right Column - 25% Width)
* **Inventory (Top Half):** A scrolling list of items collected. Maximum capacity: 15 items.
* **Quest Log (Bottom Half):** A vertical scrolling log of events. It must automatically scroll to the bottom as new lines are appended.
---
## 3. Core Mechanics & Logic
### 3.1 Initialization
When the application starts:
1. **Name Selection:** A name is chosen randomly from the `NAMES` list and remains permanent.
2. **Character Build:** A `RACE` and `CLASS` are randomly assigned.
3. **Starting Stats:** Each stat in the `STATS` list is assigned a random base value between 3 and 18.
### 3.2 The Game Loop
The application runs on a continuous timed loop:
1. **Questing:** The "Task Bar" fills over a period of 3–8 seconds.
2. **Completion:** Once the bar hits 100%:
* A random **Monster** is "defeated."
* A random **Item** (Adjective + Noun) is added to the Inventory.
* A line is added to the **Quest Log** (e.g., "Executed a Low-Level Bugbear. Found: Rusty Sock of Mystery").
* The **Experience Bar** increments.
3. **Market Mode:** When the Inventory reaches 15 items:
* The current task changes to "Heading to market to sell junk."
* After a short delay, the Inventory is cleared and the character returns to questing.
4. **Leveling Up:** When the Experience Bar reaches 100%:
* The Character Level increments.
* A random **Stat** increases by 1.
* A new **Spell** is randomly selected and added to the spell list.
* The Experience Bar resets.
---
## 4. Technical Requirements
* **State:** The application must maintain a state object containing the character's profile, stats, inventory list, and log history.
* **Styling:** A "Retro Win95" or "Classic MMO" aesthetic with high-contrast borders.
* **Performance:** The log should prune entries older than 100 lines to maintain performance.
---
## 5. Data Appendix
### Character Names
* Kevin from Accounting, Sir Tap-A-Lot, The Great Barnaby, User_772, Mistake #4, Sir Not-Appearing-In-This-Game, A Literal Bag of Flour, Lord Helvetica, Chadwick the Unready, Karen of the Suburbs, Glitchy McGlitchface, The Placeholder, Grommet the Slightly Agitated, Barb the Librarian, Sir Sells-Everything, Kyle the Monster Energy Enthusiast, Grandmaster Procrastinator, The Unpaid Intern, Sir Buffering..., Standard Hero 01.
### Races
* Sentient Toaster, Depressed Elf, Low-Carb Orc, Middle-Management Dwarf, Glitch in the Matrix, Half-Empty Human, Sentimental Slime, Vague Shadow, Procrastinating Pixie, Bureaucratic Beholder, Existential Ghost.
### Classes
* Spreadsheet Warrior, Chronic Procrastinator, Underpaid Mage, Professional Mourner, Existentialist Rogue, Lunch Knight, Intermittent Faster, Coffee Warlock, Passive-Aggressive Paladin, Technical Support Druid, Tax Accountant.
### Tasks
* Debating a fence post, Polishing a rusty nail, Contemplating the void, Waiting for a sign, Filing a 1040-EZ, Staring into the middle distance, Organizing a sock drawer, Explaining the internet to a rock, Searching for a lost remote, Counting ceiling tiles, Simulating a personality, Buffing out a scratch in reality.
### Locations
* The Forest of Mild Inconvenience, The Cave of Echoing Sighs, Downtown Boredom, The Desert of Dry Humor, Mount Mediocrity, The Swamps of 'I'll Do It Tomorrow', The Suburbs of Despair.
### Item Adjectives
* Dull, Polished, Forbidden, Rusty, Lamentable, Insignificant, Glowing, Slightly Damp, Overpriced, Mediocre, Legendary-ish.
### Item Nouns
* Scissors of Regret, Pebble of Mediocrity, Scone of Power, Lint of Destiny, Paperclip of Hope, Broken Twig, Expired Coupon, Sock of Mystery, Unfinished Novel, Jar of Pickled Thoughts.
### Monsters
* A Literal Metaphor, The Concept of Ennui, A Low-Level Bugbear, An Imaginary Friend, A Confused Salesman, A Dust Bunny of Doom, The Ghost of a Dead Pixel, A Sentient Terms of Service Agreement.
### Spells
* Aggressive Sighing, Metaphysical Poke, Summon Minor Annoyance, Greater Procrastination, Flash of Inadequacy, Power Word: 'Whatever', Cloud of Confusion, Internal Monologue.
### Stats
* Strength, Constitution, Dexterity, Intelligence, Wisdom, Charisma, Patience, Luck, Caffeine Level, Existential Dread.
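To make the spec concrete, here’s roughly how the game loop from section 3.2 might look in plain JavaScript. This is my own simplified sketch — timers are replaced by discrete ticks, and the names (`MONSTERS`, `completeTask`, and so on) are illustrative; the code Zenflow actually generated almost certainly differs.

```javascript
// A minimal sketch of the section 3.2 game loop, with the 3–8 second
// task timer replaced by a simple "one call = one completed task" model.

const MONSTERS = ["A Literal Metaphor", "A Low-Level Bugbear"];
const ADJECTIVES = ["Rusty", "Glowing"];
const NOUNS = ["Sock of Mystery", "Pebble of Mediocrity"];

const state = {
  level: 1,
  xp: 0,          // 0–100; resets on level-up
  inventory: [],  // cleared when it reaches 15 items ("market mode")
  log: [],        // pruned to the most recent 100 lines
};

function pick(list) {
  return list[Math.floor(Math.random() * list.length)];
}

// One completed task: defeat a monster, loot an item, log it, gain XP.
function completeTask() {
  const monster = pick(MONSTERS);
  const item = `${pick(ADJECTIVES)} ${pick(NOUNS)}`;
  state.inventory.push(item);
  state.log.push(`Defeated ${monster}. Found: ${item}`);
  if (state.log.length > 100) state.log.shift(); // prune old log entries

  state.xp += 20; // an arbitrary per-task XP gain for this sketch
  if (state.xp >= 100) {          // level up and reset the XP bar
    state.level += 1;
    state.xp = 0;
  }
  if (state.inventory.length >= 15) { // "Heading to market to sell junk"
    state.inventory.length = 0;
  }
}

// Simulate 30 completed tasks.
for (let i = 0; i < 30; i++) completeTask();
console.log(`Level ${state.level}, ${state.inventory.length} items held`);
```

In the real game, `completeTask` would fire when the task bar fills, with `setInterval` driving the bar; the state-update logic is the same either way.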
Zenflow generated the application, and I then had it use a different agent to review its own code.
I ran the application, saw things I wanted changed, and then specified those changes:
One of my change requests in Zenflow. Click to view at full size.
Zenflow made the changes, then I had the review agent review those changes. This process of refinement continued for a couple more steps, and the result is the game located at accordionguy.github.io/eternal-grind/.
As I mentioned before, Eternal Grind is a work in progress. I’ll continue adding tweaks and improvements using Zenflow. Watch this space!
From April 13th through 16th — and a couple of days before, because it’s in Austin — I’m going to be at the Arc of AI conference! Over the next little while, I’m going to be posting articles about Arc of AI, in case you’re wondering what the conference is about and whether you should go.
In this article, I’ll talk about the workshop day and one of the workshops in particular.
Monday, April 13: The workshop day
Click to see the workshops at full size.
Prior to the main conference days (Tuesday, April 14 through Thursday, April 16), Arc of AI will hold its Workshop Day on Monday, April 13, where they’ll have six AI workshops:
Fundamentals of Software Engineering In the age of AI (Dan Vega and Nathaniel Schutta)
Building a Production-Grade RAG Pipeline (Wesley Reisz)
AI-Driven API Design (Mike Amundsen)
Creating AI Assisted Applications Using LangChain4j (Venkat Subramaniam)
Developing AI Applications with Agents, RAG, and MCP using Python (Brent Laster)
Tech Leadership in the Time of AI (Brian Sletten)
The Fundamentals of Software Engineering in the Age of AI workshop
One of the workshops I’m interested in is Nathaniel Schutta’s and Dan Vega’s Fundamentals of Software Engineering in the Age of AI, which is based on their recently published (November 2025) O’Reilly book, Fundamentals of Software Engineering, but with the application of AI.
Here’s an excerpt from their workshop’s abstract:
This intensive workshop bridges the critical gap between what early-career developers learn in formal education and what they need to thrive in professional environments where human expertise and artificial intelligence increasingly collaborate. Based on our book “Fundamentals of Software Engineering,” we guide participants through a comprehensive journey from programmer to well-rounded software engineer equipped to leverage AI tools effectively while maintaining engineering fundamentals.
Participants will develop both technical capabilities and professional skills that remain relevant regardless of changing languages, frameworks, and AI capabilities. Through a balanced mix of conceptual teaching, collaborative discussions, and hands-on exercises with both traditional and AI-assisted approaches, attendees will work on realistic scenarios that reinforce practical application of these fundamental principles while developing discernment about when and how to integrate AI tools into their workflow.
Learnings:
Understanding the programmer to engineer transition and mindset shift
Developing advanced code reading techniques and comprehension strategies
Crafting maintainable, readable code that communicates intent
Applying software modeling concepts to visualize and plan complex systems
Effective techniques for working with legacy codebases and existing systems
Benefits:
Students will understand the concepts and how to apply them right now, cutting through the hype surrounding AI. With practical tips and guidance, they can jumpstart their use of AI across the software development lifecycle.
Who should attend:
Primarily developers and architects, but ultimately anyone who’s struggling to understand how to apply AI to their world today while avoiding the pitfalls and rabbit holes.
I’m intrigued by this workshop, as it’s about the application of AI tools to the way software is built, which is pretty new turf for all of us. When I learned software development, there were already plenty of lessons from decades of developers’ experiences, and in my career, I and the rest of the industry picked up a couple decades’ more tips and tricks. But all that learning is from the “before times.” Right now, we’re not even five years into the post-ChatGPT era, and we’re only beginning to figure out how to write applications in the era of vibe coding (and remember, Andrej Karpathy coined the term barely over a year ago).
Since the workshop is based on the book, this video might give you an idea of what it’ll be like:
Want to find out more about and register for Arc of AI?
Once again, Arc of AI will take place from Monday, April 13 through Thursday, April 16, with the workshop day taking place on Monday, and the main conference taking place on Tuesday, Wednesday, and Thursday.
Happy Saturday, everyone! Here on Global Nerdy, Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics, and memes I found over the past week. Share and enjoy!
Last week, Anitra and I attended both the Dev/Nexus conference and its companion conference, Advantage, an AI conference for CTOs, CIOs, VPs of Engineering, and other technical lead-types, which took place the day before Dev/Nexus. My thanks to Pratik Patel for the guest passes to both conferences!
I took copious notes and photos of all the Advantage talks and will be posting summaries here. This set of notes is from the fourth talk, Shift to Agentic Software Engineering, presented by Dave Parry.
Here’s Dave’s bio:
David Parry is an accomplished Director of Architecture with over 20 years of experience in Software Development. It all began in 1996 when he discovered the fascinating world of programming, with a particular focus on Java applets. Throughout his illustrious career, David Parry has been involved in various noteworthy projects. He has successfully built and implemented content management systems for a wide range of clients, including the esteemed Johnnie Walker and its renowned keepwalking.com. Additionally, as a consultant at a Big 4 firm, David played a pivotal role in solving critical issues for numerous customers, demonstrating his expertise in handling complex and high-traffic web platforms. Never one to shy away from innovation, David Parry has expanded his skills to work on cutting-edge technologies such as mobile and embedded Android TV systems. Leveraging his expertise, he has delivered top-notch streaming services to customers, ensuring they have an exceptional viewing experience. Currently, David holds the position of Developer Advocate and Consultant overseeing strategic planning and execution of architectural designs for customers. With a deep understanding of software development principles and extensive experience in Java programming, he excels at providing valuable insights and guidance to his team. Having witnessed the evolution of Java development from its early days to its current state, David Parry’s wealth of experience and strategic perspective, combined with his consulting work at a Big 4 firm, make him an invaluable asset in any project or organization he is a part of.
And here’s the abstract of his talk:
AI is redefining how engineering organizations operate, shifting from traditional development to agentic development, where intelligent, context-aware agents partner with teams to drive measurable business outcomes. This presentation gives leaders a clear framework for understanding how agentic development improves cycle time, reduces operational risk, enhances quality, and scales organizational capacity without adding headcount. We will examine how to move beyond pilots, achieve meaningful adoption, embed governance and security controls, and connect engineering effort directly to enterprise KPIs. Leaders will leave with a strategic roadmap for guiding their organizations through this transformation with clarity, confidence, and control.
My notes from Dave’s talk are below.
The shift from AI-assisted to agentic is real, and most organizations aren’t ready
Dave opened by drawing a line between two distinct eras of AI in software development. The first era, AI-assisted coding (the GitHub Copilot model), still has a human in the loop at every step. A developer reviews suggestions, accepts or rejects them, and retains full decision-making authority. This is the model most development teams have actually adopted, and it’s valuable. The second era, agentic software engineering, is something categorically different: autonomous systems that execute multi-step workflows without continuous human supervision.
Dave was candid that most organizations are still figuring out how to use AI-assisted tools well, even as the industry conversation has moved on to agents. The gap between where the hype is and where most teams actually are is significant, and leaders who try to leapfrog directly to full autonomy without establishing the right foundations tend to end up with agents that are expensive, unpredictable, and politically toxic inside the engineering organization. The smarter path, in Dave’s experience, is to build the scaffolding — governance, measurement, structured experimentation — before letting agents loose on anything consequential.
Governance can’t be bolted on after the fact
The governance message in Dave’s talk was clear: security and access controls must be architected into agentic systems from the beginning, not added as an afterthought once the agent is already running. He illustrated this with a client story about a company whose repositories were so strictly siloed that individual developers weren’t even allowed to know other repos existed, let alone access them. An agent given broad permissions in that environment would immediately violate carefully constructed security boundaries that humans had been respecting for years, simply because nobody thought to encode those constraints into the agent’s operating parameters.
The practical implication is that every constraint your human engineers operate under (such as access controls, data isolation, permission scoping) needs to be explicitly defined for any agent working in the same environment. Agents don’t have professional judgment or social awareness; they will access whatever they’re technically permitted to access. If you onboard a new human developer, you scope their access carefully before they write a single line of code. Agents require the same rigor. Dave’s recommendation was to look for frameworks that make these governance constraints first-class concepts rather than optional configurations, and to be deeply skeptical of any agentic solution that treats security as something you layer on later.
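One way to picture “governance constraints as first-class concepts” is a policy check that runs before every tool call an agent attempts. This is my own hypothetical sketch, not anything from Dave’s talk or a real framework — the agent name, repo names, and `guard` helper are all invented for illustration.

```javascript
// Hypothetical: every tool call an agent attempts is checked against an
// explicit, declarative policy before it runs, mirroring how a new human
// developer's access is scoped before their first commit.

const agentPolicy = {
  "pr-risk-agent": {
    repos: ["payments-service"],          // can only see its own repo
    tools: ["read_diff", "post_comment"], // cannot merge or push
  },
};

function guard(agentId, tool, repo) {
  const policy = agentPolicy[agentId];
  if (!policy) throw new Error(`No policy defined for ${agentId}`);
  if (!policy.repos.includes(repo))
    throw new Error(`${agentId} may not access repo ${repo}`);
  if (!policy.tools.includes(tool))
    throw new Error(`${agentId} may not use tool ${tool}`);
  return true; // constraint satisfied; the tool call may proceed
}

console.log(guard("pr-risk-agent", "read_diff", "payments-service")); // true
```

The point of the sketch is that the default is deny: anything not explicitly granted in `agentPolicy` fails loudly, rather than silently succeeding because nobody thought to forbid it.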
Enterprise-readiness also extends to the technology choices themselves. Dave pushed back against agentic frameworks built in languages or runtimes that don’t fit naturally into enterprise operational environments. A security team asked to approve an agent that spins up an npx process that re-downloads dependencies on every run is going to say no…and they should! The same agent behavior built on Spring Boot, running in a container with Prometheus observability already wired in, is a fundamentally different conversation.
One of Dave’s most pragmatic points was that the business case for any given agent needs to be proven, not assumed. The pressure from above to “do AI” is real, but implementing an agent that costs more in compute and maintenance than it would cost a developer to do the same task manually is not a win — it’s a liability that will eventually get noticed and used to discredit the entire program. Leaders who can’t quantify what their agents are actually delivering are in a precarious position when budget scrutiny arrives.
His recommendation was to tie every agent deployment to concrete, measurable KPIs from the start. For a PR risk agent, the relevant metrics might include change failure rate, time to production, and whether bug rates are actually going down or inadvertently going up as junior developers blindly accept AI suggestions. The five-star anecdote was a useful cautionary note: some teams have discovered that their agents were actively introducing more defects than they prevented, precisely because they hadn’t built in the measurement infrastructure to detect it early.
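The kind of measurement Dave described doesn’t have to be elaborate. Here’s a hypothetical sketch of computing one of the metrics he named — change failure rate — from deployment records, split by whether an agent was involved; the record shape and numbers are invented for illustration.

```javascript
// Hypothetical deployment records: did an agent assist, and did the
// deployment cause a production incident?
const deployments = [
  { id: 1, agentAssisted: false, causedIncident: false },
  { id: 2, agentAssisted: false, causedIncident: true },
  { id: 3, agentAssisted: true,  causedIncident: false },
  { id: 4, agentAssisted: true,  causedIncident: false },
];

// Change failure rate: fraction of deployments that caused an incident.
function changeFailureRate(records) {
  if (records.length === 0) return 0;
  const failures = records.filter((d) => d.causedIncident).length;
  return failures / records.length;
}

const baseline = changeFailureRate(deployments.filter((d) => !d.agentAssisted));
const withAgent = changeFailureRate(deployments.filter((d) => d.agentAssisted));
console.log(`baseline: ${baseline}, with agent: ${withAgent}`);
```

Comparing the two numbers over time is what lets you catch the failure mode Dave warned about: an agent that is quietly *raising* the failure rate while everyone assumes it’s helping.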
Dave also pushed back against the proof-of-concept mentality that treats agent work as inherently experimental. The POC era, in his view, is over. Organizations that frame every agent initiative as “let’s see if this works” create the conditions for naysayers to kill it at the first sign of friction. His preferred framing is to pick a small, low-risk pilot, commit to shipping it to production, measure it rigorously, and use that concrete success to build momentum for the next one. Owning the conversation with data is the only reliable way to keep agentic programs alive long enough to deliver real compounding value.
Bring your existing developers into the agentic transition; don’t route around them
A consistent thread throughout Dave’s talk was that agentic AI is not a replacement for experienced engineers, but an amplifier of their knowledge. That amplification only works if those engineers are inside the tent. Developers who feel threatened by agents will find reasons for them to fail, and frankly, they’ll often be right, because agents built without deep domain knowledge embedded in their prompts and tools tend to produce plausible-looking but subtly wrong outputs. The engineers who know where the bodies are buried in your codebase are exactly the people who should be shaping how your agents operate.
Dave’s specific recommendation was that when outside expertise comes in to help stand up an agentic program, that expertise should be focused on upskilling the existing team rather than doing the work for them. An external consultant who delivers a finished agent and walks away leaves the organization with something it doesn’t fully understand and can’t maintain or evolve. An expert who works alongside the existing team, transfers knowledge, and helps them build the verification and governance capabilities they need to operate agents independently is creating something durable.
Dave made the point that custom MCP servers are one of the highest-leverage things an organization’s own developers can build, because that’s where domain-specific knowledge gets embedded in a form that agents can reliably use. A generic MCP that connects to a database and lets the LLM figure out the schema from scratch on every query is both expensive in tokens and fragile in output. A purpose-built MCP that encodes exactly what that database contains, how to query it correctly, and what the results mean — written by developers who actually know the system — is the kind of deterministic grounding that makes agentic systems genuinely trustworthy in production.
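To illustrate the generic-versus-purpose-built distinction, here’s a hypothetical sketch of the tool side of that idea. Instead of handing an LLM a raw SQL connection and letting it rediscover the schema on every query, the tool exposes one narrow, named operation whose query and result shape are fixed by developers who know the system. The table, tool name, and in-memory “database” below are invented, and this is not actual MCP SDK code — just the shape of the idea.

```javascript
// Hypothetical: a tiny in-memory stand-in for a production database.
const ordersTable = [
  { orderId: "A-100", customerId: "c1", totalCents: 2599, status: "shipped" },
  { orderId: "A-101", customerId: "c1", totalCents: 999,  status: "pending" },
  { orderId: "A-102", customerId: "c2", totalCents: 4500, status: "shipped" },
];

// The domain knowledge lives in the tool, not in the prompt: which field
// is the key, which statuses count as "open," and what units totals use.
const tools = {
  open_orders_for_customer(customerId) {
    return ordersTable
      .filter((o) => o.customerId === customerId && o.status === "pending")
      .map((o) => ({ orderId: o.orderId, totalDollars: o.totalCents / 100 }));
  },
};

console.log(tools.open_orders_for_customer("c1")); // one open order: A-101, $9.99
```

An agent calling `open_orders_for_customer` gets a deterministic, well-shaped answer every time, which is exactly the grounding Dave argued makes agentic systems trustworthy in production.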
Thursday at Entrepreneur Collaborative Center at 6:00 p.m. (Tampa): One of the most powerful uses of AI is the automation of regular tasks: reading data from a database and acting on it, collecting and gathering data, running reports, and simplifying daily tasks. Automations can range from simpler tasks with tools such as Manus or ChatGPT, to higher-level tasks with Make.com or Airtable, to more complex tasks with N8N or Mind Studio. Different tools require different levels of skill and cost.
At this session, you’ll look at some sample automations and discuss use cases.
Wednesday at Tampa Hackerspace at 6:00 p.m. (Tampa): This is a hands-on, online class for anyone who is brand new to 3D printing and wants to get up and running with choosing models and getting them ready to print. We recommend that you join this class from a computer you will be using to prepare models for printing, especially if you are a member of Tampa Hackerspace and plan to use the machines at our site. This class does involve installing software on your computer and following along with the instructor; however, there is no final project, and you are free to listen and take notes instead.
This class is offered online and is open to everyone. $5 fee for non-members.
Thursday at Entrepreneur Collaborative Center at 6:00 p.m. (Tampa): Learn about modern image formats (WebP, AVIF), optimization techniques that actually matter, using Cloudinary for on-the-fly transformations and delivery, and how your images impact Core Web Vitals. You’ll build hands-on skills through a real Next.js e-commerce project.
Thursday at 82° West Distilling at 7:00 p.m. (Tampa): Here’s an interesting event — an OpenClaw (formerly Moltbot, formerly ClawdBot) “get together, get OpenClaw set up on your laptop, get some rum” get-together!
It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrapes Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.
In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
What goes into this list?
I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:
Programming, DevOps, systems administration, and testing
Tech project management / agile processes
Video, board, and role-playing games
Book, philosophy, and discussion clubs
Tech, business, and entrepreneur networking events
Toastmasters and other events related to improving your presentation and public speaking skills, because nerds really need to up their presentation game
Sci-fi, fantasy, and other genre fandoms
Self-improvement, especially of the sort that appeals to techies