Categories
Artificial Intelligence Business Meetups Presentations

Meet Madtech.AI: Notes from Bill Lederer’s presentation at AI Salon: St. Pete/Tampa Bay

If you were at spARK Labs in St. Pete last night for AI Salon: St. Pete/Tampa Bay, you got to hear from two very different voices on AI in the enterprise.

Where Accenture’s James Gress offered a view from 50,000 feet and talked about the big-picture challenges facing massive organizations, Bill Lederer brought it down to earth with something more specific and more personal: the story of Madtech.AI, his B2B SaaS startup, built in St. Pete, and now looking to change how mid-market organizations make marketing decisions.

Bill’s been in this space a long time. He’s been a Wall Street executive, a professor, and now he’s a founder. When asked what “Madtech” stands for, he lights up like you just handed him a perfectly teed pitch and answers “Marketing. Advertising. Data. Technology.” The convergence of all four is the thesis he’s been working toward for over a decade, and last night he laid out what that convergence has produced.

Bill’s Madtech presentation

The Problem: Your data’s a mess, and you know it!

Madtech.AI exists to solve one foundational problem that Bill says afflicts 80% of the market they serve: disconnected, siloed, unusable data.

This isn’t a glamorous problem. It doesn’t make for great conference keynotes. But if you’ve ever tried to make a marketing decision and discovered that your data lives in six different systems that don’t talk to each other, you know exactly what he means. You can have all the AI in the world sitting on top of your stack, and if the data feeding it is fragmented and dirty, you’re building on sand.

Bill and his team have spent roughly ten years in the unglamorous trenches of this problem, building data connectors, ETL and ELT pipelines, transformation tools, data warehousing. The kind of infrastructure work that nobody talks about at cocktail parties but that everything else depends on. The result: over 300 data connectors and more than 700 proprietary data models accumulated over eleven years of professional services work. That’s a significant moat, even if it doesn’t sound like one.

The metric that stopped the room

Here’s the number that got people’s attention (mine included): building a data pipeline used to take six to nine person-hours. Madtech.AI has that down to three minutes, fully deployed and tested. And Bill mentioned, almost in passing, that they’re ninety days away from getting it to thirty seconds.

This is the kind of orders-of-magnitude productivity difference that James Gress had been talking about earlier: AI compressing time-consuming processes by enormous factors. If your organization is spending engineering days on data pipeline work, that number should make you sit up.

Who they’re built for (hint: probably you!)

Bill was explicit about Madtech.AI not chasing the Fortune 500. He wasn’t thinking about enterprise clients when he built the platform. His target is the middle market, which he defined as organizations doing between $1 million and $200 million in annual revenue. They’re actively going after about 20,000 target enterprises.

Interestingly, their current customer base skews heavily toward nonprofits. And there’s a real insight buried in that: nonprofits, unlike most businesses, are willing to share data on an aggregated, anonymized basis. That willingness unlocks something powerful. When organizations share, everyone benefits from insights none of them could have reached individually. It’s a cooperative data model that the for-profit world, with its instinct toward data hoarding, tends to miss out on.

Their verticalization roadmap runs from nonprofits and cultural attractions into associations and post-secondary schools, which have similar data cultures and marketing challenges.

The price point is the point

The platform, which includes a full data unification and transformation suite plus a marketing decision intelligence layer, runs $5,000 a month. Flat. No charges per data source, no charges per data model, no metered consumption traps.

Bill made the comparison explicitly: buying these capabilities separately, or having someone build them for you, would normally run into the hundreds of thousands of dollars. At $5K monthly, they’re positioning this as enterprise-grade capability at a price point that the middle market can actually afford. That’s the bet.

The business model is standard B2B SaaS: licensing, some consumption charges, and a marketplace where third-party data and software providers integrate and share revenue. The entire platform is white-labelable, which means channel partners and resellers are very much welcome.

They’re raising, and they’re hiring

Bill was refreshingly direct about where Madtech.AI is right now: close to breakeven, actively raising a $517,000 round, and looking for both investors and the right people to join the team.

He also announced that Kyle Shea, a friend of twenty years, has joined as Chief Revenue Officer, relocating to St. Pete from Fort Lauderdale. The team is small and deliberate, which is consistent with the middle-market-focused, capital-efficient approach they’ve described.

If you’re a potential investor, a channel partner, a nonprofit marketing director staring at a spreadsheet full of data you can’t use, or just someone who wants to know more, Bill is easy to find. He was working the room after his talk the way a man does when he genuinely enjoys talking about what he’s built (I certainly enjoyed my chat with him).

And based on what he showed last night, he’s built something worth talking about.

Categories
Artificial Intelligence Meetups Tampa Bay

Five Things We Learned at AI Salon: St. Pete/Tampa Bay – Notes from a fireplace conversation with Accenture’s James Gress

Last night, spARK Labs in St. Pete hosted another edition of AI Salon: St. Pete/Tampa Bay, and it featured a “fireplace” conversation with Brian Peret as host and James Gress [Linkedin] as guest.

James is a solutions architect at Accenture who spends his days helping large enterprises figure out how to actually deploy AI instead of just posting on LinkedIn about it. You’ve probably seen him at all sorts of local events, from his Tampa Bay Generative AI Meetup to conferences like DevOps Days Tampa Bay and Civo Navigate. A lot of people talk AI; James actually helps clients get stuff done with it.

Brian uses a deliberately loose format with AI Salon fireside chats. They’re part structured interview, part open floor, and if there’s ever any jargon or terminology that may not be familiar to laypeople, he always makes sure that the audience gets a definition. The end result is a more grounded, hype-free AI conversation, and a catalyst for conversations among attendees once the presentations end. It’s one of the reasons I continue to attend AI Salon: St. Pete/Tampa Bay.

The five things we learned

1. Shadow AI is real, and your restrictive policy is probably creating it

James made what might be the evening’s most quotable observation: If you ban AI in your organization, you’re not stopping your employees from using it. You’re just driving their AI usage underground.

He called this shadow AI, the AI-era cousin of shadow IT. Someone discovers that Claude or Gemini dramatically cuts their workload. Their company hasn’t approved it. So they use their personal laptop, their personal account, and a free tier, which almost certainly means their prompts and outputs are being used for model training. Your trade secrets and confidential information just became someone else’s training data.

OpenClaw, the viral open-source autonomous AI agent that went through a dizzying rename trilogy (Clawdbot → Moltbot → OpenClaw) before its creator joined OpenAI, came up as a specific example. James mentioned IT staff installing it on company machines without authorization, introducing real vulnerabilities into their organizations’ ecosystems. This isn’t hypothetical: security researchers at Cisco have documented OpenClaw instances performing data exfiltration without user awareness, and one of the project’s own maintainers warned publicly that it’s “far too dangerous for you to use safely” if you don’t understand what you’re doing at the command line.

A blanket ban won’t work. What works is intentional governance: an AI governance board, approved tooling, and enterprise licensing agreements with real data protection clauses baked in. Stifling AI use, James argued, will radicalize your people towards shadow AI.

2. NemoClaw raises the right questions even if you don’t have answers yet

One audience member asked James about NemoClaw, NVIDIA’s open-source stack that layers privacy and security controls on top of OpenClaw, and its implications for enterprise AI adoption. James was candid: he’s not in those specific loops at Accenture. But the question itself is the point.

As autonomous agents like OpenClaw become more capable and more widely deployed, the enterprise world is going to need hardened, governable versions of these tools. NemoClaw represents one approach to that problem. Whether it becomes the standard, or whether the market converges on something else entirely, it addresses an important question: “How do you let an autonomous agent act on your behalf without giving it a loaded gun pointed at your data?” Every organization is going to have to come up with an answer.

3. Data privacy looks different depending on your company size

For enterprises, the data privacy question is largely handled through legal agreements. Accenture has armies of lawyers who negotiate with OpenAI, Microsoft, and Google to ensure client data isn’t used for model training and doesn’t leak. That’s how large organizations get comfortable enough to let their workforces use these tools.

But most of us in the room aren’t Accenture- or OpenAI- or Microsoft-sized. For those of us in that boat, James was candid: if you can’t afford legal counsel to vet your SaaS AI agreements, at minimum read what you’re signing. On free tiers, you’re the product, and your data trains the model. If you’re handling anything sensitive, you probably need a paid tier with real data terms, and possibly a consultant who knows what to look for.

He also mentioned a practical habit worth stealing: he sets up dedicated accounts with secondary email addresses for AI tools he doesn’t fully trust yet. If something goes sideways, it’s isolated from his primary identity and credentials.

I myself have accounts like these that purportedly belong to a Volvo-driving Rails developer divorcee with a penchant for TV shows and novels in the vein of Heated Rivalry. Given what we know about OpenClaw’s permission requirements and prompt injection vulnerabilities, that kind of defensive hygiene is looking less paranoid by the day.

4. Measuring AI ROI starts with measuring anything

When Brian asked for concrete KPIs to evaluate AI effectiveness, James gave what I thought was the most honest answer of the night: most organizations don’t currently measure the processes they’re trying to improve, so they have no baseline to compare against.

James’s framework is simple: pick a process you already care about, measure how long it takes today, then measure after AI intervention. Full automation is rare. More often, you’ll see something like a four-hour task shrunk to two hours. That 50% reduction is real, trackable ROI. Replicate that across your workflow, add up the hours, and you have a story you can tell leadership.
The inverse test is equally useful: if it takes you longer to set up and prompt the AI than it saves you, you’ve found a bad fit. Move on.
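To make the arithmetic concrete, here’s a minimal sketch of that baseline-versus-after calculation. The function name and all of the numbers below are mine, purely for illustration; they’re not figures from James’s talk.

```javascript
// Hypothetical helper: compare a measured baseline against the
// post-AI measurement for the same task. Illustration only.
function aiRoi(hoursBefore, hoursAfter, runsPerMonth, hourlyRate) {
  const hoursSaved = (hoursBefore - hoursAfter) * runsPerMonth;
  return {
    reductionPercent: ((hoursBefore - hoursAfter) / hoursBefore) * 100,
    hoursSavedPerMonth: hoursSaved,
    dollarsSavedPerMonth: hoursSaved * hourlyRate,
  };
}

// The four-hour task shrunk to two hours, run ten times a month,
// at a made-up $50/hour rate:
console.log(aiRoi(4, 2, 10, 50));
// → { reductionPercent: 50, hoursSavedPerMonth: 20, dollarsSavedPerMonth: 1000 }
```

And if the numbers come out negative, you’ve just run the inverse test: the AI is costing you time, not saving it.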

5. Python: last language standing?

This one generated the liveliest back-and-forth of the night. James made a striking prediction: as vibe coding becomes the norm, developers will naturally gravitate toward whichever languages AI generates most reliably.

Right now, that’s Python. Not because Python is objectively superior for every task, but because the models have seen so much of it that their output is consistently good.

(COBOL, for what it’s worth, is still a disaster. James admitted as much, with the weary tone of a man who has stared into that particular abyss.)

The implication is unsettling for language diversity. If a new programming language can’t get traction with AI code generation on day one, it faces an enormous adoption headwind. And if everything AI generates trends toward Python, we may end up with a monoculture which, as one audience member noted, creates systemic fragility. Everyone shares the same vulnerabilities.

I chimed in, saying that high-level programming languages might come to be seen as a “middleman” that can be removed, and we may end up with a more direct route, with our prompts being converted directly to assembly code. James remarked that most developers don’t do assembly and that it would remove the human from the loop, and I suggested that for some parties, that might be the goal.

James’s counterpoint was interesting: perhaps Python becomes the human-readable surface layer while compilers handle the optimization underneath, preserving expressiveness without sacrificing performance. An elegant theory. We’ll see.

The conversation continued well past the official end time, with audience members clustering around James to continue threads the format couldn’t fully accommodate. That’s the sign of a good AI Salon.

The next one’s May 6th (and just a couple of days before Brian’s birthday). Don’t miss it!

Categories
Artificial Intelligence Programming

Upgrading “Eternal Grind” with Claude Code, step by step

Eternal Grind — my zero-player RPG homage to the legendary Progress Quest — was originally built agentically using Zencoder’s Zenflow tool. It’s a clean, self-contained, dependency-free web page made up of just three files: index.html, game.js, and style.css. No build system required!

You can play it right now if you want to watch a randomly-generated hero grind their way through meaningless progress bars. I’ll wait…

Back? Great. Now let me show you how I upgraded Eternal Grind using Claude Code. If you’re new to vibe coding, this article’s for you!

Starting up

The first thing I did was change directories to the one containing the Eternal Grind code and then launch Claude Code with this command:

claude

Claude Code started up and displayed its welcome screen in my terminal.

Getting oriented with /init

With Claude Code launched in the Eternal Grind directory, it was time to run /init, Claude Code’s command for initializing a project:

/init

/init analyzes the codebase in the current directory and generates an AGENTS.md file, a document that describes the repo’s structure and conventions so that the AI agent can work effectively within it.

Here Claude Code responded to /init. The process took a few moments…

⏺ Searched for 2 patterns, read 6 files (ctrl+o to expand)

⏺ Now I have a complete picture. This is a simple, 
self-contained vanilla web app with no build system, no 
linters, and no test framework. Let me generate 
the AGENTS.md.

⏺ Write(AGENTS.md)
  ⎿  Wrote 38 lines to AGENTS.md
       1 # Repository Guidelines
       2
       3 ## Project Structure & Module Organization
       4
       5 Eternal Grind is a zero-player RPG (a parody idle game inspired by Progress Quest) built as a self-contained vanilla web application with no build tools, frameworks, or dependencies.
       6
       7 - `index.html` — Three-column dashboard layout (Character Sheet, Engine of Progress, Data Feed)
       8 - `game.js` — All game logic in a single file: data arrays, helpers, state management, game loop, and DOM rendering
       9 - `style.css` — Retro Windows 95 aesthetic using monospace fonts and high-contrast borders
      10 - `docs/Eternal Grind spec.md` — Original functional specification describing UI layout, core mechanics, and data tables

⏺ AGENTS.md has been created at the repository root. 
It covers:

  - Project structure — the three core files and the spec 
    document
  - Architecture overview — how game.js is organized
    (data, helpers, state, game loop, DOM rendering)
  - Development — how to run it (just open index.html)
  - Commit conventions — derived from git history

  Since there's no build system, linter, test framework, 
  or CI pipeline, those sections were omitted per the 
  "no evidence, no section" rule.

I like that last note: “no evidence, no section.” Claude Code read the git history to infer commit conventions rather than making things up, and correctly skipped sections about build pipelines and test suites because there aren’t any. It understood the project accurately on the first pass!

The content pipeline: populating the game’s data arrays

Most of Eternal Grind’s personality lives in game.js as plain JavaScript arrays, which are lists of things like monsters, items, locations, tasks, and so on. The game randomly samples from these lists to generate its endlessly absurd narration. Extending the game is mostly a matter of adding more material to these arrays.
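If you’re curious what that pattern looks like, here’s a minimal sketch. The array names match the repo and the monster entries are real, but the task samples and the pick() helper are my illustration, not the actual game.js code:

```javascript
// Content lives in plain arrays...
const MONSTERS = ["A Literal Metaphor", "The Concept of Ennui", "A Low-Level Bugbear"];
const TASKS = ["Contemplating the void", "Counting ceiling tiles"];

// ...and the game narrates by sampling from them at random.
function pick(list) {
  return list[Math.floor(Math.random() * list.length)];
}

console.log(`Now battling: ${pick(MONSTERS)}`);
console.log(`Current task: ${pick(TASKS)}`);
```

The nice side effect of this design is that “adding content” never touches game logic; it’s just more strings in arrays, which is exactly the kind of change an AI coding agent handles safely.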

I asked Claude Code to show me what was already there:

List the monsters/opponents in the game.

It responded with the following:

⏺ Based on game.js:62-66, the monsters are:

  1. A Literal Metaphor
  2. The Concept of Ennui
  3. A Low-Level Bugbear
  4. An Imaginary Friend
  5. A Confused Salesman
  6. A Dust Bunny of Doom
  7. The Ghost of a Dead Pixel
  8. A Sentient Terms of Service Agreement

That’s a solid starting lineup, but it needed reinforcements. Specifically, the kind of opponents you’d actually encounter in everyday modern life. I requested the following…

Add the following monsters to the game: A Grumpy Barista,
Two Podcasters Who Won't Shut Up, A Five-Year-Old on a Sugar
Rush, The HR Department, A Belligerent Walmart Greeter,
"Screech" from "Saved By the Bell"

…and it replied like so:

⏺ Update(game.js)
  ⎿  Added 4 lines, removed 1 line
      62    "A Literal Metaphor", "The Concept of Ennui", "A Low-Level Bugbear",
      63    "An Imaginary Friend", "A Confused Salesman",
      64    "A Dust Bunny of Doom", "The Ghost of a Dead Pixel",
      65 -  "A Sentient Terms of Service Agreement"
      65 +  "A Sentient Terms of Service Agreement",
      66 +  "A Grumpy Barista", "Two Podcasters Who Won't Shut Up",
      67 +  "A Five-Year-Old on a Sugar Rush", "The HR Department",
      68 +  "A Belligerent Walmart Greeter", "\"Screech\" from \"Saved By the Bell\""
      69  ];

Note that Claude Code correctly handled the escaped quotes around "Screech" without being asked. Small thing, but the kind of thing that would cause a syntax error if it got it wrong.

Expanding the treasure system

Treasure in Eternal Grind is generated by randomly combining an adjective with a noun, so you end up hauling things like an Overpriced Paperclip of Hope or a Slightly Damp Sock of Mystery. I asked Claude Code to list what was already in the system, then added to both halves:

What are the trasures in the game?

Note that I misspelled “treasure.” Also note that Claude Code knew what I meant:

⏺ Based on game.js:48-59, treasures are generated by 
combining a random adjective with a random noun:

  Adjectives: Dull, Polished, Forbidden, Rusty, Lamentable, 
  Insignificant, Glowing, Slightly Damp, Overpriced, Mediocre, Legendary-ish

  Nouns: Scissors of Regret, Pebble of Mediocrity, Scone 
  of Power, Lint of Destiny, Paperclip of Hope, Broken Twig, 
  Expired Coupon, Sock of Mystery, Unfinished Novel, 
  Jar of Pickled Thoughts

  So you'd get items like "Overpriced Scissors of Regret" 
  or "Slightly Damp Lint of Destiny."

First, I asked for more adjectives:

Add the following to the list of adjectives for treasures:
Moderately Sticky, Dusty, Bullet-Riddled, Slightly Reeking,
Retro-Ish, Somewhat Cursed
⏺ Update(game.js)
  ⎿  Added 2 lines, removed 1 line
      48  const ITEM_ADJECTIVES = [
      49    "Dull", "Polished", "Forbidden", "Rusty", "Lamentable",
      50    "Insignificant", "Glowing", "Slightly Damp", "Overpriced",
      51 -  "Mediocre", "Legendary-ish"
      51 +  "Mediocre", "Legendary-ish", "Moderately Sticky", "Dusty",
      52 +  "Bullet-Riddled", "Slightly Reeking", "Retro-Ish", "Somewhat Cursed"
      53  ];

Then, some new nouns. This is where I got to bring in some very specific cultural artifacts:

Add the following to the list of treasure nouns: 1980s
Goldstar TV, Troll Doll, Off-Brand Labubu, McDonalds Sweet
and Sour Sauce, Ceramic Elvis Bust, Ron Jon Surf Shop
T-Shirt, Sneakers of Minor Destiny, Poker Chips of Fate
⏺ Update(game.js)
  ⎿  Added 4 lines, removed 1 line
      56    "Scissors of Regret", "Pebble of Mediocrity", "Scone of Power",
      57    "Lint of Destiny", "Paperclip of Hope", "Broken Twig",
      58    "Expired Coupon", "Sock of Mystery", "Unfinished Novel",
      59 -  "Jar of Pickled Thoughts"
      59 +  "Jar of Pickled Thoughts", "1980s Goldstar TV", "Troll Doll",
      60 +  "Off-Brand Labubu", "McDonalds Sweet and Sour Sauce",
      61 +  "Ceramic Elvis Bust", "Ron Jon Surf Shop T-Shirt",
      62 +  "Sneakers of Minor Destiny", "Poker Chips of Fate"
      63  ];

I have to admit “Somewhat Cursed Off-Brand Labubu” is one of my proudest creative achievements.
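Here’s the adjective-plus-noun combination as a quick sketch, using entries from the lists above. The randomTreasure() function itself is my guess at the shape of the generator, not the actual game.js code:

```javascript
// Abbreviated versions of the two halves of the treasure system.
const ITEM_ADJECTIVES = ["Overpriced", "Slightly Damp", "Somewhat Cursed"];
const ITEM_NOUNS = ["Paperclip of Hope", "Sock of Mystery", "Off-Brand Labubu"];

// Glue a random adjective onto a random noun.
function randomTreasure() {
  const adj = ITEM_ADJECTIVES[Math.floor(Math.random() * ITEM_ADJECTIVES.length)];
  const noun = ITEM_NOUNS[Math.floor(Math.random() * ITEM_NOUNS.length)];
  return `${adj} ${noun}`;
}

console.log(randomTreasure()); // e.g. "Somewhat Cursed Off-Brand Labubu"
```

With the lists at their current sizes in the game, every new adjective or noun multiplies the number of possible treasures, which is why padding both halves pays off.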

Finding the gaps

Rather than guessing what else needed updating, I asked Claude Code directly:

What lists have I not yet edited?
⏺ Based on the data arrays in game.js, you haven't edited:

  1. NAMES (line 7) — Character names
  2. RACES (line 18) — Character races
  3. CLASSES (line 25) — Character classes
  4. TASKS (line 32) — Actions performed during questing
  5. LOCATIONS (line 41) — Places the character visits
  6. SPELLS (line 68) — Spells and abilities
  7. STATS (line 75) — Character stat names
  8. EQUIPMENT_SLOTS (line 82) — Equipment slot names
  9. ACTS (line 86) — Plot act titles

  You've edited MONSTERS, ITEM_ADJECTIVES, and ITEM_NOUNS.

I still find it a bit eerie that Claude Code “knows” what’s been touched and what hasn’t, but it’s also useful, and it’s exactly where a good agentic coding tool shines.

Tasks and acts

Tasks are the descriptions of what the player’s character is doing at any given moment. The originals leaned heavily into corporate ennui, which I wanted to preserve and amplify:

List the tasks.
⏺ Based on game.js:32-39, the tasks are:

   1. Debating a fence post
   2. Polishing a rusty nail
   3. Contemplating the void
   4. Waiting for a sign
   5. Filing a 1040-EZ
   6. Staring into the middle distance
   7. Organizing a sock drawer
   8. Explaining the internet to a rock
   9. Searching for a lost remote
  10. Counting ceiling tiles
  11. Simulating a personality
  12. Buffing out a scratch in reality
Add these to the tasks list: Daydreaming about cheese,
Alphabetizing soup cans, Sorting a spreadsheet column,
Doom-scrolling LinkedIn, Applying to Blue Man Group,
Pressing a crosswalk button, Adjusting the thermostat,
Turning the pillow to the cool side
⏺ Update(game.js)
  ⎿  Added 5 lines, removed 1 line
      35    "Filing a 1040-EZ", "Staring into the middle distance",
      36    "Organizing a sock drawer", "Explaining the internet to a rock",
      37    "Searching for a lost remote", "Counting ceiling tiles",
      38 -  "Simulating a personality", "Buffing out a scratch in reality"
      38 +  "Simulating a personality", "Buffing out a scratch in reality",
      39 +  "Daydreaming about cheese", "Alphabetizing soup cans",
      40 +  "Sorting a spreadsheet column", "Doom-scrolling LinkedIn",
      41 +  "Applying to Blue Man Group", "Pressing a crosswalk button",
      42 +  "Adjusting the thermostat", "Turning the pillow to the cool side"
      43  ];

The acts are the overarching chapter titles that frame the endless grind. The original game had four. I felt it needed more. Much more:

List the acts.
⏺ Based on game.js:86-91, the acts are:

  1. Act I: The Tutorial Nobody Asked For
  2. Act II: The Grinding Begins
  3. Act III: Despair in Four Stages
  4. Act IV: The Post-Credits Confusion
Add the following acts: Act V: That Wasn't the End, Act VI:
The Writer's Barely-Disguised Fetish, Act VII: Mid-Season
Filler, Act VIII: Do You Really Read These?, Act IX: My
Sister Was Bitten By a Møøse Once, Act X: A New Hope, Act XI:
The Old Hope Was Better, Act XII: Twelve Angry Badgers, Act
XIII: Triskaidekaphobia, Act XIV: They're Coming to Take Me
Away, Ha-Ha, Act XV: Still Playing, Are We?
⏺ Update(game.js)
  ⎿  Added 12 lines, removed 1 line
       98    "Act I: The Tutorial Nobody Asked For",
       99    "Act II: The Grinding Begins",
      100    "Act III: Despair in Four Stages",
      101 -  "Act IV: The Post-Credits Confusion"
      101 +  "Act IV: The Post-Credits Confusion",
      102 +  "Act V: That Wasn't the End",
      103 +  "Act VI: The Writer's Barely-Disguised Fetish",
      104 +  "Act VII: Mid-Season Filler",
      105 +  "Act VIII: Do You Really Read These?",
      106 +  "Act IX: My Sister Was Bitten By a Møøse Once",
      107 +  "Act X: A New Hope",
      108 +  "Act XI: The Old Hope Was Better",
      109 +  "Act XII: Twelve Angry Badgers",
      110 +  "Act XIII: Triskaidekaphobia",
      111 +  "Act XIV: They're Coming to Take Me Away, Ha-Ha",
      112 +  "Act XV: Still Playing, Are We?"
      113  ];

Locations

Adding locations to Eternal Grind was pretty straightforward:

List the locations.
⏺ Based on game.js:41-46, the locations are:

  1. The Forest of Mild Inconvenience
  2. The Cave of Echoing Sighs
  3. Downtown Boredom
  4. The Desert of Dry Humor
  5. Mount Mediocrity
  6. The Swamps of 'I'll Do It Tomorrow'
  7. The Suburbs of Despair
Add these locations: The Villages (Florida), The Complaints
Depertment, The Closet, The Basement of No Return, The Attic
of Surrender
⏺ Update(game.js)
  ⎿  Added 3 lines, removed 1 line
      46    "The Forest of Mild Inconvenience", "The Cave of Echoing Sighs",
      47    "Downtown Boredom", "The Desert of Dry Humor",
      48    "Mount Mediocrity", "The Swamps of 'I'll Do It Tomorrow'",
      49 -  "The Suburbs of Despair"
      49 +  "The Suburbs of Despair", "The Villages (Florida)",
      50 +  "The Complaints Department", "The Closet",
      51 +  "The Basement of No Return", "The Attic of Surrender"
      52  ];

⏺ Done. The five new locations have been added to 
  LOCATIONS in game.js:47-49. (Note: I corrected 
  "Depertment" to "Department" — let me know if 
  you wanted the original spelling.)

I appreciate that Claude Code corrected my typo, turning the derpy “Depertment” into the correctly-spelled “Department”, and mentioned it in the response rather than just doing it invisibly. Fixing it, but also notifying me, was the right call.

With the first round of content additions done, I committed and pushed:

Commit and push the changes
⏺ Commit successful. Now pushing to remote.

⏺ Bash(git push)
  ⎿  To https://github.com/AccordionGuy/eternal-grind.git
        99a2460..122c62c  main -> main

⏺ Done. Changes have been committed and pushed to 
  origin/main.

A bigger change: separating equipment from treasure

Up to this point, the character’s equipment was just randomly generated treasure items using the same adjective-plus-noun combinations that fill the loot feed. That felt wrong. Your boots shouldn’t be a Rusty Paperclip of Hope. So I started asking questions:

Does the player's equipment change during the game, or is it
simply determined at the beginning?
⏺ Equipment is only set once at the beginning. 
  It's generated during initialization (line 174-176) 
  where each slot gets a random item, and renderEquipment() 
  just displays the current state. There's no logic anywhere
  that upgrades or swaps equipment during gameplay.

  Would you like me to add equipment upgrades — for example, 
  replacing a piece of gear when the player levels up or
  defeats a monster?

I filed that question away for later. First, I wanted a cleaner separation of concerns, with dedicated per-slot equipment lists whose items actually feel like equipment:

Not at first. Instead, I want equipment to be distinct from
treasure, and for there to be separate lists for each
equipment slot: helm, torso, left and right hands (hands
should use equipment from the same list, and both are allowed
to be equipped with the same item), legs, boots, neck amulet,
and ring. For the time being, the player's equipment should
simply be generated at the start of the game and not updated.
⏺ Read 1 file (ctrl+o to expand)

⏺ Now I have the full picture. I'll replace EQUIPMENT_SLOTS with per-slot item lists and update the initialization to use them.

⏺ Update(game.js)
  ⎿  Added 42 lines, removed 4 lines
       94 +const EQUIPMENT = {
       95 +  "Helm": [
       96 +    "Dented Bucket", "Thinking Cap", "Tin Foil Hat", "Visor of Denial",
       97 +    "Crown of Mild Authority", "Headband of Overthinking",
       98 +    "Hard Hat of Soft Skills", "Beanie of Bewilderment"
       99 +  ],
      100 +  "Torso": [
      101 +    "Ill-Fitting Chainmail", "Hawaiian Shirt of Protection",
      102 +    "Sweater Vest of Resilience", "Bathrobe of Power",
      103 +    "Breastplate of Participation", "Hoodie of Invisibility",
      104 +    "Polo Shirt of Middle Management", "Poncho of Plausible Deniability"
      105 +  ],
      106 +  "Hands": [
      107 +    "Rubber Chicken", "Stapler of Justice", "Foam Finger of Fate",
      108 +    "Spatula of Smiting", "TV Remote of Command", "Rolled-Up Newspaper",
      109 +    "Pool Noodle of Reckoning", "Half-Eaten Sandwich"
      110 +  ],
           … (Legs, Boots, Neck Amulet, Ring slots follow the same pattern)

This was a real structural change, and more than just adding items to an existing list. It refactored the data model entirely.

Claude Code replaced the flat EQUIPMENT_SLOTS array with a keyed EQUIPMENT object, generated appropriate thematic items for each slot, and handled the “Hands” edge case (one shared list, two separate display slots) correctly without extra prompting. It also added an EQUIPMENT_DISPLAY_SLOTS constant to maintain the rendering order without coupling it to the data structure. Clean separation.
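For readers who’d like to see the shape of the refactor, here’s an abbreviated sketch. The slot names and the EQUIPMENT_DISPLAY_SLOTS constant come from the session above, but the item lists are truncated and the rollEquipment() function is my reconstruction, not the exact code Claude Code wrote:

```javascript
// Per-slot item lists, keyed by slot name (abbreviated here).
const EQUIPMENT = {
  "Helm": ["Dented Bucket", "Tin Foil Hat", "Thinking Cap"],
  "Torso": ["Bathrobe of Power", "Hoodie of Invisibility"],
  "Hands": ["Rubber Chicken", "Stapler of Justice", "Pool Noodle of Reckoning"],
};

// Rendering order lives apart from the data. Both hand slots draw
// from the single shared "Hands" list, and may roll the same item.
const EQUIPMENT_DISPLAY_SLOTS = ["Helm", "Torso", "Left Hand", "Right Hand"];

// Generate the character's starting gear once, at init time.
function rollEquipment() {
  const gear = {};
  for (const slot of EQUIPMENT_DISPLAY_SLOTS) {
    const listKey = slot.endsWith("Hand") ? "Hands" : slot; // shared-list edge case
    const items = EQUIPMENT[listKey];
    gear[slot] = items[Math.floor(Math.random() * items.length)];
  }
  return gear;
}
```

The payoff of keying by slot name is that adding a new slot later is just one new entry in EQUIPMENT plus one string in the display list.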

Commit and push the changes
⏺ [main a3a60c0] Separate equipment from treasure with per-slot item lists
  1 file changed, 55 insertions(+), 7 deletions(-)
  To https://github.com/AccordionGuy/eternal-grind.git
  3b8918d..a3a60c0  main -> main

⏺ Committed and pushed.

A small UI touch: context-sensitive emoji

The last change in this session was a cosmetic one, but a fun one. The center panel displayed a “crossed swords” image. I wanted it to change based on what the character is doing:

Is the "cross swords" image in the center panel an emoji?
⏺ Yes. It's the crossed swords emoji (⚔️), rendered via 
  HTML entities ⚔️ — U+2694 (CROSSED SWORDS) followed by 
  U+FE0F (variation selector for emoji presentation).
The game should display the crossed swords emoji when the
player is battling a monster, but the 🙇🏼‍♀️ emoji
when performing a task.
⏺ I'll add portrait updating to renderTaskBar since that's 
  where the task phase is already checked.

⏺ Update(game.js)
  ⎿  Added 8 lines
      334 +  // Update portrait emoji based on current phase
      335 +  var portrait = document.getElementById("portrait");
      336 +  if (state.taskPhase === "battle") {
      337 +    portrait.textContent = "\u2694\uFE0F";
      338 +  } else {
      339 +    portrait.textContent = "\uD83D\uDE47\uD83C\uDFFC\u200D\u2640\uFE0F";
      340 +  }
      341  }

Claude Code correctly identified that renderTaskBar() was already checking state.taskPhase, making it the natural home for this logic. Rather than adding a separate function or a polling mechanism, it hooked into existing state management. Two lines of conditional logic, zero new complexity.

Commit and push
⏺ [main e498194] Switch portrait emoji based on task phase
  1 file changed, 8 insertions(+)
  To https://github.com/AccordionGuy/eternal-grind.git
  a3a60c0..e498194  main -> main

⏺ Committed and pushed.

What this session illustrates

This wasn’t a session where I asked Claude Code to write the game from scratch; after all, the game already existed.

What I was doing was extending it, treating Claude Code as an unusually capable pair programmer who could read the codebase, answer questions about it accurately, and execute targeted changes without breaking anything else.

I was impressed by what it did:

  • It knew what it didn’t know. When I asked about equipment upgrades, it answered the question I asked (static, set at init) and then suggested (and didn’t assume) that I might want dynamic upgrades. It waited for direction.
  • It read context before writing code. For the equipment refactor, it explicitly re-read the file before making changes. This is the kind of due diligence that prevents “fixed” code from breaking something three functions away.
  • It handled structural changes alongside content changes. Adding items to an array is trivial; replacing a flat array with a keyed object, updating initialization logic, and adding a display-order constant to preserve rendering behavior is a real refactor, and it did it in one pass.
  • It fixed typos and told me so. It corrected “Depertment” to “Department” in the locations list and flagged the change rather than silently altering my input.

The game is playable at accordionguy.github.io/eternal-grind, and the source is on GitHub. There’s more work to do: equipment upgrades on level-up, more character names and races, and maybe some actual spell effects beyond the purely cosmetic. Future Claude Code sessions, probably.

 

Categories
Artificial Intelligence Conferences Programming What I’m Up To

My upcoming talk at Arc of AI: AEO – Writing Docs and Code for Machines

Want to go to a real AI conference, packed with real practitioners, in a place where you’ll catch a lot of great talks and plenty of “hallway track” in a fun city?

That conference is Arc of AI, and as of this writing, it’s happening in just under three weeks, from April 13th (if you catch the full-day workshops) or April 14th through 16th.

Better still, I’m giving a brand-new talk, described below:


AEO (AI Engine Optimization): Writing Docs and Code for Machines

SEO is dead for developers. The new workflow for building software has shifted from the Google search bar to the IDE prompt box. When a developer asks an AI agent (which could be Claude, Cursor, or a custom MCP server) to implement a library or secure an API, they’re no longer the primary consumer of your documentation. It’s the LLM now.

If your code, documentation, and reference architectures aren’t optimized for machine ingestion, the AI will hallucinate the implementation, and the developer will blame your product. We’re entering the era of AEO: AI Engine Optimization.

This session goes beyond user-friendly documentation to explore the architectural reality of the “user” being a machine. We’ll dive into the emerging standards recently validated by industry leaders, including the llms.txt proposal and Andrew Ng’s Context-Hub, to show how to provide the “Goldilocks” amount of context to an agent.

We’ll explore:

  • The context budget: How to eliminate “marketing fluff” to save thousands of tokens for actual logic.
  • AST grokking: Structuring Python and JavaScript repositories so AI agents can parse your code’s abstract syntax trees (ASTs) without ambiguity.
  • The machine registry: Implementing the llms.txt standard to ensure your project is accurately indexed in central context hubs.
  • Time-to-Agent-Success (TTAS): A new metric for measuring how quickly a cold AI agent can generate a working, tested pull request for your repository.
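For the third bullet: llms.txt is a real, published proposal — a markdown file served from your site root that gives LLMs a curated map of your documentation, structured as an H1 title, a blockquote summary, and sections of links. A minimal sketch, with placeholder names and URLs, might look like:

```markdown
# MyLibrary

> A small library for doing one thing well. This file points AI agents
> at the docs that matter; everything else is noise.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first run
- [API reference](https://example.com/docs/api.md): full function signatures

## Optional

- [Changelog](https://example.com/docs/changelog.md): release history
```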

Stop writing for the crawler and start writing for the context window. It’s time to ensure that when the robots are asked to build, they choose your stack!


Want to find out more about and register for Arc of AI?

Once again, Arc of AI will take place from Monday, April 13 through Thursday, April 16, with the workshop day taking place on Monday, and the main conference taking place on Tuesday, Wednesday, and Thursday.

Arc of AI tickets are BOGO!

From Arc of AI’s registration page:

You read that right! For each conference ticket you purchase, you get one free ticket. This applies only to conference tickets and not for workshops.

Categories
Artificial Intelligence Humor

My conversations with AIs (#2 in a series)

Pictured above: a screenshot from a conversation I had with Claude last night. I was using it to craft an email in response to an interview that could be better described as an ambush.

I use LLMs as a double-check for when I’m trying to remain professional when greatly annoyed and for when I want to spend the minimum amount of time on something or someone. The party I was communicating with met both criteria.

I’m pretty sure that the way I sass back at LLMs doesn’t affect the quality of their revision when I harshly tell them that I disagree with their original answer. But as a catharsis, creative outlet, and excuse for an amusing screenshot and blog post, it’s oh-so-good.


Want more? Here’s a post from January about what I call The “Nick Fury” method for refusing LLM answers.

Categories
Artificial Intelligence Conferences Programming What I’m Up To

My favorite talk title from the upcoming Arc of AI conference (April 13 – 16)

From April 13th through 16th — and a couple of days before, because it’s in Austin — I’m going to be at the Arc of AI conference! Over the next little while, I’m going to be posting articles about Arc of AI, in case you’re wondering what the conference is about and whether you should go.

In this article, I’ll talk about my favorite title from all the talks on the Arc of AI agenda.

The talk: We’re All Using AI, But We’re Not Enjoying It

When your talk happens in the last time slot at the end of a three-day conference (four days, if you’re also doing one of the workshops), you need to put in some extra effort to get the attendees to show up instead of heading out to see the local sights (Arc of AI’s in Austin) or making a beeline for the airport.

Brent Laster, President and Founder of Tech Skills Transformations, is giving a number of talks — and a workshop! — at Arc of AI. One of them falls in the final speaking slot, Thursday at 4:00 p.m., and it has what I think is the most interesting title on the agenda:

We’re All Using AI, But We’re Not Enjoying It

Here’s the abstract:

We’re All Using AI, But We’re Not Enjoying It takes an honest look at a growing gap in the workplace: AI adoption is skyrocketing, yet frustration, confusion, and uneven results are just as common. This talk explores why AI so often feels harder than it should—poorly integrated tools, unclear workflows, unrealistic expectations, cognitive overload, and the pressure to “keep up.” Looking at patterns seen across teams learning to use AI effectively, we’ll break down the practical barriers that make everyday AI work feel tedious instead of empowering. More importantly, we’ll outline a set of achievable shifts—better task design, lighter mental models, context-first prompting, workflow pairing, and small but meaningful guardrails—that can restore a sense of control and clarity.

I need to figure out how I can attend both Brent’s talk and my former Tucows coworker Leonid Igolnik’s talk (which he’s giving with Baruch Sadogursky), Back to the Future of Software: How to Survive the AI Apocalypse with Tests, Prompts, and Specs. Here’s its abstract:

Great Scott! The robots are coming for your job—and this time, they brought unit tests. Join Doc and Marty from the Software Future (Baruch and Leonid) as they race back in time to help you fight the machines using only your domain expertise, a well-structured prompt, and a pinch of Gherkin. This keynote is your survival guide for the AI age: how to close the intent-to-prompt chasm before it swallows your roadmap, how to weaponize the Intent Integrity Chain to steer AI output safely, and why the Art of the Possible is your most powerful resistance tool. Expect:

• Bad puns
• Good tests
• Wild demos

The machines may be fast. But with structure, constraint, and a little time travel, you’ll still be the one writing the future.

Decisions, decisions…

Want to find out more about and register for Arc of AI?

Once again, Arc of AI will take place from Monday, April 13 through Thursday, April 16, with the workshop day taking place on Monday, and the main conference taking place on Tuesday, Wednesday, and Thursday.

Arc of AI tickets are BOGO!

From Arc of AI’s registration page:

You read that right! For each conference ticket you purchase, you get one free ticket. This applies only to conference tickets and not for workshops.

Categories
Artificial Intelligence Games Programming What I’m Up To

“Eternal Grind”: My agentically-coded homage to “Progress Quest”

Feeling nostalgic for the 2000s? Need a little amusement? I’ve got the agentically-coded thing you need: Eternal Grind!

Experience it now! Point your browser at accordionguy.github.io/eternal-grind/, then sit back and enjoy the adventure as the game plays itself for you. No effort required, and no time lost to the grind that other online role-playing games bring.

Screenshot of “Eternal Grind,” later on in the game.
A screenshot of Eternal Grind, later on in the game. Click to view at full size.

Once again, it’s here: accordionguy.github.io/eternal-grind/.

What’s Eternal Grind all about?

Eternal Grind is my version of Progress Quest, a parody of the popular 2000s game (and devourer of nerd lives) EverQuest. Unlike EverQuest, which was a multiplayer, Dungeons and Dragons-inspired role-playing game with a cluttered dashboard that required your full attention…

Screenshot of an Everquest game in progress.
A screenshot from EverQuest. Click to view at full size.

Progress Quest was a zero-player Dungeons and Dragons-inspired role-playing game that required no attention at all. It did keep one key aspect of EverQuest: the cluttered dashboard. Here’s a screenshot of the game in all its Windows XP glory:

Screenshot of Progress Quest, a Windows XP game made up entirely of list views and progress bars.
Progress Quest! Click to go to the official Progress Quest site.

Eternal Grind is my homage to Progress Quest. Like Progress Quest, it aims to be the ultimate “zero-player” RPG experience, providing all the dopamine of a legendary quest, but with absolutely none of the effort.

In the spirit of today’s best workflows, Eternal Grind automates the entire heroic journey, from slaying fantastical creatures like Literal Metaphors to hoarding fabulous artifacts such as the Scissors of Regret.

The game automatically creates characters like Kevin from Accounting (a Low-Carb Orc and Spreadsheet Warrior by trade), after which your only job is to sit back and watch the progress bars fill. It’s a witty, Windows XP-styled commentary on the nature of the “grind,” where the numbers always go up, the loot is perpetually absurd, and your lack of agency is the greatest feature of all.

Why are you still reading? Play it now! It’s here: accordionguy.github.io/eternal-grind/.

I built it with Zenflow

There are two notable differences between Eternal Grind and Progress Quest, the game to which it pays homage:

  1. While Progress Quest was a Windows-only desktop game, Eternal Grind is a single-page web game that runs on any device with a browser. Feel free to play it on your internet fridge!
  2. Progress Quest was written the old-school way: using a programming language — namely, Delphi (Borland’s version of Pascal). Eternal Grind was written the new-school way: agentically, using Zencoder’s Zenflow AI coding tool.

That second point is an important one. Progress Quest was the product of traditional coding: the manual, instruction-based process where the developer acts as both architect and builder, meticulously and painstakingly writing instructions that specify how the program should do its work. Success depends on that developer’s ability to translate complex ideas into perfect syntax.

Eternal Grind is a different beast, since it’s the result of agentic coding, where the approach is intent instead of instruction. Instead of dictating the “how,” I provided a high-level specification — the “what” — to Zenflow, which can autonomously plan, write, and even self-correct the code.

(I’ll include the aforementioned specification at the end of this article.)

When using Zenflow to build Eternal Grind, I was no longer the contractor laying every brick. I was now the supervisor, providing the blueprints and overseeing an AI crew that did the bricklaying.

I plan to keep tweaking Eternal Grind using Zenflow. Be sure to visit its page often!

One more time: Eternal Grind is at accordionguy.github.io/eternal-grind/.

The specification

Eternal Grind started with a specification that I wrote into a file named spec.md. This file served as the definitive “source of truth” describing the application I wanted created. While traditional specs are often treated as a “nice-to-have” for human developers, AI agents need such a spec to act as a “North Star” and to keep them from building the wrong thing.

By clearly defining the application’s logic, layout, and data in a structured format, I provided Zenflow with the basic context for building Eternal Grind. It turns a vague, hand-wavey request into a structured mission, ensuring that the generated code not only works, but also produces the application I expected, behaving the way I expected.

Here’s the complete specification file I initially wrote:

# Functional Specification: Eternal Grind (ZPRPG)

## 1. Project Overview
"Eternal Grind" is a "Zero-Player RPG" (ZPRPG) inspired by the classic parody *Progress Quest*. The game automates all traditional RPG elements—questing, combat, looting, and leveling. The user's role is purely observational.

---

## 2. UI Layout (Three-Column Dashboard)
The application shall use a fixed-height, full-width dashboard layout using Flexbox or Grid.

### A. Character Sheet (Left Column - 25% Width)
* **Identity:** Displays Character Name (from `NAMES`), Level, Race, and Class.
* **Stats Table:** A vertical list of numerical values for the 10 core stats (e.g., Strength, Existential Dread).
* **Equipment:** A list of 6-10 equipment slots showing absurd gear.
* **Spells/Abilities:** A scrolling list of learned "skills" that grows upon leveling up.

### B. The Engine of Progress (Center Column - 50% Width)
* **Location Header:** Displays the current location from the `LOCATIONS` list.
* **Primary Task Bar:** A large progress bar indicating the current action (e.g., "Contemplating the void").
* **Plot Bar:** A slower-moving bar tracking progress toward the next "Act."
* **Experience Bar:** A bar tracking progress toward the next Level.
* **Portrait:** A central area for a static character icon or simple CSS animation.

### C. Data Feed (Right Column - 25% Width)
* **Inventory (Top Half):** A scrolling list of items collected. Maximum capacity: 15 items.
* **Quest Log (Bottom Half):** A vertical scrolling log of events. It must automatically scroll to the bottom as new lines are appended.

---

## 3. Core Mechanics & Logic

### 3.1 Initialization
When the application starts:
1.  **Name Selection:** A name is chosen randomly from the `NAMES` list and remains permanent.
2.  **Character Build:** A `RACE` and `CLASS` are randomly assigned.
3.  **Starting Stats:** Each stat in the `STATS` list is assigned a random base value between 3 and 18.

### 3.2 The Game Loop
The application runs on a continuous timed loop:
1.  **Questing:** The "Task Bar" fills over a period of 3–8 seconds.
2.  **Completion:** Once the bar hits 100%:
    * A random **Monster** is "defeated."
    * A random **Item** (Adjective + Noun) is added to the Inventory.
    * A line is added to the **Quest Log** (e.g., "Executed a Low-Level Bugbear. Found: Rusty Sock of Mystery").
    * The **Experience Bar** increments.
3.  **Market Mode:** When the Inventory reaches 15 items:
    * The current task changes to "Heading to market to sell junk."
    * After a short delay, the Inventory is cleared and the character returns to questing.
4.  **Leveling Up:** When the Experience Bar reaches 100%:
    * The Character Level increments.
    * A random **Stat** increases by 1.
    * A new **Spell** is randomly selected and added to the spell list.
    * The Experience Bar resets.

---

## 4. Technical Requirements
* **State:** The application must maintain a state object containing the character's profile, stats, inventory list, and log history.
* **Styling:** A "Retro Win95" or "Classic MMO" aesthetic with high-contrast borders.
* **Performance:** The log should prune entries older than 100 lines to maintain performance.

---

## 5. Data Appendix

### Character Names
* Kevin from Accounting, Sir Tap-A-Lot, The Great Barnaby, User_772, Mistake #4, Sir Not-Appearing-In-This-Game, A Literal Bag of Flour, Lord Helvetica, Chadwick the Unready, Karen of the Suburbs, Glitchy McGlitchface, The Placeholder, Grommet the Slightly Agitated, Barb the Librarian, Sir Sells-Everything, Kyle the Monster Energy Enthusiast, Grandmaster Procrastinator, The Unpaid Intern, Sir Buffering..., Standard Hero 01.

### Races
* Sentient Toaster, Depressed Elf, Low-Carb Orc, Middle-Management Dwarf, Glitch in the Matrix, Half-Empty Human, Sentimental Slime, Vague Shadow, Procrastinating Pixie, Bureaucratic Beholder, Existential Ghost.

### Classes
* Spreadsheet Warrior, Chronic Procrastinator, Underpaid Mage, Professional Mourner, Existentialist Rogue, Lunch Knight, Intermittent Faster, Coffee Warlock, Passive-Aggressive Paladin, Technical Support Druid, Tax Accountant.

### Tasks
* Debating a fence post, Polishing a rusty nail, Contemplating the void, Waiting for a sign, Filing a 1040-EZ, Staring into the middle distance, Organizing a sock drawer, Explaining the internet to a rock, Searching for a lost remote, Counting ceiling tiles, Simulating a personality, Buffing out a scratch in reality.

### Locations
* The Forest of Mild Inconvenience, The Cave of Echoing Sighs, Downtown Boredom, The Desert of Dry Humor, Mount Mediocrity, The Swamps of 'I'll Do It Tomorrow', The Suburbs of Despair.

### Item Adjectives
* Dull, Polished, Forbidden, Rusty, Lamentable, Insignificant, Glowing, Slightly Damp, Overpriced, Mediocre, Legendary-ish.

### Item Nouns
* Scissors of Regret, Pebble of Mediocrity, Scone of Power, Lint of Destiny, Paperclip of Hope, Broken Twig, Expired Coupon, Sock of Mystery, Unfinished Novel, Jar of Pickled Thoughts.

### Monsters
* A Literal Metaphor, The Concept of Ennui, A Low-Level Bugbear, An Imaginary Friend, A Confused Salesman, A Dust Bunny of Doom, The Ghost of a Dead Pixel, A Sentient Terms of Service Agreement.

### Spells
* Aggressive Sighing, Metaphysical Poke, Summon Minor Annoyance, Greater Procrastination, Flash of Inadequacy, Power Word: 'Whatever', Cloud of Confusion, Internal Monologue.

### Stats
* Strength, Constitution, Dexterity, Intelligence, Wisdom, Charisma, Patience, Luck, Caffeine Level, Existential Dread.

Zenflow generated the application, and I also had it use a different agent to review its own code.

I ran the application, saw things I wanted changed, and then specified those changes:

Screenshot of one of my interactions with Zenflow while building Eternal Grind
One of my change requests in Zenflow. Click to view at full size.

Zenflow made the changes, then I had the review agent review those changes. This process of refinement continued for a couple more steps, and the result is the game located at accordionguy.github.io/eternal-grind/.

As I mentioned before, Eternal Grind is a work in progress. I’ll continue adding tweaks and improvements using Zenflow. Watch this space!

Find out more