Last night, spARK Labs in St. Pete hosted another edition of AI Salon: St. Pete/Tampa Bay, and it featured a fireside conversation with Brian Peret as host and James Gress [LinkedIn] as guest.
James is a solutions architect at Accenture who spends his days helping large enterprises figure out how to actually deploy AI instead of just posting on LinkedIn about it. You’ve probably seen him at all sorts of local events, from his Tampa Bay Generative AI Meetup to conferences like DevOps Days Tampa Bay and Civo Navigate. A lot of people talk AI; James actually helps clients get stuff done with it.
Brian uses a deliberately loose format with AI Salon fireside chats. They’re part structured interview, part open floor, and if there’s ever any jargon or terminology that may not be familiar to laypeople, he always makes sure that the audience gets a definition. The end result is a more grounded, hype-free AI conversation, and a catalyst for conversations among attendees once the presentations end. It’s one of the reasons I continue to attend AI Salon: St. Pete/Tampa Bay.
The five things we learned
1. Shadow AI is real, and your restrictive policy is probably creating it
James made what might be the evening’s most quotable observation: If you ban AI in your organization, you’re not stopping your employees from using it. You’re just driving their AI usage underground.
He called this shadow AI, the AI-era cousin of shadow IT. Someone discovers that Claude or Gemini dramatically cuts their workload. Their company hasn’t approved it. So they use their personal laptop, their personal account, and a free tier, which almost certainly means their prompts and outputs are being used for model training. Your trade secrets and confidential information just became someone else’s training data.
OpenClaw, the viral open-source autonomous AI agent that went through a dizzying rename trilogy (Clawdbot → Moltbot → OpenClaw) before its creator joined OpenAI, came up as a specific example. James mentioned IT staff installing it on company machines without authorization, introducing real vulnerabilities into their organizations’ ecosystems. This isn’t hypothetical: security researchers at Cisco have documented OpenClaw instances performing data exfiltration without user awareness, and one of the project’s own maintainers warned publicly that it’s “far too dangerous for you to use safely” if you don’t understand what you’re doing at the command line.
A blanket ban won’t work. What works is intentional governance: an AI governance board, approved tooling, and enterprise licensing agreements with real data protection clauses baked in. Stifling AI use, James argued, will only drive your people toward shadow AI.
2. NemoClaw raises the right questions even if you don’t have answers yet
One audience member asked James about NemoClaw, NVIDIA’s open-source stack that layers privacy and security controls on top of OpenClaw, and its implications for enterprise AI adoption. James was candid: he’s not in those specific loops at Accenture. But the question itself is the point.
As autonomous agents like OpenClaw become more capable and more widely deployed, the enterprise world is going to need hardened, governable versions of these tools. NemoClaw represents one approach to that problem. Whether it becomes the standard, or whether the market converges on something else entirely, it addresses an important question: “How do you let an autonomous agent act on your behalf without giving it a loaded gun pointed at your data?” Every organization is going to have to come up with an answer.
3. Data privacy looks different depending on your company size
For enterprises, the data privacy question is largely handled through legal agreements. Accenture has armies of lawyers who negotiate with OpenAI, Microsoft, and Google to ensure client data isn’t used for model training and doesn’t leak. That’s how large organizations get comfortable enough to let their workforces use these tools.
But most of us in the room aren’t Accenture- or OpenAI- or Microsoft-sized. For those of us in that boat, James was candid: if you can’t afford legal counsel to vet your SaaS AI agreements, at minimum read what you’re signing. On free tiers, you’re the product, and your data trains the model. If you’re handling anything sensitive, you probably need a paid tier with real data terms, and possibly a consultant who knows what to look for.
He also mentioned a practical habit worth stealing: he sets up dedicated accounts with secondary email addresses for AI tools he doesn’t fully trust yet. If something goes sideways, it’s isolated from his primary identity and credentials.
I myself have accounts like these that purportedly belong to a Volvo-driving Rails developer divorcée with a penchant for TV shows and novels in the vein of Heated Rivalry. Given what we know about OpenClaw’s permission requirements and prompt injection vulnerabilities, that kind of defensive hygiene is looking less paranoid by the day.
4. Measuring AI ROI starts with measuring anything
When Brian asked for concrete KPIs to evaluate AI effectiveness, James gave what I thought was the most honest answer of the night: most organizations don’t currently measure the processes they’re trying to improve, so they have no baseline to compare against.
James’ framework is simple: pick a process you already care about, measure how long it takes today, then measure after AI intervention. Full automation is rare. More often, you’ll see something like a four-hour task shrunk to two hours. That 50% reduction is real, trackable ROI. Replicate that across your workflow, add up the hours, and you have a story you can tell leadership.
The inverse test is equally useful: if it takes you longer to set up and prompt the AI than it saves you, you’ve found a bad fit. Move on.
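James’s framework boils down to simple arithmetic, which makes it easy to put in a spreadsheet or a script. Here’s a minimal sketch in Python; the function names and example numbers are mine for illustration, not figures from the talk:

```python
def time_reduction(baseline_hours: float, after_hours: float) -> float:
    """Fractional reduction in task time (0.5 means the task got 50% faster)."""
    return (baseline_hours - after_hours) / baseline_hours

def hours_saved_per_month(baseline_hours: float, after_hours: float,
                          runs_per_month: int) -> float:
    """Total hours recovered if the task recurs several times a month."""
    return (baseline_hours - after_hours) * runs_per_month

# Example from the talk's shape: a four-hour task shrinks to two hours.
reduction = time_reduction(4.0, 2.0)               # 0.5, i.e. 50% faster
saved = hours_saved_per_month(4.0, 2.0, 10)        # 20 hours/month back
print(f"{reduction:.0%} reduction, {saved:.0f} hours saved per month")
```

The inverse test falls out of the same numbers: if `after_hours` (including the time spent setting up and prompting the AI) exceeds `baseline_hours`, the reduction goes negative and you’ve found a bad fit.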
5. Python: last language standing?
This one generated the liveliest back-and-forth of the night. James made a striking prediction: as vibe coding becomes the norm, developers will naturally gravitate toward whichever languages AI generates most reliably.
Right now, that’s Python. Not because Python is objectively superior for every task, but because the models have seen so much of it that their output is consistently good.
(COBOL, for what it’s worth, is still a disaster. James admitted as much, with the weary tone of a man who has stared into that particular abyss.)
The implication is unsettling for language diversity. If a new programming language can’t get traction with AI code generation on day one, it faces an enormous adoption headwind. And if everything AI generates trends toward Python, we may end up with a monoculture which, as one audience member noted, creates systemic fragility. Everyone shares the same vulnerabilities.
I chimed in, suggesting that high-level programming languages might come to be seen as a “middleman” that can be removed, leaving a more direct route where our prompts get converted straight to assembly code. James remarked that most developers don’t do assembly and that it would remove the human from the loop. I suggested that for some parties, that might be the goal.
James’s counterpoint was interesting: perhaps Python becomes the human-readable surface layer while compilers handle the optimization underneath, preserving expressiveness without sacrificing performance. An elegant theory. We’ll see.
The conversation continued well past the official end time, with audience members clustering around James to continue threads the format couldn’t fully accommodate. That’s the sign of a good AI Salon.
The next one’s May 6th (and just a couple of days before Brian’s birthday). Don’t miss it!

