There are plenty of reasons to attend BSides Tampa, a cybersecurity conference that brings in 2,000+ attendees, including…
Great keynotes and presentations across seven tracks: keynotes, red team, blue team, cloud security, GRC and privacy, appsec, and AI and emerging
The exhibitor hall, where they don’t scan your badge, which means you won’t get spammed and your info won’t get sold
Interactive villages: malware, social engineering, IoT, network, lockpicking
A chance to meet the technology and cybersecurity professionals in the area, including these two…
But the most compelling reason I can think of to go is…
Let me repeat that:
80 percent of success is just showing up.
Let me illustrate with a story. Last May, techie-about-town Ammar Yusuf said he could hook me up with a free ticket to VueConf, which was taking place right here in Tampa.
I’d just come back from an expensive two-week trip, and I was still operating as an independent consultant. The spring and summer of 2025 were pretty slow; the well of clients was running dry.
I was strongly tempted to turn down the free ticket so I could devote more time and energy to finding my next job or client. Some might argue that it would be the smart thing to do.
But I decided to take the free ticket and go to VueConf instead, because I remembered all those times when showing up led to great things. Again, I remind you:
I ended up chatting with Pratik, who then offered both me and Anitra free tickets to the Dev/Nexus conference in Atlanta that would take place a couple of weeks later. It was short notice, and Atlanta’s a 7+ hour drive from Tampa. But we remembered the rule:
So we went, learned a lot, and had a great time:
And while we were at Dev/Nexus, I ran into Pratik, who was walking the exhibitor floor with Venkat Subramaniam, who knows me because I show up to his talks whenever he comes to town.
Here’s the “Bollywood Buddy Movie Poster” photo taken at the meetup where I met Venkat:
When I ran into Pratik and Venkat at Dev/Nexus, Pratik suggested to Venkat that I speak at the Arc of AI conference that would take place the following month. Venkat thought that would be a good idea, and asked me to submit a couple of talk proposals. So I did, even though I was knee-deep in contract work and a job search, because…
My submissions got accepted, and the result was my talk about writing documentation and example code for consumption by AI agents:
…and I met a lot of people:
And here’s the kicker: not only did I get to meet new people and attend (and speak) at conferences, but all this helped me land my current job at NetFoundry. The fact that I’d managed to land a speaker gig at Arc of AI was a key point in my job interviews. And I wouldn’t have the key point for that interview if…
I didn’t speak at Arc of AI, which wouldn’t have happened if
I didn’t apply to speak at Arc of AI, which wouldn’t have happened if
I didn’t go to Dev/Nexus, which wouldn’t have happened if
I didn’t go to Pratik’s talk at the Tampa Java User Group meetup, which wouldn’t have happened if
I didn’t go to VueConf with the free ticket Ammar gave me.
The lesson here is simple:
So if you don’t have prior commitments and you can afford to do so and you’re in a tech/tech-adjacent/cybersecurity/cybersecurity-adjacent field — and especially if you’re looking for work — consider going to BSides Tampa tomorrow, because you know what showing up can do for you!
Once again, ticket prices are:
$45 for general admission
$30 for students and military
…and you can save 20% by using Tampa Devs’ discount code, TampaDevs20_BSIDESTAMPA_2026.
If you’ve been building anything with agents in the past year, you already know the shape of the problem even if you haven’t named it: you’ve got a model in one cloud, a vector store in another, a tool server somewhere on-prem, an MCP gateway facing the public internet, and a handful of A2A flows stitching the whole thing together. It works. Better than that, it’s exciting!
Let me say this as someone who’s spent a few years in cybersecurity and the last couple of weeks elbow-deep in OpenZiti: the AI systems we’re implementing are built on a network model that was designed before any of this stuff existed, and that model hasn’t kept up with what we’re doing today.
The core argument that Philip makes in his presentation is one I think every developer working on agentic systems needs to internalize, regardless of what they’re shipping on top of:
The traditional internet model lets you connect first and authenticate second. Agentic AI breaks that model so badly that we can’t pretend anymore.
Let me walk through why.
The exploit window has collapsed, and AI is the reason
Philip opened with this knowledge bomb: the median time-to-exploit for newly disclosed vulnerabilities has dropped from days to hours.
AI has joined the Red Team. There’s AI-assisted reconnaissance, AI-assisted fuzzing, AI-assisted exploit synthesis, and more. Every part of the attacker’s pipeline is getting the same productivity boost the rest of us are getting from Copilot and Claude. The asymmetry is brutal. Defenders have to be right about every service they expose, while attackers only have to be right about one.
The LiteLLM supply-chain incident is a useful recent example. An exploit got injected upstream, and because the compromised library ran in environments where it could see them, attackers walked off with SSH keys, Kubernetes tokens, cloud credentials, and the rest of the usual environment-variable buffet. None of that would’ve happened if the service running LiteLLM wasn’t reachable from the place the attacker was sitting. Reachability was the precondition for everything else that went wrong.
In most “AI security” conversations, the talk is about the model: prompt injection, jailbreaks, output filtering, runtime guardrails, and so on. These issues matter, but there’s a much more “boring” question that’s worth asking…
Can the attacker even get a packet to your service in the first place?
If the answer is “yes”, all the model-layer controls in the world are working with their hands tied.
Reachability is the problem
Here’s the structural issue Philip kept circling back to, and it’s worth stating plainly because we’ve all just internalized it as how computers work:
The traditional networking model allows connectivity before authentication.
In your standard server application, you open a port. Clients, including ones that have no business knowing the server exists, send SYN. The server completes the handshake, and only then does it ask the client, “Who are you?”
By the time a malicious client is answering that question, the people behind it have already fingerprinted your TLS stack, learned your server software, probed for known CVEs, and maybe even identified an exploit they’d like to try.
This is fine for, say, a public web server that genuinely wants to be discovered by anyone. It is wildly inappropriate for an internal MCP gateway, an LLM endpoint scoped to a specific agent, or an A2A flow between two services that should have no business talking to anyone but each other.
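To make the “connect first, authenticate second” problem concrete, here’s a toy sketch in Python (a made-up server banner, no real protocol): an unauthenticated client completes the TCP handshake and learns a fingerprintable banner before the server ever gets around to asking for identity.

```python
import socket
import threading

# Toy demo of connect-first networking: the server completes the handshake
# and leaks a fingerprintable banner BEFORE asking who the client is.
def run_server(sock: socket.socket) -> None:
    conn, _ = sock.accept()                # handshake completes for ANY client
    conn.sendall(b"ExampleServer/1.0\n")   # pre-auth fingerprint material
    conn.sendall(b"Who are you?\n")        # authentication only starts here
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))              # ephemeral port on localhost
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_server, args=(server,), daemon=True).start()

# A client with no credentials at all still learns what's running.
client = socket.create_connection(("127.0.0.1", port))
banner = client.makefile().readline().strip()
print(banner)  # prints "ExampleServer/1.0" with zero authentication
client.close()
```

That pre-authentication window, multiplied across every exposed service, is exactly the reconnaissance surface described above.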
There’s a reason bouncers check for ID while you’re still outside the nightclub.
Philip’s metaphor for this is…Hogwarts. Because of course it is.
Imagine if any random Muggle could walk up to Platform 9¾, see the magical world clearly visible behind a flimsy enchantment, and start poking at the bricks to figure out which sequence opens the wall. The whole point of the wizarding world’s security model is that Muggles don’t even know it’s there. Reachability is the threat. Once something is known to exist, it’s only a matter of time before somebody works out how to get in.
Most of our infrastructure today is like Hogwarts with a “Muggles Keep Out” sign on the gate. Everyone can see it. Everyone can probe it. We’re hoping the lock holds.
The identity-first approach
The inversion Philip proposes is something that NetFoundry’s OpenZiti project actually implements. It’s straightforward to describe and, once you’ve seen it, surprisingly hard to unsee:
Strong cryptographic identity comes first. Every agent, every service, every endpoint gets a unique, attestable identity. Not a shared secret. Not a long-lived token someone copy-pasted into an environment variable. An actual cryptographic identity tied to the workload.
Authentication and authorization happen before any data plane exists. No TCP handshake. No UDP packet. No DNS resolution that even confirms the service is real. If you don’t have a valid identity for this specific service under the current policy, there is nothing on the network for you to interact with.
Reachability is granted, scoped, and revocable. A policy says that identity X can talk to service Y for purpose Z. Change the policy, change the reachability. No firewall ticket. No VLAN reshuffle. No RMF package update.
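Here’s a toy sketch (plain Python, not the OpenZiti API; the identity and service names are invented) of what “reachability granted by policy” looks like: a flow that isn’t in the policy table doesn’t get a refusal, it gets nothing at all.

```python
from typing import Optional

# Toy model of policy-scoped reachability. Not the OpenZiti API; names invented.
POLICIES = {
    # (identity, service): permitted purposes
    ("billing-agent", "ledger-api"): {"dial"},
}

SERVICES = {"ledger-api": "10.0.0.7:8443"}  # endpoints, never revealed pre-auth

def resolve(identity: str, service: str, purpose: str = "dial") -> Optional[str]:
    """Return an endpoint only for a policy-permitted flow. For everyone else,
    the service is indistinguishable from one that doesn't exist."""
    if purpose in POLICIES.get((identity, service), set()):
        return SERVICES[service]
    return None  # dark: no DNS answer, no open port, nothing to probe

print(resolve("billing-agent", "ledger-api"))   # prints the endpoint
print(resolve("port-scanner", "ledger-api"))    # prints None: nothing to see
```

Revoking access is just deleting the policy entry; no firewall ticket involved.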
Here’s the phrase that Philip used:
Connectivity should be an outcome of policy.
It shouldn’t be a prerequisite. That’s the difference:
In the traditional model, the network is a thing you build first, and then you layer controls on top of it.
In the identity-first model, the network only exists between identities that have been explicitly authorized to see each other. Everything else is dark.
For agentic systems specifically, this matters because the topology has enormous fan-out. An agent may need to call three LLMs, four tool servers, two vector stores, and a partner organization’s API in a single workflow. Each of those is a trust boundary:
In the traditional model, every one of those flows is a potential firewall rule, a potential exposed endpoint, a potential lateral-movement path if something upstream gets popped.
In the identity-first model, each flow is a policy, and only the policy-permitted flows have any network presence at all.
The developer-velocity argument
Sure, the security argument is the headline, but if you’ve ever worked anywhere with a serious change-management process, the velocity argument might land harder.
Philip mentioned someone he’d recently spoken with who was building a new service. The platform supported outbound 443. The service needed thirty different ports. Each port change was a firewall ticket. Each ticket was an RMF update. The math on that timeline is grim, and it’s grim in commercial environments too. Anyone who’s tried to get a new outbound rule through a Fortune 500 change board has stories.
In a network where reachability is governed by policy on top of identity rather than by plumbing at OSI layers 3 and 4, that whole category of friction collapses. You’re not asking the network team to change the network. You’re updating a policy that says “this identity can now reach this service.” The underlay (your VLANs, your security groups, your jump hosts) doesn’t have to know or care.
Oh, and in case you don’t remember your OSI layers, here they are, illustrated with cats:
(Layers 3 and 4 are the network and transport layers.)
The downstream effects compound:
Telemetry gets quieter. When the only traffic that exists on a path is authenticated, authorized traffic, your SOC stops drowning in scan noise from the open internet. The signal-to-noise ratio on alerts goes way up.
Credentials simplify. No more shared service tokens that everybody on the team has a copy of. Identity is per-workload, scoped, and revocable.
The underlay becomes boring (and in security, boring is good). You can run the same workload across satcom, LTE, hotel Wi-Fi, and a hyperscaler VPC, and the security posture doesn’t change. The overlay handles it.
That last point matters more than it sounds for AI work specifically. Agents don’t sit in one tidy network segment. They reach across clouds, across organizations, across SaaS boundaries. Trying to enforce zero trust by keeping all that traffic inside a controlled underlay is a losing battle. Enforcing it at the identity layer means the underlay can be anything.
Where’s this going for agents?
In his talk, Philip mentioned Cloud Security Alliance work, building a reference architecture for agentic systems on top of identity-first connectivity. It’s taking on this shape:
Foundation: cryptographic identity and attestation. Every agent proves what it is before any path exists.
Reachability: policy-driven, identity-scoped, no ambient network presence.
Authorization: agents see only the tools, models, and data their policy permits. No tool discovery for things they’re not allowed to touch.
Governance: human-in-the-loop for high-risk actions, audit trails tied to the cryptographic identity that took the action.
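To show how the layers depend on each other, here’s my own toy pipeline (not the CSA reference architecture itself; every function and field name below is invented): a request has to clear each layer in order, so the upper layers only ever see pre-vetted traffic.

```python
# Toy layered-gate pipeline. Illustration only; names and checks are invented.
def attested(agent):          # Foundation: cryptographic identity
    return agent.get("identity_verified", False)

def reachable(agent, svc):    # Reachability: policy-scoped, no ambient presence
    return svc in agent.get("policy", ())

def authorized(agent, action):  # Authorization: only permitted operations
    return action in agent.get("allowed", ())

def needs_human(action):      # Governance: human-in-the-loop for risky actions
    return action in {"delete-prod-db"}

def handle(agent, svc, action):
    """Drop the request at the first layer that fails."""
    if not attested(agent):
        return "dropped: no identity"
    if not reachable(agent, svc):
        return "dropped: service is dark"
    if not authorized(agent, action):
        return "denied: not permitted"
    if needs_human(action):
        return "queued: awaiting human approval"
    return f"executed: {action} on {svc}"

agent = {"identity_verified": True, "policy": ("vector-store",), "allowed": ("query",)}
print(handle(agent, "vector-store", "query"))  # clears all four layers
print(handle({}, "vector-store", "query"))     # dies at the Foundation layer
```

Notice that the prompt-injection-style defenses would live above all of this; an attacker with no identity never even reaches them.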
The thing I like about this stack is that the Foundation layer fixes the boring-but-fatal problem (reachability), which lets the upper layers actually do their jobs without being constantly undermined from below. You can have the world’s best prompt-injection defenses, and they don’t help you if your tool server got pwned because somebody port-scanned it from the open internet.
What you should take away if you’re a developer
It’s the middle of my third week at NetFoundry, and I’m still in the “drinking from the firehose” phase, where I’m internalizing these things:
If your threat model says “we’ll catch them at the application layer,” update your threat model. The exploit window is too short for that to be the only defense.
“Is this service reachable from where the attacker is sitting?” is the first question, not the last. If the answer can be “no,” make it “no.”
Identity-first is not a product category you buy. It’s a property your architecture either has or doesn’t. You can get there with OpenZiti, with various commercial overlays, with SPIFFE/SPIRE for the identity piece, with combinations. The label matters less than the property. (But hey, I’d love it if you went with OpenZiti, and double if you tell NetFoundry you heard about it from me!)
The biggest unlock isn’t security, it’s that you stop spending your week filing firewall tickets.
Philip closed with a line that I think is the right one to leave on, paraphrased: any sufficiently advanced security model looks like magic. In this context, magic means the thing you’re trying to attack isn’t there. That’s the bar. Not “well-defended.” Not “hardened.” Not visible at all unless you’ve already proven who you are.
For agentic AI, where the speed of attack and the fan-out of the topology are both moving in directions that make traditional networking less viable every month, that bar is starting to look less like a nice-to-have and more like the only model that actually scales.
If you want to dig in: the OpenZiti project is open source and a reasonable place to get hands-on with what identity-first overlay networking actually looks like in practice.
Last week was my first week at NetFoundry, where I’m the Senior Developer Advocate. It was fun, and it was also like drinking from a high-tech, encrypted firehose!
To mark the occasion, I sat down with NetFoundry’s Head of Developer Experience (and also developer; he does a lot!) Clint Dovholuk for my first episode of Ziti TV. We spent an hour diving into the “meat” of Zero Trust, networking architecture, and why your traditional VPN might be the “castle and moat” that finally (and unintentionally) lets the invaders in.
If you’re a developer who has always viewed networking infrastructure as someone else’s problem (and as a recovering mobile developer, I’m certainly guilty on that charge), here’s the deep-dive breakdown of what I learned in my first week on the job.
Clint said that Zero Trust might be better understood if you called it Explicit Trust. In the old “Castle and Moat” model, if you’re in the castle, you’re trusted. In the OpenZiti model, we assume the network is already compromised. You have zero privileges until they are explicitly granted based on:
Authentication: “Who are you?”
Authorization: “What are you allowed to do?”
A lot of resources will authenticate and authorize you through some kind of sign-in process. Clint describes OpenZiti as moving the process out by one layer into the network so you can’t even connect to an OpenZiti-protected resource without being authenticated and authorized first.
Or, to quote Clint:
With OpenZiti and Zero Trust, if you have a service that’s protected by OpenZiti, you first need to authenticate to the OpenZiti overlay network, and then you need to have an authorization that permits the operation you’re trying to perform.
OpenZiti also uses a Zero Privilege approach. Once again, to quote Clint:
The whole idea is that you have no privileges until you are granted privileges, and only then are you able to take whatever operation you want.
“Jay double-you tee” vs. “Jawt”
Apparently we’re on different sides of this debate: Clint pronounces JWT as “jay double-you tee,” while I prefer to call it “jawt.”
OpenZiti and NetFoundry: How are they related?
OpenZiti is the network overlay project, and NetFoundry is the company behind OpenZiti.
The “Open” in OpenZiti comes from the fact that it’s an open source project. This is in keeping with the philosophy that a cybersecurity product should be open source because making source code publicly visible enables a community of developers, analysts, and other experts to audit, test, and improve it.
If you have the time, tech skills, and inclination, you can use OpenZiti and run your own overlay network at zero cost — if you don’t count the cost of said time and tech skills. It’s all up for grabs here.
However, if you’d rather spend your time and technical expertise on your main line of business, especially once your needs scale up, NetFoundry is here to provide you with a managed OpenZiti platform.
It’s easy to run one controller and two routers on your laptop. But when you’re an enterprise managing a fleet of routers, handling upgrades, and monitoring metrics, you’re suddenly in the “overlay business” instead of your actual business. NetFoundry is the “Easy Button” that manages OpenZiti for you [19:10].
The quickstart
Clint then gave a quick demonstration of the OpenZiti quickstart, which creates a fully functional OpenZiti network overlay on your system in a matter of seconds. This overlay has both a router and a controller, and each has a specific job.
Controller
The OpenZiti controller [24:36] serves as the brain of the overlay network. It’s the authority responsible for managing the state of the environment and ensuring that all connections are secure and verified before traffic ever flows.
Its responsibilities can be broken down into several key functions:
1. API surface and management
The controller surfaces several critical APIs that different components of the network interact with. These include:
Edge Client API: Used by SDKs and tunnelers to authenticate and discover services.
Management API: The interface used by administrators (often via the Ziti CLI) to configure the network, such as creating new identities or defining service policies.
Fabric and OIDC APIs: Used for internal mesh communication and identity provider integration.
2. The authority on explicit trust
The controller is the primary decision-maker for the two pillars of Zero Trust security:
Authentication: It verifies the identity of any user, device, or “workload” attempting to connect (answering “Who are you?”).
Authorization: It checks configured policies to determine exactly what that identity is allowed to access (answering “What are you allowed to do?”).
Unlike a traditional network where a firewall might be open by default, the controller ensures the network is dark by default. No connection is permitted until the controller has explicitly authorized it.
3. Bootstrapping trust, a.k.a. enrollment
The controller is the starting point for bringing new devices into the fold through a process called “Bootstrapping Trust”.
It issues One-Time Tokens (OTTs) (essentially signed JSON Web Tokens) that are delivered to users.
When a client initiates enrollment, the controller validates the token and facilitates a Certificate Signing Request (CSR) exchange.
The end result is a strong, cryptographically verifiable identity that the client uses for all future secure communications.
4. Orchestrating the mesh
While the controller does not actually handle the data traffic (that is the job of the routers), it provides the “map.” It coordinates with the edge routers to broker data channels, ensuring that when a client “dials” a service, the routers know how to steer that traffic to the correct destination.
Router
The OpenZiti router [26:09] is the workhorse of the network. While the controller acts as the brain and makes policy decisions, routers constitute the data plane: the actual infrastructure that moves bits from point A to point B.
According to Clint, the router’s job can be broken down into these core functions:
1. Forming the mesh overlay
The routers are responsible for creating the “mesh overlay network”. Unlike a traditional hub-and-spoke networking model, these routers connect to one another to form an interconnected fabric. Even if you start with just one router, you can deploy many others to extend this mesh.
2. Brokering data channels
The primary job of a router is to broker data channels. When an application wants to send data, the router facilitates the creation of a secure path. It effectively “steers” the traffic through the mesh to ensure it reaches the intended destination router and, ultimately, the target service.
3. Serving as the entry point for clients
Everything in OpenZiti is technically an SDK client, whether it’s a standalone app or a “tunneler.” These clients connect directly to the routers to form the necessary channels for communication. The router acts as the listener that accepts these connections once the controller has given the “okay.”
4. Shuttling the actual data
The router is where the heavy lifting happens. It is the component that actually sends your data from one side to the other. While the controller handles the logic of authentication and authorization, it never touches the application data itself. That task is handled entirely by the routers.
5. Enforcing the “dark network”
By acting as the only point of entry into the mesh, routers help enforce the “dark by default” philosophy. Unless a client has been explicitly authorized by the controller, a router will not broker a channel for it, effectively keeping the protected services invisible to the public internet and, by extension, to unauthorized and malicious parties.
The coolest part for a developer? You can spin this all up on your local machine in about seven seconds with a simple ziti edge quickstart [23:00].
Why not just use a VPN?
One of my questions was the one every developer asks: “Why can’t I just use a VPN?”
Clint insists that an OpenZiti overlay actually is a VPN [34:05] in the broadest sense: it’s a virtual network that’s closed off to unauthorized parties. It just functions very differently from the “one big mush” of traditional VPNs, which are open by default; once you’re in, you can see everything.
On the other hand, OpenZiti is dark by default [35:45]. If you have a server on the open internet, it usually has an open port (such as port 22 for SSH or 443 for HTTPS). With Ziti, you close those ports entirely. The service goes “dark”: the ports are invisible, and you can’t attack what you can’t even find.
The “magic dance” of bootstrapping trust
I’ll admit, when I first tried to set up a client and server, I got a little lost in the “magic dance” of certificates. Clint called this process bootstrapping trust [38:47].
It starts with a One-Time Token (OTT), which is a signed JWT, and the process goes like this:
The admin creates an identity on the controller [41:09].
The client uses the token to find the Controller’s URL [43:11].
The handshake takes place, where the client verifies the controller’s certificate, and they exchange a CSR (Certificate Signing Request) [44:43].
Strong identity: The result is a JSON file containing a key that must be protected like a secret.
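To demystify step 2 a little, here’s a sketch of how a client can pull the controller’s URL out of an enrollment token. A JWT is just three base64url-encoded segments, so reading a claim needs nothing but the standard library. (The claim names and the demo token below are my own illustration, not Ziti’s exact token format, and a real client must also verify the signature before trusting anything in the token.)

```python
import base64
import json

# Hedged sketch: the "iss" (controller URL) and "sub" (identity) claims below
# are standard JWT claims used for illustration, not a spec of Ziti's format.

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, the way JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# A demo one-time token: header.payload.signature (signature faked here).
payload = {"iss": "https://ziti-controller.example.com:1280", "sub": "new-device-42"}
token = f'{b64url(b"{}")}.{b64url(json.dumps(payload).encode())}.sig'

def controller_url(ott: str) -> str:
    """Step 2 of enrollment: read the controller's URL out of the token."""
    _, body, _ = ott.split(".")
    body += "=" * (-len(body) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(body))["iss"]

print(controller_url(token))  # prints the demo controller URL
```

The rest of the dance (certificate verification and the CSR exchange) is what upgrades this one-time token into the long-lived cryptographic identity in step 4.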
AI Agents and the MCP Gateway
We also took a detour into Agentic AI. Clint has been using MCP (Model Context Protocol) Gateways to let Claude interact with the Ziti CLI.
The breakthrough here is efficiency and security. By using an MCP Gateway, you don’t have to give your raw credentials to the AI [57:02]. Plus, by using a targeted MCP server, you can strip a massive 100k data object down to a 10k summary, saving a fortune in tokens [59:12].
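The token-saving trick is conceptually simple. Here’s a hypothetical sketch (the record and its field names are invented, and this isn’t the MCP SDK): a targeted tool returns only the handful of fields the model actually needs instead of the raw payload.

```python
# Hypothetical sketch of the "big object in, small summary out" pattern.
# Field names are invented; this is not the MCP SDK.
RAW_SERVICE_RECORD = {
    "name": "ledger-api",
    "status": "healthy",
    "endpoint": "10.0.0.7:8443",
    # ...plus hundreds of config/metric fields the LLM never needs to see:
    **{f"metric_{i}": i for i in range(500)},
}

SUMMARY_FIELDS = ("name", "status", "endpoint")

def summarize(record: dict) -> dict:
    """Keep only the fields worth spending tokens on."""
    return {key: record[key] for key in SUMMARY_FIELDS}

summary = summarize(RAW_SERVICE_RECORD)
print(summary)                       # just the three useful fields
print(f"{len(RAW_SERVICE_RECORD)} fields -> {len(summary)} fields")
```

The same gateway that shrinks the payload is also the place where credentials stay, so the model never sees them.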
Real-world use: From blue bubbles to drones
I asked Clint who is actually using this in the wild. The “Adopters” list is growing, including projects like Blue Bubbles (the tool that brings iMessage features to Android) [50:33].
But the stakes get higher. We discussed Zero Trust Drones and secure communications on the battlefield [52:12]. When you’re in a high-stakes environment like Ukraine, having secure, “dark” comms is a necessity, not a luxury.
More coming soon!
This was the first of many Ziti TV livestreams featuring Clint and Yours Truly. The next one’s scheduled for Friday, April 30th at 11:00 a.m. U.S. Eastern / 8:00 a.m. U.S. Pacific / 1500 UTC, and you can view past livestreams in the Live section of the OpenZiti YouTube channel.
Today is my first day as Senior Developer Advocate at NetFoundry, the company behind OpenZiti.
I am thrilled, slightly jet-lagged from the onboarding reading, and (because some things never change) my accordion is within arm’s reach of the desk. If you are going to explain zero trust networking to developers, you might as well have an accordion-powered rock and roll backup plan.
This is the post where I tell you what the job is, what the product is, why the name makes me smile, and why I think this is going to be a good couple of years.
The short version
I am joining the team that invented and maintains OpenZiti, an open source zero trust networking platform. My job, alongside my colleague Clint, is to be the developer-facing voice of the project: write code, build demos, ship tutorials, show up in the communities where the conversations are actually happening, and make sure what we hear from developers gets back to the product and engineering teams in a form they can act on.
The timing is interesting. NetFoundry recently announced NetFoundry for AI, an AI-focused use of the platform aimed squarely at the problem every AI team is quietly panicking about right now: how do you let AI agents, MCP servers, and LLMs talk to each other and to the rest of your infrastructure without turning your network into Swiss cheese?
More on that in a minute. First, the name.
What is OpenZiti, and why is it called that?
The “ziti” in OpenZiti comes from “ZT”, as in “zero trust”. Say “Z-T” out loud a few times, let the letters slur a little, and you end up somewhere in the neighborhood of “ziti.” Then somebody noticed that ziti is also a tubular pasta, and because developers are developers, that became the visual identity. The OpenZiti logo is, essentially, a piece of pasta. I respect this deeply. My last employer’s mascot was a twerking login box. My current employer’s mascot is a delightfully cheesy, tasty dinner.
This also explains this cryptic comic I posted on my socials earlier, as a hint about the new job:
By the way, the rightmost pasta in the comic is a slouching ziti. Also, in case you need a quick explainer, here’s a helpful infographic:
Infographic from Sip Bite Go. Click to see the source.
The “Open” part is the substantive half of the name: OpenZiti is genuinely open source, Apache 2.0 licensed, and the whole thing lives in public on GitHub. You can pull it down right now, stand up a controller and some routers on your own hardware, and have a zero trust overlay network running on your laptop by lunchtime. (I know this because that is literally what I am doing this week as part of my onboarding. More on that later too.)
So what does it actually do?
Here is the mental model I am starting with, and I reserve the right to refine it as I get deeper in:
Today’s network model is “castle and moat.” You put a firewall around your stuff, you open ports for the services that need to be reachable, and you hope the bad guys don’t find a way through the gate. When they do (and they always do) they are inside the castle with the crown jewels.
Zero trust flips this. Instead of trusting the network, you trust identity. Every connection is authenticated, every connection is authorized, every connection is encrypted, and nothing is reachable just because of where it is on the network.
OpenZiti is the overlay that makes this practical. It gives every app, service, device, or agent a cryptographic identity, routes their traffic through a mesh of routers that only accept authenticated connections, and requires no open inbound firewall ports. This is the part that makes network engineers do a double-take. Nothing listens on the public internet. Attackers can’t port-scan what isn’t there.
If you have ever been the person who had to file a firewall change ticket to let service A talk to service B, and then waited three weeks and filled out a compliance form, you already understand the appeal.
The AI angle, which is where I am spending a lot of my first year
Here is the thing about AI agents and MCP servers: they are, architecturally, the worst possible citizens of a perimeter-based network.
They need to talk to a lot of things. They hold API keys. They get spun up and torn down on timelines that do not match anybody’s firewall change window. They are, by design, non-human identities with significant privileges, and most of the infrastructure around them was designed for humans with laptops.
NetFoundry for AI is the pitch for applying OpenZiti’s identity-first model to this mess:
A zero trust enclave for your users, agents, MCP servers, and LLMs, so none of them are reachable over the open network
Strong identities for the non-human participants (agents and MCP servers have been running around with service accounts and bearer tokens for too long)
API keys and service credentials held separately from the agents themselves, so a compromised agent isn’t also a compromised credential vault
Token tracking, cost accounting, and LLM routing across multiple providers, because once you have the identity layer you might as well use it to see what is happening
If you have been reading Global Nerdy for a while, you know the pattern. I spent three and a half years at Auth0 explaining OAuth 2.0, OIDC, and identity to mobile developers who would rather do literally anything else. The work was: take something that sounds like a standards committee threw up on a whiteboard, anchor it to a problem the developer actually has, and give them working code that does not require them to read 400 pages of RFC.
Zero trust networking is the same shape of problem. The concepts are genuinely hard. The vocabulary is dense. Most developers have never had to think about overlay networks before. But the underlying motivation, “I don’t want my AI agent’s API key to become somebody’s weekend project,” is something every builder can feel in their bones.
And some of you might remember my monthly Tampa Bay AI Meetup, which is now sitting around 2,200 members. The through-line of that community has been the same thing I am now getting paid to do full-time: take genuinely complicated infrastructure and make it feel approachable. Zero trust for AI agents is squarely in that Venn diagram.
What happens next
For the next little while, the plan is mostly “shut up and build.” I am standing up OpenZiti from scratch on my own hardware, embedding the SDK in a demo app, running MCP Gateway with Claude Desktop and a couple of backends, running LLM Gateway with a local model and a commercial one, and lurking in every community where OpenZiti and MCP get talked about. No hot takes until I have earned them.
After that, the usual Joey stuff: blog posts, short demo videos, office hours, and actual conversations in the places where developers hang out: r/openziti, r/mcp, the OpenZiti Discourse, and wherever else the work takes me.
If you build on OpenZiti, or you have been curious about it, or you just want to commiserate about explaining infrastructure to developers, my DMs are open. I am @AccordionGuy on GitHub, Joey de Villa on LinkedIn, and the accordion is here if anyone wants a rock cover of something topical as a celebratory interlude.
It’s the story of how a scammer posing as an executive recruiter tried to con me out of hundreds (and possibly thousands) of dollars using AI-generated emails, a fake job description, and a fabricated “internal document” from OpenAI.
He had me… for thirty seconds, and then I thought about it.
The short version
A “recruiter” emailed me out of the blue about a developer relations role. This isn’t out of the ordinary; it’s happened a couple of times in the past couple of months alone.
However, this role stood out: it was a Director of Developer Relations role at OpenAI. Remote-first, $230K–$280K base, Python-primary, and AI-focused. It was basically my dream job on paper.
Over the course of several emails, he asked for my resume and salary expectations while giving me nothing concrete in return: no company name, no hiring manager, no specifics.
I finally got suspicious and asked three simple verification questions:
Who’s your contact at OpenAI?
Is this a retained or contingency search?
What’s your formal relationship with the hiring organization?
He went silent for over a day, then came back with a wall of text that answered none of them.
Then came the real play: he told me that OpenAI required three “professional documents” before I could interview, and that they had to be ready in the next 48 hours:
An “Executive Impact Matrix,”
A “Technical Leadership Competency Assessment,” and
A “Cross-Functional Influence & Initiative Report”
The descriptions of these documents made it look as if they were complex and would take hours to prepare. The recruiter “helpfully” offered to connect me with a “specialist” who could prepare them for a fee.
None of these documents are real. No company asks for them. It’s a document preparation fee scam, and the whole weeks-long email exchange was just the runway to get me to that moment.
But the best part? When I didn’t bite, he followed up with a fake “OpenAI Candidate Review” document showing my name alongside other “candidates” with star ratings. This would be a massive HR violation if it were real:
But it wasn’t real! He generated it with ChatGPT. And he left behind evidence — the dumbass forgot to crop out the watermark.
How the AI gave him away
One of the most interesting things about this scam is how AI was both the scammer’s greatest tool and his undoing.
Every email he sent me was written in polished, flawless corporate English.
But in the one paragraph where he steered me toward paying the “specialist,” the grammar suddenly fell apart:
“a professional I have known for years that specialise in this kind of documents with many great and positive result.”
The AI wrote the con. But the human wrote the close. And the seam between the two is where the truth leaked out.
This is a pattern worth watching for. As AI-powered scams become more common, the tell is going to be a shift in quality at the moment where the scammer needs to speak in their own words. You’ll see well-written text abruptly followed by a different writing style marked by poor, non-idiomatic grammar (because the scammer is communicating with you in a language they don’t know well). Keep an eye out for that sudden transition.
The 3 questions real recruiters can answer
If you’re job searching right now and a recruiter reaches out, ask them these three questions:
Who is your contact at the hiring company?
Is this a retained or contingency search?
What is your formal relationship with the hiring organization?
A real recruiter answers these in seconds. A fake one dodges, deflects, or disappears.
8 fake recruiter red flags
Based on my experience, here are eight things to watch out for:
The job seems tailor-made for you. LLMs make it trivially easy to generate a convincing “JD” (job description) from someone’s LinkedIn profile. If it checks every single box, ask why.
The information only flows one direction. They ask for your resume, salary, and preferences. They give you nothing concrete: no company name, no hiring manager, no search terms.
The email footer doesn’t add up. Gmail addresses or mismatched domains, vague or incomplete street addresses, and an “alphabet soup” of certifications are all warning signs.
They dodge verification questions. Real recruiters are proud of their client relationships. Fake ones ghost you when you ask for specifics.
They ask you to pay for documents or preparation. No legitimate employer requires this. Ever. This is always the scam.
Watch for the grammar shift. Polished emails that suddenly drop in quality when money enters the conversation? That’s AI-generated content with a human-written sales pitch sloppily stitched in.
Check the metadata. If they send you an “official” document, look at every corner, every file property, every detail. Scammers are playing a numbers game, and as a result, they’re often rushed and sloppy. Sometimes they literally leave the watermark.
The emotional setup is part of the scam. Flattery, validation, and the sense that someone finally sees your worth is intoxicating, especially when you’ve been job hunting for months. That’s by design. The best time to be skeptical is when you most want to believe.
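Red flag number 7, checking the metadata, is easier than it sounds: a .docx file is just a zip archive, and the author fields live in docProps/core.xml inside it. Here’s a quick sketch using only Python’s standard library; the sample file it builds, and the “ChatGPT” creator value, are made up for illustration:

```python
import io
import re
import zipfile


def docx_metadata(docx_file):
    """Pull author/editor fields from a .docx file's docProps/core.xml.
    (A .docx is just a zip archive; this uses only the standard library.)"""
    with zipfile.ZipFile(docx_file) as z:
        xml = z.read("docProps/core.xml").decode("utf-8")
    fields = {}
    for tag in ("dc:creator", "cp:lastModifiedBy"):
        m = re.search(rf"<{tag}[^>]*>(.*?)</{tag}>", xml)
        if m:
            fields[tag] = m.group(1)
    return fields


# Build a minimal .docx-shaped file in memory so the sketch runs
# without a real document. The metadata values are fabricated.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "docProps/core.xml",
        '<cp:coreProperties xmlns:cp="http://schemas.openxmlformats.org/'
        'package/2006/metadata/core-properties" '
        'xmlns:dc="http://purl.org/dc/elements/1.1/">'
        "<dc:creator>ChatGPT</dc:creator>"
        "<cp:lastModifiedBy>Totally Real Recruiter</cp:lastModifiedBy>"
        "</cp:coreProperties>",
    )

meta = docx_metadata(buf)
print(meta)  # creator and last-modified-by fields, straight from the file
```

A scammer who generates a document with an AI tool and forgets to scrub it can leave exactly this kind of fingerprint behind, no forensics expertise required.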
Losses from recruitment fraud exceeded $500 million in 2024, according to the FTC.
6 in 10 job seekers encountered a fake recruiter in 2025, and 1 in 4 fell for a scam, according to a PasswordManager.com survey.
AI tools are making these scams more polished, more personalized, and harder to detect. The “spray and pray” emails with obvious typos are being replaced by tailored, multi-email campaigns that build trust over weeks before making their move.
If you’re job searching (or know someone who is), please share this post and the video. The more people know what to look for, the less effective these scams become.
Watch the video
Once again, here’s the video, where I walk through the entire scam step by step, from the first email to the ChatGPT watermark:
On Tuesday, two popular tech events take place in Tampa, and you may be wondering which one you should attend. I’ll answer your question by quoting the little girl from that classic Old El Paso commercial:
This past Tuesday (July 15, 2025), I appeared on a news segment on Tampa’s WFLA Channel 8 evening news, where I was brought in to comment on ways to avoid falling for AI-powered phone scams. The video of that news segment is pictured above.
While the segment talked about using AI to mimic people’s voices and faces and have them say whatever you want, there wasn’t time to demonstrate this capability — so I’m doing it here.
Here’s a video I recorded back in October 2023 to promote a Python course that I was teaching:
I then fed that video to HeyGen, the AI avatar service, and used it to translate my video into Spanish. Here’s the result:
I don’t speak Spanish anywhere near as fluently and smoothly as my HeyGen-generated version does. Note that HeyGen even went so far as to sync my lips with the Spanish words!
The Spanish voice is also a decent approximation of mine — close enough that it might fool even people who know me well, given a stressful situation full of emotion and other distractions, which is the sort of scenario that con artists try to create in a phone scam.
You should also note that the Spanish video was made with the version of HeyGen from October 2023. I’m sure it’s undergone significant improvements since then.