Happy Saturday, everyone! Here on Global Nerdy, Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics, and memes I found over the past week. Share and enjoy!


































































































































Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, March 2 through Sunday, March 8!
This list includes both in-person and online events. Note that each item in the list includes:
✅ When the event will take place
✅ What the event is
✅ Where the event will take place
✅ Who is holding the event

| Event name and location | Group | Time |
|---|---|---|
| Computer Repair Clinic, 2079 Range Rd | Tampa Bay Technology Center | 8:30 AM to 12:30 PM EST |
| Designer Cowork @ Shortwave Coffee (Channelside), Shortwave Coffee | Tampa Bay Designers (Formerly Tampa Bay UX) | 10:00 AM to 1:00 PM EST |
| New in Tech Meetup – Canopy, St Pete, The Canopy | Tampa Bay New-In-Tech | 5:30 PM to 7:30 PM EST |
| Friday Board Game Night, Bridge Club | Tampa Gaming Guild | 5:30 PM to 11:00 PM EST |
| MTG: Commander FNM, Critical Hit Games | Critical Hit Games | 6:00 PM to 11:00 PM EST |
| Taps & Drafts \| EDH/MtG Night, 1Up Entertainment, Tampa | Nerdbrew Events | 7:00 PM to 9:00 PM EST |
| Modern FNM, Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh! | Sunshine Games | 7:00 PM to 10:30 PM EST |
| DIFFERENT LOCATION! “On Anger” – Seneca, Books 1 & 2, USF Tampa College of Education | Tampa Stoics | 7:00 PM to 9:00 PM EST |
| Friday Pokemon Tournament, Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh! | Sunshine Games | 7:30 PM to 11:30 PM EST |

How do I put this list together?
It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrapes Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.
In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
What goes into this list?
I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:
Last Thursday, February 19th, Tampa Java User Group welcomed Pratik Patel, Java Champion and Director of Developer Relations at Azul Systems, to give his AI Native Architecture talk at Kforce headquarters. Tampa Bay AI Meetup was happy to partner with Tampa JUG, and we thank Ammar Yusuf for the invite!

We had a pretty full room…
…followed by an accordion number…
…followed by Pratik’s presentation.
Here are my notes from the presentation:
Here’s a fun “icebreaker” game to try at your next tech gathering: ask the room to name the three fundamental types of AI, and watch what happens.
When this was tried on the crowd at last Thursday’s Tampa Java User Group / Tampa Bay AI Meetup, a lot of people called out “generative AI,” which was hardly a surprise.
We came close to naming the second kind, but never said it outright: predictive analysis. It’s the kind of AI that’s been quietly running inside every credit card transaction you’ve made for the past decade. It saved me a lot of headache last year when someone used my credit card number to buy enough gas to fill an F-250 in rural Georgia while I was having a poke bowl in St. Pete. A neural network detected the mismatch between the gas-guzzler purchase and my usual spending and location patterns, which led to a text from the credit card company, and my immediate “That wasn’t me” response.
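To make the idea concrete, here’s a toy version of that kind of anomaly flagging (my own illustration, not anything resembling a bank’s actual model): score a new transaction by how many standard deviations it sits from the customer’s usual spending.

```python
# A toy fraud flag: compute a z-score for a new transaction amount against
# the customer's purchase history, and flag it if it's a far outlier.
import statistics

def is_anomalous(history, amount, threshold=3.0):
    """Return True if `amount` is more than `threshold` standard deviations
    away from the mean of the customer's past purchase amounts."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) / stdev > threshold

# Hypothetical history: poke bowls and coffee, all small amounts...
usual = [12.0, 15.5, 9.75, 14.0, 11.25, 13.5, 10.0, 12.75]
# ...then an F-250-sized fill-up shows up:
print(is_anomalous(usual, 180.0))  # True
```

Real systems use far richer features (location, merchant category, time of day) and learned models rather than a single z-score, but the “does this fit the pattern?” shape is the same.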
None of us got the third one: time-series AI. It’s the branch that looks at data across time to spot trends and make forecasts. Not “Will Joey buy 50 gallons of gas in rural Georgia?” but “What has Joey been buying every Friday evening for the past two years, and what does that predict about next Friday?”
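The simplest possible version of that “what does the pattern predict about next Friday?” question is a trailing moving average, sketched below (a toy forecaster of my own, with made-up numbers, not anything from the talk):

```python
# A minimal time-series forecast: predict the next value in a series as the
# mean of the most recent `window` observations.

def moving_average_forecast(history, window=4):
    """Forecast the next value from the trailing moving average."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    recent = history[-window:]
    return sum(recent) / window

# Hypothetical Friday-evening spending over the past eight weeks:
friday_spend = [42.0, 38.5, 45.0, 40.0, 44.5, 39.0, 41.0, 43.5]
print(moving_average_forecast(friday_spend))  # 42.0 -- mean of the last 4 Fridays
```

Production time-series work layers on seasonality, trend decomposition, and learned models, but every one of them is answering the same question this two-liner does.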
Pratik kicked off his talk on AI-native architecture with this. By the time he was done, we’d gotten a serious rethink of not just what kinds of AI exist, but what it actually means to build an application with AI at its core, as opposed to just bolting AI onto the side and hoping for a stock price bump.
One of the central arguments Pratik made is that data is what separates a defensible business from one that can be replicated by a developer with a generous cloud credit and a free afternoon.
He used Penske Truck Leasing as his example. Anyone can, theoretically, buy a bunch of trucks and stand up a website. What you can’t easily replicate is a decade of auction data, bidding history, customer behavioral patterns, and operational intelligence. That data is what lets Penske do something like: identify a customer who bid on a truck but didn’t win the auction, then automatically reach out to offer them a similar vehicle. The data made it obvious, and a system acted on it.
This is why the old saying “data is the new oil” is actually more apt than it sounds. Raw oil isn’t useful until it’s refined. Raw data sitting in an S3 bucket isn’t useful either until it’s refined too, by cleaning it, structuring it, and using it to power an application that your competitors simply don’t have the history to replicate. This kind of defensible advantage is what’s referred to as a moat.
In this new world, where anyone can vibe-code a decent SaaS clone in an afternoon using AI tools, your proprietary data may be that moat protecting you from someone in their mom’s basement with good taste and ambition.
Pratik laid out a three-layer view of what an AI application architecture actually looks like in the real world. It was a helpful map for the “who does what” question that comes up whenever engineering teams start building this stuff.
On the left side is data acquisition and preprocessing. This comprises tools like Apache Kafka for event streaming, Apache Iceberg as a data layer that lets multiple teams share the same underlying datasets without tripping over each other, and Spark for processing data at scale. This is where collection, cleaning, and transformation happen. It’s also where most AI projects quietly die, because the data turns out to be messier than anyone admitted during planning.
In the middle is model building and fine-tuning. Pratik was direct here: your company is almost certainly not going to train its own large language model from scratch. The estimates for what it cost to train GPT-5 range from $100 million to over a billion dollars in GPU time. Unless “Uncle Larry” is personally funding your AI initiative, you’re going to use an off-the-shelf model from the likes of OpenAI, Google (Gemini), or Anthropic (Claude), or one of the increasingly capable open-weight models like DeepSeek or Alibaba’s Qwen3. The Python ecosystem owns this tier for now, thanks to its long history in data science and extensive libraries, though Java options like Deeplearning4j are maturing.
On the right is inference and integration, which is where most application developers will actually spend their time. This is the code you write to orchestrate models, retrieve relevant context, handle the results, and deliver a useful experience to users. This is also where AI-native thinking diverges sharply from “AI bolted on,” which Pratik spent considerable time on.
Here it is: LLMs are non-deterministic, and that changes everything about how you build software.
Traditional software is built on deterministic foundations. If you write a database query that asks for a specific user profile, you will get the exact same answer every time: that user’s profile. The result is deterministic, and it’s reliable in a way that software developers have spent the previous decades taking for granted.
LLMs don’t work that way. Ask the same question twice and you may get meaningfully different answers. That’s just a fundamental property of how token prediction and the attention mechanism work. The model doesn’t do the deterministic thing and look up an answer. Instead, it generates an answer based on probabilistic similarity to everything it has ever been trained on.
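You can see the mechanism in miniature. A language model ends each step with a probability distribution over possible next tokens and then samples from it; temperature reshapes that distribution. The sketch below uses toy numbers, not a real model:

```python
# Why the same prompt can yield different answers: the model samples from a
# probability distribution over next tokens instead of looking one up.
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution; higher temperature
    flattens it, lower temperature sharpens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature=1.0):
    """Pick one token at random, weighted by its probability."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

tokens = ["Tampa", "Orlando", "Miami"]
logits = [2.0, 1.0, 0.5]
# Two calls with identical inputs can return different tokens:
print(sample_token(tokens, logits), sample_token(tokens, logits))
```

Setting temperature to zero (greedy decoding) makes a single step deterministic, but across long generations, batching effects and floating-point quirks mean you still shouldn’t count on identical output.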
When the generated answer is wrong, we call it hallucination. But the more accurate framing is that hallucination is the shadow side of the same capability that makes these models useful at all.
(Joey’s note: I like to say “All LLM responses are hallucinations. It’s just that some hallucinations are useful.”)
For casual applications, such as “Find me a bar with karaoke near downtown Tampa,” we can put up with a certain amount of “wrongness.” You go there, find out there’s no karaoke, drink anyway, call it a night. However, for a system that’s analyzing medical imaging and flagging potential tumors, our tolerance for wrongness is zero, and “the model felt pretty confident” is not an acceptable answer.
The emerging approaches to this are interesting: evaluation frameworks built into tools like Spring AI and LangChain that let you run suites of tests against model outputs; and something called “LLM as a judge,” where you use a second model to evaluate the outputs of the first. Ask OpenAI a question, get an answer, hand both the question and the answer to Gemini and say: “Does this look right?” It’s new, it’s imperfect, and it’s the current state of the art.
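The “LLM as a judge” loop is simple enough to sketch. In the snippet below, `call_model` is a hypothetical stand-in for whatever client you actually use (the OpenAI SDK, Spring AI, LangChain, and so on); the prompt wording and the PASS/FAIL convention are my own assumptions, not a standard:

```python
# A sketch of "LLM as a judge": hand a question and a first model's answer
# to a second model and ask it to evaluate. `call_model` is a hypothetical
# callable (prompt -> text) standing in for a real model client.

JUDGE_PROMPT = """You are evaluating another model's answer.
Question: {question}
Answer: {answer}
Reply with exactly PASS or FAIL, then one sentence of reasoning."""

def judge_answer(question, answer, call_model):
    """Return True if the judge model's verdict starts with PASS."""
    verdict = call_model(JUDGE_PROMPT.format(question=question, answer=answer))
    return verdict.strip().upper().startswith("PASS")

# Offline demo with a stubbed judge; a real setup would call a second model:
stub_judge = lambda prompt: "PASS - the answer matches the question."
print(judge_answer("What is 2 + 2?", "4", stub_judge))  # True
```

In an evaluation suite you’d run this over hundreds of question/answer pairs and track the pass rate across model or prompt changes, which is essentially what the built-in evaluators in those frameworks automate.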
The good news, as Pratik put it: everyone is early. You are not behind.
Don’t let the $20/month subscription price fool you into thinking AI inference is cheap at scale.
Pratik made the case that inference costs are not going to come down dramatically anytime soon, and offered some uncomfortable data points in support. Moore’s Law, Intel cofounder Gordon Moore’s observation that transistor density on chips doubles roughly every 18 months, is effectively dead. We’re approaching the sub-nanometer level of chip fabrication, and at that level of miniaturization, you’re really starting to fight the laws of physics.
GPU prices have gone in the opposite direction of what you might hope: Nvidia’s RTX 5090, the top consumer-grade card, has gone from roughly $2,000 at launch to $4,000 on the secondary market. RAM prices have spiked because every data center on Earth is buying it for AI workloads. When Pratik noticed RAM prices shooting up, he moved money into Western Digital and Seagate stock. He may be onto something.
The practical upshot for developers building applications: if you’re running hundreds of evaluation tests per hour during development (which is what you should be doing, given the non-determinism problem described above), burning frontier-model tokens for all of that is going to get expensive fast.
Pratik’s solution is to do the bulk of development testing against locally-run open-weight models via Ollama. His current recommendations: qwen3-coder for coding-adjacent tasks (and it legitimately does not phone home, I’ve run Wireshark to confirm), and nemotron from Nvidia for more general work. Then switch to the frontier model for production and final evaluation. Your laptop handles the iteration, and the cloud handles the deployment.
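In practice this can be as simple as pointing your dev-time calls at Ollama’s REST API, which listens on `localhost:11434` by default, and swapping one config value for production. The sketch below builds a request against Ollama’s real `/api/generate` endpoint; the config shape and model choice are my own assumptions:

```python
# Local-first LLM calls via Ollama's REST API. Running `generate()` requires
# `ollama serve` and a pulled model; building the request does not.
import json
import urllib.request

DEV_CONFIG = {
    "base_url": "http://localhost:11434/api/generate",  # Ollama's default port
    "model": "qwen3-coder",
}

def build_request(prompt, config):
    """Build an Ollama /api/generate request (streaming disabled for simplicity)."""
    payload = {"model": config["model"], "prompt": prompt, "stream": False}
    return urllib.request.Request(
        config["base_url"],
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def generate(prompt, config=DEV_CONFIG):
    """Send the prompt to the local model and return its text response."""
    with urllib.request.urlopen(build_request(prompt, config)) as resp:
        return json.load(resp)["response"]

req = build_request("Explain LoRA in one sentence.", DEV_CONFIG)
print(req.full_url)  # http://localhost:11434/api/generate
```

Swapping `DEV_CONFIG` for a production config pointing at a frontier model’s endpoint is the whole “laptop handles iteration, cloud handles deployment” move.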
You’ve heard this story before, even if you don’t immediately recognize it.
Pratik brought up an old term: sneakernet. That’s from the era when all software was executables running on your machine, and deploying software meant physically walking to a user’s desk with a floppy disk. Then came the cloud, and suddenly continuous deployment became a thing, and anyone still doing quarterly releases felt like a relic.
But here’s what’s easy to forget: cloud native wasn’t just about faster deploys. It forced a complete rethink of how applications are designed, how they’re operated, and how they fail. The servers went from being pets (named, tended, mourned when they died) to being cattle (anonymous, disposable, replaced without ceremony). This called for a different approach.
Pratik’s central argument is that we’re at exactly that same inflection point with AI, and that most companies are going to blow it, at least initially.
When your boss comes in and says “put some AI in the product so our stock price goes up” (Pratik confirmed this is a real conversation people are having in real offices, not a joke), the tempting response is to bolt on a RAG endpoint, add “AI-powered” to the marketing copy, and call it a day. Retrieve some relevant documents. Stuff them into a prompt. Return a plausible-sounding answer. Ship it!
That’s not AI-native. That’s sneakernet with an LLM duct-taped to it.
An AI-native system learns, adapts, and acts autonomously. Not when a user presses a button. Proactively, in response to new data, with judgment that improves over time.
Pratik described the evolution of his own download analytics system as a concrete example. It started as “AI bolted on,” with a natural language interface that let people query a Spark cluster without writing SQL. Useful. Not native.
Over the past year and a half, he rebuilt it into something different: a system that monitors weekly data feeds, detects when something has changed (for example, a spike in Java 17 downloads), connects that to relevant context from an internal knowledge base (there was a critical security patch), and proactively sends him a synthesized briefing before he even thinks to ask. He still reviews it. But the thinking now happens without him.
The hotel booking example he used to illustrate the idea is even more vivid. Pratik has a specific, consistent set of hotel preferences: he wants to be within walking distance of wherever he’s speaking, the gym needs to be a real gym (not a treadmill and a motivational poster — Hotel 5 in Seattle, I’m lookin’ right at you), and he always searches by exact address rather than city name. He does this exact sequence of clicks every single time he books a hotel. An AI-native Marriott system would see this behavioral pattern, learn from it, and surface the right three options without him having to do any of that manual filtering. Not because someone programmed “Pratik likes gyms” into a rule engine, but because the system observed his behaviors, inferred some patterns, and generalized.
Could you do all of this algorithmically? Technically, yes. But think about it: you’d be writing bespoke preference logic for millions of users with different, compounded, evolving preferences, and you’d be doing it forever. The whole point of using an LLM here is that you’re borrowing its capacity for generalization instead of hand-coding every case yourself.
Pratik offered a measured take on the current agentic AI frenzy. Agents can act, but do they actually learn from what they’ve done? That’s the gap between today’s agentic frameworks and a genuinely AI-native system. Agents are probably not going away because they’re real and useful, but the framing will shift again in six months (that’s just how this space works). The best approach is to build the fundamentals, not the hype.
On fine-tuning: if you need a model that’s deeply specialized for a domain, you don’t have to build an LLM from scratch. Low-Rank Adaptation (LoRA) lets you take an existing large model and attach a domain-specific adapter that shifts its weights toward your area of expertise. OpenAI’s recently released finance-specific model, built in collaboration with Goldman Sachs and trained on a large corpus of financial data, is exactly this. The base model does the heavy lifting. The adapter makes it fluent in corn futures.
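The arithmetic behind LoRA fits in a few lines. Instead of updating a full weight matrix W (d × k), you train two small matrices B (d × r) and A (r × k) with rank r much smaller than d or k, and the adapted weight is W + (α / r) · BA. The pure-Python sketch below uses toy dimensions purely for illustration:

```python
# LoRA in miniature: the adapted weight matrix is W + (alpha / r) * (B @ A),
# where B and A are small low-rank matrices and W stays frozen.

def matmul(X, Y):
    """Plain matrix multiply for lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][t] * Y[t][j] for t in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_adapted(W, B, A, alpha, r):
    """Apply a LoRA adapter (B, A) to frozen weights W."""
    delta = matmul(B, A)
    return [[W[i][j] + (alpha / r) * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, k, r, alpha = 4, 4, 1, 2
W = [[1.0] * k for _ in range(d)]   # frozen pretrained weights (toy values)
B = [[0.0] * r for _ in range(d)]   # B starts at zero, so B @ A == 0...
A = [[0.5] * k for _ in range(r)]

# ...which means an untrained adapter leaves the base model untouched:
print(lora_adapted(W, B, A, alpha, r) == W)  # True
# Full fine-tuning touches d*k parameters; LoRA trains only r*(d+k):
print(d * k, r * (d + k))  # 16 8
```

At realistic sizes (d and k in the thousands, r around 8 to 64), that parameter-count gap is what makes fine-tuning affordable on modest hardware.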
On RAG: retrieval-augmented generation is essentially fancy-pants prompt stuffing. You find the documents most relevant to a user’s query, pull them in, and let the model reason over them. It’s the right approach for a lot of use cases, it’s not magic, and it works best when your underlying data is actually clean and well-structured. Remember the greybeard saying: “Garbage in, garbage out,” a principle that the age of AI has managed to make both more important and more dangerous, since we can now generate garbage at industrial scale.
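Here’s the “fancy-pants prompt stuffing” reduced to its skeleton: score each document against the query, take the top k, and paste them into the prompt. Real systems use embeddings and a vector store rather than the word-overlap scoring below, and the documents are made up, but the shape of the technique is the same:

```python
# Naive RAG: retrieve the most relevant documents with a bag-of-words
# overlap score, then stuff them into the prompt as context.

def score(query, doc):
    """Count how many words the query and document share (case-insensitive)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

def build_rag_prompt(query, documents, k=2):
    """Select the top-k documents for the query and build a stuffed prompt."""
    top = sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]
    context = "\n".join(f"- {doc}" for doc in top)
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Java 17 downloads spiked after the security patch.",
    "The office coffee machine is broken again.",
    "Azul ships builds of OpenJDK including Java 17.",
]
print(build_rag_prompt("Why did Java 17 downloads spike?", docs))
```

Notice that nothing here cleans or structures the documents; that happens upstream, which is exactly why garbage in the corpus becomes confident-sounding garbage in the answer.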
If you walked away from Pratik’s talk with one thing, it should probably be this: the fundamental shift AI requires isn’t technical. It’s conceptual. Just like cloud native forced you to stop thinking about servers as permanent fixtures and start thinking about them as fungible infrastructure, AI native requires you to stop thinking about AI as a feature you add to an application and start thinking about it as the substrate the application is built on.
The application that learns. The application that adapts. The application that wakes up when new data arrives and starts thinking before you ask it to.
That’s the goal. We’re early. The tools are changing fast. But the direction is clear, and the developers who internalize that shift now, rather than bolting features on and hoping for a stock price bump, are going to be the ones building the interesting stuff.
If you’d like to dive deeper into what Pratik was talking about, he has companion sample apps. The details are in this picture:






























































Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, February 23 through Sunday, March 1!
This list includes both in-person and online events. Note that each item in the list includes:
✅ When the event will take place
✅ What the event is
✅ Where the event will take place
✅ Who is holding the event

Tuesday at 10:00 a.m. online: Explore the power of community support in finding job opportunities and learn how to leverage unconventional networking strategies. Register now and expand your professional network!
Find out more and register here.
Tuesday at 5:30 at Hidden Springs Ale Works (Tampa): It’s the last Tuesday of the month, which means it’s time for another TampaTech Taps & Taco Tuesday! Come connect with industry peers, have some of Hidden Springs’ fine beers at 15% off, and of course, free tacos!
No speakers, no presentations — just great conversations and a raffle (because that’s way more fun!)
Find out more and register here.
Tuesday at 6:00 at Buffalo Wild Wings (Oldsmar): It’s a casual and engaging evening of AI discussions, great food, and new connections! Whether you’re an AI enthusiast, developer, artist, or just curious about the future of Generative AI, this meetup is the perfect place to share ideas, ask questions, and explore the possibilities of AI.
Find out more and register here.
Tuesday at 6:00 at Entrepreneur Collaborative Center (Tampa): As businesses grow, productivity challenges rarely come from a lack of effort—they come from limited systems, unclear processes, and overloaded teams. This session focuses on how entrepreneurs can use AI to support their teams, increase output, and improve execution without adding headcount.
Here’s what they’ll cover:
Find out more and register here.
Tuesday at 6:00 p.m. at Embarc Collective (Tampa): Network, Collaborate, and Scale — Where Tampa’s Digital Marketers Connect. Two free drink tickets with admission.
Calling all e-commerce operators, remote workers, digital marketers, Shopify entrepreneurs, Facebook Ads pros and AI enthusiasts and start-up entrepreneurs — join us for an evening of high-impact networking and actionable conversations in the heart of Tampa Bay.
Who Should Attend:
Find out more and register here.
Tuesday at 6:00 at MakerSpace Pinellas (Largo): Come one come all, those with no experience and those who have published their own games alike! They’ll be getting together every other week to build video games. We will primarily be working in Godot, a free and open source game development engine, but we are open to building in Unity, Unreal, etc.
Find out more and register here.
Wednesday at 12:30 online: It’s back!
The Heart of Agile’s Coffee Corner brings people together via Zoom in a casual setting to share and discuss ways we Collaborate, Deliver, Reflect, and Improve.
This month’s session is a reset for the St. Pete – Tampa – Orlando group. We’ve been quiet for a while, and we’d like to reconnect, hear what you need, and shape what this Coffee Corner becomes next.
In this 60-minute session, you can expect:
Find out more and register here.
Wednesday at 5:00 at the WeWork Building (Tampa): Get a comprehensive summary of key re:Invent 2025 announcements and insights from the keynotes, followed by three deep dive sessions selected from the most popular re:Invent presentations covering Amazon Bedrock, AgentCore, Kiro, and Amazon Quick Suite.
Find out more and register here.
Wednesday at 6:00 p.m. at Geographic Solutions (Palm Harbor):
Tampa Devs will be joining the Pinellas Tech Network for a fast‑paced look at the trends shaping the web in 2026.
They’ll explore how AI is becoming a true collaborator in development, why performance and composable architectures now define modern builds, and how design is shifting toward more organic, minimal, and human‑focused experiences.
Whether you’re a developer, designer, digital strategist, or simply tech curious, you’ll walk away with practical insights to apply to your next project—and connect with fellow tech leaders across Pinellas and Tampa Bay.
Speakers:
Find out more and register here.

Thursday at 4:00 p.m. at American Legion Post 138 (Tampa): It’s the Tampa/MacDill AFB Orange Call! In a military context, an “orange call” refers to an alert signaling a heightened cybersecurity state of readiness.
This orange call’s purpose is to gather and network amongst fellow communicators, guardians, and enablers of all ranks, titles, and experience levels, share resources, and seek professional development. They’ll conduct a round table meet-and-greet and discuss MacDill communicators and missions, including the increasing role of cyber and the importance of defending our nation’s networks.
Find out more and register here.
Thursday at 5:30 p.m. at Dynaway (Tampa): Ready to get your hands dirty and learn by doing? Join the DUG Meetup for a 3-month group hackathon where we’ll design, build, and ship a real solution together!
This month, they’re kicking things off by choosing a use case and mapping out a plan. Bring your ideas, your curiosity, and your willingness to experiment. There’ll be a vote on what to build and start sketching out the approach.
What to Expect
Find out more and register here.
Friday at 8:30 a.m. at Rapid7 (Tampa): Meet your fellow local techies and get a tour of the Rapid7 office! A leader in security operations, Rapid7 is dedicated to providing intelligent threat detection and vulnerability management through product offerings such as InsightVM, InsightIDR, and Threat Command.
Homebrew Hillsborough is Hillsborough County’s collaborative coffee networking for techies and entrepreneurs. They’re taking the conversation to a community business resource near you and providing real-time relevant tech talks and tours. Come meet with others in our community to expand the network and see how we are creating a Homegrown Hillsborough.
This is a great opportunity for businesses, local innovators and entrepreneurs and anyone interested in helping strengthen and grow our local economy to come together to network, share ideas, collaborate, ask for help and offer it.
Find out more and register here.
Friday at 12:00 p.m. online: Cracking the Code: How to Win AI-Driven Job Interviews features a panel of industry professionals who work closely with hiring technology, talent acquisition, and career development. This session breaks down how AI is used throughout the interview process and what employers are actually evaluating behind the scenes.
This live panel discussion will explore how AI tools assess resumes, analyze interview responses, and measure candidate fit. The speakers share practical guidance on how to prepare for AI-driven interviews, communicate skills clearly, and avoid common mistakes that can hold candidates back. Attendees will also gain insight into how human decision makers interact with AI systems and where candidates can stand out.
Find out more and register here.
| Event name and location | Group | Time |
|---|---|---|
| Let’s Open The House of Doors (Friday, Feb 27 · 6:30 PM to 8:30 PM EST) | Pages and Plates Book Club | 11:00 AM |
| Computer Repair Clinic, 2079 Range Rd | Tampa Bay Technology Center | 8:30 AM to 12:30 PM EST |
| Feb 2026 Homebrew Hillsborough: Rapid7, Rapid7 | Homebrew Hillsborough | 8:30 AM to 10:30 AM EST |
| Cracking the Code: How to Win AI-Driven Job Interviews, Online event | Tech Success Network | 12:00 PM to 1:00 PM EST |
| Woodshop Safety (Members Only), Tampa Hackerspace West | Tampa Hackerspace | 1:00 PM to 4:00 PM EST |
| Friday Board Game Night, Bridge Club | Tampa Gaming Guild | 5:30 PM to 11:00 PM EST |
| Friday Night Magic at Conworlds Emporium, Conworlds Emporium | Tarpon Springs Community Fun & Games | 5:30 PM to 9:00 PM EST |
| MTG: Commander FNM, Critical Hit Games | Critical Hit Games | 6:00 PM to 11:00 PM EST |
| Taps & Drafts \| EDH/MtG Night, 1Up Entertainment, Tampa | Nerdbrew Events | 7:00 PM to 9:00 PM EST |
| Modern FNM, Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh! | Sunshine Games | 7:00 PM to 10:30 PM EST |
| Mario Kart @ Grand Prix – Clearwater, Tampa Bay Grand Prix | Gen Geek | 7:00 PM to 9:00 PM EST |
| Friday Pokemon Tournament, Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh! | Sunshine Games | 7:30 PM to 11:30 PM EST |
| Event name and location | Group | Time |
|---|---|---|
| Geeks Go Celebrate My Birthday At The Plant City Strawberry Festival, Florida Strawberry Festival | Geekocracy! | 10:00 AM to 4:00 PM EST |
| Sunday Chess at Wholefoods in Midtown, Tampa, Whole Foods Market | Chess Republic | 2:00 PM to 5:00 PM EST |
| D&D Adventurers League, Critical Hit Games | Critical Hit Games | 2:00 PM to 7:30 PM EST |
| IMPROV Drop-In Class! (FUN! No experience required) [$20], Spitfire Theater | Tampa 20’s and 30’s Social Crew | 2:00 PM to 4:00 PM EST |
| Numenera: A one shot of a science fantasy RPG, Emerald City Comics, 4902 113th Ave N, Clearwater, Florida 33760 | St Pete and Pinellas Tabletop RPG Group | 3:00 PM to 6:00 PM EST |
| Sunday Pokemon League, Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh! | Sunshine Games | 4:00 PM to 8:00 PM EST |
| Sew Awesome! (Textile Arts & Crafts), 4933 W Nassau St | Tampa Hackerspace | 5:30 PM to 8:30 PM EST |
| A Duck Presents NB Movie Night, Discord.io/Nerdbrew | Nerd Night Out | 7:00 PM to 11:30 PM EST |

How do I put this list together?
It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrapes Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.
In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
What goes into this list?
I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:
The newest video on the Global Nerdy YouTube channel is now online! It’s called A Fake Recruiter Tried to Scam Me — I Caught Him Using ChatGPT. Watch it now!
It’s the story of how a scammer posing as an executive recruiter tried to con me out of hundreds (and possibly thousands) of dollars using AI-generated emails, a fake job description, and a fabricated “internal document” from OpenAI.
He had me… for thirty seconds, and then I thought about it.
A “recruiter” emailed me out of the blue about a developer relations role. This isn’t out of the ordinary; it’s happened before, a couple of times in the past couple of months alone.
However, this role stood out: it was a Director of Developer Relations role at OpenAI. Remote-first, $230K–$280K base, Python-primary, and AI-focused. It was basically my dream job on paper.
Over the course of several emails, he asked for my resume and salary expectations while giving me nothing concrete in return: no company name, no hiring manager, no specifics.
When I finally got suspicious and asked three simple verification questions:
He went silent for over a day, then came back with a wall of text that answered none of them.
Then came the real play: he told me that OpenAI required three purportedly “professional documents” before I could interview, and they had to be ready in the next 48 hours:
The descriptions of these documents made it look as if they were complex and would take hours to prepare. The recruiter “helpfully” offered to connect me with a “specialist” who could prepare them for a fee.
None of these documents are real. No company asks for them. It’s a document preparation fee scam, and the whole weeks-long email exchange was just the runway to get me to that moment.
But the best part? When I didn’t bite, he followed up with a fake “OpenAI Candidate Review” document showing my name alongside other “candidates” with star ratings. This would be a massive HR violation if it were real:
But it wasn’t real! He generated it with ChatGPT. And he left behind evidence — the dumbass forgot to crop out the watermark.
One of the most interesting things about this scam is how AI was both the scammer’s greatest tool and his undoing.
Every email he sent me was written in polished, flawless corporate English.
But in the one paragraph where he steered me toward paying the “specialist,” the grammar suddenly fell apart:
“a professional I have known for years that specialise in this kind of documents with many great and positive result.”
The AI wrote the con. But the human wrote the close. And the seam between the two is where the truth leaked out.
This is a pattern worth watching for. As AI-powered scams become more common, the tell is going to be a shift in quality at the moment the scammer needs to speak in their own words. You’ll see well-written text abruptly followed by a different writing style marked by poor, non-idiomatic grammar (because they’re communicating with you in a language they don’t know well). Keep an eye out for that sudden transition.
If you’re job searching right now and a recruiter reaches out, ask them these three questions:
A real recruiter answers these in seconds. A fake one dodges, deflects, or disappears.
Based on my experience, here are eight things to watch out for:
This isn’t just my problem. It’s an epidemic:
AI tools are making these scams more polished, more personalized, and harder to detect. The “spray and pray” emails with obvious typos are being replaced by tailored, multi-email campaigns that build trust over weeks before making their move.
If you’re job searching (or know someone who is), please share this post and the video. The more people know what to look for, the less effective these scams become.
Once again, here’s the video, where I walk through the entire scam step by step, from the first email to the ChatGPT watermark:
And if you haven’t already, subscribe to the Global Nerdy YouTube channel. There’s more coming soon, and I promise it’ll be less infuriating than this one. Probably.
If this has happened to you, here’s where to report it:
And if you’ve got your own story about a fake recruiter, drop me a line on LinkedIn! Let’s make these scams harder to pull off.