Happy Saturday, everyone! Here on Global Nerdy,
Saturday means that it’s time for another “picdump” — the weekly assortment of amusing or interesting pictures, comics,
and memes I found over the past week. Share and enjoy!
Here’s what’s happening in the thriving tech scene in Tampa Bay and surrounding areas for the week of Monday, December 1 through Sunday, December 7!
This list includes both in-person and online events. Note that each item in the list includes:
✅ When the event will take place
✅ What the event is
✅ Where the event will take place
✅ Who is holding the event

| Event name and location | Group | Time |
|---|---|---|
| ONLINE: TBD Online event | Orlando Stoics | 9:00 AM to 10:30 AM EST |
| Gulfcoast Coffee Morning Walk, Dunedin Coffee Company & Bakery | Tampa Bay Meetup (20’s & 30’s) | 10:00 AM to 1:00 PM EST |
| Board Game Day at Southern Brewing, Southern Brewing & Winery | Geekocracy! | 1:00 PM to 3:00 PM EST |
| Sunday Gaming, Tampa Bay Bridge Center | Tampa Gaming Guild | 1:00 PM to 11:00 PM EST |
| BEAST FEAST: A Daggerheart One-Shot Adventure!, Critical Hit Games | Critical Hit Games | 1:00 PM to 4:00 PM EST |
| Crop to Cup: The Complete Coffee Journey, The Essem Collective | small beanz coffee co – Coffee Education Courses | 1:00 PM to 3:30 PM EST |
| D&D Adventurers League, Critical Hit Games | Critical Hit Games | 2:00 PM to 7:30 PM EST |
| Sunday Pokemon League, Sunshine Games \| Magic the Gathering, Pokémon, Yu-Gi-Oh! | Sunshine Games | 4:00 PM to 8:00 PM EST |
| Sew Awesome! (Textile Arts & Crafts), Tampa Hackerspace West | Tampa Hackerspace | 5:30 PM to 8:30 PM EST |
| Game Night: Dice, Decks & Drama, World of Beer | The 30/40 Social Club | 7:00 PM to 9:00 PM EST |
| A Duck Presents NB Movie Night, Discord.io/Nerdbrew | Nerd Night Out | 7:00 PM to 11:30 PM EST |

How do I put this list together?
It’s largely automated. I have a collection of Python scripts in a Jupyter Notebook that scrapes Meetup and Eventbrite for events in categories that I consider to be “tech,” “entrepreneur,” and “nerd.” The result is a checklist that I review. I make judgment calls and uncheck any items that I don’t think fit on this list.
In addition to events that my scripts find, I also manually add events when their organizers contact me with their details.
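The notebook itself isn’t published here, but the review-checklist step can be sketched roughly like this. Everything in this sketch — the category keywords, the event fields, the function names — is a hypothetical stand-in for illustration, not the actual script:

```python
# Hypothetical sketch of the filtering step: scraped events (however they were
# fetched from Meetup or Eventbrite) get matched against keyword lists for the
# categories I track, producing a pre-checked list I can review by hand.
CATEGORY_KEYWORDS = {
    "tech": ["python", "javascript", "cloud", "devops", "data"],
    "entrepreneur": ["startup", "founder", "pitch", "networking"],
    "nerd": ["board game", "d&d", "anime", "trivia"],
}

def categorize(event_title: str) -> list[str]:
    """Return the categories whose keywords appear in the event title."""
    title = event_title.lower()
    return [cat for cat, words in CATEGORY_KEYWORDS.items()
            if any(word in title for word in words)]

def build_checklist(events: list[dict]) -> list[dict]:
    """Keep only events matching at least one category, checked by default
    so that the manual review pass only has to *un*check the misfits."""
    checklist = []
    for event in events:
        categories = categorize(event["title"])
        if categories:
            checklist.append({**event, "categories": categories, "checked": True})
    return checklist
```

The real pipeline obviously does more (deduplication, date windows, venue lookup), but the shape — automated wide net, human final pass — is the point.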
What goes into this list?
I prefer to cast a wide net, so the list includes events that would be of interest to techies, nerds, and entrepreneurs. It includes (but isn’t limited to) events that fall under any of these categories:
It’s been about a week since Gemini 3 came out, and I’ve been hearing some really good things about it. So I’m trying out Google’s “try Gemini 3 Pro for free for a month” offer and putting it to as much use as the account will allow, and I’m posting my findings here.
Gemini 3 feels fundamentally different from earlier text-based AI models, and I wouldn’t be surprised if its competitors eventually borrow at least a couple of tricks from it.
It’s designed to be an eater of large, not necessarily organized inputs, consuming text, images, video, audio, code, and documents simultaneously and natively. If you want to get the most out of it, I’ve found that a structured, almost engineering-like approach works better than simply treating it like just another chatbot.
Here are some things I and other Gemini 3 noodlers have noticed:
You’ve probably heard that adding “please” to your LLM prompts provides better results. It does work with many LLMs, and that’s because they’ve been trained on examples of human communication, where politeness is often associated with higher-quality responses.
Gemini 3’s design as a “chaotic context eater that tries to find the signal in its input noise” means that it treats words like “please” as fluff. My experience is that unlike ChatGPT, it treats your prompt as a set of executable instructions rather than a chat.
“Chatting up” the model as if you were in a conversation over coffee doesn’t provide better results. What works best is providing it with the necessary context — that is, the background information needed to produce the desired result — and a clearly stated goal, without the fluff. “If you could please look at this file and tell me what you think” won’t work as well as “Analyze the attached PDF and list the critical errors that the author made.”
Keep prompts short and precise. Google’s own guidance emphasizes that Gemini 3 responds best to direct instructions. Long prompts appear to divert the AI’s focus from the actual point and result in inconsistent output. There is one area where you can “go long,” and that’s the contextual information you provide before the actual instructions. I’ll talk about this later on in this article.
Google is definitely trying to “zig” where other LLM vendors are trying to “zag” with the way Gemini 3 responds. It’s way more concise by default, and if you’re one of the people who was disappointed with the way OpenAI tuned ChatGPT 5.1 to be less chatty and obsequious, you’re going to be downright hurt by Gemini 3’s “ain’t nobody got time for chat” style of answer.
If you absolutely need a long, detailed narrative or a “chatty” persona, you’ve got to explicitly ask for it in your constraints. Otherwise, it will give you the shortest correct answer. And even then, I think Claude will give you more satisfying results in terms of conversational style.
Google has also stressed the Gemini 3 model’s tendency to respond in a neutral, efficient manner. You want a friendly, funny, or conversational tone? Google says you have to ask for it. I’ve only played with this a little, and in my limited experience, the “friendly, funny, or conversational tone” feels like the tone of someone at a service desk who’s trying to fake niceness in order to make the sale.
If you need to spell out behavioral constraints to get the job done — such as “Be objective,” “Use a level of formality appropriate for an email to a lawyer,” “Don’t use external code libraries,” or “the code must be in Python 3.14+, and not in any earlier version” — do so either in the System Instruction or at the very top of your prompt. This ensures that these constraints set the direction of Gemini 3’s process before it starts crunching the data.
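Here’s a minimal sketch of that ordering: constraints at the very top, background data in the middle, the instruction at the end. The helper function and section labels are my own illustration, not an official API or format:

```python
def build_prompt(constraints: list[str], context: str, instruction: str) -> str:
    """Assemble a prompt with behavioral constraints first (so they set the
    direction before the model starts crunching), data in the middle, and
    the direct instruction last."""
    lines = ["CONSTRAINTS:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "CONTEXT:", context, "", "INSTRUCTION:", instruction]
    return "\n".join(lines)

prompt = build_prompt(
    constraints=["Be objective",
                 "The code must be in Python 3.14+, not any earlier version"],
    context="<paste the document or error log here>",
    instruction="List the critical errors the author made.",
)
```

If you’re using the API rather than the chat interface, the constraints would go in the System Instruction instead of the prompt body, but the principle is the same.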
Gemini 3 treats text, images, audio, and video as “equal-class inputs,” but you have to eliminate any ambiguity that comes with giving it a big pile of information.
For example, if you upload three screenshots, a video, and a PDF and then prompt with “look at this,” the model may struggle to understand which “this” you’re talking about. Is “this” the first picture? The second one? The video? You need to be more specific.
Explicitly label your inputs in your prompt. Instead of vague references, say: “Use Image 1 (Funnel Dashboard) and Video 2 (Checkout Flow) to identify the drop-off point”. This forces the model to synthesize data across specific files.
Even better, provide the context info one upload at a time, giving a name to each as you upload it: “This is the Jenkins report,” “This is a CSV of temperatures for October 2025.”
If you provide Gemini 3 with data that has some index, such as a timestamp or an ID that identifies a specific piece of information within that data, use that index in your instructions.
For complex tasks, such as analyzing a legal contract or coding a simulation, you need to force Gemini 3 to slow down and “think” before it generates a final response. This prevents hallucination and ensures logical consistency.
Tell it to be its own critic. Instruct the model to critique its own work before finishing. Include a step in your instructions like: “Review your generated output against the user’s original constraints. Did I answer the user’s intent?”
Make Gemini 3 do a little self-reflection. If the model needs to use a tool (like writing code or searching), ask it to explicitly state why it is using that tool and what data it expects to find before it executes the action.
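One way to bolt both of those on is a reusable self-review suffix that you append to any task prompt. This is my own hypothetical template, not anything from Google’s documentation:

```python
# A reusable "be your own critic" suffix, appended to any task prompt.
SELF_CHECK = (
    "Before giving your final answer:\n"
    "1. Review your generated output against the user's original constraints.\n"
    "2. State whether it answers the user's intent; if not, revise it first.\n"
    "3. If you need to use a tool, first state why you are using it and what "
    "data you expect to find."
)

def with_self_check(task: str) -> str:
    """Append the self-review steps to a task prompt."""
    return f"{task}\n\n{SELF_CHECK}"
```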
I’ve been doing a fair number of client proposal and job interview presentations lately, and one of the most-enjoyed slides is the very first one I show, pictured above. Everybody loves Bender!
Feel free to borrow this one for your own presentations.
Multimodal, in the AI sense of the word, means “capable of understanding and working with multiple types of information, such as text, images, audio, and video.” We humans are multimodal, and we’re increasingly expecting AIs to be multimodal too.
The payoff that comes from an AI model being multimodal is flexibility and naturalness. There are times when it’s better to provide a picture instead of trying to type a description of that picture, or give the model a recording of a discussion rather than transcribing it first.
The marketing and developer relations teams behind AI models will claim that their particular product is multimodal, but a number of them are only indirectly that way.
Many models are strictly language models (the second “L” in “LLM”). These models use other models to translate non-text information to text, including models that “see” an image and generate text to describe it, and then use that text as the input. This approach works, but as you might have guessed, a lot gets lost in the translation. The loss is even greater with video.
Gemini 3 is different, since it’s natively multimodal. That means that it processes text, images, audio, video, and code simultaneously as a single stream of information, and without translating non-text data into text first.
Because Gemini 3 processes all inputs together, it can reason across them in ways other models can’t. For example, when you upload a video, it can analyze the audio tone against the facial expressions in the video while cross-referencing a PDF transcript of that video.
This opens a lot of interesting possibilities, but I thought I’d go with a couple of ideas I want to try out when coding:
Multimedia debugging: When debugging with a text-first AI model, I’d simply give it the error log. But with a multimodal model like Gemini 3, I could provide additional info for more context, including:
A screen recording of the application that includes the moment the bug rears its ugly head. I could even ask: “Locate the specific line of code in File A that’s causing the visual glitch at 0:15 in the video.”
Here’s one straight out of science fiction. Instead of asking for text answers, try asking for an answer that includes text, graphics, and interactivity.
For example, I gave it this prompt:
Don’t just explain mortgage rates. Code me an interactive loan calculator that lets me slide the interest rate and see the monthly cost change in real time.
It got to work, and in about half a minute, it generated this app…
…and yes, it worked!
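I don’t have the generated app’s source to paste here, but the core of any such calculator is the standard fixed-rate amortization formula; the slider just re-evaluates it on every change. A minimal version of that core, as I’d sketch it:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization: M = P*r / (1 - (1+r)**-n),
    where r is the monthly rate and n is the number of monthly payments."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:                       # zero-interest edge case
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Dragging an interest-rate slider just calls this with the new rate:
for rate in (0.05, 0.06, 0.07):
    print(f"{rate:.0%}: ${monthly_payment(300_000, rate, 30):,.2f}")
```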
In my previous article on Gemini 3, I said that it had a context window of 1 million tokens. When you combine that with multimodality, you get a chaos-crushing machine. You don’t have to “clean” your data before handing it over to the model anymore!
The haystack search: Upload an entire hour-long lecture video, three related textbooks, and your messy lecture notes, and then prompt it with: “Create a study guide that highlights the 5 concepts from the video that are NOT covered in the textbooks.”
Archiving in multiple languages: Upload handwritten recipes or letters in different languages. Gemini can decipher the handwriting, translate it, and format it into a digital cookbook or archive. For a cross-cultural family like mine, this could come in really handy.
To get the best results…
Name your inputs. Don’t say “look at the image.” Provide names for images when you upload them (“This is image A…”) and then refer to them when providing instructions (“Using image A…”).
Give it the info first, instructions last. When taking advantage of that huge context window, provide Gemini with all the data you want it to use first, and put your instructions at the end, after the data. Use a bridging phrase like “Based on the info above…”
With video, use timestamps. If there’s a part of the video that’s relevant to your instructions, refer to it by timestamp (e.g., “The trend visible at 1:45 in video B…”).
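Those three habits combine into a single prompt layout: labeled inputs first, a bridging phrase, then the instruction (with timestamp references where they help). This helper is my own hypothetical template, not an official format:

```python
def layout_prompt(inputs: dict[str, str], instruction: str) -> str:
    """Label every input, put all the data descriptions first, and bridge
    into the instruction at the end."""
    lines = [f"{name}: {description}" for name, description in inputs.items()]
    lines += ["", "Based on the info above: " + instruction]
    return "\n".join(lines)

prompt = layout_prompt(
    {"Image A": "funnel dashboard screenshot",
     "Video B": "screen recording of the checkout flow"},
    "identify the drop-off point; the trend visible at 1:45 in Video B is relevant.",
)
```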
Gemini 3, probably owing to its roots at Google, seems to handle “context entropy” better than ChatGPT or Claude. By “context entropy,” I mean “messy data,” with the disorder and chaos that you expect to find in notes and documents that you accumulate over time. Unless you’ve put in a lot of time, you probably haven’t put this information into much of a structured form or created some kind of “map” that explains how (or even if) the various parts of the information are related to each other.
Gemini 3 does a good job of taking chaotic information and finding the signal. I recently fed it a collection of…
…and it’s produced some useful stuff, including a strategy document that I plan to use in my job search going forward. (More on that in a later post here and video on the Global Nerdy YouTube channel.)
The other notable thing about Gemini 3 is its context window, which is an AI model’s working memory, or the maximum amount of information that it can work with in any given chat session. Gemini 3 Pro’s context window is a huge 1 million tokens, which is roughly equivalent to about 750,000 words (a token is a small chunk of language that lies somewhere between a character and a full word).
This big context window means that it’s possible to feed Gemini 3 with a LOT of data, which could be a big application codebase, a couple of textbooks, lots of notes or transcripts, or any other big pile of messy data that you’re trying to extract meaning from.
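Exact token counts vary by tokenizer, but a common rough heuristic for English text is about 4 characters per token. A back-of-the-envelope check for whether a pile of files will plausibly fit in a 1-million-token window might look like this (the margin figure is my own arbitrary choice):

```python
def estimate_tokens(text: str) -> int:
    """Rough English-text heuristic: ~4 characters per token. Real tokenizers
    differ, so treat this as an order-of-magnitude estimate only."""
    return max(1, len(text) // 4)

def fits_in_window(texts: list[str], window: int = 1_000_000) -> bool:
    """Check whether the combined inputs plausibly fit in the context window,
    keeping a 10% margin for the prompt itself and the model's response."""
    total = sum(estimate_tokens(t) for t in texts)
    return total <= window * 0.9
```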
I’m going to be working heavily with Gemini 3 over the next few weeks, and I’ll continue to post my observations here, along with tips and tricks that I either find online or figure out. Watch this space!