Categories
Artificial Intelligence

Notes on using Gemini 3 Pro, part 3: Every prompting tip and trick I know (so far)

It’s been about a week since Gemini 3 came out, and I’ve been hearing some really good things about it. So I’m trying out Google’s “try Gemini 3 Pro for free for a month” offer, putting it to as much use as the account will allow, and posting my findings here.

My approach to Gemini 3, after a week of heavy noodling with it

Gemini 3 feels fundamentally different from earlier text-based AI models, and I wouldn’t be surprised if its competitors eventually borrow at least a couple of tricks from it.

It’s designed to be an eater of large, not necessarily organized inputs such as text, images, video, audio, code, and documents, all handled simultaneously and natively. If you want to get the most out of it, I’ve found that a structured, almost engineering-like approach works better than simply treating it like just another chatbot.

Here are some things I and other Gemini 3 noodlers have noticed:

1. Get to the point and don’t sweat the etiquette

You’ve probably heard that adding “please” to your LLM prompts provides better results. It does work with many LLMs, and that’s because they’ve been trained on examples of human communication, where politeness is often associated with higher-quality responses.

Gemini 3’s design as a “chaotic context eater that tries to find the signal in its input noise” means that it treats words like “please” as fluff. My experience is that unlike ChatGPT, it treats your prompt as a set of executable instructions rather than a chat.

“Chatting up” the model as if you were in a conversation over coffee doesn’t produce better results. What works best is providing it with the necessary context — that is, the background information needed to produce the desired result — and a goal stated clearly and without fluff. “If you could please look at this file and tell me what you think” won’t work as well as “Analyze the attached PDF and list the critical errors the author made.”

Keep prompts short and precise. Google’s own guidance emphasizes that Gemini 3 responds best to direct instructions. Long prompts appear to divert the model’s focus from the actual point and produce inconsistent results. There is one area where you can “go long,” and that’s the contextual information you provide before the actual instructions. I’ll talk about this later on in this article.
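To make that concrete, here’s a minimal sketch using Google’s google-genai Python SDK. The model ID is my assumption about the preview name, and the report file is made up for illustration; swap in whatever your account actually exposes:

```python
from google import genai

# The client reads GEMINI_API_KEY (or GOOGLE_API_KEY) from the environment.
client = genai.Client()

# Fluffy: polite filler that dilutes the actual instruction.
fluffy = "Hi! If you could please look at this report and tell me what you think?"

# Direct: context first, then one unambiguous instruction.
direct = (
    "Context: the text below is our Q3 sales report for the retail division.\n"
    "Task: list the three largest month-over-month declines and their likely causes.\n\n"
    "Report:\n" + open("q3_report.txt").read()
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption; use the model ID your account exposes
    contents=direct,
)
print(response.text)
```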

2. Gemini 3 answers are terse by default, but you can change that

Google is definitely trying to “zig” where other LLM vendors are trying to “zag” with the way Gemini 3 responds. It’s way more concise by default, and if you’re one of the people who was disappointed with the way OpenAI tuned ChatGPT 5.1 to be less chatty and obsequious, you’re going to be downright hurt by Gemini 3’s “ain’t nobody got time for chat” style of answer.

If you absolutely need a long, detailed narrative or a “chatty” persona, you’ve got to explicitly ask for it in your constraints. Otherwise, it will give you the shortest correct answer. And even then, I think Claude will give you more satisfying results in terms of conversational style.

Google has also stressed the Gemini 3 model’s tendency to respond in a neutral, efficient manner. Want a friendly, funny, or conversational tone? Google says you have to ask for it. I’ve only played with this a little, and in my limited experience, the “friendly, funny, or conversational tone” feels like the tone used by someone at a service desk faking niceness to make the sale.
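In practice, that means spelling out length and tone as constraints right in the prompt. A sketch, with the same assumed model ID as above:

```python
from google import genai

client = genai.Client()

# Without explicit style constraints, expect the shortest correct answer.
prompt = (
    "Explain how DNS resolution works.\n"
    "Constraints: friendly, conversational tone; 400-600 words; "
    "include one real-world analogy."
)

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption; use the model ID your account exposes
    contents=prompt,
)
print(response.text)
```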

3. Anchor your behavioral constraints (or: “Expectations up front”)

If you need to spell out behavioral constraints to get the job done — such as “Be objective,” “Use a level of formality appropriate for an email to a lawyer,” “Don’t use external code libraries,” or “The code must be in Python 3.14+, and not in any earlier version” — do so either in the System Instruction or at the very top of your prompt. This ensures that these constraints set the direction of Gemini 3’s process before it starts crunching the data.
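In the API, the System Instruction is exactly this slot. Here’s a sketch using some of the constraints above; the config field is from the google-genai SDK, and the model ID is still my assumption:

```python
from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption; use the model ID your account exposes
    contents="Write a function that deduplicates a list of records by email address.",
    config=types.GenerateContentConfig(
        # Constraints go up front, before the model starts crunching the data.
        system_instruction=(
            "Be objective. "
            "The code must be in Python 3.14+, and not in any earlier version. "
            "Don't use external code libraries."
        ),
    ),
)
print(response.text)
```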

4. Name or index any info you give Gemini 3

Gemini 3 treats text, images, audio, and video as “equal-class inputs,” but that means you have to eliminate any ambiguity that comes with handing it a big pile of information.

For example, if you upload three screenshots, a video, and a PDF and then prompt with “look at this,” the model may struggle to understand which “this” you’re talking about. Is “this” the first picture? The second one? The video? You need to be more specific.

Explicitly label your inputs in your prompt. Instead of vague references, say: “Use Image 1 (Funnel Dashboard) and Video 2 (Checkout Flow) to identify the drop-off point.” This forces the model to synthesize data across specific files.

Even better, provide the context info one upload at a time, giving a name to each as you upload it: “This is the Jenkins report,” “This is a CSV of temperatures for October 2025.”

If you provide Gemini 3 with data that has some index, such as a timestamp or an ID that identifies a specific piece of information within that data, use that index (the sketch after this list shows both tricks in action):

  • With audio and video, refer to specific timestamps! For example, you can say “Analyze the user reaction in the video from 1:30 to 2:00.”
  • With data that has built-in “coordinates”, use them! For example, you can say “Analyze columns A-D in the CSV file.”
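Here’s how the labels and indexes fit together in one request. I’m assuming the google-genai Files API call here, the file names are made up for illustration, and the model ID is still a guess:

```python
from google import genai

client = genai.Client()

# Upload each input, then pair it with a plain-text label in the same request.
# (Large videos may need a short wait to finish processing before use.)
dashboard = client.files.upload(file="funnel_dashboard.png")
checkout = client.files.upload(file="checkout_flow.mp4")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption; use the model ID your account exposes
    contents=[
        "Image 1 (Funnel Dashboard):", dashboard,
        "Video 2 (Checkout Flow):", checkout,
        "Use Image 1 and Video 2 to identify the drop-off point. "
        "In Video 2, pay particular attention to 1:30 to 2:00.",
    ],
)
print(response.text)
```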

5. Need Gemini 3 to do deeper thinking? You have to make it.

For complex tasks, such as analyzing a legal contract or coding a simulation, you need to force Gemini 3 to slow down and “think” before it generates a final response. This reduces hallucinations and improves logical consistency.

  • Explicit decomposition is your friend. Don’t just ask for the solution. Break big “thinky” tasks into smaller sub-tasks and have Gemini execute those. For example, ask it to “Parse the stated goal into distinct sub-tasks” first. By forcing it to create an outline or a plan, you improve the quality of the final output. (There’s a combined sketch after this list.)
  • Tell it to be its own critic. Instruct the model to critique its own work before finishing. Include a step in your instructions like: “Review your generated output against the user’s original constraints. Have you answered the user’s intent?”
  • Make Gemini 3 do a little self-reflection. If the model needs to use a tool (like writing code or searching), ask it to explicitly state why it is using that tool and what data it expects to find before it executes the action.
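Putting all three bullets together, a decomposition-plus-self-critique prompt might look like this sketch (the contract file is a stand-in, and the model ID is my assumption as before):

```python
from google import genai

client = genai.Client()

contract_text = open("contract.txt").read()  # stand-in for the document under review

prompt = f"""Goal: identify termination-clause risks in the contract below.

Step 1: Parse the stated goal into distinct sub-tasks and list them as a plan.
Step 2: Execute each sub-task in order, quoting the clause you relied on. Before
using a tool such as search, state why you are using it and what you expect to find.
Step 3: Review your output against the original constraints. Have you answered
the user's intent?

Contract:
{contract_text}
"""

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption; use the model ID your account exposes
    contents=prompt,
)
print(response.text)
```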


Previous articles in this series