
Notes on using Gemini 3 Pro, part 3: Every prompting tip and trick I know (so far)

It’s been about a week since Gemini 3 came out, and I’ve been hearing some really good things about it. So I’m trying out Google’s “try Gemini 3 Pro for free for a month” offer and putting it to as much use as the account will allow, and I’m posting my findings here.

My approach to Gemini 3, after a week of heavy noodling with it

Gemini 3 feels fundamentally different from earlier text-based AI models, and I wouldn’t be surprised if its competitors eventually borrow at least a couple of tricks from it.

It’s designed to be an eater of large, not necessarily organized inputs such as text, images, video, audio, code, and documents, all handled simultaneously and natively. If you want to get the most out of it, I’ve found that a structured, almost engineering-like approach works better than simply treating it like just another chatbot.

Here are some things I and other Gemini 3 noodlers have noticed:

1. Get to the point and don’t sweat the etiquette

You’ve probably heard that adding “please” to your LLM prompts provides better results. It does work with many LLMs, and that’s because they’ve been trained on examples of human communication, where politeness is often associated with higher-quality responses.

Gemini 3’s design as a “chaotic context eater that tries to find the signal in its input noise” means that it treats words like “please” as fluff. My experience is that unlike ChatGPT, it treats your prompt as a set of executable instructions rather than a chat.

“Chatting up” the model as if you were in a conversation over coffee doesn’t provide better results. What works best is providing it with the necessary context — that is, the background information needed to produce the desired result — and a goal stated clearly and without the fluff. “If you could please look at this file and tell me what you think” won’t work as well as “Analyze the attached PDF and list the critical errors the author made.”

Keep prompts short and precise. Google’s own guidance emphasizes that Gemini 3 responds best to direct instructions. Long prompts appear to divert the model’s focus from the actual point and result in inconsistent output. There is one area where you can “go long,” and that’s the contextual information you provide prior to the actual instructions. I’ll talk about this later on in this article.

2. Gemini 3 answers are terse by default, but you can change that

Google is definitely trying to “zig” where other LLM vendors are trying to “zag” with the way Gemini 3 responds. It’s way more concise by default, and if you’re one of the people who was disappointed with the way OpenAI tuned ChatGPT 5.1 to be less chatty and obsequious, you’re going to be downright hurt by Gemini 3’s “ain’t nobody got time for chat” style of answer.

If you absolutely need a long, detailed narrative or a “chatty” persona, you’ve got to explicitly ask for it in your constraints. Otherwise, it will give you the shortest correct answer. And even then, I think Claude will give you more satisfying results in terms of conversational style.

Google has also stressed the Gemini 3 model’s tendency to respond in a neutral, efficient manner. You want a friendly, funny, or conversational tone? Google says you have to ask for it. I’ve only played with this a little, and in my limited experience, the “friendly, funny, or conversational tone” feels like the tone used by someone at a service desk who’s trying to fake niceness in order to make the sale.

3. Anchor your behavioral constraints (or: “Expectations up front”)

If you need to spell out behavioral constraints to get the job done — such as “Be objective,” “Use a level of formality appropriate for an email to a lawyer,” “Don’t use external code libraries,” or “the code must be in Python 3.14+, and not in any earlier version” — do so either in the System Instruction or at the very top of your prompt. This ensures that these constraints set the direction of Gemini 3’s process before it starts crunching the data.
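
To make this concrete, here’s a minimal sketch of what anchoring constraints might look like through the google-generativeai Python SDK. The model ID, the constraints, and the example scenario are all placeholders of my own, not anything from Google’s docs, and newer SDK releases may expose this differently, so treat it as an illustration rather than a recipe.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Behavioral constraints go in the system instruction, so they set the
# direction before the model starts crunching the actual request.
model = genai.GenerativeModel(
    model_name="gemini-3-pro-preview",  # placeholder; use the current model ID
    system_instruction=(
        "Be objective. "
        "Use a level of formality appropriate for an email to a lawyer. "
        "Do not speculate beyond the information provided."
    ),
)

licensing_note = (
    "Our mobile app bundles a charting library released under the GPL, "
    "but the app itself is closed source."
)

# The prompt itself stays short and direct; the constraints above do the steering.
response = model.generate_content(
    f"Context: {licensing_note}\n\n"
    "Draft an email to our lawyer summarizing the licensing risk."
)
print(response.text)
```

The same idea applies in the Gemini web app: put your “rules” in the first few lines of the prompt, before any of the data.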

4. Name or index any info you give Gemini 3

Gemini 3 treats text, images, audio, and video as “equal-class inputs,” but that means you have to eliminate any ambiguity that comes with giving it a big pile of information.

For example, if you upload three screenshots, a video, and a PDF and then prompt with “look at this,” the model may struggle to understand which “this” you’re talking about. Is “this” the first picture? The second one? The video? You need to be more specific.

Explicitly label your inputs in your prompt. Instead of vague references, say: “Use Image 1 (Funnel Dashboard) and Video 2 (Checkout Flow) to identify the drop-off point”. This forces the model to synthesize data across specific files.

Even better, provide the context info one upload at a time, giving a name to each as you upload it: “This is the Jenkins report,” “This is a CSV of temperatures for October 2025.”

If you provide Gemini 3 with data that has some index, such as a timestamp or an ID that identifies a specific piece of information within that data, use that index (there’s a combined sketch after the list below):

  • With audio and video, refer to specific timestamps! For example, you can say “Analyze the user reaction in the video from 1:30 to 2:00.”
  • With data that has built-in “coordinates”, use them!  For example, you can say “Analyze columns A-D in the CSV file.”
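
Here’s a rough sketch of the labeling-and-indexing idea through the google-generativeai Python SDK and its file upload call. The file names, display names, and model ID are placeholders I made up for illustration.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload each input with a human-readable name, then use those names in the prompt.
dashboard = genai.upload_file(path="funnel_dashboard.png",
                              display_name="Image 1 (Funnel Dashboard)")
checkout = genai.upload_file(path="checkout_flow.mp4",
                             display_name="Video 2 (Checkout Flow)")

# Video uploads take a moment to process before they can be referenced.
while checkout.state.name == "PROCESSING":
    time.sleep(2)
    checkout = genai.get_file(checkout.name)

model = genai.GenerativeModel("gemini-3-pro-preview")  # placeholder model ID

response = model.generate_content([
    "Image 1 (Funnel Dashboard):", dashboard,
    "Video 2 (Checkout Flow):", checkout,
    # Refer to inputs by name and, where it helps, by timestamp.
    "Using Image 1 and the 1:30-2:00 segment of Video 2, identify the drop-off point.",
])
print(response.text)
```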

5. Need Gemini 3 to do deeper thinking? You have to make it.

For complex tasks, such as analyzing a legal contract or coding a simulation, you need to force Gemini 3 to slow down and “think” before it generates a final response. This reduces hallucination and helps keep the output logically consistent. (There’s a sketch that puts these techniques together after the list below.)

  • Explicit decomposition is your friend. Don’t just ask for the solution. Break big “thinky” tasks into smaller sub-tasks and have Gemini execute those. For example, ask it to “Parse the stated goal into distinct sub-tasks” first. By forcing it to create an outline or a plan, you improve the quality of the final output.
  • Tell it to be its own critic. Instruct the model to critique its own work before finishing. Include a step in your instructions like: “Review your generated output against the user’s original constraints. Did you answer the user’s intent?”

  • Make Gemini 3 do a little self-reflection. If the model needs to use a tool (like writing code or searching), ask it to explicitly state why it is using that tool and what data it expects to find before it executes the action.
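
As promised, here’s what baking those steps into one structured prompt might look like. The wording of the steps is mine, not Google’s, and the model ID is a placeholder.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-3-pro-preview")  # placeholder model ID

task = "Write a Python simulation of a single-server checkout queue with random arrivals."

# Force an explicit plan, then execution, then a self-review pass.
structured_prompt = f"""
Goal: {task}

Step 1: Parse the stated goal into distinct sub-tasks and list them.
Step 2: Complete each sub-task in order.
Step 3: Review your output against the original goal and constraints.
        If anything is missing or inconsistent, fix it before answering.
Step 4: If you decide to use a tool (such as writing code or searching),
        state why you're using it and what you expect to find first.

Return the final answer only after Steps 3 and 4 are done.
"""

response = model.generate_content(structured_prompt)
print(response.text)
```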


Previous articles in this series


Feel free to borrow my “AI usage disclosure” slide

I’ve been doing a fair number of client proposal and job interview presentations lately, and one of the most-enjoyed slides is the very first one I show, pictured above. Everybody loves Bender!

Feel free to borrow this one for your own presentations.


Notes on using Gemini 3 Pro, part 2: It’s multimodal!

It’s been a week since Gemini 3 came out, and I’ve been hearing some really good things about it. So I’m trying out Google’s “try Gemini 3 Pro for free for a month” offer and putting it to as much use as the account will allow, and I’m posting my findings here.

What does it mean for an AI model to be multimodal?

Multimodal, in the AI sense of the word, means “capable of understanding and working with multiple types of information, such as text, images, audio, and video.” We humans are multimodal, and we’re increasingly expecting AIs to be multimodal too.

The payoff that comes from an AI model being multimodal is flexibility and naturalness. There are times when it’s better to provide a picture instead of trying to type a description of that picture, or give the model a recording of a discussion rather than transcribing it first.

Gemini 3 is natively multimodal

The marketing and developer relations teams behind AI models will claim that their particular product is multimodal, but a number of them are only indirectly so.

Many models are strictly language models (the second “L” in “LLM”). These models rely on other models to translate non-text information into text (for example, a model that “sees” an image and generates a description of it), and then use that generated text as their input. This approach works, but as you might have guessed, a lot gets lost in the translation. The loss is even greater with video.

Gemini 3 is different, since it’s natively multimodal. That means that it processes text, images, audio, video, and code simultaneously as a single stream of information, and without translating non-text data into text first.

How to use Gemini 3’s multimodality

1. Coding multimodally

Because Gemini 3 processes all inputs together, it can reason across them in ways other models can’t. For example, when you upload a video, it can analyze the audio tone against the facial expressions in the video while cross-referencing a PDF transcript of that video.

This opens a lot of interesting possibilities, but I thought I’d go with a couple of ideas I want to try out when coding:

  • On-brand user interfaces: I could upload a hand-drawn sketch or a screenshot of a website I liked and say: “Build a functional React front end that captures this ‘look,’ but uses my company’s brand colors, which are defined in this style guide PDF.”
  • Multimedia debugging: When debugging with a text-first AI model, I’d simply give it the error log. But with a multimodal model like Gemini 3, I could provide additional info for more context, including:

    • The code
    • The raw server logs
    • A screen recording of the application that includes the moment the bug rears its ugly head

I could even ask: “Locate the specific line of code in File A that’s causing the visual glitch at 0:15 in the video.” (A sketch of this kind of prompt follows below.)
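
Here’s a rough sketch of that kind of multimedia debugging request, again using the google-generativeai Python SDK. The file names and model ID are placeholders I invented for the example.

```python
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Upload and label each piece of debugging context.
code_file = genai.upload_file(path="checkout_view.py", mime_type="text/plain",
                              display_name="File A (checkout view code)")
server_log = genai.upload_file(path="server.log", mime_type="text/plain",
                               display_name="File B (raw server logs)")
recording = genai.upload_file(path="bug_repro.mp4",
                              display_name="Video 1 (screen recording)")

# Video uploads take a moment to process before they can be referenced.
while recording.state.name == "PROCESSING":
    time.sleep(2)
    recording = genai.get_file(recording.name)

model = genai.GenerativeModel("gemini-3-pro-preview")  # placeholder model ID
response = model.generate_content([
    "File A (checkout view code):", code_file,
    "File B (raw server logs):", server_log,
    "Video 1 (screen recording):", recording,
    "Locate the specific line of code in File A that's causing the visual glitch "
    "at 0:15 in Video 1, and explain how File B supports your diagnosis.",
])
print(response.text)
```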

2. Get multimodal answers

Here’s one straight out of science fiction. Instead of asking for text answers, try asking for an answer that includes text, graphics, and interactivity.

For example, I gave it this prompt:

Don’t just explain mortgage rates. Code me an interactive loan calculator that lets me slide the interest rate and see the monthly cost change in real time.

It got to work, and in about half a minute, it generated this app…

…and yes, it worked!
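
For reference, the math a calculator like that leans on is the standard loan amortization formula. Here’s my own quick Python version of it; this is just an illustration of the formula, not the code Gemini generated.

```python
def monthly_payment(principal: float, annual_rate_pct: float, years: int) -> float:
    """Standard amortization formula: M = P * r / (1 - (1 + r) ** -n),
    where r is the monthly interest rate and n is the number of monthly payments."""
    r = annual_rate_pct / 100 / 12
    n = years * 12
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Example: a $400,000 loan over 30 years at a few interest rates.
for rate in (5.5, 6.0, 6.5, 7.0):
    print(f"{rate:.1f}% -> ${monthly_payment(400_000, rate, 30):,.2f}/month")
```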

3. Using multimodality and Gemini 3’s massive context window

In my previous article on Gemini 3, I said that it had a context window of 1 million tokens. When you combine that with multimodality, you get a chaos-crushing machine. You don’t have to “clean” your data before handing it over to the model anymore!

  • The haystack search: Upload an entire hour-long lecture video, three related textbooks, and your messy lecture notes, and then prompt it with: “Create a study guide that highlights the 5 concepts from the video that are NOT covered in the textbooks.”

  • Archiving in multiple languages: Upload handwritten recipes or letters in different languages. Gemini can decipher the handwriting, translate it, and format it into a digital cookbook or archive. For a cross-cultural family like mine, this could come in really handy.

Multimodal tips

To get the best results…

  • Name your inputs. Don’t say “look at the image.” Provide names for images when you upload them (“This is image A…”) and then refer to them when providing instructions (“Using image A…”).

  • Give it the info first, instructions last. When taking advantage of that huge context window, provide Gemini with all the data you want it to use first, and put your instructions at the end, after the data. Use a bridging phrase like “Based on the info above…”

  • With video, use timestamps. If there’s a part of the video that’s relevant to your instructions, refer to it by timestamp (e.g., “The trend visible at 1:45 in video B…”). The sketch below pulls these tips together.
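
Here’s one more minimal sketch: the data goes in first, the instruction comes last with a bridging phrase, and the inputs are referred to by name and timestamp. As before, the SDK usage is an assumption on my part, and the file names and model ID are placeholders.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-3-pro-preview")  # placeholder model ID

# Data first: everything the model should use goes at the front of the prompt.
lecture_notes = open("lecture_notes.txt", encoding="utf-8").read()
transcript = open("video_b_transcript.txt", encoding="utf-8").read()

prompt = (
    "Lecture notes:\n" + lecture_notes + "\n\n"
    "Transcript of video B:\n" + transcript + "\n\n"
    # Instructions last, with a bridging phrase pointing back at the data.
    "Based on the info above, summarize the trend discussed around 1:45 in video B "
    "and list any points where the lecture notes disagree with it."
)

print(model.generate_content(prompt).text)
```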


Previous articles in this series


Notes on using Gemini 3 Pro, part 1: Context entropy and a huge context window

It’s been about a week since Gemini 3 came out, and I’ve been hearing some really good things about it. So I’m trying out Google’s “try Gemini 3 Pro for free for a month” offer and putting it to as much use as the account will allow, and I’m posting my findings here.

Gemini 3 and context entropy

Gemini 3, probably owing to its roots at Google, seems to handle “context entropy” better than ChatGPT or Claude. By “context entropy,” I mean “messy data,” with the disorder and chaos that you expect to find in notes and documents that you accumulate over time. Unless you’ve put in a lot of time, you probably haven’t put this information into much of a structured form or created some kind of “map” that explains how (or even if) the various parts of the information are related to each other.

Gemini 3 does a good job of taking chaotic information and finding the signal. I recently fed it a collection of…

  • Job descriptions for positions that I’ve applied for
  • Various versions of my resume, each one tuned for a specific job application
  • Cover letters for each job application
  • Screenshots of job application forms, where I had to answer pre-screening questions posed by prospective employers
  • Notes from interviews with recruiters and prospective employers
  • Video recordings of the aforementioned interviews, which took place on Zoom, Teams, or Google Meet (with the approval of the other parties, of course)
  • Slide presentations that I gave as part of the interview process
  • Follow-up emails, texts, and other messages
  • “Conversations” with ChatGPT and Claude through the job search process, from generating customized cover letters and resumes all the way to post-mortems

…and it’s produced some useful stuff, including a strategy document that I plan to use in my job search going forward. (More on that in a later post here and a video on the Global Nerdy YouTube channel.)

Gemini 3’s huge context window

The other notable thing about Gemini 3 is its context window, which is an AI model’s working memory, or the maximum amount of information that it can work with in any given chat session. Gemini 3 Pro’s context window is a huge 1 million tokens, which is roughly equivalent to about 750,000 words (a token is a small chunk of language that lies somewhere between a character and a full word).

This big context window means that it’s possible to feed Gemini 3 a LOT of data, which could be a big application codebase, a couple of textbooks, lots of notes or transcripts, or any other big pile of messy data that you’re trying to extract meaning from.
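
If you want to check whether your particular pile of data actually fits, the google-generativeai Python SDK can count tokens before you send the real request. Here’s a minimal sketch; the model ID and file name are placeholders.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-3-pro-preview")  # placeholder model ID

big_pile_of_text = open("all_my_notes.txt", encoding="utf-8").read()

# count_tokens reports how much of the roughly 1-million-token window
# this content would consume, without generating anything.
usage = model.count_tokens(big_pile_of_text)
print(f"{usage.total_tokens:,} tokens")
```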

I’m going to be working heavily with Gemini 3 over the next few weeks, and I’ll continue to post my observations here, along with tips and tricks that I either find online or figure out. Watch this space!


Remember to not leave all the work to the AI


Pictured above is a photo of a page from Dawn, an English-language newspaper based in Pakistan. Take a look at the highlighted paragraph at the end of the article titled Auto sales rev up in October:

If you want, I can create an even snappier “front-page style” version with punchy one-line stats and a bold, infographic-ready layout — perfect for maximum reader impact. Do you want me to do that next?

That, of course, is the result of indiscriminately copying and pasting the output of an LLM, which is something I like to call “response injection.” It’s also a career-limiting move.

The online version of the article doesn’t have that final paragraph, but it does have this editor’s note at the end:

This report published in today’s Dawn was originally edited using AI, which is in violation of our current AI policy. The policy is available on our website and can be reviewed here. The original report also carried AI-generated artefact text from the editing process, which has been edited out in the digital version. The matter is being investigated, and the violation of AI policy is regretted. — Editor

I have nothing against using AI as a writing assistant. It’s fantastic for checking spelling, grammar, and flow, it can help you out of writer’s block, and it can do something that you could never do, no matter how smart or creative you are: it can come up with ideas you’d never come up with.

So yes, use AI, but you have to do some of the work, and you have to double-check it before putting that work out in the world!


The “Careers in Tech” panel at TechX Florida / Reasons to be optimistic 2025

The Careers in Tech panel

On Saturday, I had the honor of speaking on the Careers in Tech panel at TechX Florida, which was organized by USF’s student branch of the IEEE Computer Society.

On the panel with me were:

We enjoyed speaking to a packed room…

…and I enjoyed performing the “official unofficial song of artificial intelligence” at the end of the panel:

Reasons to be optimistic 2025

During the panel, a professor in the audience asked an important question on behalf of the students there: In the current tech industry environment, what are the prospects for young technologists about to enter the market?

I was prepared for this kind of question and answered that technological golden ages often come at the same time as global crises. I cited the examples from this book…

Thank You for Being Late, by Thomas Friedman, who proposed that 2007 was “one of the single greatest technological inflection points since Gutenberg…and we all completely missed it.”

The reason many people didn’t notice the technological inflection point is that it was eclipsed by the 2008 financial crisis.

During the dark early days of the COVID-19 pandemic and shutdown, the people from Techstars asked me if I could write something uplifting for the startupdigest newsletter. I wrote an article called Reasons for startups to be optimistic, where I cited Friedman’s theory and put together a table of big tech breakthroughs that happened between 2006 and 2008.

In answering the professor’s question, I went through the list, reciting each breakthrough. The professor smiled and replied, “That’s a long list.”

If you need a ray of hope, I’ve reproduced the list of interesting and impactful tech things that came about between 2006 and 2008 below. Check it out, and keep in mind that we’re currently in a similar time of tech breakthroughs that are being eclipsed by crises around the world.

Airbnb

In October 2007, as a way to offset the high cost of rent in San Francisco, roommates Brian Chesky and Joe Gebbia came up with the idea of putting an air mattress in their living room and turning it into a bed and breakfast. They called their venture AirBedandBreakfast.com, which later got shortened to its current name.

This marks the start of the modern web- and app-driven gig economy.

Android

The first version of Android as we know it was announced on September 23, 2008 on the HTC Dream (also sold as the T-Mobile G1).

Originally started in 2003 and bought by Google in 2005, Android was at first a mobile operating system in the same spirit as Symbian or, more importantly, Windows Mobile — Google was worried about competition from Microsoft. The original spec was for a more BlackBerry-like device with a keyboard, and did not account for a touchscreen. This all changed after the iPhone keynote.

App Store

Apple’s App Store launched on July 10, 2008 with an initial 500 apps. At the time of writing (March 2020), there should be close to 2 million.

In case you don’t remember, Steve Jobs’ original plan was to not allow third-party developers to create native apps for the iPhone. Developers were directed to create web apps. The backlash prompted Apple to allow developers to create apps, and in March 2008, the first iPhone SDK was released.

Azure

Azure, Microsoft’s foray into cloud computing, and the thing that would eventually bring about its turnaround after Steve Ballmer’s departure, was introduced at their PDC conference in 2008 — which I attended on the second week of my job there.
Bitcoin

The person (or persons) going by the name “Satoshi Nakamoto” started working on the Bitcoin project in 2007.

It would eventually lead to cryptocurrency mania, crypto bros, HODL and other additions to the lexicon, one of the best Last Week Tonight news pieces, and give the Winklevoss twins their second shot at technology stardom after their failed first attempt with a guy named Mark Zuckerberg.

Chrome

By 2008, the browser wars were long done, and Internet Explorer owned the market. Then, on September 2, Google released Chrome, announcing it with a comic illustrated by Scott “Understanding Comics” McCloud, and starting the Second Browser War.

When Chrome was launched, Internet Explorer had about 70% of the browser market. In less than 5 years, Chrome would overtake IE.

Data: bandwidth costs and speed

In 2007, bandwidth costs dropped dramatically, while transmission speeds rose just as dramatically.
Dell returns

After stepping down from the position of CEO in 2004 (but staying on as Chairman of the Board), Michael Dell returned to the role on January 31, 2007 at the board’s request.
DNA sequencing costs drop dramatically

The end of 2007 marked the first time that the cost of genome sequencing dropped dramatically — from the order of tens of millions of dollars to single-digit millions. Today, that cost is about $1,000.
DVD formats: Blu-Ray and HD-DVD

In 2008, the format war between these two high-definition optical disc formats came to an end. You probably know which one won.
Facebook

In September 2006, Facebook expanded beyond universities and became available to anyone over 13 with an email address, making it available to the general public and forever altering its course, along with the course of history.
Energy technologies: Fracking and solar

Growth in these two industries helped turn the US into a serious net energy provider, which would help drive the tech boom of the 2010s.
GitHub

GitHub was originally founded as Logical Awesome in February 2008, and its website launched that April. It would grow to become an indispensable software development tool, and a key part of many developer resumes (mine included). It would first displace SourceForge, which used to be the place to go for open source code, and eventually become part of Microsoft’s apparent change of heart about open source when they purchased the company in 2018.
Hadoop

In 2006, developer Doug Cutting of Apache’s Nutch project took GFS (Google File System, written up by Google in 2003) and the MapReduce algorithm (written up by Google in 2004) and combined them with the dataset tech from Nutch to create the Hadoop project. He gave the project the name his son gave to his yellow toy elephant, hence the logo.

By enabling applications and data to be run and stored on clusters of commodity hardware, Hadoop played a key role in creating today’s cloud computing world.

Intel introduces non-silicon materials into its chips

January 2007: Intel’s PR department called it “the biggest change to computer chips in 40 years,” and they may have had a point. The new materials that they introduced into the chip-making process allowed for smaller, faster circuits, which in turn led to smaller and faster chips, which are needed for mobile and IoT technologies.
Internet crosses a billion users

This one’s a little earlier than our timeframe, but I’m including it because it helps set the stage for all the other innovations. At some point in 2005, the internet crossed the billion-user line, a key milestone in its reach and other effects, such as the Long Tail.
iPhone

On January 9, 2007, Steve Jobs said the following at his keynote: “Today, we’re introducing three revolutionary new products…an iPod, a phone, and an internet communicator…Are you getting it? These are not three separate devices. This is one device!”

The iPhone has changed everyone’s lives, including mine. Thanks to this device, I landed my (current until recently) job, and right now, I’m working on revising this book.

iTunes sells its billionth song

On February 22, 2006, Alex Ostrovsky from West Bloomfield, Michigan purchased Coldplay’s Speed of Sound on iTunes, and it turned out to be the billionth song purchased on that platform. This milestone proved to the music industry that it was possible to actually sell music online, forever changing an industry that had been thrashing since the Napster era.
Kindle

Before tablets or large smartphones, there was Amazon’s Kindle e-reader, which came out on November 19, 2007. It was dubbed “the iPod of reading” at the time.

You might not remember this, but the first version didn’t have a touch-sensitive screen. Instead, it had a full-size keyboard below its screen, in a manner similar to phones of that era.

Macs switch to Intel

The first Intel-based Macs were announced on January 10, 2006: The 15″ MacBook Pro and iMac Core Duo. Both were based on the Intel Core Duo.

Motorola’s consistent failure to produce chips with the kind of performance that Apple needed on schedule caused Apple to enact their secret “Plan B”: switch to Intel-based chips. At the 2005 WWDC, Steve Jobs revealed that every version of Mac OS X had been secretly developed and compiled for both PowerPC and Intel processors — just in case.

We may soon see another such transition: from Intel to Apple’s own A-series chips.

Netflix

In 2007, Netflix — then a company that mailed rental DVDs to you — started its streaming service. This would eventually give rise to binge-watching, to one of my favorite technological innovations, Netflix and chill (and yes, there is a Wikipedia entry for it!), and to Tiger King, which is keeping us entertained as we stay home.
Python 3

The release of Python 3 — a.k.a. Python 3000 — in December 2008 was the beginning of the Second Beginning! While Python had been eclipsed by Ruby in the 2000s thanks to Rails and the rise of MVC web frameworks and the supermodel developer, it made its comeback in the 2010s as the language of choice for data science and machine learning thanks to a plethora of libraries (NumPy, SciPy, Pandas) and support applications (including Jupyter Notebooks).

I will always have an affection for Python. I cut my web development teeth in 1999 helping build Givex.com’s site in Python and PostgreSQL. I learned Python by reading O’Reilly’s Learning Python while at Burning Man 1999.

Shopify

In 2004, frustrated with existing ecommerce platforms, programmer Tobias Lütke built his own platform to sell snowboards online. He and his partners realized that they should be selling ecommerce services instead, and in June 2006, they launched Shopify.
Spotify

The streaming service was founded in April 2006, launched in October 2008, and along with Apple and Amazon, changed the music industry.
Surface (as in Microsoft’s big-ass table computer)

Announced on May 29, 2007, the original Surface was a large, coffee table-sized, multitouch-sensitive computer aimed at commercial customers who wanted to provide next-generation kiosk-style entertainment, information, or services to the public.

Do you remember SarcasticGamer’s parody video of the Surface?

Switches

2007 was the year that networking switches jumped in speed and capacity dramatically, helping to pave the way for the modern internet.
Twitter

In 2006, Twittr (it had no e then, which was the style at the time, thanks to Flickr) was formed. From there, it had a wild ride, including South by Southwest 2007, when its attendees — influential techies — used it as a means of catching up and finding each other at the conference. @replies appeared in May 2007, followers were added that July, hashtag support arrived in September, and trending topics came a year later.

Twitter also got featured on an episode of CSI in November 2007, when it was used to solve a case.

VMWare

After the company performed poorly financially, VMWare’s husband-and-wife cofounders — Diane Greene, president and CEO, and Mendel Rosenblum, Chief Scientist — left in 2008. Greene was fired by the board in July, and Rosenblum resigned two months later. VMWare would go on to experience record growth, and its hypervisors would become a key part of making cloud computing what it is today.
Watson

IBM’s Watson underwent initial testing in 2006, when Watson was given 500 clues from prior Jeopardy! programs. Wikipedia will explain the rest:

While the best real-life competitors buzzed in half the time and responded correctly to as many as 95% of clues, Watson’s first pass could get only about 15% correct. During 2007, the IBM team was given three to five years and a staff of 15 people to solve the problems. By 2008, the developers had advanced Watson such that it could compete with Jeopardy! champions.

Wii

The Wii was released in December 2006, marking Nintendo’s comeback in a time when the console market belonged solely to the PlayStation and Xbox.
XO computer

You probably know this device better as the “One Laptop Per Child” computer — the laptop that was going to change the world, but didn’t quite do that. Still, its form factor lives on in today’s Chromebooks, which are powered by Chrome (which also debuted during this time), and the concept of open source hardware continues today in the form of Arduino and Raspberry Pi.
YouTube

YouTube was purchased by Google in October 2006. In 2007, it exploded in popularity, consuming as much bandwidth as the entire internet did 7 years before. In the summer and fall of 2007, CNN and YouTube produced televised presidential debates, where Democratic and Republican US presidential hopefuls answered YouTube viewer questions.

You probably winced at this infamous YouTube video, which was posted on August 24, 2007: Miss Teen USA 2007 – South Carolina answers a question, which has amassed almost 70 million views to date.


I’m speaking at the TechX Florida 2025 AI conference this Saturday!

This Saturday, November 8, I’ll be at the TechX Florida 2025 AI Conference at USF, on the Careers in Tech panel, where we’ll be talking about career paths, hiring expectations, and practical advice for early-career developers and engineers.

This conference, which is FREE to attend, will feature:

  • AI talks from major players in the industry, including Atlassian, Intel, Jabil, Microsoft, and Verizon
  • Opportunities to meet and network with companies, startups, and techies from the Tampa Bay area
  • The Careers in Tech panel, featuring Yours Truly and other experienced industry pros

Once again, the TechX Florida 2025 AI Conference will take place this Saturday, November 8th, in USF’s Engineering Building II, in the Hall of Flags. It runs from 11 a.m. to 5 p.m. and will be followed by…

TechX After Dark, a social/fundraising event running from 6 p.m. to 8 p.m., with appetizers and a cash bar.

TechX After Dark charges admission:

  • FREE for IEEE-CS members
  • $10 for students
  • $20 for professionals