
A scene from an upcoming video

Joey deVilla, seen in his home office, holding a small robot in his right hand. The text “Welcome, AI overlords!” appears on screens in the background.

Just for kicks, here’s a still from my upcoming video where I ask ChatGPT some questions about OAuth and building apps with Auth0.


AI art isn’t quite “there” yet

AI-generated picture of an illustration of a magazine showing a tubby-yet-muscular Elon Musk holding a big gun. His gun hand appears to have 8 fingers.

Pictured above is an AI-generated illustration of a magazine featuring Elon Musk. Note that his gun hand has a few too many fingers — or is it Schrödinger’s trigger discipline?


ChatGPT passes one of my AI professor’s tests

Picture of the boarding area of a bus terminal with this question superimposed over it: “If someone is sitting in the waiting area of a bus station, what's the likelihood that they're wearing a tuxedo?”
Photo of Dr. Janice Glasgow.
Dr. Janice Glasgow, Professor Emerita, Queen’s University, Canada.

Back when I was a computer science student at Crazy Go Nuts University, the professor teaching the AI course — Dr. Janice Glasgow — said that this question would stump just about every conversational artificial intelligence system at the time:

If someone is sitting in the waiting area of a bus station, what’s the likelihood that they’re wearing a tuxedo?

This was back when “AI” most often referred to rules-based systems, which were built from programmed sets of rules and facts. In effect, they were a big collection of “if this, then do that” instructions. These systems generally “learn” only through their programming; expanding their knowledge typically means doing more programming.
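To make that concrete, here’s a tiny Python sketch of the “if this, then do that” idea. Everything in it is invented for illustration (no real system was ever this small), but the shape is right: the program knows only what a programmer explicitly wrote into it.

# A toy rules-based "conversational AI": nothing but hand-written
# "if this, then do that" instructions. (All of it invented for illustration.)

def respond(question: str) -> str:
    q = question.lower()
    if "weather" in q:
        return "It might rain. (A programmer wrote a weather rule.)"
    elif "tuxedo" in q and "bus station" in q:
        # This answer exists only because someone thought to code it.
        return "Unlikely. (A programmer hard-coded this specific case.)"
    else:
        # No rule matched. The only way this system "learns" is more programming.
        return "Sorry, nobody has written a rule for that yet."

print(respond("What's the likelihood someone at a bus station is wearing a tuxedo?"))

Ask it anything the rules don’t cover and it comes up empty; the knowledge is the code.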

Today, “AI” is generally used to refer to machine learning systems, which take a pool of data (generally, the bigger the pool, the better) and use it to “train” themselves, making predictions based on that data and adapting to feedback (often provided by humans) on those predictions. These systems generally “learn” from “experience”: expanding their knowledge typically means providing them with more data and more feedback on the predictions they make from that data.
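And here’s the machine-learning counterpart, again just a sketch with made-up numbers: no hand-written rules about tuxedos anywhere, only an estimate that improves as more labelled observations arrive. Real systems use far fancier math than a frequency count, but the learn-from-data loop is the point.

# A toy "learned" answer to the tuxedo question: no rules, just made-up data.

class TuxedoEstimator:
    def __init__(self):
        self.tuxedos = 0   # observed tuxedo-wearers at bus stations
        self.total = 0     # all observed people at bus stations

    def observe(self, wearing_tuxedo: bool):
        """One labelled observation; more data means a better estimate."""
        self.total += 1
        self.tuxedos += int(wearing_tuxedo)

    def predict(self) -> float:
        """Estimated probability that a waiting passenger is in a tuxedo."""
        if self.total == 0:
            return 0.5   # no data yet, so the model genuinely doesn't know
        return self.tuxedos / self.total

model = TuxedoEstimator()
# 1,000 invented observations: one prom night, 999 ordinary commutes.
for i in range(1000):
    model.observe(wearing_tuxedo=(i == 0))

print(f"P(tuxedo | bus station) is roughly {model.predict():.3f}")  # about 0.001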

(If you’d like to know more about the differences between rules-based systems and machine learning, this article is a good place to start.)

Still frame from the “Sailor Moon” anime showing Tuxedo Mask.
Tuxedo Mask might make an appearance at a bus station…

For a rules-based system to answer the question “How likely is a person waiting at a bus station to be wearing a tuxedo?”, it would not only have to parse an English question, but also know what a bus station is, what a tuxedo is, general rules about clothing and different levels of formality, other social rules (many of which are unwritten), and of course, how terrible bus stations are.
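Just to give a sense of how much you’d have to spell out, here’s a hypothetical sliver of the knowledge base such a system would need, with every entry invented for illustration. Even this toy version dodges the hard parts: parsing the English question, all those unwritten social rules, and weddings that end with a ride home on the bus.

# A hypothetical sliver of the hand-coded knowledge a rules-based system
# would need for this one question. (Every entry is invented for illustration.)

GARMENTS = {
    "tuxedo": {"formality": "black tie"},
    "hoodie": {"formality": "casual"},
}

PLACES = {
    "bus station waiting area": {"expected_formality": "casual", "pleasantness": "terrible"},
    "opera house lobby": {"expected_formality": "black tie", "pleasantness": "nice"},
}

def likelihood(garment: str, place: str) -> str:
    """One crude inference rule: a formality mismatch means 'unlikely'."""
    if GARMENTS[garment]["formality"] == PLACES[place]["expected_formality"]:
        return "plausible"
    return "unlikely"

print(likelihood("tuxedo", "bus station waiting area"))  # prints "unlikely"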

ChatGPT seems to be pretty good at answering “artificial general intelligence” questions, or at least pretty good at faking it. I thought I’d ask it Dr. Glasgow’s question, and this is how it replied:

It is unlikely that someone sitting in the waiting area of a bus station would be wearing a tuxedo. Tuxedos are typically worn for formal occasions such as weddings, black tie events, or proms. It is more common for people waiting at a bus station to be dressed in casual or practical clothing that is suitable for travel. That being said, it is always possible that someone might be wearing a tuxedo for any number of reasons, but it would not be a typical or expected sight in a bus station waiting area.

Damn.