Artificial Intelligence Editorial Presentations Video

Maciej Ceglowski’s reassuring arguments for why an AI superintelligence might not be a threat to humanity

Yesterday on the OpenAI blog, founder Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever posted an article titled Governance of superintelligence with the subtitle “Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”

Although it’s a good idea to think about this sort of thing, there’s also the possibility that all this fuss over superintelligence may be for nothing. In his talk Superintelligence: The Idea That Eats Smart People, which he gave at Web Camp Zagreb in 2016, developer Maciej Ceglowski (whom I know personally from another life back in the 2000s) lists some arguments against the idea of an evil superintelligence that threatens humanity:

Here are just a few of Maciej’s “inside perspective” arguments, which you can also find in his companion essay:

  • The Argument From Wooly Definitions: “With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.”
  • The Argument From Stephen Hawking’s Cat: “Stephen Hawking is one of the most brilliant people alive [He was alive at the time Maciej wrote this], but say he wants to get his cat into the cat carrier. How’s he going to do it? He can model the cat’s behavior in his mind and figure out ways to persuade it. He knows a lot about feline behavior. But ultimately, if the cat doesn’t want to get in the carrier, there’s nothing Hawking can do about it despite his overpowering advantage in intelligence.”
  • The Argument From Einstein’s Cat: “There’s a stronger version of this argument, using Einstein’s cat. Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn’t want to go, you know what would happen to Einstein.”
  • The Argument From Emus: “In the 1930’s, Australians decided to massacre their native emu population to help struggling farmers. They deployed motorized units of Australian army troops in what we would now call technicals—fast-moving pickup trucks with machine guns mounted on the back. The emus responded by adopting basic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and demoralizing the enemy. And they won the Emu War, from which Australia has never recovered.”
  • The Argument From Slavic Pessimism: “We can’t build anything right. We can’t even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?”
  • The Argument From Complex Motivations: “Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent. There’s a wonderful moment in Rick and Morty where Rick builds a butter-fetching robot, and the first thing his creation does is look at him and ask ‘what is my purpose?’. When Rick explains that it’s meant to pass butter, the robot stares at its hands in existential despair.”
  • The Argument From Actual AI: “When we look at where AI is actually succeeding, it’s not in complex, recursively self-improving algorithms. It’s the result of pouring absolutely massive amounts of data into relatively simple neural networks. The breakthroughs being made in practical AI research hinge on the availability of these data collections, rather than radical advances in algorithms.”
  • The Argument From Maciej’s Roommate: “My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips. The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation.”

There are also his “outside perspective” arguments, which look at what it means to believe in the threat of an AI superintelligence. These include becoming an AI weenie like the dorks pictured below:

The dork on the left is none other than Marc Andreessen, browser pioneer, who’s more of a south-pointing compass these days, and an even bigger AI weenie, if tweets like this are any indication:

But more importantly, the belief in a future superintelligence feels like a religion for people who think they’re too smart to fall for religion.

As Maciej puts it:

[The Singularity is] a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith. The AI has all the attributes of God: it’s omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.

Like in any religion, there’s even a feeling of urgency. You have to act now! The fate of the world is in the balance!

And of course, they need money!

Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.

Or, as this tweet summarizes it:

In case you need context:

  • Roko’s Basilisk is a thought experiment posted on the “rational discourse” site LessWrong (which should be your first warning) about a potential superintelligent, super-capable AI in the future. This AI would supposedly have the incentive to create a virtual reality simulation to torture anyone who knew of its potential existence but didn’t tirelessly and wholeheartedly work towards making that AI a reality.

    It gets its name from Roko, the LessWrong member who came up with this harebrained idea, and “basilisk,” a mythical creature that can kill with a single look.
  • Pascal’s Wager is philosopher Blaise Pascal’s idea that you should live virtuously and act as if there is a God. If God exists, you win a prize of infinite value: you go to Heaven forever and avoid eternal damnation in Hell. If God doesn’t exist, you lose a finite amount: some pleasures and luxuries during your limited lifespan.
Artificial Intelligence Math Programming

You want sum of this? (or: What does the Σ symbol mean?)

If you’ve been perusing LinkedIn or programming sites lately, you may have seen that the professors who teach Stanford’s machine learning course, CS229, have posted their lecture notes online, a whopping 226 pages of them! This is pure gold for anyone who wants to get up to speed on machine learning but doesn’t have the time — or $55K a year — to spend on getting a bachelor’s degree in computer science from “The Cardinal.”

Or at least, it seems like pure gold…until you start reading it. Here’s page 1 of Chapter 1:

This is the sort of material that sends people running away screaming. For many, the first reaction upon being confronted with it would be something like “What is this ℝ thing in the second paragraph? What’s with the formulas on the first page? What the hell is that Σ thing? This is programming…nobody told me there would be math!”

If you’re planning to really get into AI programming and take great pains to avoid mathematics, I have good news and bad news for you.

First, the bad news: A lot of AI involves “college-level” math. There’s linear algebra, continuous functions, statistics, and a dash of calculus. It can’t be helped — machine learning and data science are at the root of the way artificial intelligence is currently being implemented, and both involve number-crunching.

And now, the good news: I’m here to help! I’m decent at both math and explaining things.

Over the next little while, I’m going to post articles in a series called Math WTF, which will explain the math you might encounter while learning AI programming. I’m going to keep it as layperson-friendly as possible, and in the end, you’ll find yourself understanding stuff like the page I posted above.

So welcome to the first article in the Math WTF series, where I’ll explain something you’re likely to run into when reading notes or papers on AI and data science: the Σ symbol.

Σ, or sigma

As explained in the infographic above, the letter Σ — called “sigma” — is the Greek equivalent of our letter S. It means “the sum of a series.”

The series in question is determined by the things above, below, and to the right of the Σ:

  • The thing to the right of the Σ describes each term in the series: 2n + 3, or as we’d say in code, 2 * n + 3.
  • The thing below the Σ specifies the index variable — the variable we’ll use for counting terms in the series (which in this case is n) — and its initial value (which in this case is 1).
  • The thing above the Σ specifies the final value of the index variable, which in this case is 4.

So you can read the equation pictured above as “The sum of all the values of 2n + 3, starting at n = 1 and ending with n = 4.”

If you write out this sum one term at a time, starting with n = 1 and ending with n = 4, you get this…

((2 * 1) + 3) + ((2 * 2) + 3) + ((2 * 3) + 3) + ((2 * 4) + 3)

…and the answer is 32.

You could express this calculation in Python this way…

# Python 3.11

total = 0
for n in range(1, 5):
    total += 2 * n + 3

print(total)  # 32

Keep in mind that range(1, 5) means “a range of integers starting at 1 and going up but not including 5.” In other words, it means “1, 2, 3, 4.”
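If you’d like to check the loop against the written-out sum above, here’s a quick sketch that collects each term before adding them up:

```python
# Build the terms one at a time so we can compare them
# against the written-out sum: 5 + 7 + 9 + 11.
terms = []
for n in range(1, 5):
    terms.append(2 * n + 3)

print(terms)       # [5, 7, 9, 11]
print(sum(terms))  # 32
```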

There’s a more Pythonic way to do it:

# Python 3.11

sum([2 * n + 3 for n in range(1, 5)])

This works well if you need to find the sum of a small set of terms. In this case, we’re looking at a sum of just 4 terms, so generating a list and then calling the sum function on it is no big deal. But if we were dealing with a large set of terms — say tens of thousands, hundreds of thousands, or more — you might want to go with a generator instead:

# Python 3.11

sum((2 * n + 3 for n in range(1, 5)))

The difference is the brackets:

  • [2 * n + 3 for n in range(1, 5)] — note the square brackets on the outside. This creates a list of 4 items. Creating 4 items doesn’t take up much processing time or memory, but creating hundreds of thousands could.
  • (2 * n + 3 for n in range(1, 5)) — note the round brackets on the outside. This creates a generator, which produces items one at a time as the sum function consumes them, instead of building the whole list up front. It takes up very little memory, even when going through a sequence of millions, billions, or even trillions of terms.
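You can see the memory difference for yourself with the standard library’s sys.getsizeof — a rough sketch, keeping in mind that getsizeof reports only the size of the container itself, not the items inside it:

```python
import sys

# The list version materializes a million items up front...
as_list = [2 * n + 3 for n in range(1, 1_000_001)]

# ...while the generator version stores only its current state.
as_gen = (2 * n + 3 for n in range(1, 1_000_001))

print(sys.getsizeof(as_list))  # millions of bytes
print(sys.getsizeof(as_gen))   # a couple hundred bytes

# Either way, the total comes out the same.
print(sum(as_gen) == sum(as_list))  # True
```

(Once a generator has been summed, it’s exhausted — you’d need to create a new one to iterate again, which is the trade-off for that tiny memory footprint.)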

Keep an eye on this blog! I’ll post more articles explaining math stuff regularly.

Worth reading

For more about generators in Python, see Real Python’s article, How to Use Generators and yield in Python.

Artificial Intelligence Editorial

The bias in AI *influencers*

One of the challenges that we’ll face in AI is bias — not just in the data, but in the influencers as well. Consider this tweet from @OnPageLeads:

@OnPageLeads’ tweet.
Tap to view the original tweet.

Take a closer look at the original drawing made by the child…

The original child’s drawing of the hand. Make note of the skin tone.
Tap to view the source.

…and then the AI-generated photorealistic image:

The AI’s photorealistic rendering of the child’s drawing.
Again, make note of the skin tone.
Tap to view the source.

Tech influencer Robert Scoble, societally-blind gadget fanboy that he is, was quick to heap praise on the tweet. Thankfully, he was quickly called out by people who saw what the problem was:

Robert Scoble’s poorly-considered response, followed by some righteous retorts.
Tap to view the original tweet.

Of course, this kind of structural racism is nothing new to us folks of color. The problem is that criticism of this kind often gets shut down. The most egregious case was Google’s firing of AI ethicist Timnit Gebru, who has warned time and again that unmoderated AI has the power to amplify societal racism.

You may have heard of recent ex-Googler Geoffrey Hinton, who’s been making headlines for sounding the alarm about possible existential threats from AI. He was oddly silent when Google was firing Gebru and others for saying that AI could harm marginalized people.

In fact, he downplayed their concerns in this CNN interview:

“Their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

Geoffrey Hinton, CNN, May 2, 2023.

My only reply to Hinton’s remark is this:

The Audacity of the Caucasity

For more about Timnit Gebru and her concerns about AI — especially a lack of focus on ethics related to it — check out this podcast episode of Adam Conover’s Factually, featuring Gebru and computational linguistics professor Emily Bender:

Artificial Intelligence Current Events Video What I’m Up To

My interviews on artificial intelligence and ChatGPT on local news

Chris Cato and Joey deVilla on Fox 13 News Tampa. The “lower third” caption reads “Benefits and concerns of artificial intelligence.”

In case you missed it, here’s that interview I did for the 4:00 p.m. news on FOX 13 Tampa on Monday, April 10th with anchor Chris Cato:

It’s a follow-up to this piece that FOX 13 did back in March:

In that piece, I appeared along with:

  • Local realtor Chris Logan, who’s been using ChatGPT to speed up the (presumably tedious) process of writing up descriptions of houses for sale
  • University of South Florida associate director of the School of Information Systems and Management Triparna de Vreede, who talked about its possible malicious uses and what might be possible when AI meets quantum computing.
  • IP lawyer Thomas Stanton, who talked about how AI could affect jobs.

All of this is a good preamble for the first Tampa Artificial Intelligence Meetup session that I’ll be running — it’s happening on Wednesday, May 31st!

Artificial Intelligence Humor The Street Finds Its Own Uses For Things

AI ad of the moment: “Beer Party in Hell”

Hot on the heels of the AI-generated pizza ad Pepperoni Hug Spot, here’s a beer commercial from an artificial intelligence that has clearly never been invited to a backyard party:

There seem to be two versions of this ad online. One has Smash Mouth’s All Star as its backing track, while the other one (which is presented above) is on YouTube and is backed by generic southern rock-esque music — presumably to avoid getting a copyright “strike”.

As with Pepperoni Hug Spot, the visuals in the beer ad are located deep inside the uncanny valley:

This is how best friends drink a beer.
Tap to view the weirdness at full size.
Clearly the AI has never shotgunned a beer before.
Tap to view the weirdness at full size.
To an AI, beer and fire are pretty much the same thing.
Tap to view the weirdness at full size.
“I’m drinking from a bottle! No — a can! Wait — a bottle can!”
Tap to view the weirdness at full size.
Multiple fingers and a cap/hair blend that doesn’t exist outside of “JoJo’s Bizarre Adventure.”
Tap to view the weirdness at full size.
“Don’t stop…be-lieeeeving…”
Tap to view the weirdness at full size.
“I need more lighter fluid on the grill.”
Tap to view the weirdness at full size.
“NOW it’s a party!”
Tap to view the weirdness at full size.
“Is anyone gonna help clean up?”
Tap to view the weirdness at full size.
Artificial Intelligence Deals Programming Reading Material

Humble Bundle’s deal on No Starch Press’ Python books

Banner for Humble Bundle’s No Starch Press Python book bundle

I love No Starch Press’ Python books. They’re the textbooks I use when teaching the Python course at Computer Coach because they’re easy to read, explain things clearly, and have useful examples.

And now you can get 18 of their Python ebooks for $36 — that’s $2 each, or the cost of just one of their ebooks, Python Crash Course, Third Edition!

Check out the deal at Humble Bundle, and get ready to get good at Python! At the time of writing, the bundle will be available for 20 more days.

Banner for Tampa Artificial Intelligence Meetup

Consider these books recommended reading for the Tampa Artificial Intelligence Meetup, which is now under my management and is holding a meeting later this month!

Artificial Intelligence Humor The Street Finds Its Own Uses For Things

AI ad of the moment: “Pepperoni Hug Spot”

Actual still from the AI-generated ad.
It’s so, SO wrong.

Good news, creatives — if this completely AI-generated TV ad for a fictitious pizza place is any indication, you won’t be replaced by artificial intelligence just yet.

Just watch it. It’s so…off. The people’s eyes are off-kilter, the chef’s arm appears to be on fire, and the scenes of people eating pizza slices are so unsettling that they will haunt my dreams for the next week.

Pepperoni Hug Spot is a TV ad created by a YouTuber (or group of YouTubers) going by the name “Pizza Later” using the following combination of AI tools:

My favorite Twitter response to the ad comes from none other than Pizza Hut:

Of course, this being the age of Late-Stage Capitalism, Pizza Later has quickly created a site for Pepperoni Hug Spot, where you can buy merch.