Categories: Artificial Intelligence, Programming, What I’m Up To

Andrew Ng’s “ChatGPT Prompt Engineering for Developers” is free for a limited time!

Screenshot from “ChatGPT Prompt Engineering for Developers,” showing the screen’s three parts — table of contents, Jupyter Notebook, and video.
A screenshot from ChatGPT Prompt Engineering for Developers.

Here’s something much better and more useful than anything you’ll find in the endless stream of “ChatGPT Prompts You Must Know”-style articles — it’s ChatGPT Prompt Engineering for Developers. This online tutorial shows you how to use API calls to OpenAI to summarize, infer, transform, and expand text in order to add new features to your applications — or form the basis of new ones.
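If you’ve never called the OpenAI API before, here’s a minimal sketch of the kind of helper the course builds on, using the pre-1.0 openai Python library that was current when the course launched. The model name and prompt below are just examples:

# A minimal sketch of a ChatGPT API call — assumes the pre-1.0 openai
# library (pip install openai) and an OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single-prompt chat request and return the model's reply."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # 0 makes the output as deterministic as possible
    )
    return response.choices[0].message["content"]

# Example: the "summarize" technique from the course.
text = "Paste a product review, article, or any other text here."
print(get_completion(f"Summarize the following text in one sentence: {text}"))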

Isa Fulford and Andrew Ng.

It’s a short course from DeepLearning.AI, and it’s free for a limited time. It’s taught by Isa Fulford of OpenAI’s tech staff and all-round AI expert Andrew Ng (CEO and founder of Landing AI, Chairman and co-founder of Coursera, General Partner at AI Fund, and an adjunct professor in Stanford University’s Computer Science Department).

The course is impressive for a few reasons:

  1. Its format is so useful for developers. Most of it takes place in a page divided into three columns:
    • A table of contents column on the left
    • A Jupyter Notebook column in the center, containing the code for the current exercise, which you can select and copy text from, as well as edit and run
    • A video/transcript column on the right
  2. It’s set up very well, with these major sections:
    1. Introduction and guidelines
    2. Iterative prompt development
    3. Summarizing text with GPT
    4. Inferring — understanding the text, analyzing sentiment, and extracting information
    5. Transforming — converting text from one format to another, or even one language to another
    6. Expanding — given a small amount of information, expanding on it to create a body of text
    7. Chatbot — applying the techniques above to create a custom chatbot
    8. Conclusion
  3. And finally, it’s an Andrew Ng course. He’s just good at this.

The course is pretty self-contained, but you’ll find it helpful to have Jupyter Notebook installed on your system, and, as you might expect, you should be familiar with Python.

I’m going to take the course for a test run over the next few days, and I’ll report my observations here. Watch this space!

Categories: Artificial Intelligence, Meetups, Tampa Bay

Tomorrow: Build Eliza, the original chatbot from 1964!

The Tampa Bay Artificial Intelligence Meetup is gathering tomorrow at Computer Coach, where we’ll build Eliza, the original chatbot!

(You can still register for the meetup, but space is limited!)

Eliza was created by computer scientist Joseph Weizenbaum at MIT’s Artificial Intelligence Lab over a two-year period from 1964 to 1966. It simulated a psychotherapist, reflecting what the patient said back at them or prompting the patient to talk about what they had just said.

Here’s a quick video clip about Eliza:

Although Eliza was written for the IBM 7094, a room-sized computer whose operator console is pictured below…

IBM 7094 operator console. Photo by Arnold Reinhold.
Tap to view at full size.

…it later became a popular program on home computers in the 1980s under the name “Eliza” or “Doctor”:

The computers I grew up on all had some version of Eliza.

Here’s Eliza running on the TRS-80 Color Computer — the “CoCo” — an underappreciated machine from the 1980s:

There’s even a scene from the TV series Young Sheldon, which takes place in the late 1980s/early 1990s, where the titular character has a chat with Eliza:

Eliza’s responses in the scene are pretty accurate; the synthesized voice is the one embellishment, since the real Eliza communicated through text only.

If you’re really curious, you can try out ELIZA online! Be warned: it won’t be as impressive as ChatGPT.

There’s no way we could code ChatGPT in a single meetup, but we will build a complete working version of ELIZA tomorrow at the Tampa Bay Artificial Intelligence Meetup! It’s also a great way to sharpen your skills in Python, which is very popular in AI circles.

In the meetup, I’ll provide a “starter” project, and you’ll code along with me until you have a working version of Eliza that you can tweak into your own chatbot.
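If you’re curious about what’s under the hood, here’s a minimal sketch of the core Eliza trick — pattern matching plus pronoun “reflection.” This is just an illustration, not the actual starter project; the patterns and responses here are made up for the example:

# Pattern matching plus pronoun "reflection" — the heart of Eliza.
import random
import re

# Swap first- and second-person words so the bot can mirror the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# (pattern, responses) pairs; "%1" is filled with the reflected capture.
PATTERNS = [
    (r"i need (.*)", ["Why do you need %1?", "Would it really help you to get %1?"]),
    (r"i am (.*)", ["How long have you been %1?", "Why do you think you are %1?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(text):
    """Turn 'i need my coffee' into 'you need your coffee'."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(user_input):
    """Return a canned response for the first pattern that matches."""
    cleaned = user_input.lower().strip(" .!?")
    for pattern, responses in PATTERNS:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(responses).replace("%1", reflect(match.group(1)))
    return "Please go on."

print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"

Real Eliza scripts have dozens of patterns, ranked so that the most specific ones win, but even this tiny version hints at why the program felt so uncannily conversational.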

You won’t need the latest and greatest computer to do it, either! A laptop from 2010 (and remember, that’s 13 years ago now!) or later is all you’ll need.

There are still a few spaces available for tomorrow’s meetup. If you’re interested, register now!

Categories: Artificial Intelligence, Editorial, Presentations, Video

Maciej Ceglowski’s reassuring arguments for why an AI superintelligence might not be a threat to humanity

Yesterday on the OpenAI blog, co-founder and CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever posted an article titled Governance of superintelligence with the subtitle “Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”

Although it’s a good idea to think about this sort of thing, there’s also the possibility that all this fuss over superintelligence may be for nothing. In his talk, Superintelligence: The Idea That Eats Smart People, which he gave at Web Camp Zagreb in 2016, developer Maciej Ceglowski, whom I personally know from another life back in the 2000s, lists some arguments against the idea of an evil superintelligence that is a threat to humanity:

Here are just a few of Maciej’s “inside perspective” arguments, which you can also find in his companion essay:

  • The Argument From Wooly Definitions: “With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.”
  • The Argument From Stephen Hawking’s Cat: “Stephen Hawking is one of the most brilliant people alive [He was alive at the time Maciej wrote this], but say he wants to get his cat into the cat carrier. How’s he going to do it? He can model the cat’s behavior in his mind and figure out ways to persuade it. He knows a lot about feline behavior. But ultimately, if the cat doesn’t want to get in the carrier, there’s nothing Hawking can do about it despite his overpowering advantage in intelligence.”
  • The Argument From Einstein’s Cat: “There’s a stronger version of this argument, using Einstein’s cat. Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn’t want to go, you know what would happen to Einstein.”
  • The Argument From Emus: “In the 1930’s, Australians decided to massacre their native emu population to help struggling farmers. They deployed motorized units of Australian army troops in what we would now call technicals—fast-moving pickup trucks with machine guns mounted on the back. The emus responded by adopting basic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and demoralizing the enemy. And they won the Emu War, from which Australia has never recovered.”
  • The Argument From Slavic Pessimism: “We can’t build anything right. We can’t even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?”
  • The Argument From Complex Motivations: “Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent. There’s a wonderful moment in Rick and Morty where Rick builds a butter-fetching robot, and the first thing his creation does is look at him and ask ‘what is my purpose?’. When Rick explains that it’s meant to pass butter, the robot stares at its hands in existential despair.”
  • The Argument From Actual AI: “When we look at where AI is actually succeeding, it’s not in complex, recursively self-improving algorithms. It’s the result of pouring absolutely massive amounts of data into relatively simple neural networks. The breakthroughs being made in practical AI research hinge on the availability of these data collections, rather than radical advances in algorithms.”
  • The Argument From Maciej’s Roommate: “My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips. The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation.”

There are also his “outside perspective” arguments, which look at what it means to believe in the threat of an AI superintelligence. These include becoming an AI weenie like the dorks pictured below:

The dork on the left is none other than browser pioneer Marc Andreessen, who these days is more of a south-pointing compass, and an even bigger AI weenie, if tweets like this are any indication:

But more importantly, the belief in a future superintelligence feels like a religion for people who think they’re too smart to fall for religion.

As Maciej puts it:

[The Singularity is] a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith. The AI has all the attributes of God: it’s omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.

Like in any religion, there’s even a feeling of urgency. You have to act now! The fate of the world is in the balance!

And of course, they need money!

Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.

Or, as this tweet summarizes it:

In case you need context:

  • Roko’s Basilisk is a thought experiment posted on the “rational discourse” site LessWrong (which should be your first warning) about a potential superintelligent, super-capable AI in the future. This AI would supposedly have the incentive to create a virtual reality simulation to torture anyone who knew of its potential existence but didn’t tirelessly and wholeheartedly work towards making that AI a reality.

    It gets its name from Roko, the LessWrong member who came up with this harebrained idea, and “basilisk,” a mythical creature that can kill with a single look.
  • Pascal’s Wager is philosopher Blaise Pascal’s idea that you should live virtuously and act as if there is a God. If God exists, you win a prize of infinite value: you go to Heaven forever and avoid eternal damnation in Hell. If God doesn’t exist, you lose a finite amount: some pleasures and luxuries during your limited lifespan.
Categories: Artificial Intelligence, Math, Programming

You want sum of this? (or: What does the Σ symbol mean?)

If you’ve been perusing LinkedIn or a programming site like Lobste.rs, you may have seen that the professors who teach Stanford’s machine learning course, CS229, have posted their lecture notes online — a whopping 226 pages of them! This is pure gold for anyone who wants to get up to speed on machine learning but doesn’t have the time — or $55K a year — to spend on getting a bachelor’s degree in computer science from “The Cardinal.”

Or at least, it seems like pure gold…until you start reading it. Here’s page 1 of Chapter 1:

This is the sort of material that sends people running away screaming. For many, the first reaction upon being confronted with it would be something like “What is this ℝ thing in the second paragraph? What’s with the formulas on the first page? What the hell is that Σ thing? This is programming…nobody told me there would be math!”

If you’re planning to really get into AI programming while taking great pains to avoid mathematics, I have good news and bad news for you.

First, the bad news: A lot of AI involves “college-level” math. There’s linear algebra, continuous functions, statistics, and a dash of calculus. It can’t be helped — machine learning and data science are at the root of the way artificial intelligence is currently being implemented, and both involve number-crunching.

And now, the good news: I’m here to help! I’m decent at both math and explaining things.

Over the next little while, I’m going to post articles in a series called Math WTF that will explain the math that you might encounter while learning AI and doing programming. I’m going to keep it as layperson-friendly as possible, and in the end, you’ll find yourself understanding stuff like the page I posted above.

So welcome to the first article in the Math WTF series, where I’ll explain something you’re likely to run into when reading notes or papers on AI and data science: the Σ symbol.

Σ, or sigma

As explained in the infographic above, the letter Σ — called “sigma” — is the Greek equivalent of our letter S. It means “the sum of a series.”

The series in question is determined by the things above, below, and to the right of the Σ:

  • The thing to the right of the Σ describes each term in the series: 2n + 3, or as we’d say in code, 2 * n + 3.
  • The thing below the Σ specifies the index variable — the variable we’ll use for counting terms in the series (which in this case is n) — and its initial value (which in this case is 1).
  • The thing above the Σ specifies the final value of the index variable, which in this case is 4.

So you can read the equation pictured above as “The sum of all the values of 2n + 3, starting at n = 1 and ending with n = 4.”
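By the way, if you ever need to write this equation yourself — say, in a Jupyter Notebook Markdown cell, which supports LaTeX — here’s how it’s expressed in LaTeX notation:

\sum_{n=1}^{4} (2n + 3)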

If you write out this sum one term at a time, starting with n = 1 and ending with n = 4, you get this…

((2 * 1) + 3) + ((2 * 2) + 3) + ((2 * 3) + 3) + ((2 * 4) + 3)

…and the answer is 32.

You could express this calculation in Python this way…

# Python 3.11

total = 0
for n in range(1, 5):
    total += 2 * n + 3
print(total)  # prints 32

Keep in mind that range(1, 5) means “a range of integers starting at 1 and going up to, but not including, 5.” In other words, it means “1, 2, 3, 4.”

There’s a more Pythonic way to do it:

# Python 3.11

sum([2 * n + 3 for n in range(1, 5)])  # also evaluates to 32

This is fine if you need to find the sum of a small set of terms. In this case, we’re looking at a sum of 4 terms, so generating a list and then using the sum function on it is fine. But if we were dealing with a large set of terms — say tens of thousands, hundreds of thousands, or more — you might want to go with a generator instead:

# Python 3.11

sum((2 * n + 3 for n in range(1, 5)))

The difference is the brackets:

  • [2 * n + 3 for n in range(1, 5)] — note the square brackets on the outside. This creates a list of 4 items. Creating 4 items doesn’t take up much processing time or memory, but creating hundreds of thousands could.
  • (2 * n + 3 for n in range(1, 5)) — note the round brackets on the outside. This creates a generator, which produces items one at a time as sum iterates over it, instead of building the whole list up front. This takes up very little memory, even when going through a sequence of millions, billions, or even trillions of terms.
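If you want to see the difference for yourself, here’s a quick sketch (the exact numbers will vary from system to system):

# Python 3.11
# Comparing the memory footprint of a list vs. a generator.
import sys

big_list = [2 * n + 3 for n in range(1_000_000)]  # builds all 1,000,000 items now
big_gen = (2 * n + 3 for n in range(1_000_000))   # builds each item only when asked

print(sys.getsizeof(big_list))  # millions of bytes
print(sys.getsizeof(big_gen))   # around 200 bytes, no matter how long the sequence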

Keep an eye on this blog! I’ll post more articles explaining math stuff regularly.

Worth reading

For more about generators in Python, see Real Python’s article, How to Use Generators and yield in Python.

Categories: Artificial Intelligence, Editorial

The bias in AI *influencers*

One of the challenges that we’ll face in AI is bias — not just in the data, but in the influencers as well. Consider this tweet from @OnPageLeads:

@OnPageLeads’ tweet.
Tap to view the original tweet.

Take a closer look at the original drawing made by the child…

The original child’s drawing of the hand. Make note of the skin tone.
Tap to view the source.

…and then the AI-generated photorealistic image:

The AI’s photorealistic rendering of the child’s drawing.
Again, make note of the skin tone.
Tap to view the source.

Tech influencer Robert Scoble, societally-blind gadget fanboy that he is, was quick to heap praise on the tweet. Thankfully, he was quickly called out by people who saw what the problem was:

Robert Scoble’s poorly-considered response, followed by some righteous retorts.
Tap to view the original tweet.

Of course, this kind of structural racism is nothing new to us folks of color. The problem is that criticism of this kind often gets shut down. The most egregious case of this was Google’s firing of AI ethicist Timnit Gebru, who has warned time and again that unmoderated AI has the power to amplify societal racism.

You may have heard of recent ex-Googler Geoffrey Hinton, who’s been making headlines by sounding the alarm about possible existential threats from AI. He was oddly silent when Google was firing Gebru and others for saying that AI could harm marginalized people.

In fact, he downplayed their concerns in this CNN interview:

“Their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

Geoffrey Hinton, CNN, May 2, 2023.

My only reply to Hinton’s remark is this:

The Audacity of the Caucasity

For more about Timnit Gebru and her concerns about AI — especially a lack of focus on ethics related to it — check out this podcast episode of Adam Conover’s Factually, featuring Gebru and computational linguistics professor Emily Bender:

Categories: Artificial Intelligence, Current Events, Video, What I’m Up To

My interviews on artificial intelligence and ChatGPT on local news

Chris Cato and Joey deVilla on Fox 13 News Tampa. The “lower third” caption reads “Benefits and concerns of artificial intelligence.”

In case you missed it, here’s that interview I did for the 4:00 p.m. news on FOX 13 Tampa on Monday, April 10th with anchor Chris Cato:

It’s a follow-up to this piece that FOX 13 did back in March:

In that piece, I appeared along with:

  • Local realtor Chris Logan, who’s been using ChatGPT to speed up the (presumably tedious) process of writing up descriptions of houses for sale
  • Triparna de Vreede, associate director of the University of South Florida’s School of Information Systems and Management, who talked about AI’s possible malicious uses and what might be possible when AI meets quantum computing
  • IP lawyer Thomas Stanton, who talked about how AI could affect jobs

All of this is a good preamble for the first Tampa Bay Artificial Intelligence Meetup session that I’ll be running — it’s happening on Wednesday, May 31st!

Categories: Artificial Intelligence, Humor, The Street Finds Its Own Uses For Things

AI ad of the moment: “Beer Party in Hell”

Hot on the heels of the AI-generated pizza ad Pepperoni Hug Spot, here’s a beer commercial from an artificial intelligence that clearly has never been invited to a backyard party:

https://www.youtube.com/watch?v=Geja6NCjgWY

There seem to be two versions of this ad online. One has Smash Mouth’s All Star as its backing track, while the other one (which is presented above) is on YouTube and is backed by generic southern rock-esque music — presumably to avoid getting a copyright “strike”.

As with Pepperoni Hug Spot, the visuals in the beer ad are located deep inside the uncanny valley:

This is how best friends drink a beer.
Tap to view the weirdness at full size.
Clearly the AI has never shotgunned a beer before.
Tap to view the weirdness at full size.
To an AI, beer and fire are pretty much the same thing.
Tap to view the weirdness at full size.
“I’m drinking from a bottle! No — a can! Wait — a bottle can!”
Tap to view the weirdness at full size.
Multiple fingers and a cap/hair blend that doesn’t exist outside of “JoJo’s Bizarre Adventure.”
Tap to view the weirdness at full size.
“Don’t stop…be-lieeeeving…”
Tap to view the weirdness at full size.
“I need more lighter fluid on the grill.”
Tap to view the weirdness at full size.
“NOW it’s a party!”
Tap to view the weirdness at full size.
“Is anyone gonna help clean up?”
Tap to view the weirdness at full size.