The audio-only stream failed after about half a million listeners tuned in, which might sound impressive if you’re technically ignorant or spinning the story. One DeSantis spokesperson vaingloriously claimed to NBC News that “Governor DeSantis broke the internet — that should tell you everything you need to know about the strength of his candidacy….!”
Yesterday on the OpenAI blog, CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever posted an article titled Governance of superintelligence, with the subtitle “Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”
Although it’s a good idea to think about this sort of thing, there’s also the possibility that all this fuss over superintelligence may be for nothing. In his talk, Superintelligence: The Idea That Eats Smart People, which he gave at Web Camp Zagreb in 2016, developer Maciej Ceglowski, whom I personally know from another life back in the 2000s, lists some arguments against the idea of an evil superintelligence that is a threat to humanity:
The Argument From Wooly Definitions: “With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.”
The Argument From Stephen Hawking’s Cat: “Stephen Hawking is one of the most brilliant people alive [He was alive at the time Maciej wrote this], but say he wants to get his cat into the cat carrier. How’s he going to do it? He can model the cat’s behavior in his mind and figure out ways to persuade it. He knows a lot about feline behavior. But ultimately, if the cat doesn’t want to get in the carrier, there’s nothing Hawking can do about it despite his overpowering advantage in intelligence.”
The Argument From Einstein’s Cat: “There’s a stronger version of this argument, using Einstein’s cat. Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn’t want to go, you know what would happen to Einstein.”
The Argument From Emus: “In the 1930’s, Australians decided to massacre their native emu population to help struggling farmers. They deployed motorized units of Australian army troops in what we would now call technicals—fast-moving pickup trucks with machine guns mounted on the back. The emus responded by adopting basic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and demoralizing the enemy. And they won the Emu War, from which Australia has never recovered.”
The Argument From Slavic Pessimism: “We can’t build anything right. We can’t even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?”
The Argument From Complex Motivations: “Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent. There’s a wonderful moment in Rick and Morty where Rick builds a butter-fetching robot, and the first thing his creation does is look at him and ask ‘what is my purpose?’. When Rick explains that it’s meant to pass butter, the robot stares at its hands in existential despair.”
The Argument From Actual AI: “When we look at where AI is actually succeeding, it’s not in complex, recursively self-improving algorithms. It’s the result of pouring absolutely massive amounts of data into relatively simple neural networks. The breakthroughs being made in practical AI research hinge on the availability of these data collections, rather than radical advances in algorithms.”
The Argument From Maciej’s Roommate: “My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips. The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation.”
There are also his “outside perspective” arguments, which look at what it means to believe in the threat of an AI superintelligence. They include becoming an AI weenie like the dorks pictured below:
The dork on the left is none other than Marc Andreessen, browser pioneer, who’s more of a south-pointing compass these days, and an even bigger AI weenie, if tweets like this are any indication:
But more importantly, the belief in a future superintelligence feels like a religion for people who think they’re too smart to fall for religion.
As Maciej puts it:
[The Singularity is] a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith. The AI has all the attributes of God: it’s omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.
Like in any religion, there’s even a feeling of urgency. You have to act now! The fate of the world is in the balance!
And of course, they need money!
Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.
Or, as this tweet summarizes it:
In case you need context:
Roko’s Basilisk is a thought experiment posted on the “rational discourse” site LessWrong (which should be your first warning) about a potential superintelligent, super-capable AI in the future. This AI would supposedly have the incentive to create a virtual reality simulation to torture anyone who knew of its potential existence but didn’t tirelessly and wholeheartedly work towards making that AI a reality.
It gets its name from Roko, the LessWrong member who came up with this harebrained idea, and “basilisk,” a mythical creature that can kill with a single look.
Pascal’s Wager is philosopher Blaise Pascal’s idea that you should live virtuously and act as if there is a God. If God exists, you win a prize of infinite value: you go to Heaven forever and avoid eternal damnation in Hell. If God doesn’t exist, you lose a finite amount: some pleasures and luxuries during your limited lifespan.
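The logic of the wager is easy to sketch as a toy expected-value calculation. The probability and the finite cost below are invented purely for illustration; the point is that any nonzero chance of an infinite payoff swamps every finite term — which is the same move Roko’s Basilisk makes, with simulated torture standing in for damnation:

```python
import math

# Toy payoff table for Pascal's Wager (illustrative numbers only).
p_god = 1e-6          # assume some tiny but nonzero probability God exists
finite_cost = 100.0   # pleasures and luxuries forgone by living virtuously

# Believe: infinite reward if God exists, finite loss otherwise.
ev_believe = p_god * math.inf - (1 - p_god) * finite_cost

# Don't believe: infinite loss if God exists, nothing lost otherwise.
ev_disbelieve = p_god * (-math.inf) + (1 - p_god) * 0.0

# ev_believe works out to +infinity and ev_disbelieve to -infinity,
# so the wager says to believe -- no matter how small p_god is.
```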
This chart’s been making the rounds on LinkedIn recently — it’s the Benner Cycle, an observation and prediction made by farmer Samuel Benner, who noticed that markets follow a regular cycle of hard times, good times, and panics.
“Their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”
Geoffrey Hinton, CNN, May 2, 2023.
My only reply to Hinton’s remark is this:
For more about Timnit Gebru and her concerns about AI — especially a lack of focus on ethics related to it — check out this podcast episode of Adam Conover’s Factually, featuring Gebru and computational linguistics professor Emily Bender:
Spread this idea and share this infographic! Tap to view at full size.
Almost exactly three years ago, about a month into the pandemic, Startup Digest Tampa Bay published an article in which I suggested that the 2020 pandemic might be hiding some world-changing innovations that we didn’t notice because of everything else going on, just as the 2008 downturn did.
My article, titled Reasons for startups to be optimistic, was based on journalist Thomas Friedman’s theory: that 2007 was “one of the single greatest technological inflection points since Gutenberg…and we all completely missed it.” It’s an idea that he put forth in What the hell happened in 2007?, the second chapter of his 2016 book, Thank You for Being Late.
In case you’re wondering what the hell happened around 2007:
The short answer is “in the tech world, a lot.”
The medium-sized answer is this list: Airbnb, Android, the App Store, Bitcoin, Chrome, data bandwidth dropped in cost and gained in speed, Dell’s return, DNA sequencing got much cheaper, energy tech got cheaper, GitHub, Hadoop, Intel introduced non-silicon materials into its chips, the internet crossed a billion users, the iPhone, the Kindle, Macs switched to Intel chips, Netflix, networking switches jumped in speed and capacity, Python 3, Shopify, Spotify, Twitter, VMware, Watson, the Wii, and YouTube.
It’s hard to spot a “golden age” when you’re living in it, and it may have been even more difficult to do so around 2007 and 2008 because of the distraction of the 2008 financial crisis.
In 2020 — 13 years after 2007 — we had the lockdowns and a general feeling of anxiety and isolation. I was about a week into unemployment when Murewa Olubela and Alex Abell approached me with an opportunity to write an article for Startup Digest Tampa Bay.
When ChatGPT was released in late November 2022, I showed it to friends and family, telling them that its underlying “engine” had been around for a couple of years. The GPT-3 model was released in 2020, but it went unnoticed by the world at large until OpenAI gave it a nice, user-friendly web interface.
That’s what got me thinking about my thesis that 2020 might be the start of a new era of initially unnoticed innovation. I started counting backwards: 2007 is 13 years before 2020. What’s 13 years before 2007?

1994. That’s the year Netscape Navigator was released and the web started going mainstream. What’s 13 years before 1994?

1981. That’s the year the IBM PC came out. While other desktop computers were already on the market — the Apple ][, the Commodore PET, the TRS-80 — this was the machine that put desktop computers in more offices and homes than any other. What’s 13 years before 1981?
Creative Commons image by Jeremy Keith. Tap to see the source.
1968. You don’t have any of the aforementioned innovations without the Mother of All Demos: Douglas Engelbart’s demonstration of what you could do with computers, if they got powerful enough. He demonstrated the GUI, mouse, chording keyboard, word processing, hypertext, collaborative document editing, and revision control — and he did it Zoom-style, using a remote video setup!
With all that in mind, I created the infographic at the top of this article, showing the big leaps that have happened every 13 years since 1968.
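The cadence behind that infographic is easy to check: each milestone year is just the previous one plus 13. The labels below are my shorthand for the revolutions named in this article:

```python
# Milestone years from the article, each 13 years after the last.
milestones = {
    1968: "The Mother of All Demos",
    1981: "The IBM PC and the desktop revolution",
    1994: "Netscape Navigator and the internet revolution",
    2007: "The iPhone and the smartphone revolution",
    2020: "GPT-3 and the AI revolution",
}

years = sorted(milestones)
gaps = [later - earlier for earlier, later in zip(years, years[1:])]
assert gaps == [13, 13, 13, 13]

# If the cycle holds, simple arithmetic puts the next inflection point at 2033.
next_milestone = years[-1] + 13
```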
If you’re feeling bad about having missed the opportunities of the desktop revolution, the internet revolution, or the smartphone revolution, consider this: It’s 1968, 1981, 1994, and 2007 all over again. We’re at the start of the AI revolution right now. What are you going to do?
Worth watching
The Mother of All Demos (1968): What Douglas Engelbart demonstrates is everyday stuff now, but back when computers were rare and filled whole rooms, this was science fiction stuff:
The iPhone Stevenote (2007): Steve Jobs didn’t just introduce a category-defining device, he also gave a master class in presentations:
What the hell happened in 2007? (2017): Thomas Friedman puts a chapter from his book into lecture form and explains why 2007 may have been the single greatest tech inflection point:
Here’s the money quote from his lecture:
I think what happened in 2007 was an explosion of energy — a release of energy — into the hands of men, women, and machines the likes of which we have never seen, and it changed four kinds of power overnight.
It changed the power of one: what one person can do as a maker or breaker is no longer a difference of degree; it’s a difference of kind. We have a president in America who can sit in his pajamas in the White House and tweet to a billion people around the world without an editor, a libel lawyer, or a filter. But here’s what’s really scary: the head of ISIS can do the same from Raqqa province in Syria. The power of one has really changed.
The power of machines has changed. Machines are acquiring all five senses. We’ve never lived in a world where machines have all five senses. We crossed that line in February 2011, on, of all places, a game show in America. The show was called Jeopardy, and there were three contestants. Two were the all-time Jeopardy champions, and the third contestant simply went by his last name: Mr. Watson. Mr. Watson, of course, was an IBM computer. Mr. Watson passed on the first question, but he buzzed in before the two humans on the second question. The question was “It’s worn on the foot of a horse and used by a dealer in a casino.” And in under 2.5 seconds, Mr. Watson answered in perfect Jeopardy style, “What is a shoe?” And for the first time, a cognitive computer figured out a pun faster than a human. And the world kind of hasn’t been the same since.
It’s changed the power of many. Because we’ve got these amplified powers now, we as a collective are now the biggest forcing function on and in nature — which is why the new geological era is being named for us: the Anthropocene.
And lastly, it changed the power of flows. Ideas now flow, circulate, and change at a pace we’ve never seen before. Six years ago, Barack Obama said marriage is between a man and a woman. Today he says, blessedly so in my view, marriage is between any two people who love each other. And he followed Ireland in that position! Ideas now flow and change and circulate at a speed never seen before.
Well, my view is that these four changes in power: they’re not changing your world; they’re reshaping your world, the world you’re going to go into. And they’re reshaping these five realms: politics, geopolitics, the workplace, ethics, and community.
Worth attending
Yup, I’m tooting my own horn here, but that’s one of the reasons why Global Nerdy exists! I’m the new organizer of Tampa Bay Artificial Intelligence Meetup, and it’s restarting with a number of hands-on workshops.