My poster from May, titled Every 13 years, an innovation changes computing forever, theorizes that roughly every thirteen years, a new technology appears and changes the way we use computers in unexpectedly large ways.
The first entry in my list was an exception because it didn’t feature just one technology, but a number of them. It was “The Mother of All Demos,” a demonstration of technologies that are part of our everyday life now, but must have seemed like pure science fiction at the time, December 9, 1968 — 55 years ago today.
If your curiosity about artificial intelligence goes beyond bookmarking those incessant “10 ChatGPT prompts you need to know” posts that are all over LinkedIn, you should set aside some time to read Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid and watch his new interview.
Gödel, Escher, Bach
I might never have read it, if not for Dr. David Alex Lamb’s software engineering course at Queen’s University, whose curriculum included reading a book from a predetermined list and writing a report on it. I’ll admit that I first rolled my eyes at having to write a book report, but then noticed that one of the books had both “Escher” and “Bach” in the title. I had no idea who “Gödel” was, but I figured they were in good company, so I signed up to write the report on the book I would later come to know as “GEB.”
I’ll write more about why I think the book is important later. In the meantime, you should just know that it:
Helped me get a better understanding of a lot of underlying principles of mathematics and its not-too-distant relative, computer science, especially the concepts of loops and recursion
Advanced my thinking about how art, science, math, and music are intertwined, and inspired one of my favorite sayings: “Music is math you can feel”
Gave me my favorite explanations of regular expressions and the halting problem
Taught me that even the deepest, densest subject matter can be explained with whimsy
Provided me with my first serious introduction to ideas in cognitive science and artificial intelligence
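On that recursion point: one of GEB’s own examples is Hofstadter’s Q-sequence from Chapter V, a function defined in terms of its own earlier values. Here’s a minimal sketch in Python (the memoization decorator is my addition, not something from the book):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def q(n):
    """Hofstadter's Q-sequence from GEB: Q(1) = Q(2) = 1,
    and Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)) for n > 2."""
    if n <= 2:
        return 1
    return q(n - q(n - 1)) + q(n - q(n - 2))

print([q(n) for n in range(1, 11)])  # → [1, 1, 2, 3, 3, 4, 5, 5, 6, 6]
```

Note how the function calls itself not just on smaller arguments, but on arguments computed from its own previous results — recursion feeding on recursion, which is very much the book’s flavor.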
Yes, this is one of those books that many people buy, read a chapter or two of, and then put on their bookshelf, never to touch again. Do not make that mistake. This book will reward your patience and perseverance by either exposing you to some great ideas or validating some concepts that you may have already internalized.
At the very least, if you want to understand “classical” AI — that is, AI based on symbol manipulation rather than the connectionist, “algebra, calculus, and stats in a trench coat” model of modern AI — you should read Gödel, Escher, Bach.
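If you want a concrete taste of what “symbol manipulation” means, GEB opens with the MU puzzle: a formal system whose only objects are strings of M, I, and U, rewritten by four blind rules. Here’s a minimal Python sketch (the function name and structure are mine):

```python
def successors(s):
    """All strings reachable from s in one step of GEB's MIU system."""
    out = set()
    if s.endswith("I"):              # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):            # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):      # Rule 3: replace any III with U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):      # Rule 4: drop any UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

print(sorted(successors("MI")))     # → ['MII', 'MIU']
print(sorted(successors("MIIII")))  # includes 'MIU' and 'MUI' via Rule 3
```

The puzzle — can you turn “MI” into “MU”? — is the book’s first lesson in the difference between working inside a formal system and reasoning about it from outside.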
A new Hofstadter interview!
Posted a mere three days ago at the time of writing, the video above is a conversation between Douglas Hofstadter and Amy Jo Kim. It’s worth watching, not only for Hofstadter’s stories about how GEB came to be, but also for his take on current-era large language models and other generative AI as well as the fact that he’s being interviewed by game designer Amy Jo Kim. Among other things, Kim was a systems designer on the team that made the game Rock Band and worked on the in-game social systems for The Sims.
On the “pro” side — that is, the people arguing that AI research and development IS an existential threat:
Yoshua Bengio: Professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms. Specializes in neural networks and deep learning. He won the Turing Award with Yann LeCun and Geoffrey Hinton for their work on machine learning.
And on the “con” side — the people who are arguing that AI research and development IS NOT an existential threat:
Melanie Mitchell: Professor at the Santa Fe Institute, who’s worked in the areas of analogical reasoning, complex systems, genetic algorithms and cellular automata. She’s the author of the book AI: A Guide for Thinking Humans, published in 2019.
Yann LeCun: Meta’s chief AI scientist and professor at New York University, best known for his work on computer vision, optical character recognition, and convolutional neural networks. He won the Turing Award with Yoshua Bengio and Geoffrey Hinton for their work on machine learning.
They asked the audience to vote for a side at the start and conclusion of the debate, and while a clear majority were on the “pro” side (that is, they believed AI poses an existential threat), the “con” side won by shifting 4% of the vote their way by the end:
It’s hard to tell whether the Munk Debates really want you to pay to watch the video, as they have it locked down on this page and freely available on this one, so I’m linking to this YouTube posting for as long as it remains online. Enjoy!
I’ve made three appearances on Fox 13 News Tampa this year so far. If they call on me to answer more questions or explain some aspect of artificial intelligence, I’ll gladly do so!
My most recent appearance was on June 14, whose topic was all the noise about AI possibly being an existential threat to humanity. This is the one where I reminded the audience that The Terminator was NOT a documentary:
What might the next decade of software development look like? Richard Campbell has some ideas and shares them in this talk from the 2023 edition of the NDC London conference.
Here’s the video:
I know Richard from my former life at Microsoft. He’s the host of the .NET Rocks and RunAs Radio podcasts, a long-time developer, consultant, and tech company founder, and a damn good storyteller.
The first story he tells is about “The Animal Highway,” the space between his house and his neighbors’, which is frequented by bears. This actually made me laugh out loud, since when I last saw Richard at a backyard barbecue at his house, we had to scare away a bear cub by being noisy. He picked up a pot and barbecue tongs, I picked up my accordion, and with whoops, hollers, and random squeezebox chords, we chased it away into the woods.
One of the themes that runs through his talk is that technology has grown in leaps and bounds. Near the start of the talk, he uses the example of the Cray X-MP. In 1985, it was the world’s most powerful computer. It sold for millions of dollars, required 200 kW of power, and could perform at 1.9 gigaflops (billions of floating-point operations per second). It was used to model nuclear explosions and compute spaceflight trajectories.
The iPad 2 from 2011 also performs at 1.9 gigaflops, but it sold for hundreds of dollars instead of millions, and ran on battery power instead of requiring its own power plant. As Richard summed it up: “26 years later, the most powerful computer in the world is now a device we give to children. And they play Candy Crush on it.”
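Richard’s comparison works out to staggering ratios. A quick back-of-the-envelope check — the power figures come from the talk, but the dollar amounts here are rough illustrative numbers of my own, not exact quotes:

```python
# Cray X-MP (1985) vs. iPad 2 (2011), both ~1.9 gigaflops.
# Prices and the tablet's wattage are rough illustrative figures.
cray_price_usd, cray_power_w = 15_000_000, 200_000
ipad_price_usd, ipad_power_w = 500, 10

print(f"Price ratio: {cray_price_usd / ipad_price_usd:,.0f}x")  # 30,000x
print(f"Power ratio: {cray_power_w / ipad_power_w:,.0f}x")      # 20,000x
```

Same computational throughput, four-plus orders of magnitude cheaper and more power-efficient — that’s the leap Richard is describing.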
The first transistor ever made, built by John Bardeen, William Shockley, and Walter H. Brattain of Bell Labs in 1947. Original exhibited at Bell Laboratories. Creative Commons photo by Unitronic. Tap to see the source.
Near the end of the talk, Richard uses another example of the technological changes that have happened in a lifetime. The picture above shows the first transistor ever, which was made in Bell Labs in 1947.
“It’s pretty hard to look at that,” he said, pointing to the photo of that transistor, “and think ‘M1 chip’.”
M1 chip diagram.
In case you were wondering, here’s how many transistors the different variations of the M1 chip have:
Chip version             Number of transistors
M1 (original version)    16 billion
M1 Pro                   33.7 billion
M1 Max                   57 billion
M1 Ultra                 114 billion
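To put those numbers in Moore’s-law terms: getting from that single 1947 transistor to the original M1’s 16 billion takes a bit under 34 doublings, spread across 73 years. A quick sanity check:

```python
import math

# Transistor counts for the M1 family, per the table above.
transistors = {"M1": 16e9, "M1 Pro": 33.7e9, "M1 Max": 57e9, "M1 Ultra": 114e9}
years = 2020 - 1947  # first transistor (1947) to first M1 (2020)

for name, count in transistors.items():
    doublings = math.log2(count)  # doublings starting from one transistor
    print(f"{name}: {doublings:.1f} doublings, one every {years / doublings:.2f} years")
```

That works out to a doubling roughly every two years — which is Moore’s law, playing out over a single human lifetime.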
If you want an understanding of how we got to the current state of computing and some good ideas of where it might go, Richard’s talk is not only enlightening, but also entertaining. I listened to it on this morning’s bike ride, and you might find it good listening during your workout, chores, commute or downtime.
Here it is — the recording of my interview on the 4:00 p.m. news on FOX 13 Tampa with anchor Chris Cato, where I answered more questions about artificial intelligence:
In this quick interview, we discussed:
The “existential threat to humanity” that AI potentially poses: My take is that a lot of big-name AI people who fear that sort of thing are eccentrics who hold what AI ethicist Timnit Gebru calls the TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism) mindset. They’re ignoring a lot of closer-to-home, closer-to-now issues raised by AI because they’re too busy investing in having their heads frozen for future revival and other weird ideas of the sort that people with too much money and living in their own bubble tend to have.
My favorite sound bite: “The Terminator is not a documentary.”
A.I. regulation: “Any new technology that has great power for good and bad should actually be regulated, just as we do with nuclear power, pharma, cars and airplanes, and just about anything like that. A.I. is the next really big thing to change our lives — yes, it should be regulated.” There’s more to my take, but there’s only so much you can squeeze into a two-and-a-half minute segment.
Cool things AI is doing right now: I named these…
Shel Israel (who now lives in Tampa Bay) is using AI to help him with his writing as he works on his new book,
I’m using it with my writing for both humans (articles for Global Nerdy as well as the blog that pays the bills, the Auth0 Developer Blog) as well as for machines (writing code with the assistance of Studio Bot for Android Studio and Github Copilot for iOS and Python development)
Preventing unauthorized access to systems with machine learning-powered adaptive MFA, which is a feature offered by Okta, where I work.
My “every 13 years” thesis: We did a quick run-through of something I wrote about a month ago — that since “The Mother of All Demos” in 1968, there’s been a paradigm-changing tech leap every 13 years, and the generative AI boom is the latest one:
Tap to view at full size.
And finally, a plug for Global Nerdy! This blog got a few television mentions back in my former life in Canada, but this is the first time it’s been mentioned on American television.
I’ll close with a couple of photos that I took while there:
In the green room, waiting to go on. Tap to view at full size.
The view from the interview table, looking toward the anchor desk. Tap to view at full size.
The cameras, teleprompters, and monitors. Tap to view at full size.
Once again, I’d like to thank producer Melissa Behling, anchor Chris Cato, and the entire Fox 13 Tampa Bay studio team! It’s always a pleasure to work with them and be on their show.
Yesterday on the OpenAI blog, founder Sam Altman, President Greg Brockman and Chief Scientist Ilya Sutskever posted an article titled Governance of superintelligence with the subtitle “Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”
Although it’s a good idea to think about this sort of thing, there’s also the possibility that all this fuss over superintelligence may be for nothing. In his talk, Superintelligence: The Idea That Eats Smart People, which he gave at Web Camp Zagreb in 2016, developer Maciej Ceglowski, whom I personally know from another life back in the 2000s, lists some arguments against the idea of an evil superintelligence that is a threat to humanity:
The Argument From Wooly Definitions: “With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.”
The Argument From Stephen Hawking’s Cat: “Stephen Hawking is one of the most brilliant people alive [He was alive at the time Maciej wrote this], but say he wants to get his cat into the cat carrier. How’s he going to do it? He can model the cat’s behavior in his mind and figure out ways to persuade it. He knows a lot about feline behavior. But ultimately, if the cat doesn’t want to get in the carrier, there’s nothing Hawking can do about it despite his overpowering advantage in intelligence.”
The Argument From Einstein’s Cat: “There’s a stronger version of this argument, using Einstein’s cat. Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn’t want to go, you know what would happen to Einstein.”
The Argument From Emus: “In the 1930’s, Australians decided to massacre their native emu population to help struggling farmers. They deployed motorized units of Australian army troops in what we would now call technicals—fast-moving pickup trucks with machine guns mounted on the back. The emus responded by adopting basic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and demoralizing the enemy. And they won the Emu War, from which Australia has never recovered.”
The Argument From Slavic Pessimism: “We can’t build anything right. We can’t even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?”
The Argument From Complex Motivations: “Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent. There’s a wonderful moment in Rick and Morty where Rick builds a butter-fetching robot, and the first thing his creation does is look at him and ask ‘what is my purpose?’. When Rick explains that it’s meant to pass butter, the robot stares at its hands in existential despair.”
The Argument From Actual AI: “When we look at where AI is actually succeeding, it’s not in complex, recursively self-improving algorithms. It’s the result of pouring absolutely massive amounts of data into relatively simple neural networks. The breakthroughs being made in practical AI research hinge on the availability of these data collections, rather than radical advances in algorithms.”
The Argument From Maciej’s Roommate: “My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips. The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation.”
There are also his “outside perspective” arguments, which look at what it means to believe in the threat of an AI superintelligence. That includes becoming an AI weenie like the dorks pictured below:
The dork on the left is none other than Marc Andreessen, browser pioneer, who’s more of a south-pointing compass these days, and an even bigger AI weenie, if tweets like this are any indication:
But more importantly, the belief in a future superintelligence feels like a religion for people who think they’re too smart to fall for religion.
As Maciej puts it:
[The Singularity is] a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith. The AI has all the attributes of God: it’s omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.
Like in any religion, there’s even a feeling of urgency. You have to act now! The fate of the world is in the balance!
And of course, they need money!
Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.
Or, as this tweet summarizes it:
In case you need context:
Roko’s Basilisk is a thought experiment posted on the “rational discourse” site LessWrong (which should be your first warning) about a potential superintelligent, super-capable AI in the future. This AI would supposedly have the incentive to create a virtual reality simulation to torture anyone who knew of its potential existence but didn’t tirelessly and wholeheartedly work towards making that AI a reality.
It gets its name from Roko, the LessWrong member who came up with this harebrained idea, and “basilisk,” a mythical creature that can kill with a single look.
Pascal’s Wager is philosopher Blaise Pascal’s idea that you should live virtuously and act as if there is a God. If God exists, you win a prize of infinite value: you go to Heaven forever and avoid eternal damnation in Hell. If God doesn’t exist, you lose a finite amount: some pleasures and luxuries during your limited lifespan.
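Pascal’s Wager is really a decision-theory argument: any nonzero probability of an infinite payoff swamps every finite cost. The arithmetic, sketched in Python (the probability and cost values are arbitrary placeholders — the point is that they don’t matter):

```python
P_GOD = 0.001          # any nonzero probability will do
HEAVEN = float("inf")  # infinite payoff for believing, if God exists
HELL = float("-inf")   # infinite penalty for not believing, if God exists
COST = 100             # finite cost of living virtuously (arbitrary units)

# Expected value of each choice: infinity dominates any finite term.
ev_believe = P_GOD * HEAVEN + (1 - P_GOD) * (-COST)
ev_disbelieve = P_GOD * HELL + (1 - P_GOD) * 0

print(ev_believe)     # inf
print(ev_disbelieve)  # -inf
```

And that’s exactly the move the Roko’s Basilisk crowd makes: let a tiny probability of an infinite payoff (or punishment) dominate the entire decision, and you can justify almost anything.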