How did the CrowdStrike bug of July 19, 2024 take down 8.5 million Windows systems and cause the largest IT outage in history? I’ll explain in this video, where you’ll also learn about operating systems, the kernel, device drivers, and more!
Some techies hold the attitude that “what I do is important, and what you do isn’t,” and the more socially savvy ones don’t say the quiet part out loud.
But Mira Murati, OpenAI’s CTO, did just that onstage at her alma mater, Dartmouth College, where she said this about AI displacing jobs in creative lines of work:
Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place.
The laws of time, effort, and experience make it very clear: I’m in the middle of making my worst videos right now, and you’ll want to subscribe to see how bad they are!
…and the second is a blast from the past — a promotional video featuring images of a lot of top-tier developers, followed by an image that’s supposed to represent you, the everyday developer…and guess whose image they used:
There’ll be a mix of short- and long-form videos, where I’ll cover software development topics and technology news in interesting, unusual, and amusing ways.
I’m spending the month of June working on the first set of videos, which I’ll release as quickly as I can, so you know they’ll be bad. And if you’re thinking “But HOW bad?”, there’s only one way to find out: visit the channel and subscribe!
If you’ve tried to go beyond APIs like the ones OpenAI offers and learn how they work “under the hood” by trying to build your own neural network, you might find yourself hitting a wall when the material opens with equations like this:
How can you learn how neural networks — or more accurately, artificial neural networks — do what they do without a degree in math, computer science, or engineering?
There are a couple of ways:
Follow this blog. Over the next few months, I’ll cover this topic, complete with getting you up to speed on the required math. Of course, if you’re feeling impatient…
Read Tariq Rashid’s book, Make Your Own Neural Network. Written for people who aren’t math, computer science, or engineering experts, it first shows you the principles behind neural networks and then leaps from the theoretical to the practical by taking those principles and turning them into working Python code.
Along the way, both I (in this blog) and Tariq (in his book) will trick you into learning a little science, a little math, and a little Python programming. In the end, you’ll understand the diagram above!
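To give you a taste of where this is headed, here’s a minimal sketch of the kind of Python you’ll eventually write: a single artificial neuron that takes some inputs, weights them, and squashes the result through a sigmoid activation function. The specific numbers are made up for illustration; in a real network, training would find the weights.

```python
import math

def sigmoid(x):
    # Squash any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias, then the activation function
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Hypothetical inputs and weights, just to show the mechanics
output = neuron([0.5, 0.9], [0.8, -0.2], bias=0.1)
print(round(output, 3))  # → 0.579
```

A whole neural network is, at its heart, layers of these neurons feeding their outputs into one another, which is why getting comfortable with this one function takes you a surprisingly long way.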
One more thing: if you prefer your learning via video…
My poster from May, titled Every 13 years, an innovation changes computing forever, theorizes that roughly every thirteen years, a new technology appears and changes the way we use computers in unexpectedly large ways.
The first entry in my list was an exception because it didn’t feature just one technology, but a number of them. It was “The Mother of All Demos,” a demonstration of technologies that are part of our everyday life now, but must have seemed like pure science fiction at the time, December 9, 1968 — 55 years ago today.
If your curiosity about artificial intelligence goes beyond bookmarking those incessant “10 ChatGPT prompts you need to know” posts that are all over LinkedIn, you should set aside some time to read Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid and watch his new interview.
Gödel, Escher, Bach
I might never have read it, if not for Dr. David Alex Lamb’s software engineering course at Queen’s University, whose curriculum included reading a book from a predetermined list and writing a report on it. I’ll admit that I first rolled my eyes at having to write a book report, but then noticed that one of the books had both “Escher” and “Bach” in the title. I had no idea who “Gödel” was, but I figured they were in good company, so I signed up to write the report on the book I would later come to know as “GEB.”
I’ll write more about why I think the book is important later. In the meantime, you should just know that it:
Helped me get a better understanding of a lot of underlying principles of mathematics and its not-too-distant relative, computer science, especially the concepts of loops and recursion
Advanced my thinking about how art, science, math, and music are intertwined, and inspired one of my favorite sayings: “Music is math you can feel”
Gave me my favorite explanations of regular expressions and the halting problem
Taught me that even the deepest, densest subject matter can be explained with whimsy
Provided me with my first serious introduction to ideas in cognitive science and artificial intelligence
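The loops-and-recursion theme that GEB plays with can be shown in a few lines of code. Here’s the classic factorial function in Python — my own generic illustration, not an example from the book — where the function is defined in terms of itself, echoing the “strange loops” Hofstadter keeps returning to:

```python
def factorial(n):
    # Base case: stops the recursion. Without it, the function
    # would call itself forever.
    if n <= 1:
        return 1
    # Recursive case: the function is defined in terms of itself
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```

What makes recursion feel almost paradoxical — a definition that refers to itself yet still terminates — is exactly the kind of self-reference GEB builds its bigger arguments on.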
Yes, this is one of those books that many people buy, read a chapter or two of, and then put on their bookshelf, never to touch again. Do not make that mistake. This book will reward your patience and perseverance by either exposing you to some great ideas or validating concepts you may have already internalized.
At the very least, if you want to understand “classical” AI — that is, AI based on symbol manipulation instead of the connectionist, “algebra, calculus, and stats in a trench coat” model of modern AI — you should read Gödel, Escher, Bach.
A new Hofstadter interview!
Posted a mere three days ago at the time of writing, the video above is a conversation between Douglas Hofstadter and Amy Jo Kim. It’s worth watching, not only for Hofstadter’s stories about how GEB came to be, but also for his take on current-era large language models and other generative AI as well as the fact that he’s being interviewed by game designer Amy Jo Kim. Among other things, Kim was a systems designer on the team that made the game Rock Band and worked on the in-game social systems for The Sims.
On the “pro” side — that is, the people arguing that AI research and development IS an existential threat:
Yoshua Bengio: Professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms. Specializes in neural networks and deep learning. He won the Turing Award with Yann LeCun and Geoffrey Hinton for their work on machine learning.
And on the “con” side — the people who are arguing that AI research and development IS NOT an existential threat:
Melanie Mitchell: Professor at the Santa Fe Institute, who’s worked in the areas of analogical reasoning, complex systems, genetic algorithms and cellular automata. She’s the author of the book AI: A Guide for Thinking Humans, published in 2019.
Yann LeCun: Meta’s chief AI scientist and professor at New York University, best known for his work on computer vision, optical character recognition, and convolutional neural networks. He won the Turing Award with Yoshua Bengio and Geoffrey Hinton for their work on machine learning.
The organizers asked the audience to vote for a side at the start and conclusion of the debate, and while a clear majority were on the “pro” side (that is, they believed AI poses an existential threat), the “con” side won by shifting 4% of the vote to their position by the end:
It’s hard to tell whether the Munk Debates really want you to pay to watch the video, as they have it locked down on this page and freely available on this one, so I’m linking to this YouTube posting for as long as it remains online. Enjoy!