Categories
Business Editorial Humor

The strategy behind Twitter’s rebranding, explained

This is the latest from Pizza Cake Comics, created by Ellen Woodbury. Click the comic or this link to see it on its originating page.

Categories
Business Current Events Editorial

There is no plan at Twitter/X. There are only pants and paved cowpaths.

A sloppy rebranding

As I write this — 11:10 p.m. on Tuesday, July 25, 2023 — it’s been more than a day since Twitter ditched the bird icon and name for X. The problem is that the rebranding wasn’t terribly thorough.

Consider this screenshot from my Twitter/X home page:

Also, if you click on the links at the bottom right of the home page, the pages they lead to still bear the Twitter bird and name:

The newly-renamed corporation sent a crew to remove the old Twitter sign from its San Francisco headquarters, but the police were called in and stopped the work, since the company had never contacted the building’s owners about it, nor had it obtained a permit to set up the sign-removal equipment on the street:

At last report, the sign on Twitter HQ looks like this:

These are the kinds of mistakes that a marketing or brand manager would never make, because they know that rebranding is something that requires a plan.

But there is no plan. There’s a goal — ditching the Twitter name and replacing it with Musk’s beloved brand, X — and there’s PANTS.

Pantsing and paving the cowpath

“Planner” and “pantser” are terms that many novelists use to describe two very different writing styles:

  • Planners have their novel outlined and planned out before they start writing it. They’ve got clear ideas of the story they’re trying to tell, and their characters and settings are fleshed out.
  • Pantsers — the term comes from the expression “by the seat of one’s pants,” which means by instinct and without much planning — might have only a vague idea of what they want to write about and simply make it up as they write.

Both are legitimate ways of creating things, although a planner will tell you that planning is better, and a pantser will say the same for pantsing.

As an organization, Twitter has been a pantser from its inception. Most of the features that we consider to be part of the platform didn’t originate with them; they were things that the users did that Twitter “featurized.”

Consider the hashtag — that’s not a Twitter creation, but the invention of Chris Messina, whom I happen to know from my days as a techie in Toronto and the early days of BarCamp:

Retweets? The term and the concept were invented by users. We used to put “RT” at the start of any tweet we were re-posting to indicate that we were quoting another user. Twitter saw this behavior and turned it into a feature.

The same goes for threads (not the app, but conversational threads). To get around the original 140-character limit, users would make posts that spanned many tweets, using conventions like “/n”, where n was a “page number.” Twitter productized this.
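As an illustration only (this is my own sketch, not Twitter’s code), the user-invented threading convention amounts to a simple text-chunking algorithm: split a long post into chunks that fit the character limit, reserving room for a page-number marker. Here’s a minimal Python version using the “n/total” variant of the convention:

```python
def thread(text: str, limit: int = 140) -> list[str]:
    """Split a long post into numbered tweets, each within the limit."""
    suffix_len = len(" 99/99")  # reserve room for the "n/total" marker
    chunk_size = limit - suffix_len
    chunks: list[str] = []
    current = ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > chunk_size and current:
            # current chunk is full; start a new one
            chunks.append(current)
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    total = len(chunks)
    # append the page-number marker to each chunk
    return [f"{chunk} {i}/{total}" for i, chunk in enumerate(chunks, start=1)]
```

Calling `thread()` on a long string returns a list of tweet-sized strings ending in “1/4”, “2/4”, and so on. (A real implementation would handle words longer than a chunk and totals of 100 or more; this sketch just shows the shape of the convention.)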

All these features were a good application of “pantsing” — being observant of user behavior and improvising around it. This approach is sometimes called “paving the cowpath.”

If you do a web search using the term paving the cowpath ux (where UX means user experience), the results tend to be articles that say it’s a good idea, because you’ll often find that users will find ways around your design if it doesn’t suit their needs, as shown in the photo above.

However, if you do a search using the term paving the cowpath business, the articles take a decidedly negative slant and recommend that you don’t do that. User behavior and business processes are pretty different domains, and business processes do benefit from having a plan. As a business, Twitter had no plan, which is why they’ve always been in financial trouble despite being wildly successful in terms of user base and popularity.

To paraphrase Mark Zuckerberg’s observation about Twitter, it’s a clown car that somehow drove into a gold mine.

Pantsing as a process

Since Elon Musk’s takeover, Twitter has been pantsing at never-before-seen levels, largely based on Musk’s whims. We’ve seen:

And the company has been losing developers, at first through cost-cutting, but soon people were losing their jobs for contradicting the boss. Working for Musk is like working for Marvel Comics supervillain Dr. Doom:

More on Musk

If you’d like to hear more about Twitter and Musk, including three theories on why Musk has descended into madness — I’m particularly intrigued by the ketamine theory (a.k.a. “Special K,” a.k.a. horse tranquilizer) and the simulation theory — check out the latest episode of the Search Engine podcast, hosted by Reply All’s former host PJ Vogt: What’s Going on with Elon Musk?

Categories
Current Events Editorial

The top story on Techmeme right now…

Thanks to Dave Playford for the find!

…is the presidential campaign announcement by Florida’s Karen-In-Chief Ron DeSantis, made with Elon “Space Karen / South Afri-Karen” Musk on an incredibly glitchy Twitter Space last night.

The audio-only stream failed after about half a million listeners tuned in, which might sound impressive if you’re technically ignorant or spinning the story. One DeSantis spokesperson vaingloriously claimed to NBC News that “Governor DeSantis broke the internet — that should tell you everything you need to know about the strength of his candidacy….!”

It seems impressive until you recall that Twitch star “TheGrefg” recently held an audio and video stream (which would use more bandwidth than audio alone) that had 1.7 million viewers — over three times the audience DeSantis’ announcement garnered.

That’s what happens when you choose a technology based on whether its owner is “on side” vs. whether it’s well-run and working well.

Also worth reading…

From The Daily Beast: Ron DeSantis’ 2024 Campaign Launch Fail Could Predict What Happens Next

Categories
Artificial Intelligence Editorial Presentations Video

Maciej Ceglowski’s reassuring arguments for why an AI superintelligence might not be a threat to humanity

Yesterday on the OpenAI blog, founder Sam Altman, president Greg Brockman, and chief scientist Ilya Sutskever posted an article titled Governance of superintelligence, with the subtitle “Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”

Although it’s a good idea to think about this sort of thing, there’s also the possibility that all this fuss over superintelligence may be for nothing. In his 2016 Web Camp Zagreb talk, Superintelligence: The Idea That Eats Smart People, developer Maciej Ceglowski — whom I know personally from another life back in the 2000s — lists some arguments against the idea of an evil superintelligence that threatens humanity:

Here are just a few of Maciej’s “inside perspective” arguments, which you can also find in his companion essay:

  • The Argument From Wooly Definitions: “With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.”
  • The Argument From Stephen Hawking’s Cat: “Stephen Hawking is one of the most brilliant people alive [He was alive at the time Maciej wrote this], but say he wants to get his cat into the cat carrier. How’s he going to do it? He can model the cat’s behavior in his mind and figure out ways to persuade it. He knows a lot about feline behavior. But ultimately, if the cat doesn’t want to get in the carrier, there’s nothing Hawking can do about it despite his overpowering advantage in intelligence.”
  • The Argument From Einstein’s Cat: “There’s a stronger version of this argument, using Einstein’s cat. Not many people know that Einstein was a burly, muscular fellow. But if Einstein tried to get a cat in a carrier, and the cat didn’t want to go, you know what would happen to Einstein.”
  • The Argument From Emus: “In the 1930’s, Australians decided to massacre their native emu population to help struggling farmers. They deployed motorized units of Australian army troops in what we would now call technicals—fast-moving pickup trucks with machine guns mounted on the back. The emus responded by adopting basic guerrilla tactics: they avoided pitched battles, dispersed, and melted into the landscape, humiliating and demoralizing the enemy. And they won the Emu War, from which Australia has never recovered.”
  • The Argument From Slavic Pessimism: “We can’t build anything right. We can’t even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?”
  • The Argument From Complex Motivations: “Complex minds are likely to have complex motivations; that may be part of what it even means to be intelligent. There’s a wonderful moment in Rick and Morty where Rick builds a butter-fetching robot, and the first thing his creation does is look at him and ask ‘what is my purpose?’. When Rick explains that it’s meant to pass butter, the robot stares at its hands in existential despair.”
  • The Argument From Actual AI: “When we look at where AI is actually succeeding, it’s not in complex, recursively self-improving algorithms. It’s the result of pouring absolutely massive amounts of data into relatively simple neural networks. The breakthroughs being made in practical AI research hinge on the availability of these data collections, rather than radical advances in algorithms.”
  • The Argument From Maciej’s Roommate: “My roommate was the smartest person I ever met in my life. He was incredibly brilliant, and all he did was lie around and play World of Warcraft between bong rips. The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation.”

There are also his “outside perspective” arguments, which look at what it means to believe in the threat of an AI superintelligence. That includes becoming an AI weenie like the dorks pictured below:

The dork on the left is none other than Marc Andreessen, browser pioneer, who’s more of a south-pointing compass these days, and an even bigger AI weenie, if tweets like this are any indication:

But more importantly, the belief in a future superintelligence feels like a religion for people who think they’re too smart to fall for religion.

As Maciej puts it:

[The Singularity is] a clever hack, because instead of believing in God at the outset, you imagine yourself building an entity that is functionally identical with God. This way even committed atheists can rationalize their way into the comforts of faith. The AI has all the attributes of God: it’s omnipotent, omniscient, and either benevolent (if you did your array bounds-checking right), or it is the Devil and you are at its mercy.

Like in any religion, there’s even a feeling of urgency. You have to act now! The fate of the world is in the balance!

And of course, they need money!

Because these arguments appeal to religious instincts, once they take hold they are hard to uproot.

Or, as this tweet summarizes it:

In case you need context:

  • Roko’s Basilisk is a thought experiment posted on the “rational discourse” site LessWrong (which should be your first warning) about a potential superintelligent, super-capable AI in the future. This AI would supposedly have the incentive to create a virtual reality simulation to torture anyone who knew of its potential existence but didn’t tirelessly and wholeheartedly work towards making that AI a reality.

    It gets its name from Roko, the LessWrong member who came up with this harebrained idea, and “basilisk,” a mythical creature that can kill with a single look.
  • Pascal’s Wager is philosopher Blaise Pascal’s idea that you should live virtuously and act as if there is a God. If God exists, you win a prize of infinite value: you go to Heaven forever and avoid eternal damnation in Hell. If God doesn’t exist, you lose a finite amount: some pleasures and luxuries during your limited lifespan.
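Pascal’s reasoning is, at bottom, an expected-value calculation. A rough sketch in my own notation, with p > 0 as the probability that God exists and c as the finite cost of living virtuously:

```latex
\begin{aligned}
E[\text{wager}]    &= p \cdot \infty + (1-p)(-c) = \infty \\
E[\text{no wager}] &= p \cdot (-\infty) + (1-p)\,c = -\infty
\end{aligned}
```

However small p is, the infinite payoff swamps the finite cost, so wagering always wins. Roko’s Basilisk runs on exactly the same structure, with a hypothetical future AI in the role of God.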
Categories
Editorial Humor

Everything you need to know about cryptocurrency, in a single Twitter poll

Adam Kotsko posted a hilarious Twitter poll a couple of days ago, and the results are in:

Adam Kotsko’s Twitter poll: What’s your favorite tech innovation? Illegal cab company: 16%. Illegal hotel chain: 12.2%. Fake money for criminals: 38%. Plagiarism machine: 33.9%.
Tap to view the original Tweet.

Also worth reading

See my article: Arguments for staying away from crypto, NFTs, and “Web3” in general.

Categories
Editorial

Another “boom cycle” chart


This chart’s been making the rounds on LinkedIn recently — it’s the Benner Cycle, an observation and prediction made by 19th-century farmer Samuel Benner, who claimed that markets follow a regular cycle of hard times, good times, and panics.

It’s an interesting idea that might be worth lining up with the one I had about cycles in computer innovations:

Categories
Artificial Intelligence Editorial

The bias in AI *influencers*

One of the challenges that we’ll face in AI is bias — not just in the data, but in the influencers as well. Consider this tweet from @OnPageLeads:

@OnPageLeads’ tweet.
Tap to view the original tweet.

Take a closer look at the original drawing made by the child…

The original child’s drawing of the hand. Make note of the skin tone.
Tap to view the source.

…and then the AI-generated photorealistic image:

The AI’s photorealistic rendering of the child’s drawing.
Again, make note of the skin tone.
Tap to view the source.

Tech influencer Robert Scoble, societally-blind gadget fanboy that he is, was quick to heap praise on the tweet. Thankfully, he was quickly called out by people who saw what the problem was:

Robert Scoble’s poorly-considered response, followed by some righteous retorts.
Tap to view the original tweet.

Of course, this kind of structural racism is nothing new to us folks of color. The problem is that criticism of this kind often gets shut down. The most egregious case was Google’s firing of AI ethicist Timnit Gebru, who had warned time and again that unmoderated AI has the power to amplify societal racism.

You may have heard of recent ex-Googler Geoffrey Hinton, who’s been making headlines by sounding the alarm about possible existential threats from AI. He was oddly silent when Google was firing Gebru and others for saying that AI could harm marginalized people.

In fact, he downplayed their concerns in this CNN interview:

“Their concerns aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

Geoffrey Hinton, CNN, May 2, 2023.

My only reply to Hinton’s remark is this:

The Audacity of the Caucasity

For more about Timnit Gebru and her concerns about AI — especially a lack of focus on ethics related to it — check out this podcast episode of Adam Conover’s Factually, featuring Gebru and computational linguistics professor Emily Bender: