
Scenes from an AI meetup in San Francisco

With a little time to kill in San Francisco last Monday evening before I had to help run demos at Okta’s annual conference, Oktane, I decided to look around for something to do. One quick web search for events later, I found myself en route to Cow Hollow for a meetup titled Sharing our tricks and magic for pushing generative AI applications into production.

In this article, you’ll find my photos and notes from that event.

Neon and the Secret Garden

Back around the time of the dot-com bubble, I was working in a Toronto consultancy made up of me and my friend Adam. We often worked at home, but when we were feeling stir-crazy, we took our laptops to a couple of local cafés and worked from there.

Since then, I’ve kept my eye out for my dream working café. There’s a pretty nice one in my neck of the woods — The Corner Club — and I take meetings and work from there every now and again.

But I have to admit it: Neon, the venue for the meetup, is my dream café / coworking space. It has open, sociable places to hang out at the front of the building, quieter working stalls in the back, and, behind the building, the Secret Garden, an outdoor patio space. That’s where they served the food for the meetup: a combo of steak and veggie burritos, along with chips, guac, and salsa.

The meetup was free, but also marked as “sold out” with a “join waitlist” button. That didn’t deter me because I knew the Great Unwritten Truth of Free Events:

Half the people who register for a free event never actually show up.

As I expected, no one was at the door to check attendees against a registration list. Besides, I had the accordion with me, and the “I’m with the entertainment” line often works.

The crowd at this meetup was pretty hardcore. I’d say about half of them worked in an AI-related position, either at a more established company or at a scrappy AI startup, while the other half worked at tech companies and had an interest in AI. I suppose I fall into the latter category.

I struck up a conversation with someone who specialized in virtual memory and wanted to work on memory virtualization techniques for use in large AI systems. We then walked out the back entrance to Neon’s “Secret Garden…”

…where they were serving food. I got into a conversation with someone who worked at Stability.ai, where we were joined by someone who wanted to make the leap from marketing to development.

When the Stability.ai developer was momentarily pulled away from the conversation, the marketer whispered “That name — Stability.ai — that’s familiar. What do they do?”

“Stable Diffusion,” I whispered back, and that was a name the marketer recognized. “Come to think of it, I don’t recognize any of this meetup’s speakers’ companies.”

Presentation 1: Build bulletproof generative AI applications with Weaviate and LLMs

This was the abstract for this presentation:

Building AI applications for production is challenging, your users don’t like to wait, and delivering the right results in milliseconds instead of seconds will win their hearts. We’ll show you how to build caching, fact-checking, and RAG: Retrieval Augmented Generation pipelines with real-world examples, live demos, and ready-to-run GitHub projects using Weaviate, your favorite open-source vector database.

Philip Vollet, Head of Developer Growth at Weaviate, gave this presentation. Weaviate makes a vector database, where the data is stored as vectors — think of them as really long tuples — a format that’s particularly useful for AI purposes.
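In case the “really long tuples” analogy feels abstract, here’s a minimal, hypothetical sketch in plain Python and NumPy (deliberately not Weaviate’s actual API) of what a vector database does conceptually: every piece of text gets turned into a vector of numbers, and queries are answered by finding the stored vectors that point in roughly the same direction as the query’s vector.

```python
import numpy as np

# Hypothetical embedding: real systems use a trained model (for example,
# a sentence-transformer). Here we build a tiny bag-of-words vector over
# a fixed vocabulary so the example stays self-contained.
VOCAB = ["vector", "database", "burrito", "llm", "retrieval", "weaviate"]

def toy_embed(text: str) -> np.ndarray:
    words = text.lower().split()
    vec = np.array([float(sum(w.startswith(term) for w in words)) for term in VOCAB])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Store" each document as a vector, the way a vector database would.
documents = [
    "Weaviate is an open-source vector database",
    "Burritos were served in the Secret Garden",
    "RAG pipelines use retrieval to ground an LLM",
]
index = [(doc, toy_embed(doc)) for doc in documents]

# Answer a query by finding the stored vector closest to the query's vector.
query = toy_embed("what is a vector database")
best_doc, _ = max(index, key=lambda pair: float(np.dot(pair[1], query)))
print(best_doc)  # "Weaviate is an open-source vector database"
```

The real thing adds indexing structures so that “closest vector” lookups stay fast across millions of documents, but the core idea is the same.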

I’m going to spend some time this weekend going through my hastily scribbled notes and comparing them to the full-resolution versions of my photos of the presentations to see what I can glean from them.

I’ve included my photos here so that you can get a feel for what was shown at the event, and hey — you might find them useful.

Presentation 2: Customizing LLM Applications with Haystack

Here’s the abstract for the presentation:

Every LLM application comes with a unique set of requirements, use cases and restrictions. Let’s see how we can make use of open-source tools and frameworks to design around our custom needs.

The second presentation was by Tuana Celik, Developer Advocate at deepset, who make Haystack, a natural language processing (NLP) framework, as well as a cloud-based SaaS platform for machine learning and NLP.

Presentation 3: Context Matters: Boosting LLM Accuracy with Unstructured.io Metadata

This was the abstract for this presentation:

Retrieval Augmented Generations (RAGs), limited by plain text representation and token size restrictions, often struggle to capture specific, factual information from reliable source documents. Discover how to use metadata and vector search to enhance the ability of LLMs to accurately retrieve specific knowledge and facts from a vast array of documents.

The final presenter was Ronny Hoesada, Developer Relations Engineer at Unstructured, who make a product that converts unstructured enterprise data into formats that can be fed into vector databases and large language models.

Aftermath and observations

Observation: RAG is a hot topic in this crowd. The big topic of the evening, and a common thread through all three presentations, was RAG: retrieval-augmented generation. This is a process that enhances the results produced by large language models by retrieving additional facts or information from an external knowledge source. If you’ve ever added to a discussion by looking something up on your phone, you’ve performed a simple version of RAG.
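Here’s a rough sketch of that pattern in Python. The retrieval step is reduced to naive word overlap and the language model is reduced to a caller-supplied function; the names (retrieve, answer_with_rag, call_llm) are mine, not anything from the presentations. A production pipeline would swap in a vector database for the retriever and a real model API for the LLM call.

```python
from typing import Callable

def retrieve(question: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    # Placeholder retrieval: rank passages by naive word overlap with the
    # question. A real pipeline would use embeddings and a vector database.
    q_words = set(question.lower().split())
    ranked = sorted(
        knowledge_base,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def answer_with_rag(question: str, knowledge_base: list[str],
                    call_llm: Callable[[str], str]) -> str:
    # Stuff the retrieved passages into the prompt so the model grounds its
    # answer in them instead of relying only on what it memorized in training.
    context = "\n".join(retrieve(question, knowledge_base))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)
```

The “looking something up on your phone” analogy maps directly onto the two steps: retrieve() is the phone search, and the prompt assembly in answer_with_rag() is reading what you found back into the conversation.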

Observation: Many SF tech meetups start later than Tampa Bay ones. I arrived in San Francisco Monday morning and spent most of the day in my hotel room working on this article for Okta’s main blog and this article for the Auth0 by Okta developer blog. That process took the better part of the day, and by the time I’d finished the final edits at 6:30 p.m., I thought it would be too late to go to a meetup, but I was wrong. When I perused Meetup.com, it turned out that a lot of in-person meetups in the San Francisco Bay Area start at 7:00 p.m., including this one. I’ve been to Tampa Bay meetups that wrap up at that time!

Observation: Some attendees came a long way to catch this meetup, and many of them didn’t have a car. If you check the discussion on the Meetup page for this event, you’ll see it’s all about getting rides to the venue:

Observation: People were seriously ready to work the room. More than half the attendees stuck around when the presentations ended. Some stayed for the beer, some stayed to mingle or hustle for their next job, and some stayed specifically to talk to the presenters.

I showed up wearing my travel clothes (see the photo above, taken at TPA earlier that morning): a sport jacket, dress shirt, jeans, and dress shoes. As a result, a number of people at the event approached me and asked what company I was starting up. They saw a chatty guy in a blazer, and the neural networks in their heads pattern-matched it as “founder.”

Jared from Silicon Valley, patron saint of the Patagonia fleece vest.

Earlier that evening, I’d had conversations with founders or people who reported directly to a founder, so I made some introductions. They were easy to spot: it was a chilly night (10° C / 50° F) with a good breeze coming in from the Bay, and they’d shown up in fleece vests, as is the custom there.

Observation: A lot of people here really know their stuff. The conversational topics were pretty hardcore, from discussions of cosine similarity and the finer points of tokenization (with a sidebar conversation about handling out-of-vocabulary cases) to how much of Hugging Face’s ever-growing set of models people had tried. “I’m a dabbler,” I admitted, “no more than a handful: a couple of the conversational ones, and a text-to-image and a text-to-audio model.”
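For anyone who hasn’t bumped into it, cosine similarity is the standard yardstick in these conversations: it measures the angle between two embedding vectors rather than their lengths, so two pieces of text whose vectors point the same way score close to 1. A minimal sketch with made-up numbers:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors: 1.0 means pointing the same
    # way, 0.0 means orthogonal (unrelated), -1.0 means pointing opposite ways.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up four-dimensional "embeddings"; real ones come from a trained model
# and have hundreds or thousands of dimensions.
cat   = np.array([0.9, 0.1, 0.0, 0.2])
tiger = np.array([0.8, 0.2, 0.1, 0.3])
spoon = np.array([0.0, 0.9, 0.8, 0.1])

print(cosine_similarity(cat, tiger))  # close to 1: similar concepts
print(cosine_similarity(cat, spoon))  # close to 0: unrelated concepts
```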

I also got deep into a chat about the Mojo programming language, during which I glibly introduced myself to someone as “Markov Cheney,” and to my complete lack of surprise, they got the joke.

I’m still mulling over my experience at this meetup and thinking about some meetup organization and presentation tricks to borrow.


There is no plan at Twitter/X. There are only pants and paved cowpaths.

A sloppy rebranding

As I write this (11:10 p.m. on the evening of Tuesday, July 25, 2023), it’s been more than a day since Twitter ditched the bird icon and name for X. The problem is that the rebranding wasn’t terribly thorough.

Consider this screenshot from my Twitter/X home page:

Also, if you click on the links at the bottom right of the home page, the pages they lead to still bear the Twitter bird and name:

The newly renamed corporation sent a crew to remove the old Twitter sign from its San Francisco headquarters, but the police were called and stopped the work, since the company had never contacted the building’s owners about it, nor had it obtained a permit to set up the sign-removal equipment on the street:

At last report, the sign on Twitter HQ looks like this:

These are the kinds of mistakes that a marketing or brand manager would never make, because they know that rebranding is something that requires a plan.

But there is no plan. There’s a goal — ditching the Twitter name and replacing it with Musk’s beloved brand, X — and there’s PANTS.

Pantsing and paving the cowpath

“Planner” and “pantser” are terms that many novel writers use to describe two very different writing styles:

  • Planners have their novel outlined and planned out before they start writing it. They’ve got clear ideas of the story they’re trying to tell, and their characters and settings are fleshed out.
  • Pantsers (the term comes from the expression “by the seat of one’s pants,” which means by instinct and without much planning) might have a vague idea of what they want to write about and are simply making it up as they write.

Both are legitimate ways of creating things, although a planner will tell you that planning is better, and a pantser will do the same for pantsing.

As an organization, Twitter has been a pantser from its inception. Most of the features that we consider to be part of the platform didn’t originate with them; they were things that the users did that Twitter “featurized.”

Consider the hashtag — that’s not a Twitter creation, but the invention of Chris Messina, whom I happen to know from my days as a techie in Toronto and the early days of BarCamp:

Retweets? The term and the concept were invented by users. We used to put “RT” at the start of someone else’s tweet that we were re-posting, to indicate that we were quoting another user. Twitter saw this behavior and turned it into a feature.

The same goes for threads (not the app, but conversational threads). To get around the original 140-character limit, users would make posts that spanned many tweets, using conventions like “/n”, where n was a “page number.” Twitter productized this.

All these features were a good application of “pantsing” — being observant of user behavior and improvising around it. This approach is sometimes called “paving the cowpath.”

If you do a web search using the term paving the cowpath ux (where UX means user experience), the results tend to be articles that say it’s a good idea, because you’ll often find that users will find ways around your design if it doesn’t suit their needs, as shown in the photo above.

However, if you do a search using the term paving the cowpath business, the articles take a decidedly negative slant and recommend that you don’t do that. User behavior and business processes are pretty different domains, and business processes do benefit from having a plan. As a business, Twitter had no plan, which is why they’ve always been in financial trouble despite being wildly successful in terms of user base and popularity.

To paraphrase Mark Zuckerberg’s observation about Twitter, it’s a clown car that somehow drove into a gold mine.

Pantsing as a process

Since Elon Musk’s takeover, Twitter has been pantsing at never-before-seen levels, largely based on Musk’s whims. We’ve seen:

And the company’s been losing developers, at first because of cost-cutting, but soon people were losing their jobs for contradicting the boss. Working for Musk is like working for Marvel Comics supervillain Dr. Doom:

More on Musk

If you’d like to hear more about Twitter and Musk, including three theories on why Musk has descended into madness (I’m particularly intrigued by the ketamine theory, a.k.a. “Special K,” a.k.a. horse tranquilizer, and the simulation theory), check out What’s Going on with Elon Musk?, the latest episode of the Search Engine podcast, hosted by Reply All’s former host PJ Vogt.