Categories
Artificial Intelligence Conferences Tampa Bay What I’m Up To

I’m speaking at the TechX Florida 2025 AI conference this Saturday!

This Saturday, November 8, I’ll be at the TechX Florida 2025 AI Conference at USF, on the Careers in Tech panel, where we’ll be talking about career paths, hiring expectations, and practical advice for early-career developers and engineers.

This conference, which is FREE to attend, will feature:

  • AI talks from major players in the industry, including Atlassian, Intel, Jabil, Microsoft, and Verizon
  • Opportunities to meet and network with companies, startups, and techies from the Tampa Bay area
  • The Careers in Tech panel, featuring Yours Truly and other experienced industry pros

Once again, the TechX Florida 2025 AI Conference will take place this Saturday, November 8th, in USF’s Engineering Building II, in the Hall of Flags. It runs from 11 a.m. to 5 p.m. and will be followed by…

TechX After Dark, a social/fundraising event running from 6 p.m. to 8 p.m., with appetizers and a cash bar.

This event charges admission:

  • FREE for IEEE-CS members
  • $10 for students
  • $20 for professionals

 

Categories
Artificial Intelligence Hardware Programming What I’m Up To

One last endorsement for the ZGX Nano AI workstation

Today’s my last day in my role as the developer advocate for HP’s GB10-powered AI workstation, the ZGX Nano. As I’ve written before, I’m grateful to have had the the opportunity to talk about this amazing little machine.

Of course, you could expect me to talk about how good the ZGX Nano is; after all, I’m paid to do so — at least until 5 p.m. Eastern today. But what if a notable AI expert also sang its praises?

That notable expert is Sebastian Raschka (pictured above), author of a book I’m working my way through right now: Build a Large Language Model (from Scratch), and it’s quite good. He’s also working on a follow-up book, Build a Reasoning Model (from Scratch).

Sebastian has been experimenting on NVIDIA’s DGX Spark, which has the same specs as the ZGX Nano (as well as a few other similar small desktop computers built around the NVIDIA’s GB10 “superchip”), and he’s published his observations on his blog in a post titled DGX Spark and Mac Mini for Local PyTorch Development. He ran some benchmark AI programs comparing his Mac Mini M4 computer (a fine developer platform, by the bye) and the NVIDIA H100 GPU (and NVIDIA’s A100 GPU when an H100 wasn’t available), pictured below:

Keep in mind that the version of the H100 that comes with 80GB of VRAM sells for about $30,000, which is why most people don’t buy one, but instead rent time on it from server farms, typically at about $2/hour.

Let me begin from the end of Raschka’s article, where he writes his conclusions:

Overall, the DGX Spark seems to be a neat little workstation that can sit quietly next to a Mac Mini. It has a similarly small form factor, but with more GPU memory and of course (and importantly!) CUDA support.

I previously had a Lambda workstation with 4 GTX 1080Ti GPUs in 2018. I needed the machine for my research, but the noise and heat in my office was intolerable, which is why I had to eventually move the machine to a dedicated server room at UW-Madison. After that, I didn’t consider buying another GPU workstation but solely relied on cloud GPUs. (I would perhaps only consider it again if I moved into a house with a big basement and a walled-off spare room.) The DGX Spark, in contrast, is definitely quiet enough for office use. Even under full load it’s barely audible.

It also ships with software that makes remote use seamless and you can connect directly from a Mac without extra peripherals or SSH tunneling. That’s a huge plus for quick experiments throughout the day.

But, of course, it’s not a replacement for A100 or H100 GPUs when it comes to large-scale training.
I see it more as a development and prototyping system, which lets me offload experiments without overheating my Mac. I consider it as an in-between machine that I can use for smaller runs, and testing models in CUDA, before running them on cloud GPUs.

In short: If you don’t expect miracles or full A100/H100-level performance, the DGX Spark is a nice machine for local inference and small-scale fine-tuning at home.

You might as well replace “DGX Spark” in his article with “ZGX Nano” — the hardware specs are the same. The ZGX Nano shines with HP’s exclusive ZGX Toolkit, a Visual Studio Code extension that lets you configure, manage, and deploy to the ZGX Nano. This lets you use your favorite development machine and coding environment to write code, and then use the ZGX Nano as a companion device / on-premises server.

The article features graphs showing his benchmarking results…

In his first set of benchmarks, he took a home-built 600 million parameter LLM — the kind that you learn how to build in his book, Build a Large Language Model (from Scratch) — and ran it on his Mac Mini M4, the ZGX Nano’s twin cousin, and an H100 from a cloud provider. From his observations, you can conclude that:

  • With smaller models, the ZGX Nano can match a Mac Mini M4. Both can crunch about 45 tokens per second with 20 billion parameter m0dels.
  • The ZGX Nano has the advantage of coming with 128GB  of VRAM, meaning that it can handle larger models than the MacMini could, as it’s limited by memory.

Raschka’s second set of benchmarks tested how the Mac Mini, the ZGX Nano’s twin cousin, and the H100 handle two variants of a model that have been presented with MATH-500, a collection of 500 mathematical word problems:

  • The base variant, which was a standard LLM that gives short, direct answers
  • The reasoning variant, which was a version of the base model that was modified to “think out loud” through problems step-by-step

He ran two versions of this benchmark. The first was the sequential test, where the model was presented on MATH-500 question at a time. From the results, you can expect the ZGX Nano to perform almost as well as the H100, but at a significantly smaller fraction of the cost! It also runs circles around the Mac Mini.

In the second version of the benchmark, the batch test, the model was served 128 questions at the same time, to simulate serving multiple users at once and to. test memory bandwidth and parallel processing.

This is a situation where the H100 would vastly outperform the ZGX Nano thanks to the H100’s much better memory bandwidth. However, the ZGX Nano isn’t for doing inference at production scale; it’s for developers to try out their ideas on a system that’s powerful enough to get a better sense of how they’d operate in the real world, and do so affordably.

Finally, with the third benchmark, Rashcka trained and fine-tuned a model. Note that this time, the data center GPU was the A100 instead of the H100 due to availability.

This benchmark tests training and fine-tuning performance. It compares how fast you can modify and improve an AI model on the Mac Mini M4 vs. the ZGX Nano’s twin vs. an A100 GPU. He presents three scenarios in training and fine-tuning a 355 million parameter model:

  1. Pre-training (3a in the graphs above): Training a model from scratch on raw text
  2. SFT, or Supervised fine-tuning (3b): Teaching an existing model to follow instructions
  3. DPO (direct preference optimization), or preference Tuning (3c): Teaching the model which responses are “better” using preference data

All these benchmarks say what I’ve been saying: the ZGX Nano lets you do real model training locally and economically. You get a lot of bang for your ZGX Nano buck.

As with a lot of development workflows, where there’s a development database and a production database, you don’t need production scale for every experiment. The ZGX Nano gives you a working local training environment that isn’t glacially slow or massively expensive.

Want to know more? Go straight to the source and check out Raschka’s article, DGX Spark and Mac Mini for Local PyTorch Development.

And with this article, I end my stint as the “spokesmodel” for the ZGX Nano. It’s not the end of my work in AI; just the end of this particular phase.

Keep watching this blog, as well as the Global Nerdy YouTube channel, for more!

Categories
Artificial Intelligence Hardware What I’m Up To

Talking about HP’s ZGX Nano on the “Intelligent Machines” podcast

On Wednesday, HP’s Andrew Hawthorn (Product Manager and Planner for HP’s Z AI hardward) and I appeared on the Intelligent Machines podcast to talk about the computer that I’m doing developer relations consulting for: HP’s ZGX Nano.

You can watch the episode here. We appear at the start, and we’re on for the first 35 minutes:

A few details about the ZGX Nano:

  • It’s built around the NVIDIA GB10 Grace Blackwell “superchip,” which combines a 20-core Grace CPU and a GPU based on NVIDIA’s Blackwell architecture.

  • Also built into the GB10 chip is a lot of RAM: 128 GB of LPDDR5X coherent memory shared between CPU and GPU, which helps avoid the kind of memory bottlenecks that arise when the CPU and GPU each have their own memory (and usually, the GPU has considerably less memory than the CPU).
NVIDIA GB10 SoC (system on a chip).
  • It can perform up to about 1000 TOPS (trillions of operations per second) or 1015 operations per second and can handle model sizes of up to 200 billion parameters.

  • Want to work on bigger models? By connecting two ZGX Nanos together using the 200 gigabit per second ConnectX-7 interface, you can scale up to work on models with 400 billion parameters.

  • ZGX Nano’s operating system in NVIDIA’s DGX OS, which is a version of Ubuntu Linux with additional tweaking to take advantage of the underlying GB10 hardware.

Some topics we discussed:

  • Model sizes and AI workloads are getting bigger, and developers are getting more and more constrained by factors such as:
    • Increasing or unpredictable cloud costs
    • Latency
    • Data movement
  • There’s an opportunity to “bring serious AI compute to the desk” so that teams can prototype their AI applications  and iterate locally
  • The ZGX Nano isn’t meant to replace large datacenter clusters for full training of massive models, It’s aimed at “the earlier parts of the pipeline,” where developers do prototyping, fine-tuning, smaller deployments, inference, and model evaluation
  • The Nano’s 128 gigabytes of unified memory gets around the issues of bottlenecks with distinct CPU memory and GPU memory allowing bigger models to be loaded in a local box without “paging to cloud” or being forced into distributed setups early
  • While the cloud remains dominant, there are real benefits to local compute:
    • Shorter iteration loops
    • Immediate control, data-privacy
    • Less dependence on remote queueing
  • We expect that many AI development workflows will hybridize: a mix of local box and cloud/back-end
  • The target users include:
    • AI/ML researchers
    • Developers building generative AI tools
    • Internal data-science teams fine-tuning models for enterprise use-cases (e.g., inside a retail, insurance or e-commerce firm).
    • Maker/developer-communities
  • The ZGX Nano is part of the “local-to-cloud” continuum
  • The Nano won’t cover all AI development…
    • For training truly massive models, beyond the low hundreds of billions of parameters, the datacenter/cloud will still dominate
    • ZGX Nano’s use case is “serious but not massive” local workloads
    • Is it for you? Look at model size, number of iterations per week, data sensitivity, latency needs, and cloud cost profile

One thing I brought up that seemed to capture the imagination of hosts Leo Laporte, Paris Martineau, and Mike Elgan was the MCP server that I demonstrated a couple of months ago at the Tampa Bay Artificial Intelligence Meetup: Too Many Cats.

Too Many Cats is an MCP server that an LLM can call upon to determine if a household has too many cats, given the number of humans and cats.

Here’s the code for a Too Many Cats MCP server that runs on your computer and works with a local CLaude client:

from typing import TypedDict
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(name="Too Many Cats?")

class CatAnalysis(TypedDict):
    too_many_cats: bool
    human_cat_ratio: float  

@mcp.tool(
    annotations={
        "title": "Find Out If You Have Too Many Cats",
        "readOnlyHint": True,
        "openWorldHint": False
    }
)
def determine_if_too_many_cats(cat_count: int, human_count: int) -> CatAnalysis:
    """Determines if you have too many cats based on the number of cats and a human-cat ratio."""
    human_cat_ratio = cat_count / human_count if human_count > 0 else 0
    too_many_cats = human_cat_ratio >= 3.0
    return CatAnalysis(
        too_many_cats=too_many_cats,
        human_cat_ratio=human_cat_ratio
    )

if __name__ == "__main__":
    # Initialize and run the server
    mcp.run(transport='stdio')

I’ll cover writing MCP servers in more detail on the Global Nerdy YouTube channel — watch this space!

Categories
Artificial Intelligence Humor

Where to get Michael Carducci’s “You wouldn’t steal the sum total of human knowledge…” T-shirt

I’ve already fielded a couple of questions about where to get the T-shirt that Michael Carducci wore at his Tampa Java User Group / Tampa Bay AI Meetup / Tampa Devs talk last week — the one with that parodies the Motion Picture Association’s “You wouldn’t steal a car” ad:

You can get the T-shirt online from Webbed Briefs’ store for £25 (US$33.54 at the time of writing):

And while you’re here, please enjoy The IT Crowd’s parody of that ad:

Categories
Artificial Intelligence Meetups Tampa Bay What I’m Up To

Scenes from last night’s “Architecture Patterns for AI-Powered Applications” meetup with Michael Carducci

Last night, we had a “standing room only” crowd at Michael Carducci’s presentation, Architecture Patterns for AI-Powered Applications, which was held jointly by Tampa Java User Group, Tampa Devs, and Tampa Bay Artificial Intelligence Meetup (which Anitra and I co-organize).

This article is a summary of the talk, complete with all the photos I took from the front row and afterparty.

The event was held at Kforce HQ, home of Tampa Bay’s meetup venue with the cushiest seats (full disclosure: I’m a Kforce consultant employee), and the food was provided by the cushiest NoSQL database platform, Couchbase!

Michael Carducci is many things: engaging speaker, funny guy, professional magician, and (of course) a software architect.

While he has extensive experience building systems for Very Big Organizations, the system-building journey he shared was a little more personal — it was about his SaaS CRM platform for a demographic he knows well: professional entertainers. He’s been maintaining it over the past 20 years, and it served as the primary example throughout his talk.

Michael’s central theme for his presentation was the gap between proof-of-concept AI implementations and production-ready systems, and it’s a bigger gap than you might initially think.

He emphasized that while adding basic AI functionality might take only 15 minutes to code, it’s a completely different thing to create a robust, secure, and cost-effective production system. That requires  additional careful architectural consideration.

Here’s a quote to remember:

“architecture [is the] essence of the software; everything it can do beyond providing the defined features and functions.”

— “Mastering Software Architecture” by Michael Carducci

A good chunk of the talk was about “ilities” — non-functional requirements that become architecturally significant when integrating AI.

These “ilities” are…

  • Cost – AI API costs can escalate quickly, especially as models chain together
  • Accuracy – Dealing with hallucinations and non-deterministic outputs

  • Security – Preventing prompt injection and model jailbreaking
  • Privacy – Managing data leakage and training data concerns

  • Latency & Throughput – Performance impacts of multiple model calls
  • Observability – Monitoring what’s happening in AI interactions

  • Simplicity / Complexity – Managing the increasing technical stack

And then he walked us through some patterns he encountered while building his application, starting with the “send an email” functionality:

The “send an email” function has an “make AI write the message for me” button, which necessitates an AI “guardrails” pattern:

And adding more AI features, such as having the AI-generated emails “sound” more like the user by having it review the user’s previous emails, called for using different architectural patterns.

And with more architectural patterns come different tradeoffs.

In the end, there was a progression of implementations from simple to increasingly complex. (It’s no wonder “on time, under budget” is considered a miracle these days)…

Stage 1: Basic Integration

  • Simple pass-through to OpenAI API
  • Minimal code (15 minutes to implement)
  • Poor security, no observability, privacy risks

Stage 2: Adding Guardrails

  • Input and output guardrails using additional LLMs
  • Prompt templates to provide context
  • Triple the API costs and latency

Stage 3: Personalization

  • Adding user writing style examples
  • Building data pipelines to extract relevant context
  • Dealing with token optimization challenges

Stage 4: Advanced Approaches

  • Fine-tuning models per customer
  • Context caching strategies
  • Hosting internal LLM services
  • Full MLOps implementation

This led to Michael talking about doing architecture in the broader enterprise context:

  • Organizations have fragmented information ecosystems
  • Oragnizational data is spread across multiple systems after mergers and acquisitions
  • Sophisticated information retrieval has to be implemented before AI can be effective
  • “Garbage in, garbage out” still applies — in fact, even more so with AI

He detailed his experience building an 85-microservice pipeline for document processing:

  • Choreographed approach: Microservices respond to events independently
  • Benefits: Flexibility, easy to add new capabilities
  • Challenges: No definitive end state, potential for infinite loops, ordering issues
  • Alternative: Orchestrated approach with a mediator (more control but less flexibility)

He could’ve gone on for longer, but we were “at time,” so he wrapped up with some concepts worth our exploring afterwards:

  • JSON-LD: JSON with Linked Data, providing context to structured data
  • Schema.org: Standardized vocabulary for semantic meaning
  • Graph RAG: Connecting LLMs directly to knowledge graphs
  • Hypermedia APIs: Self-describing APIs that adapt without redeployment

He also talked about how models trained on JSON-LD can automatically understand and connect data using standardized vocabularies, enabling more sophisticated AI integrations.

What’s a summary of a talk without some takeaways? here are mine:

  • Architecture is fundamentally about trade-offs! Every decision that strengthens one quality attribute weakens others; you need to decide which ones are important for the problems you’re trying to solve.
  • Effective architects need breadth over depth. Instead of being “T-shaped,” which many people call the ideal “skill geometry” for individual developers, the architect needs to be more of a “broken comb.”
  • AI integration is more than just functionality. It’s architecturally significant and requires careful planning
  • Standards from the past are relevant again! Like Jason Voorhees, they keep coming back. Technologies like RDF and JSON-LD, once considered ahead of their time, are now crucial for AI.
  • The chat interface is just the beginning! Yes, it’s the one everyone understands because it’s how the current wave of AI became popular, but serious AI integration requires thoughtful architectural patterns.

Here’s the summary of patterns Michael talked about:

  • Prompt Template Pattern
  • Guardrails Pattern
  • Context-enrichment & Caching
  • Composite Patterns
  • Model Tuning
  • Pipeline Pattern
  • Encoder-decoder pattern
  • Choreographed and Orchestrated Event-driven Patterns
  • RAG
  • Self-RAG
  • Corrective-RAG
  • Agentic RAG
  • Agent-Ready APIs

And once the presentation was done, a number of us reconvened at Colony Grill, the nearby pizza and beer place, where we continued with conversations and card tricks.

My thanks to Michael Carducci for coming to Tampa, Tampa JUG and Ammar Yusuf for organizing, Hallie Stone and Couchbase for the food, Kforce for the space (and hey, for the job), and to everyone who attended for making the event so great!

Categories
Artificial Intelligence Humor

An even better joke about OpenAI’s “erotica for verified adults” announcement

Yesterday, I came up with a joke in response to OpenAI CEO Sam Altman’s tweet about adding “erotica for verified adults” to an upcoming version of ChatGPT. This morning, I came up with a better one, and here it is:

Screenshot of Techmeme article on Sam Altman’s announcement that a future version of ChatGPT will add “erotica for verified adults” with a caption that reads “Maybe ‘AGI’ is really short for ‘Artificial GENITAL Intelligence.’”
Categories
Artificial Intelligence Humor

OpenAI finally figured out what REALLY we want in an AI chatbot

I’d rather not link to X, so here’s a screenshot of Sam Altman’s tweet where he announced the upcoming changes, followed by the text of the tweet:

In the tweet:

We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.

Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.

In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).

In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.