
My speedrun as the HP ZGX Nano Developer Advocate: Two months, one podcast, zero regrets

Just over two months after my announcement that I was doing developer relations for HP’s ZGX Nano AI workstation — an NVIDIA-powered, book-sized desktop computer specifically made for AI application development and edge computing — HP ended the Kforce contract for the ZGX Nano program, so my last day is Friday.

In my all-too-brief time working with HP, I got a lot done, including…

I landed the ZGX Nano appearance on Intelligent Machines

On the very day I announced that I was doing developer relations for the ZGX Nano, I got an email that began with this paragraph:

I’m Anthony, a producer with the TWiT.tv network. Jeff Jarvis mentioned you’re “a cool dude” from the early blogging days (and apparently serenaded some Bloggercons?), but more importantly, we saw you just started doing developer relations for HP’s ZGX Nano. We’d love to have you on our podcast Intelligent Machines to discuss this shift toward local AI computing.

First of all: Thanks, Jeff! I owe you one.

Second: I didn’t pitch TWiT. TWiT pitched me, as soon as they found out! This wasn’t the outcome of HP’s product marketing department contacting media outlets. Instead, it’s because Jeff knows me, and he knew I was the right person to explain this new AI hardware to their audience:

I generated earned media for HP without a single pitch, press release, or PR agency. My personal brand amplified HP’s brand, and maybe it can amplify your company’s brand too!

And finally: I’m just great at explaining complex technical topics in a way that people can understand. Don’t take my word for it; take Leo Laporte’s:

In case you need some stats:

  • TWiT Network (home of Intelligent Machines): 25+ million downloads annually
  • Cost of equivalent advertising slot: double-digit thousands of dollars
  • Time from my hire to major media appearance: 8 weeks
  • Number of PR pitches sent: 0
  • Value of authentic relationships: Priceless

I built page-one visibility for a brand-new product — organically

Do a Google search on the term zgx nano (without the quotes) and while you might see slightly different results from mine, you should find that this blog, Global Nerdy, is on the first page of results:

Tap to try out a Google search for zgx nano for yourself.

The screenshot above was taken on the evening of Monday, October 27, and two of the articles on this blog are the first two search results after HP.

My content gets found. Within 8 weeks of starting work with HP, my coverage of the ZGX Nano achieved first-page Google ranking, competing directly with HP’s official pages and major tech publications. This organic reach is what modern developer relations looks like: authentic content that both developers and search algorithms trust.

With me, you’re not just getting a developer advocate, but someone with a tech blog going back nearly two decades and with the domain authority to compete with Fortune 500 companies on Google. My Global Nerdy posts about the ZGX Nano rank on page one because Google trusts content I’ve been building since 2006.

I enabled the Sales team to go from zero to hero

On day one, I was given two priorities:

  • First, provide enablement for the Sales team, giving them the knowledge and selling points they need to be effective when talking to customers about the ZGX Nano.
  • Second, support developers who were interested in the ZGX Nano, or even just in AI application development. Unfortunately, I’m not going to get to execute this phase.

But I got pretty far with that first phase! In less than eight weeks, I built a sales enablement foundation for a brand-new AI workstation with scant documentation. I created 50+ pages of technical documentation that gave HP’s global sales force what they needed to sell a new product in a new category.

Some of my big quantifiable achievements in sales enablement:

  • 25+ technical objections anticipated and addressed
    • Created comprehensive FAQ covering everything from architecture to ROI calculations
    • Translated GB10 superchip complexity into sales-friendly language
    • Provided competitive differentiation against NVIDIA DGX Spark, Dell, and Lenovo
  • 12 industry verticals mapped with 60+ business impact scenarios
    • Developed go-to-market strategy for each vertical (healthcare to gaming)
    • Created specific ROI talking points for each industry
    • Identified 5 business impacts per vertical = 60 total selling points
  • Turned “It’s just another GB10 machine” into “Here’s why HP wins”
    • Differentiated commodity hardware through software story (ZGX Toolkit)
    • Created objection handling that transforms skepticism into sales
    • Armed sales with “Why HP and not NVIDIA direct” messaging

I’m available starting next week!

All told, it was 2 months, 1 podcast…and ZERO regrets. I enjoyed the work, and I’m grateful to have been selected to be the developer spokesmodel for an amazing AI computer.

I don’t think of this as a termination. It was a high-intensity proof of concept for my ability to help launch a new device with little guidance (in fact, the manager who hired me moved to another company during my first week). They asked; I delivered. Now I’m looking for the next impossible mission.

As I wrote at the start of this article, my last day is on Friday — yes, I wrap up on Halloween — and as of Monday next week, I’m available!

I’m now looking for my next Developer Advocate role. Who needs someone who can…

  • Land major podcast appearances on Day One?
  • Use SEO know-how and influence to get you to Page One?
  • Enable your sales and marketing teams with technical material, explained in a non-techie-friendly way?

If you’re looking for such a person, either on a full-time or consulting basis, set up an appointment with me on my calendar.

Let’s talk!


Talking about HP’s ZGX Nano on the “Intelligent Machines” podcast

On Wednesday, HP’s Andrew Hawthorn (Product Manager and Planner for HP’s Z AI hardware) and I appeared on the Intelligent Machines podcast to talk about the computer that I’m doing developer relations consulting for: HP’s ZGX Nano.

You can watch the episode here. We appear at the start, and we’re on for the first 35 minutes:

A few details about the ZGX Nano:

  • It’s built around the NVIDIA GB10 Grace Blackwell “superchip,” which combines a 20-core Grace CPU and a GPU based on NVIDIA’s Blackwell architecture.

  • Also built into the GB10 chip is a lot of RAM: 128 GB of LPDDR5X coherent memory shared between CPU and GPU, which helps avoid the kind of memory bottlenecks that arise when the CPU and GPU each have their own memory (and usually, the GPU has considerably less memory than the CPU).
NVIDIA GB10 SoC (system on a chip).
  • It can perform up to about 1,000 TOPS (trillion operations per second), or 10¹⁵ operations per second, and can handle model sizes of up to 200 billion parameters.

  • Want to work on bigger models? By connecting two ZGX Nanos together using the 200 gigabit per second ConnectX-7 interface, you can scale up to work on models with 400 billion parameters.

  • The ZGX Nano’s operating system is NVIDIA’s DGX OS, which is a version of Ubuntu Linux with additional tweaks to take advantage of the underlying GB10 hardware.

Some topics we discussed:

  • Model sizes and AI workloads are getting bigger, and developers are getting more and more constrained by factors such as:
    • Increasing or unpredictable cloud costs
    • Latency
    • Data movement
  • There’s an opportunity to “bring serious AI compute to the desk” so that teams can prototype their AI applications and iterate locally
  • The ZGX Nano isn’t meant to replace large datacenter clusters for full training of massive models; it’s aimed at “the earlier parts of the pipeline,” where developers do prototyping, fine-tuning, smaller deployments, inference, and model evaluation
  • The Nano’s 128 gigabytes of unified memory gets around the bottlenecks that come with distinct CPU memory and GPU memory, allowing bigger models to be loaded on a local box without “paging to cloud” or being forced into distributed setups early
  • While the cloud remains dominant, there are real benefits to local compute:
    • Shorter iteration loops
    • Immediate control and data privacy
    • Less dependence on remote queueing
  • We expect that many AI development workflows will hybridize: a mix of local box and cloud/back-end
  • The target users include:
    • AI/ML researchers
    • Developers building generative AI tools
    • Internal data-science teams fine-tuning models for enterprise use-cases (e.g., inside a retail, insurance or e-commerce firm).
    • Maker/developer-communities
  • The ZGX Nano is part of the “local-to-cloud” continuum
  • The Nano won’t cover all AI development…
    • For training truly massive models, beyond the low hundreds of billions of parameters, the datacenter/cloud will still dominate
    • ZGX Nano’s use case is “serious but not massive” local workloads
    • Is it for you? Look at model size, number of iterations per week, data sensitivity, latency needs, and cloud cost profile

One thing I brought up that seemed to capture the imagination of hosts Leo Laporte, Paris Martineau, and Mike Elgan was the MCP server that I demonstrated a couple of months ago at the Tampa Bay Artificial Intelligence Meetup: Too Many Cats.

Too Many Cats is an MCP server that an LLM can call upon to determine if a household has too many cats, given the number of humans and cats.

Here’s the code for a Too Many Cats MCP server that runs on your computer and works with a local Claude client:

from typing import TypedDict
from mcp.server.fastmcp import FastMCP

# Create the MCP server
mcp = FastMCP(name="Too Many Cats?")

# The structured result that the tool returns to the LLM
class CatAnalysis(TypedDict):
    too_many_cats: bool
    human_cat_ratio: float

@mcp.tool(
    annotations={
        "title": "Find Out If You Have Too Many Cats",
        "readOnlyHint": True,    # This tool doesn't modify anything
        "openWorldHint": False   # This tool doesn't touch outside systems
    }
)
def determine_if_too_many_cats(cat_count: int, human_count: int) -> CatAnalysis:
    """Determines if you have too many cats based on the number of cats and the human-cat ratio."""
    human_cat_ratio = cat_count / human_count if human_count > 0 else 0.0
    too_many_cats = human_cat_ratio >= 3.0   # 3 or more cats per human is "too many"
    return CatAnalysis(
        too_many_cats=too_many_cats,
        human_cat_ratio=human_cat_ratio
    )

if __name__ == "__main__":
    # Initialize and run the server, communicating over standard input/output
    mcp.run(transport='stdio')
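
If you want to sanity-check the tool logic without wiring up an MCP client, you can import the function and call it directly. Here’s a quick test, assuming you’ve saved the server code above as too_many_cats.py (a filename I’ve made up for this example):

# Quick sanity check of the tool logic, without an MCP client
# (assumes the server above is saved as too_many_cats.py)
from too_many_cats import determine_if_too_many_cats

result = determine_if_too_many_cats(cat_count=7, human_count=2)
print(result)  # {'too_many_cats': True, 'human_cat_ratio': 3.5}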

I’ll cover writing MCP servers in more detail on the Global Nerdy YouTube channel — watch this space!


Specs for NVIDIA’s GB10 chip, which powers HP’s ZGX Nano G1n AI workstation

I’m currently working with Kforce as a developer relations consultant for HP’s new tiny desktop AI powerhouse, the ZGX Nano (also known as the ZGX Nano G1n). If you’ve wondered about the chip powering this machine, this article’s for you!

The chip powering the ZGX Nano is NVIDIA’s GB10, a combination CPU and GPU whose “GB” stands for “Grace Blackwell.” The two names correspond to the chip’s two parts…

Grace: The CPU

The part named “Grace” is an ARM CPU with 20 cores, arranged in ARM’s big.LITTLE (DynamIQ) architecture, which is a mix of different kinds of cores for a balance of performance and efficiency:

    • 10 Cortex-X925 cores. These are the “performance” cores, which are also sometimes called the “big cores.” They’re designed for maximum single-thread speed, higher clock frequencies, and aggressive out-of-order execution. Their job is to handle bursty, compute-intensive workloads such as gaming and rendering; on the ZGX Nano, they’ll be used for AI inference.
    • 10 Cortex-A725 cores. These are the “efficiency” cores, which are sometimes called the “little cores.” They’re designed for sustained performance per watt, running at lower power and lower clock frequencies. Their job is to handle background tasks, low-intensity threads, or workloads where power efficiency and temperature control matter more than peak speed.

Blackwell: The GPU

The part named “Blackwell” is NVIDIA’s GPU, which has the following components:

    • 6144 neural shading units, which are SIMD (single instruction, multiple data) processors that act as “generalists,” switching between standard graphics math and AI-style operations. They’re useful for AI models where the workloads aren’t uniform, or with irregular matrix operations that don’t map neatly into 16-by-16 blocks.
    • 384 tensor cores, which are specialized matrix multiply-accumulate (MMA) units. They perform the most common operation in deep learning, C = A × B + C, across thousands of small matrix tiles in parallel. They do so using mixed-precision arithmetic, where there are different precisions for inputs, products, and accumulations (see the sketch after this list).
    • 384 texture mapping units (TMUs). These can quickly sample data from memory and do quick processing on that data. In graphics, these capabilities are used to resize, rotate, and transform bitmap images, and then paint them onto 3D objects. When used for AI, they perform bilinear interpolation (used by convolutional neural network layers and transformers) and sample AI data.
    • 48 render output units (ROPs). In a GPU, the ROPs are the final stage in the graphics pipeline — they convert computed fragments into final pixels stored in VRAM. When used for AI, ROPs provide a way to quickly write the processing results to memory and perform fast calculations of weighted sums (which is an operation that happens with all sorts of machine learning).
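
To make the tensor core operation concrete, here’s a minimal NumPy sketch of one mixed-precision multiply-accumulate on a single 16-by-16 tile. This is illustrative Python, not actual GPU code; real tensor cores do this in hardware, across thousands of tiles at once:

# Conceptual sketch of one tensor core tile operation: C = A x B + C
# Illustrative NumPy, not actual GPU code
import numpy as np

A = np.random.rand(16, 16).astype(np.float16)   # low-precision input tile
B = np.random.rand(16, 16).astype(np.float16)   # low-precision input tile
C = np.zeros((16, 16), dtype=np.float32)        # higher-precision accumulator

# Multiply the low-precision inputs, then accumulate in higher precision
C = A.astype(np.float32) @ B.astype(np.float32) + C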

128 GB of unified RAM

There’s 128GB of LPDDR5X-9400 RAM built into the chip, a mobile-class DRAM type designed for high bandwidth and energy efficiency:

  • The “9400” in the name refers to its data rate of 9.4 Gb/s per pin, which sets the memory bandwidth (the speed at which the CPU/GPU can move data between memory and on-chip compute units). Across a 256-bit bus, this provides almost 300 GB/s of peak bandwidth (the arithmetic is sketched after this list).

  • LPDDR5X is more power-efficient than HBM but slower; it’s ideal for compact AI systems or edge devices (like the ZGX Nano!) rather than full datacenter GPUs.
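
Here’s the peak-bandwidth arithmetic behind that “almost 300 GB/s” figure:

# Peak bandwidth for LPDDR5X-9400 across a 256-bit bus
bus_width_bits = 256
rate_gb_per_pin = 9.4                                  # 9.4 Gb/s per pin
peak_gb_per_s = bus_width_bits * rate_gb_per_pin / 8   # bits to bytes
print(f"{peak_gb_per_s:.1f} GB/s")                     # prints 300.8 GB/s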

As unified memory, the RAM is shared by both the Grace (CPU) and Blackwell (GPU) portions of the chip. That’s enough memory for:

  • Running large-language-model inference on models of up to 200 billion parameters with 4-bit weights (see the back-of-the-envelope math after this list)

  • Medium-scale training or fine-tuning tasks

  • Data-intensive edge analytics, vision, or robotics AI
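
Here’s the back-of-the-envelope math for that 200-billion-parameter figure (my arithmetic, not HP’s published numbers):

# Rough memory footprint of a 200B-parameter model at 4-bit quantization
params = 200e9                  # 200 billion parameters
bytes_per_param = 0.5           # 4-bit weights = 0.5 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB")   # prints 100 GB, leaving ~28 GB of the
                                # 128 GB pool for KV cache and activations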

Because the memory is unified, the CPU and GPU share a single physical pool of RAM, which eliminates explicit data copies between them.

The RAM is linked to the CPU and GPU sections using NVIDIA’s NVLink-C2C (chip-to-chip), their low-power interconnect that lets CPU/GPU memory traffic move at up to 600 GB/s aggregate. That’s faster than PCIe 5! This improves latency and bandwidth for workloads that constantly exchange data between CPU preprocessing and GPU inference/training kernels.

Double the power with ConnectX

If the power of a single ZGX Nano isn’t enough, there’s NVIDIA’s ConnectX technology, which is based on a NIC that provides a pair of 200 GbE ports, enabling you to chain two GB10-based units and scale a workload across them. This doubles the processing power, allowing you to run models with up to 400 billion parameters!

The GB10-powered ZGX Nano is a pretty impressive beast, and I look forward to getting my hands on it!

 


HP’s ZGX Nano G1n AI workstation: A sneak peek!

I’ll be talking about HP’s upcoming ZGX Nano G1n AI workstation soon, but in the meantime, here’s HP’s Brian Allen providing a sneak preview of the ZGX Nano at last week’s HP event in New York.


Quick announcement: I’m doing developer relations for HP’s new ZGX Nano AI computer!

Just so you know: today’s my first day at Kforce doing developer relations for HP! More specifically, for HP’s ZGX Nano, a tiny computer designed specifically for running large AI models right on your desktop…and not on someone else’s computers!

The ZGX Nano packs a ridiculous amount of power into a tiny space…

Powered by NVIDIA’s GB10 superchip, which combines a Blackwell GPU and a 20-core ARM CPU sharing 128GB of RAM, the ZGX Nano performs at 1,000 teraflops (1 petaflop), which is 10¹⁵ floating-point operations per second. It’ll support an AI model with up to 200 billion parameters — 400 billion if you connect two ZGX Nanos together.

I’m getting set up for day one on the job as I write this, so I’m keeping this post short and ending with this gem from a little while back: HP’s Rules of the Garage: