LLMs like ChatGPT, Claude, Gemini, DeepSeek, and others are great for taking in natural language questions and producing complex but useful natural language replies.
However, they can’t do anything except output text without the help of additional code. Until recently, the way to connect LLMs to other systems was to develop custom code for each LLM and each online system it connects to, which is an “m by n” problem.
One solution is MCP, short for “Model Context Protocol.” Introduced to the world by Anthropic not too long ago, MCP is an open source, open standard that gives AI models connections to external data sources and tools. You could describe MCP as “a universal API for AIs to interact with online services.”
We’ll explain what it is, why it was made and how it came about, and then show you actual working code by building a simple MCP server connected to an LLM. Joey will demonstrate a couple of examples of MCP in action, and they’ll be examples that you can take home and experiment with!
Bring your laptop — you’ll have the opportunity to try out the MCP demos for yourself!
And yes, there’ll be food, and water and soft drinks will be provided. If that doesn’t work for you, feel free to bring your own.
Want to register (it’s free)? You can do so at either of the following Meetup pages:
For the next little while, I’m going to share stories about my current coding projects. This first one is about a quick evening project I did the other night that features a Raspberry Pi, a dollar-store USB power pack, Python, and a little vibe coding.
The inspiration
Original cover of Andy Weir’s novel, “The Martian.”
My fictional engineer hero is Mark Watney, the protagonist of Andy Weir’s self-published sci-fi novel turned big hit, The Martian. A good chunk of the story is about how Watney, an astronaut stranded on Mars, would overcome problems by cobbling together some tool or device, using only the gear available at the Mars habitat, his technical know-how, and creative thinking. My love for the way Watney does things is so strong that I play the audiobook version as “background noise” whenever I’m working on a problem that seems intractable.
While the movie version adds a line that captures what Watney does throughout The Martian — “I’m gonna have to science the shit out of this” — it condenses a lot of what he has to do, and covers only a small fraction of the clever guerilla engineering that he does in the novel on which it was based.
The book version has Watney narrating the problems he faces, and how he uses the available equipment and material to overcome them, which often involved adapting gear for purposes they weren’t made for, such as attaching solar cells meant for the habitat to the Mars Rover to extend its driving range.
The idea
I’d been meaning to do some projects where I’d attach sensors to my old Raspberry Pi 3B and set it up somewhere outside. Of course, you can’t count on having an electrical outlet nearby in the great outdoors. However, the Raspberry Pi 3 series takes its power via a USB micro port.
This led to a couple of questions:
Could I power it with one of those USB power packs that are typically used to recharge phones while on the go?
If so, how long could a USB power pack keep my Raspberry Pi running?
It was time to ask some questions and get the answer empirically!
Question 1: Can you run a Raspberry Pi from a USB power pack?
The short answer: yes. I plugged in my least-powerful power pack, one of those cheap ones that you’d expect to find at Dollar General or Five Below:
I don’t recall buying it. It’s probably something that a friend or family member gave me. People often give me their electronics cast-offs, and I either find a use for them or recycle them.
Fortunately, no matter how cheap these units are, they usually have their specs printed somewhere on their body:
The power pack outputs 800 milliamps (mA), which is enough to run a Raspberry Pi 3B, especially if I remove any USB devices and don’t connect it to a monitor. The Pi draws about 250 mA when idle, which I figured would give me plenty of “headroom” for when the Pi would be running my little program.
According to the specs printed on the power pack, its battery capacity is 2200 milliamp-hours (mAh). That means it should be able to run a device that consumes 2200 milliamps for an hour, or a device that consumes 1100 milliamps for two hours, or a device that consumes 550 milliamps for four hours.
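The capacity arithmetic above boils down to dividing capacity by current draw. Here it is as a quick Python sketch, using the 2200 mAh figure from the pack’s label:

```python
# Runtime in hours is battery capacity (mAh) divided by current draw (mA).
capacity_mah = 2200

for draw_ma in (2200, 1100, 550):
    hours = capacity_mah / draw_ma
    print(f"At {draw_ma} mA, the pack should last about {hours:.0f} hour(s)")
```

Of course, that’s the idealized math; real packs fall short of their ratings, as I was about to find out.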
Of course, this was an old power pack of unknown provenance that had been through an unknown number of power cycles. Its capacity was probably far less than advertised.
Here’s the Raspberry Pi, with some parts labeled:
I plugged the power pack into the Raspberry Pi, and it booted. The OS displayed a “low voltage warning” message on the desktop as soon as it had completed booting…
…but it worked!
Question 2: How long will the Raspberry Pi run on this power pack, and how do you find out?
Before I give you my answer to the second question, give it some thought. How would you test how long a programmable device can run on a power pack?
I wrote this Python script:
# uptime.py

import time

with open("uptime.txt", "a") as f:
    while True:
        time.sleep(5)
        display_time = time.ctime(time.time())
        f.write(f"{display_time}\n")
        f.flush()  # push the line out of Python's write buffer so it survives a sudden power loss
        print(display_time)
Here’s what the script does:
It opens a file named uptime.txt in append mode. If there is no such file, it creates that file.
It starts an infinite loop that appends the current time to the file every five seconds.
Note that the code appends the current time to the file, resulting in a file full of timestamps that looks like this…
Wed May 28 14:16:28 2025
Wed May 28 14:16:33 2025
Wed May 28 14:16:38 2025
…instead of a file containing the most recent timestamp.
There’s a reason for this: if I simply had the application overwrite the file so that it contained only the most recent timestamp, the power might go out in the middle of the write, in which case the program would fail to record the current time and I’d end up with a blank file.
By adding the latest timestamp to the end of a file full of timestamps every 5 seconds, I get around the problem of the power going out in the middle of the file operation. At worst, I’ll have a timestamp from 5 seconds before the Raspberry Pi lost power.
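Once the pack dies, getting the total uptime out of the log is just a matter of comparing the first and last lines. Here’s a minimal sketch of that step, using sample timestamps in the same `time.ctime()` format the script writes (the `uptime_minutes` helper is mine, just for illustration):

```python
import time

CTIME_FORMAT = "%a %b %d %H:%M:%S %Y"  # the format time.ctime() produces

def uptime_minutes(lines):
    """Minutes elapsed between the first and last timestamps in a log."""
    stamps = [time.mktime(time.strptime(line.strip(), CTIME_FORMAT))
              for line in lines if line.strip()]
    return (stamps[-1] - stamps[0]) / 60

# Sample lines in the same format the logging script writes
log = [
    "Wed May 28 14:16:28 2025",
    "Wed May 28 14:16:33 2025",
    "Wed May 28 15:13:28 2025",
]
print(f"Uptime: {uptime_minutes(log):.1f} minutes")
```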
The test
I fired up the Raspberry Pi while it was connected to its regular power adapter, entered the script and saved it, powered it down, and then connected it to the power pack:
I then turned it on, hooked it up to a keyboard and monitor just long enough to start the script, and disconnected them. After that, I left the Raspberry Pi alone until its power light went out, which indicated that it had run the battery dry.
I ran this test a couple of times, and on average got 57 minutes of uptime.
Question 3: Would I get more uptime if I wrote the program in C instead of Python?
CPUs are really large collections of transistors:
In the case of the Raspberry Pi 3 B’s ARM Cortex A53 CPU, there are about 260 million of these transistors, each one being a tiny switch that uses a tiny bit of electricity. The harder you work them, the more power they use. Some of that power is released as heat, which is why your computer gets warm, especially if you’ve got it doing something that requires a lot of computation (such as videogames, mining cryptocurrency, or training and running artificial intelligence models).
You’ve probably seen this chart that’s been floating around the internet since its publication in the 2017 paper Energy Efficiency across Programming Languages. It shows how much energy, time, and memory a number of programming languages use to perform roughly the same tasks, and it appears that Python does poorly for energy efficiency:
In order to be a language that’s easier to read and write, with more expressiveness and flexibility than C, Python makes life easier on the programmer by putting more of the work on the processor. That’s why it’s a power hog. C, on the other hand, which is only a step or two above assembly language, sits at the top of the power efficiency list.
So my thought was: what if I wrote the uptime program in C and ran it — would I get more uptime?
The C program, version 1
I was about to start coding in C when it occurred to me that while I knew how to write to a file in it, I couldn’t remember how to get the time.
I was about to look it up, when I thought: why not have Claude do it?
So I opened a browser to Claude and entered:
I need some C code that will append the current date and time to a file named “uptime.txt” every 5 seconds.
It provided me with the following code:
// uptime.c

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <signal.h>
#include <stdlib.h>

volatile int running = 1;

void signal_handler(int sig) {
    running = 0;
    printf("\nReceived signal %d, shutting down gracefully...\n", sig);
}

int main() {
    FILE *file;
    time_t raw_time;
    struct tm *time_info;
    char time_string[100];

    // Set up signal handler for graceful shutdown
    signal(SIGINT, signal_handler);
    signal(SIGTERM, signal_handler);

    printf("Starting uptime logger. Writing to uptime.txt every 5 seconds.\n");
    printf("Press Ctrl+C to stop.\n\n");

    while (running) {
        // Get current time
        time(&raw_time);
        time_info = localtime(&raw_time);

        // Format time string
        strftime(time_string, sizeof(time_string), "%Y-%m-%d %H:%M:%S", time_info);

        // Open file in append mode
        file = fopen("uptime.txt", "a");
        if (file == NULL) {
            perror("Error opening file");
            return 1;
        }

        // Write timestamp to file
        fprintf(file, "%s\n", time_string);
        fclose(file);

        printf("Logged: %s\n", time_string);

        // Wait 5 seconds
        sleep(5);
    }

    printf("Uptime logger stopped.\n");
    return 0;
}
The C program, version 2
After looking over the code, I noticed something and decided to ask Claude about it:
What part of this code requires stdlib.h?
Claude’s response:
Good catch! Looking at the code, stdlib.h is only needed for the exit() function, but I’m not actually using exit() anywhere in the program.
The return statements use the standard return mechanism, and the signal handler just sets the running flag to 0 rather than calling exit(). So stdlib.h can be removed from the includes.
Let this be a lesson: always double-check the code an AI gives you.
It then provided this code:
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <signal.h>

volatile int running = 1;

void signal_handler(int sig) {
    running = 0;
    printf("\nReceived signal %d, shutting down gracefully...\n", sig);
}

int main() {
    FILE *file;
    time_t raw_time;
    struct tm *time_info;
    char time_string[100];

    // Set up signal handler for graceful shutdown
    signal(SIGINT, signal_handler);
    signal(SIGTERM, signal_handler);

    printf("Starting uptime logger. Writing to uptime.txt every 5 seconds.\n");
    printf("Press Ctrl+C to stop.\n\n");

    while (running) {
        // Get current time
        time(&raw_time);
        time_info = localtime(&raw_time);

        // Format time string
        strftime(time_string, sizeof(time_string), "%Y-%m-%d %H:%M:%S", time_info);

        // Open file in append mode
        file = fopen("uptime.txt", "a");
        if (file == NULL) {
            perror("Error opening file");
            return 1;
        }

        // Write timestamp to file
        fprintf(file, "%s\n", time_string);
        fclose(file);

        printf("Logged: %s\n", time_string);

        // Wait 5 seconds
        sleep(5);
    }

    printf("Uptime logger stopped.\n");
    return 0;
}
The C program, version 3
This was an improvement, but there was still a little too much “gold plating” for my needs. I entered the following into Claude:
I don’t need the graceful exit, because this program is expected to run until the computer runs out of power.
Here’s the code it provided in its response:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main() {
    FILE *file;
    time_t raw_time;
    struct tm *time_info;
    char time_string[100];

    printf("Starting uptime logger. Writing to uptime.txt every 5 seconds.\n\n");

    while (1) {
        // Get current time
        time(&raw_time);
        time_info = localtime(&raw_time);

        // Format time string
        strftime(time_string, sizeof(time_string), "%Y-%m-%d %H:%M:%S", time_info);

        // Open file in append mode
        file = fopen("uptime.txt", "a");
        if (file == NULL) {
            perror("Error opening file");
            return 1;
        }

        // Write timestamp to file
        fprintf(file, "%s\n", time_string);
        fclose(file);

        printf("Logged: %s\n", time_string);

        // Wait 5 seconds
        sleep(5);
    }

    return 0;
}
Much better.
Once I got past this regularly recurring feeling…
…I entered these lines in the terminal…

gcc -o uptime uptime.c
./uptime

…to compile and run the program.
I ran this experiment a couple of times, and on average got 89 minutes of uptime. So yes, writing — or better yet, vibe coding — the “uptime” program in C got me more power efficiency and uptime!
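Just to put a number on the improvement, here’s the comparison between the two averages (the figures are the ones from my tests):

```python
# Average uptimes from my test runs, in minutes
python_minutes = 57
c_minutes = 89

improvement = (c_minutes - python_minutes) / python_minutes
print(f"The C version ran about {improvement:.0%} longer than the Python version")
```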
Question 4: Did Claude “know” what I was trying to do?
I figured there was more than enough context for Claude to “know” what the “uptime” program was for, so I asked it:
Based on the information I’ve provided in this conversation, what do you think this program is for?
Here’s its complete reply:
Based on the context clues, this appears to be a simple system uptime monitor or power outage detector.
The key indicators are:
You mentioned it’s “expected to run until the computer runs out of power”
It logs timestamps every 5 seconds to track when the system is running
You don’t want graceful exit handling since power loss would be abrupt
The file is named “uptime.txt”
This suggests you want to track how long your system stays powered on, possibly to:
Monitor power reliability/outages in your area
Track system uptime for reliability analysis
Have a record of when power failures occur (gaps in the timestamps would indicate outages)
Monitor an unattended system or server
When power is restored and the system boots back up, you could restart this program and it would continue appending to the same file, giving you a complete timeline of system availability with clear gaps showing when outages occurred.
Not bad! In fact, the use case in that final paragraph was one I hadn’t thought of, which illustrates a primary reason why I like using AIs in my work:
No matter how smart, inventive, or creative you are (or think you are), you will never be able to write a list of ideas that you would never think of.
Lessons learned
Yes, you can power a Raspberry Pi — at least a Raspberry Pi 3 — with a USB power pack, even a cheap one. I knew that you could power smaller boards this way, but you have to remember that a Raspberry Pi 3 is on par with a baseline 2010-era laptop, and those had much bigger batteries.
Claude’s C code generation is pretty good, but you still need to be a programmer to correct it, and while it will show you how to compile and run a C program, it’s a good idea to know how to do that yourself.
Claude can figure out what a program is for, given enough context.
Watch this space — I’ll be posting more articles like this!
Let me introduce you to a shorthand term that I think will be useful soon: a12y, which is short for automatability, which is the ability to be automated.
(The term a12y indicates that the first letter is a, the final letter is y, and there are 12 letters between them. There’s a similar, better-known shorthand term, a11y, which is short for accessibility.)
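The rule for building numeronyms like these is simple enough to fit in a few lines of Python (this little helper is mine, just for illustration):

```python
def numeronym(word):
    """First letter, count of the letters in between, last letter."""
    if len(word) < 4:
        return word  # too short to be worth abbreviating
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("automatability"))  # a12y
print(numeronym("accessibility"))   # a11y
```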
Automation is nothing new. It’s one of the reasons we use technology — from mechanical devices to computers to software and online services — to perform tasks in ways that reduce the work we have to do, or even eliminate the work entirely.
In the Python courses I’ve taught a few times at Computer Coach, I’ve covered how you can use Python to automate simple day-to-day work tasks and provided examples from one of the course’s core textbooks, Automate the Boring Stuff with Python (the entire book is available to read online for free!).
I’ve also created a number of Python automations that I use regularly. You’ve even seen some of their output if you’re a regular reader of this blog, since the weekly list of Tampa Bay tech, entrepreneur, and nerd events is generated by my automation that scrapes Meetup pages.
MCP is the latest buzzword in both AI and automation, or a12y with AI. Short for Model Context Protocol (and not the Master Control Program from the Tron movies), MCP is a standardized way for AI models to go beyond simply generating answers and interact with external tools and data sources, such as APIs, databases, file systems, or anything else that’s connected to the internet and can accept commands to perform actions.
Simply put, it’s the next step in the path to creating AI agents that can perform tasks autonomously.
(Come to think of it, a10y might be a good shorthand for autonomously.)
We’ll cover all sorts of a12y topics in the upcoming Tampa Bay Python meetups! I’m currently working on the details of booking meetup space and getting some food and drink sponsors, but they’ll be happening soon. Watch this blog, the Tampa Bay Python Meetup page, and my LinkedIn for announcements!
Here’s the schedule for AI Codecon, which is still being finalized as I write this:
Introduction, with Tim O’Reilly (10 minutes)
Gergely “Pragmatic Engineer” Orosz and Addy Osmani Fireside Chat (20 minutes) Join Gergely Orosz and Addy Osmani for an insightful discussion on the evolving role of AI in software engineering and how it’s paving the way for a new era of agentic, “AI-first” development.
Vibe Coding: More Experiments, More Care – Kent Beck (15 minutes) Augmented coding deprecates formerly leveraged skills such as language expertise, and amplifies vision, strategy, task breakdown, and feedback loops. Kent Beck, creator of Extreme Programming, tells you what he’s doing and the principles guiding his choices.
Junior Developers and Generative AI – Camille Fournier, Avi Flombaum, and Maxi Ferreira (15 minutes) Is bypassing junior engineers a recipe for short-term gain but long-term instability? Or is it a necessary evolution in a high-efficiency world? Hear three experts discuss the trade-offs in team composition, mentorship, and organizational health in an AI-augmented industry.
My LLM Codegen Workflow at the Moment – Harper Reed (15 minutes)
Technologist Harper Reed takes you through his LLM-based code generation workflow and shows how to integrate various tools like Claude and Aider, gaining insights into optimizing LLMs for real-world development scenarios, leading to faster and more reliable code production.
Jay Parikh and Gergely Orosz Fireside Chat (15 minutes) Jay Parikh, executive vice president at Microsoft, and Gergely Orosz, author of The Pragmatic Engineer, discuss AI’s role as the “third runtime,” the lessons from past technological shifts, and why software development isn’t disappearing—it’s evolving.
The Role of Developer Skills in Today’s AI-Assisted World – Birgitta Böckeler (15 minutes) Birgitta Böckeler, global lead for AI-assisted software delivery at Thoughtworks, highlights instances where human intervention remains essential, based on firsthand experiences. These examples can inform how far we are from “hands-free” AI-generated software and the skills that remain essential, even with AI in the copilot seat.
Modern Day Mashups: How AI Agents Are Reviving the Programmable Web – Angie Jones (5 minutes) Angie Jones, global vice president of developer relations at Block, explores how AI agents are bringing fun and creativity back to software development and giving new life to the “programmable web.”
Tipping AI Code Generation on its Side – Craig McLuckie (5 minutes) The current wave of AI code generation tools consists of closed, vertically integrated solutions. The next wave will be open, horizontally aligned systems. Craig McLuckie explores this transformation, why it needs to happen, and how it will be led by the community.
Prompt Engineering as a Core Dev Skill: Techniques for Getting High-Quality Code from LLMs – Patty O’Callaghan (5 minutes)
Patty O’Callaghan highlights practical techniques to help teams generate high-quality code with AI tools, including an “architecture-first” prompting method that ensures AI-generated code aligns with existing systems, contextual scaffolding techniques to help LLMs work with complex codebases, and the use of task-specific prompts for coding, debugging, and refactoring.
Chip Huyen and swyx Fireside Chat (20 minutes) Chip Huyen will delve [Aha! An AI wrote this! — Joey] into the practical challenges and emerging best practices for building real-world AI applications, with a focus on how foundation models are enabling a new era of autonomous agents.
Bridging the AI Learning Gap: Teaching Developers to Think with AI – Andrew Stellman (15 minutes) Andrew Stellman, software developer and author of Head First C#, shares lessons from Sens-AI, a learning path built specifically for early-career developers, and offers insights into the gap between junior and senior engineers.
Lessons Learned Vibe Coding and Vibe Debugging a Chrome Extension with Windsurf – Iyanuoluwa Ajao (5 minutes) Software and AI engineer Iyanuoluwa Ajao explores the quirks of extension development and how to vibe code one from scratch. You’ll learn how chrome extensions work under the hood, how to vibe code an extension by thinking in flows and files, and how to vibe debug using dependency mapping and other techniques.
Designing Intelligent AI for Autonomous Action – Nikola Balic (5 minutes)
Through case studies like AI-powered code generation and autonomous coding, Nikola Balic, head of growth at VC-funded startup Daytona, will show key patterns for balancing speed, safety, and strategic decision-making—and offer a road map for catapulting legacy systems into agent-driven platforms.
Secure the AI: Protect the Electric Sheep – Brett Smith (5 minutes) Distinguished software architect, engineer, and developer Brett Smith discusses AI security risks to the software supply chain, covering attack vectors, how they relate to the OWASP Top 10 for LLMs, and how they tie into scenarios in CI/CD pipelines. You’ll learn techniques for closing the attack vectors and protecting your pipelines, software, and customers.
How Does GenAI Affect Developer Productivity? – Chelsea Troy (15 minutes) The advent of consumer-facing generative models in 2021 catalyzed a massive experiment in production on our technical landscape. A few years in, we’re starting to see published research on the results of that experiment. Join Chelsea Troy, leader of Mozilla’s MLOps team, for a tour through the current findings and a few summative thoughts about the future.
Eval Engineering: The End of Machine Learning Engineering as We Know It – Lili Jiang (15 minutes)
Lili Jiang, former Waymo evaluation leader, reveals how LLMs are transforming ML engineering. Discover why evaluation is becoming the new frontier of ML expertise, how eval metrics are evolving into sophisticated algorithms, and why measuring deltas instead of absolute performance creates powerful development flywheels.
Ben Affleck’s interview scene from “Good Will Hunting”.
I’m working on some new videos for the Global Nerdy YouTube channel, and one of them is on answering questions in the dreaded “behavioral interview.” That’s the interview where you’re asked questions like “Tell me about a time when you had to deal with a conflict on your team. How did you handle it?”
To help come up with ideas for the video about behavioral interviews, I posed some questions to LLMs. As I like to say, no matter how creative or clever you are, it is by definition impossible to make a list of ideas that you’d never come up with yourself.
So I posed this question to Claude, which for some reason, seems to know stuff about me:
How would you describe a behavioral interview, if you had to describe it in the style of Joey deVilla?
And wow, was the response snark-tastic:
Here’s the text of the response:
Joey deVilla’s Guide to Behavioral Interviews
Imagine me adjusting my accordion straps and coffee mug before dropping this knowledge bomb…
So you wanna know about behavioral interviews? Those corporate rituals where they ask you to “Tell me about a time when…” instead of just asking if you can do the damn job?
Behavioral interviews are the tech industry’s version of “Show, don’t tell” – except instead of showing your actual skills, you’re telling stories about when you supposedly used them! It’s like your professional life as fan fiction, where the hiring manager is desperately hoping you’re not writing in the genre of fantasy.
These interviews operate on the radical premise that past behavior predicts future performance. Revolutionary, I know! It’s like assuming that because I once played “Lady of Spain” on my accordion at a Tampa dev meetup, I’ll do it again given the chance and sufficient sangria.
The formula they want is STAR: Situation, Task, Action, Result. But what they’re REALLY looking for is STARBS: Situation, Task, Action, Result, Bull$#!t – that special sauce where you frame yourself as the hero who saved the project while maintaining perfect diplomatic relations with that passive-aggressive PM who kept moving the goalposts.
The trick isn’t just recounting what happened – it’s storytelling with a corporate-friendly narrative arc. You need conflict (but not TOO much conflict), challenges (that showcase your skills), and resolution (where you emerge victorious but humble).
Remember: in behavioral interviews, there are no failures – only “growth opportunities” and “valuable lessons” that conveniently transformed you into exactly the employee they’re looking for!
Adjusts glasses, takes dramatic sip of coffee
Now, if you’ll excuse me, I need to go practice my own STAR stories for when someone inevitably asks me about “a time I demonstrated adaptability in a challenging situation” – which is code for “that time I coded for 36 hours straight because someone in management promised an impossible deadline.”
My questions, after reading Claude’s output:
Am I really that much of a smart-ass?
“STARBS” — Situation, Task, Action, Result, Bullshit — is pretty clever. Whose writing did Claude lift that from?
So I did some web searches using terms like starbs star situation task action result bullshit, and couldn’t find any instance of STARBS. It seems to be one of those effectively new creations that an autocomplete-on-steroids can generate.
I’m not gonna lie: I kind of like STARBS, and wish that I’d come up with it myself.
(If you’re interested, see if you can do some searches to find other instances of the term STARBS.)