Red rows mean I’m out of the running; orange means I’ve applied but nobody’s gotten back to me other than with an automated email; yellow means I had an initial screener interview but then things stopped because of the holidays; and green means active and in progress.
Pictured above is a version of my job search spreadsheet with a couple of columns hidden and some details redacted. But despite the missing info, it still has useful data points for you, namely:
You have better odds with a referral. You probably know this, but it’s worth repeating. Every referral so far has at least resulted in an initial “HR screener” interview, and two of them have resulted in final interviews (which I’ve denoted in the spreadsheet using the videogame term BOSS FIGHT!).
To stand out to recruiters, you need to be a certified yapper on LinkedIn! (This is a sticker on my Windows laptop.)
Being noisy on LinkedIn pays off. Did you know there are Recruiter versions of LinkedIn? There’s Recruiter Lite, which can cost up to $2,000 annually, and then there’s the Corporate version, which is said to sell for about $10,000 to $12,000 per year per seat.
Recruiters get paid when they match people looking for jobs with employers looking to fill positions, so they’re willing to shell out lots of money for a specialized version of LinkedIn, provided that they get king-sized multiples of that money by finding the right match for their clients. Think of LinkedIn as a search engine for job candidates.
In the spreadsheet pictured above, note that 6 of the 30 opportunities (that’s one in five) are marked in the How it started column as Recruiter found me. They found me on LinkedIn because I post and comment regularly on AI, Python, and technology in general, which in turn generates “signal” on LinkedIn for those topics that clearly points to me. Long story short: want to get found by recruiters on LinkedIn? Post on topics relevant to the job you’re looking for.
You can’t see it in the spreadsheet, but I totally broke the “No more than 2 pages” resume rule. The resumes I submitted to all of the prospects in the spreadsheet — including those in which I’m at the BOSS FIGHT! stage — were 5 pages long.

The trick is that my resumes, while long, answer the question “What does this candidate bring to the table?”, and that’s really the question recruiters and hiring managers want answered. I customize each resume for each prospect with the assistance of Claude, and it’s worked out quite well for me.

I’m betting that you’re reeeeally curious right now, so here’s one of the resumes for one of the BOSS FIGHT! prospects. I hope you find it useful!
Bonus 4th observation
See the row above? That’s an opportunity I applied to cold, with just a resume (and yes, it was a five-pager) and a cover letter, and where I’m now headed to a final interview.
I didn’t have a referral, and with this particular one, I applied via LinkedIn and not via the company site because that was the only place to do it. And yet I got that initial interview, which led to all the follow-up interviews. According to the recruiter, it was a combination of the resume, cover letter, and LinkedIn presence.
In the ten business days leading up to Christmas Eve, and every day since Sunday, January 4, I have been doing at least one of these a day:
Being interviewed for a job
Doing a “take-home assignment” for a job
Recording a podcast
The trick to staying on top of all this activity is to have good notes at the ready. Ideally, these are hand-written notes and diagrams like the ones pictured above. I find that I remember what I wrote better when I write it out by hand; it’s probably because it’s slower and gives me more time to commit what I’m writing to memory.
I’ve also suspected that the act of forming different letters in the process of handwriting (unlike typing, where every keypress feels the same, and it’s only the location of the key on the keyboard that feels different) helps you remember what you wrote. There’s some research that agrees with my hypothesis.
However, with 14 interviews in December and at least five scheduled for this week, I had a lot of notes to crank out. Writing them all out by hand would be too slow, so I often resorted to the next-best thing: typing them in…
…and printing them out. I could look at a screen, but paper notes allow me to keep my gaze in the general direction of the camera. Using them means that I’m not relying on the keyboard or mouse to go through my notes, and it very clearly shows that I’m not relying on AI during the interview.
These are photos of just a few of the notes I used in my most recent interviews. A lot of my notes include situations that I can cite when answering “behavioral interview” questions — those questions that start with that dreaded phrase, “Tell me about a time when…”
If you watch just one AI video before Christmas, make it lecture 9 from AI pioneer Andrew Ng’s CS230 class at Stanford, which is a brutally honest playbook for navigating a career in Artificial Intelligence.
Worth reading: AI and ML for Coders in PyTorch, Laurence Moroney’s latest book, with a foreword by Andrew Ng! I’m working my way through it right now.
The class starts with Ng sharing some of his thoughts about the AI job market before handing the reins over to guest speaker Laurence Moroney, Director of AI at Arm, who offered the students a grounded, strategic view of the shifting landscape, the commoditization of coding, and the bifurcation of the AI industry.
Here are my notes from the video. They’re a good guide, but the video is so packed with info that you really should watch it to get the most from it!
The golden age of the “product engineer”
Ng opened the session with optimism, saying that this current moment is the “best time ever” to build with AI. He cited research suggesting that every 7 months, the complexity of tasks AI can handle doubles. He also argued that the barrier to entry for building powerful software has collapsed.
Speed is the new currency! The velocity at which software can be written has changed largely due to AI coding assistants. Ng admitted that keeping up with these tools is exhausting (his “favorite tool” changes every three to six months), but it’s non-negotiable. He noted that being even “half a generation behind” on these tools results in a significant productivity drop. The modern AI developer needs to be hyper-adaptive, constantly relearning their workflow to maintain speed.
The bottleneck has shifted to what to build. As writing code becomes cheaper and faster, the bottleneck in software development shifts from implementation to specification.
Ng highlighted a rising trend in Silicon Valley: the collapse of the Engineer and Product Manager (PM) roles. Traditionally, companies operated with a ratio of one PM to every 4–8 engineers. Now, Ng sees teams trending toward 1:1 or even collapsing the roles entirely. Engineers who can talk to users, empathize with their needs, and decide what to build are becoming the most valuable assets in the industry. The ability to write code is no longer enough; you must also possess the product instinct to direct that code toward solving real problems.
The company you keep: Ng’s final piece of advice focused on network effects. He argued that your rate of learning is predicted heavily by the five people you interact with most. He warned against the allure of “hot logos” and joining a “company of the moment” just for the brand name and prestige-by-association. He shared a cautionary tale of a top student who joined a “hot AI brand” only to be assigned to a backend Java payment processing team for a year. Instead, Ng advised optimizing for the team rather than the company. A smaller, less famous company with a brilliant, supportive team will often accelerate your career faster than being a cog in a prestigious machine.
Surviving the market correction
Ng handed over the stage to Moroney, who started by presenting the harsh realities of the job market. He characterized the current era (2024–2025) as “The Great Adjustment,” following the over-hiring frenzy of the post-pandemic boom.
The three pillars of success: To survive in a market where “entry-level positions feel scarce,” Moroney outlined three non-negotiable pillars for candidates:
Understanding in depth: You can’t just rely on high-level APIs. You need academic depth combined with a “finger on the pulse” of what is actually working in the industry versus what is hype.
Business focus: This is the most critical shift. The era of “coolness for coolness’ sake” is over. Companies are ruthlessly focused on the bottom line.

Moroney put a spin on the classic advice “Dress for the job you want, not the job you have,” suggesting that if you’re a job-seeker, you should “not let your output be for the job you have, but for the job you want.” He based this on his own experience of landing a role at Google not by preparing to answer brain teasers, but by building a stock prediction app on their cloud infrastructure before the interview.
Bias towards delivery: Ideas are cheap; execution is everything. In a world of “vibe coding” (a term he doesn’t like — he prefers something more like “prompting code into existence” or “prompt coding”), what will set you apart is the ability to actually ship reliable, production-grade software.
The trap of “vibe coding” and technical debt: Moroney addressed the phenomenon of using LLMs to generate entire applications. The approach may be powerful, but he warned that it also creates massive “technical debt.”
The four realities of modern AI work
Moroney outlined four harsh realities that define the current workspace, warning that the “coolness for coolness’ sake” era is over. These realities represent a shift in what companies now demand from engineers.
1. Business focus is non-negotiable. Moroney noted a significant cultural “pendulum swing” in Silicon Valley. For years, companies over-indexed on allowing employees to bring their “whole selves” to work, which often prioritized internal activism over business goals. That era is ending. Today, the focus is strictly on the bottom line. He warned that while supporting causes is important, in the professional sphere, “business focus has become non-negotiable.” Engineers must align their output directly with business value to survive.
2. Risk mitigation is the job. When interviewing, the number one skill to demonstrate is not just coding, but the ability to identify and manage the risks of deploying AI. Moroney described the transition from heuristic computing (traditional code) to intelligent computing (AI) as inherently risky. Companies are looking for “Trusted Advisors” who can articulate the dangers of a model (hallucinations, security flaws, or brand damage) and offer concrete strategies to mitigate them.
3. Responsibility is evolving. “Responsible AI” has moved from abstract social ideals to hardline brand protection. Moroney shared a candid behind-the-scenes look at the Google Gemini image generation controversy (where the model refused to generate images of Caucasian people due to over-tuned safety filters). He argued that responsibility is no longer just about “fairness” in a fluffy sense; it is about preventing catastrophic reputational damage. A “responsible” engineer now ensures the model doesn’t just avoid bias, but actually works as intended without embarrassing the company.
4. Learning from mistakes is constant. Because the industry is moving so fast, mistakes are inevitable. Moroney emphasized that the ability to “learn from mistakes” and, crucially, to “give grace” to colleagues when they fail is a requirement. In an environment where even the biggest tech giants stumble publicly (as seen with the Gemini launch), the ability to iterate quickly after a failure is more valuable than trying to be perfect on the first try.
Technical debt
Moroney compared technical debt to a mortgage: debt isn’t inherently bad, but you must be able to service it. He defined the new role of the senior engineer as a “trusted advisor.” If a VP prompts an app into existence over a weekend, it is the senior engineer’s job to understand the security risks, maintainability, and hidden bugs within that spaghetti code. You must be the one who understands the implications of the generated code, not just the one who generated it.
The dot-com parallel: Moroney drew a sharp parallel between the current AI frenzy and the dot-com bubble of the late 1990s. He acknowledged that we are undoubtedly in a financial bubble, with venture capital pouring billions into startups with zero revenue, but emphasized that this does not mean the technology itself is a sham.
Just as the internet fundamentally changed the world despite the 2000 crash wiping out “tourist” companies, AI is a genuine technological shift that is here to stay. He warned students to distinguish between the valuation bubble (which will burst) and the utility curve (which will keep rising), advising them to ignore the stock prices and focus entirely on the tangible value the technology provides.
The bursting of this bubble, which Moroney terms “The Great Adjustment,” marks the end of the “growth at all costs” era. He argues that the days of raising millions on a “cool demo” or “vibes” are over. The market is violently correcting toward unit economics, meaning AI companies must now prove they can make more money than they burn on compute costs. For engineers, this signals a critical shift in career strategy: job security no longer comes from working on the flashiest new model, but from building unglamorous, profitable applications that survive the coming purge of unprofitable startups.
The industry is splitting into two distinct paths:
“Big AI”: The AI made by massive, centralized players such as OpenAI, Google, and Anthropic, who are chasing after AGI. This relies on ever-larger models hosted in the cloud.
“Small AI”: AI built on open-weight (he prefers “open-weight” to “open source” when describing AI models), self-hosted, and on-device models. Moroney also calls this “self-hosted AI.”
Moroney urged the class to diversify their skills. Don’t just learn how to call an API; learn how to optimize a 7-billion parameter model to run on a laptop CPU. That is where the uncrowded opportunity lies.
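To make the laptop-CPU point concrete, here’s a quick back-of-the-envelope calculation (my own illustration, not from the lecture) of why quantization makes Small AI practical: the dominant memory cost is the weights themselves, so shrinking each parameter from 16 bits to 4 bits cuts a 7-billion-parameter model from about 14 GB down to about 3.5 GB, which fits in a typical laptop’s RAM.

```python
# Back-of-the-envelope memory math for running open-weight models locally.
# My own illustration, not from the lecture; figures are weights-only and
# ignore activations, the KV cache, and runtime overhead.

def weights_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate memory needed to hold a model's weights, in gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal gigabytes, for simplicity

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: {weights_gb(7, bits):.1f} GB")
# 16-bit: 14.0 GB (workstation territory)
#  8-bit:  7.0 GB (many laptops)
#  4-bit:  3.5 GB (comfortably in laptop RAM)
```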
Moroney also walked through the four components of an agentic loop:

Intent: Understanding exactly what the user wants.
Planning: Breaking that intent down into steps.
Tools: Giving the model access to specific capabilities (search, code execution).
Reflection: Checking if the result met the intent.

He shared a demo of a movie-making tool where simply adding this agentic loop transformed a hallucinated, glitchy video into a coherent scene with emotional depth.
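The four components above can be sketched as a toy loop. This is my own minimal illustration, not code from the lecture demo: the planner, tool, and reflection check are hand-written stand-ins for what would be LLM calls in a real agent.

```python
# Toy sketch of the agentic loop: Intent -> Planning -> Tools -> Reflection.
# Everything here is a hand-written stand-in for an LLM call; the tool name
# "calculate" and the intent format are illustrative assumptions.

def calculate(expression: str) -> int:
    """Tool: a tiny arithmetic evaluator (fine for a demo, not production)."""
    return eval(expression, {"__builtins__": {}})

def plan(intent: str) -> list[dict]:
    """Planning: break the intent into tool-call steps.
    This stub only understands intents like 'add 2 3 4'."""
    numbers = [tok for tok in intent.split() if tok.isdigit()]
    return [{"tool": "calculate", "args": "+".join(numbers)}]

def reflect(intent: str, result) -> bool:
    """Reflection: did the result meet the intent? (Stub: any result counts.)"""
    return result is not None

def run_agent(intent: str, tools: dict):
    result = None
    for step in plan(intent):                       # Planning
        result = tools[step["tool"]](step["args"])  # Tools
    if not reflect(intent, result):                 # Reflection
        raise RuntimeError("intent not satisfied")
    return result

print(run_agent("add 2 3 4", {"calculate": calculate}))  # prints 9
```

In a real agent, each of those three functions would be a model call, and the loop would repeat (re-plan after reflection) until the intent is satisfied.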
Conclusion: Work hard
I’ll wrap up these notes with something Ng said at the end of his introduction to the lecture, which he described as “politically incorrect”: Work hard.
While he acknowledged that not everyone is in a situation where they can do so, he pointed out that among his most successful PhD students, the common denominator was an incredible work ethic: nights, weekends, and the “2 AM hyperparameter tuning.”
In a world drowning in hype, Ng’s and Moroney’s “brutally honest” playbook is actually quite simple:
Use the best tools to move fast.
Understand the business problem you’re trying to solve, and understand it deeply.
Ignore the noise of social media and the trends being hyped there. Build things that actually work.
And finally, to quote Ng: “Between watching some dumb TV show versus finding your agentic coder on a weekend to try something… I’m going to choose the latter almost every time.”
I’ve been doing a fair number of client proposal and job interview presentations lately, and one of the most-enjoyed slides is the very first one I show, pictured above. Everybody loves Bender!
Feel free to borrow this one for your own presentations.
Pictured above is a photo of a page from Dawn, an English-language newspaper based in Pakistan. Take a look at the highlighted paragraph at the end of the article titled Auto sales rev up in October:
If you want, I can create an even snappier “front-page style” version with punchy one-line stats and a bold, infographic-ready layout — perfect for maximum reader impact. Do you want me to do that next?
That, of course, is the result of indiscriminately copying and pasting the output of an LLM, which is something I like to call “response injection.” It’s also a career-limiting move.
Dawn’s editors later added this note:

This report published in today’s Dawn was originally edited using AI, which is in violation of our current AI policy. The policy is available on our website and can be reviewed here. The original report also carried AI-generated artefact text from the editing process, which has been edited out in the digital version. The matter is being investigated, and the violation of AI policy is regretted. — Editor
I have nothing against using AI as a writing assistant. It’s fantastic for checking spelling, grammar, and flow, it can help you out of writer’s block, and it can do something that you could never do, no matter how smart or creative you are: it can come up with ideas you’d never come up with.
So yes, use AI, but you have to do some of the work, and you have to double-check it before putting that work out in the world!