If you watch just one AI video before Christmas, make it lecture 9 from AI pioneer Andrew Ng’s CS230 class at Stanford, which is a brutally honest playbook for navigating a career in Artificial Intelligence.
You can watch the video of the lecture on YouTube.
The class starts with Ng sharing some of his thoughts on the AI job market before handing the reins over to guest speaker Laurence Moroney, Director of AI at Arm, who offers the students a grounded, strategic view of the shifting landscape, the commoditization of coding, and the bifurcation of the AI industry.
Here are my notes from the video. They’re a good guide, but the video is so packed with info that you really should watch it to get the most from it!
The golden age of the “product engineer”
Ng opened the session with optimism, saying that this current moment is the “best time ever” to build with AI. He cited research suggesting that every 7 months, the complexity of tasks AI can handle doubles. He also argued that the barrier to entry for building powerful software has collapsed.
Speed is the new currency: The velocity at which software can be written has changed dramatically, largely due to AI coding assistants. Ng admitted that keeping up with these tools is exhausting (his “favorite tool” changes every three to six months), but it’s non-negotiable. He noted that being even “half a generation behind” on these tools results in a significant productivity drop. The modern AI developer needs to be hyper-adaptive, constantly relearning their workflow to maintain speed.
The bottleneck has shifted to what to build. As writing code becomes cheaper and faster, the bottleneck in software development shifts from implementation to specification.
Ng highlighted a rising trend in Silicon Valley: the collapse of the Engineer and Product Manager (PM) roles. Traditionally, companies operated with a ratio of one PM to every 4–8 engineers. Now, Ng sees teams trending toward 1:1 or even collapsing the roles entirely. Engineers who can talk to users, empathize with their needs, and decide what to build are becoming the most valuable assets in the industry. The ability to write code is no longer enough; you must also possess the product instinct to direct that code toward solving real problems.
The company you keep: Ng’s final piece of advice focused on network effects. He argued that your rate of learning is predicted heavily by the five people you interact with most. He warned against the allure of “hot logos” and joining a “company of the moment” just for the brand name and prestige-by-association. He shared a cautionary tale of a top student who joined a “hot AI brand” only to be assigned to a backend Java payment processing team for a year. Instead, Ng advised optimizing for the team rather than the company. A smaller, less famous company with a brilliant, supportive team will often accelerate your career faster than being a cog in a prestigious machine.
Surviving the market correction
Ng handed over the stage to Moroney, who started by presenting the harsh realities of the job market. He characterized the current era (2024–2025) as “The Great Adjustment,” following the over-hiring frenzy of the post-pandemic boom.
The three pillars of success: To survive in a market where “entry-level positions feel scarce,” Moroney outlined three non-negotiable pillars for candidates:
- Understanding in depth: You can’t just rely on high-level APIs. You need academic depth combined with a “finger on the pulse” of what is actually working in the industry versus what is hype.
- Business focus: This is the most critical shift. The era of “coolness for coolness’ sake” is over. Companies are ruthlessly focused on the bottom line. Moroney put a spin on the classic advice, “Dress for the job you want, not the job you have,” suggesting that job-seekers “not let your output be for the job you have, but for the job you want.” He based this on his own experience of landing a role at Google not by preparing to answer brain teasers, but by building a stock prediction app on their cloud infrastructure before the interview.
- Bias towards delivery: Ideas are cheap; execution is everything. In a world of “vibe coding” (a term he doesn’t like; he prefers something more like “prompting code into existence” or “prompt coding”), what will set you apart is the ability to actually ship reliable, production-grade software.
The trap of “vibe coding” and technical debt: Moroney addressed the phenomenon of using LLMs to generate entire applications. These tools may be powerful, but he warned that the code they produce also carries massive “technical debt.”
The four realities of modern AI work
Moroney outlined four harsh realities that define the current workspace, warning that the “coolness for coolness’ sake” era is over. These realities represent a shift in what companies now demand from engineers.
1. Business focus is non-negotiable. Moroney noted a significant cultural “pendulum swing” in Silicon Valley. For years, companies over-indexed on allowing employees to bring their “whole selves” to work, which often prioritized internal activism over business goals. That era is ending. Today, the focus is strictly on the bottom line. He warned that while supporting causes is important, in the professional sphere, “business focus has become non-negotiable.” Engineers must align their output directly with business value to survive.
2. Risk mitigation is the job. When interviewing, the number one skill to demonstrate is not just coding, but the ability to identify and manage the risks of deploying AI. Moroney described the transition from heuristic computing (traditional code) to intelligent computing (AI) as inherently risky. Companies are looking for “Trusted Advisors” who can articulate the dangers of a model (hallucinations, security flaws, or brand damage) and offer concrete strategies to mitigate them.
3. Responsibility is evolving. “Responsible AI” has moved from abstract social ideals to hardline brand protection. Moroney shared a candid behind-the-scenes look at the Google Gemini image generation controversy (where the model refused to generate images of Caucasian people due to over-tuned safety filters). He argued that responsibility is no longer just about “fairness” in a fluffy sense; it is about preventing catastrophic reputational damage. A “responsible” engineer now ensures the model doesn’t just avoid bias, but actually works as intended without embarrassing the company.
4. Learning from mistakes is constant. Because the industry is moving so fast, mistakes are inevitable. Moroney emphasized that the ability to “learn from mistakes” and, crucially, to “give grace” to colleagues when they fail is a requirement. In an environment where even the biggest tech giants stumble publicly (as seen with the Gemini launch), the ability to iterate quickly after a failure is more valuable than trying to be perfect on the first try.
Technical debt
Moroney compared technical debt to a mortgage: it isn’t inherently bad, but you must be able to service it. He defined the new role of the senior engineer as a “trusted advisor.” If a VP prompts an app into existence over a weekend, it is the senior engineer’s job to understand the security risks, maintainability, and hidden bugs within that spaghetti code. You must be the one who understands the implications of the generated code, not just the one who generated it.
The dot-com parallel: Moroney drew a sharp parallel between the current AI frenzy and the dot-com bubble of the late 1990s. While he acknowledged that we are undoubtedly in a financial bubble, with venture capital pouring billions into startups with zero revenue, he emphasized that this does not mean the technology itself is a sham.
Just as the internet fundamentally changed the world despite the 2000 crash wiping out “tourist” companies, AI is a genuine technological shift that is here to stay. He warned students to distinguish between the valuation bubble (which will burst) and the utility curve (which will keep rising), advising them to ignore the stock prices and focus entirely on the tangible value the technology provides.
The bursting of this bubble, which Moroney terms “The Great Adjustment,” marks the end of the “growth at all costs” era. He argued that the days of raising millions on a “cool demo” or “vibes” are over. The market is violently correcting toward unit economics, meaning AI companies must now prove they can make more money than they burn on compute costs. For engineers, this signals a critical shift in career strategy: job security no longer comes from working on the flashiest new model, but from building unglamorous, profitable applications that survive the coming purge of unprofitable startups.
Future-proofing: “Big AI” vs. “Small AI”
Perhaps the most strategic insight from the lecture was Moroney’s prediction of a coming “bifurcation” in the AI industry over the next five years.
The industry is splitting into two distinct paths:
- “Big AI”: The AI made by massive, centralized players such as OpenAI, Google, and Anthropic, who are chasing after AGI. This relies on ever-larger models hosted in the cloud.
- “Small AI”: AI systems based on open-weight models (he prefers “open-weight” to “open source” when describing AI models) that are self-hosted or run on-device. Moroney also calls this “self-hosted AI.”
Moroney is bullish on “Small AI.” He explained that many industries, such as movie and television studios and law firms, are very protective of their intellectual property. These businesses will never send their intellectual property to a centralized model like GPT-4 due to privacy and IP concerns. This creates a massive, underserved market for engineers who can fine-tune small models to run locally on a device or private server.
Moroney urged the class to diversify their skills. Don’t just learn how to call an API; learn how to optimize a 7-billion parameter model to run on a laptop CPU. That is where the uncrowded opportunity lies.
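To make that concrete, here’s a minimal sketch of what “Small AI” inference can look like in practice. This isn’t from the lecture: it assumes you’ve installed the llama-cpp-python package and downloaded a quantized GGUF checkpoint of a roughly 7-billion-parameter open-weight model (the file path below is a placeholder), and it runs entirely on a local CPU so no data ever leaves the machine.

```python
# Minimal local-inference sketch: run a quantized ~7B open-weight model on a laptop CPU.
# Assumes `pip install llama-cpp-python` and a GGUF file downloaded ahead of time;
# the model path is a placeholder, not a real file name.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/open-weight-7b-q4.gguf",  # placeholder path to a quantized checkpoint
    n_ctx=2048,    # context window
    n_threads=8,   # CPU threads; tune to your machine
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant running entirely on-device."},
        {"role": "user", "content": "Summarize this contract clause without sending it anywhere: ..."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```

The point of the sketch is the deployment shape, not the specific library: the model weights, the prompt, and the output all stay on hardware the business controls, which is exactly the property IP-sensitive industries are paying for.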
Agentic workflows: the “how” of future engineering
Moroney’s advice was to stop thinking of agents as magic and start treating them as a rigorous engineering workflow consisting of four steps (a minimal code sketch follows the list):
- Intent: Understanding exactly what the user wants.
- Planning: Breaking that intent down into steps.
- Tools: Giving the model access to specific capabilities (search, code execution).
- Reflection: Checking if the result met the intent.
He shared a demo of a movie-making tool where simply adding this agentic loop transformed a hallucinated, glitchy video into a coherent scene with emotional depth.
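Here’s a minimal sketch of that four-step loop in plain Python. None of it is from the lecture: `call_llm` is a stand-in for whichever model client you use (a cloud API or a local model), and the tool registry and prompts are purely illustrative.

```python
# A minimal agentic loop: intent -> planning -> tools -> reflection.
# `call_llm` is any function that takes a prompt string and returns the model's reply;
# `tools` maps tool names (e.g. "search", "run_code") to functions that take a string input.
from typing import Callable, Dict

LLM = Callable[[str], str]

def run_agent(
    user_request: str,
    call_llm: LLM,
    tools: Dict[str, Callable[[str], str]],
    max_iterations: int = 3,
) -> str:
    # 1. Intent: restate what the user actually wants.
    intent = call_llm(f"Restate the user's goal in one sentence:\n{user_request}")

    # 2. Planning: break the intent down into concrete steps.
    plan = call_llm(f"List the numbered steps needed to achieve this goal:\n{intent}")

    result = ""
    for _ in range(max_iterations):
        # 3. Tools: let the model pick a capability and the input to give it.
        choice = call_llm(
            f"Goal: {intent}\nPlan: {plan}\nProgress so far: {result or 'none'}\n"
            f"Available tools: {', '.join(tools)}\n"
            "Reply with '<tool name>: <tool input>' or 'DONE' if the goal is met."
        )
        if choice.strip().upper().startswith("DONE"):
            break
        tool_name, _, tool_input = choice.partition(":")
        tool = tools.get(tool_name.strip())
        result += tool(tool_input.strip()) if tool else f"\n[unknown tool: {tool_name.strip()}]"

        # 4. Reflection: check whether the result actually satisfies the original intent.
        critique = call_llm(
            f"Does this output satisfy the goal '{intent}'?\n{result}\n"
            "Answer 'YES' or explain what is missing."
        )
        if critique.strip().upper().startswith("YES"):
            break

    return result
```

In practice you’d plug in real tools and a real model client, but the shape of the loop, intent, plan, tool call, reflection, is the part Moroney argues engineers need to own as a workflow rather than treat as magic.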
Conclusion: Work hard
I’ll close this set of notes with something Ng said at the end of his introduction to the lecture, which he described as “politically incorrect”: work hard.
While he acknowledged that not everyone is in a situation where they can do so, he pointed out that among his most successful PhD students, the common denominator was an incredible work ethic: nights, weekends, and the “2 AM hyperparameter tuning.”
In a world drowning in hype, Ng’s and Moroney’s “brutally honest” playbook is actually quite simple:
- Use the best tools to move fast.
- Understand the business problem you’re trying to solve, and understand it deeply.
- Ignore the noise of social media and the trends being hyped there. Build things that actually work.
- And finally, to quote Ng: “Between watching some dumb TV show versus finding your agentic coder on a weekend to try something… I’m going to choose the latter almost every time.”