Last week, Anitra and I attended both the Dev/Nexus conference and its companion conference, Advantage, an AI conference for CTOs, CIOs, VPs of Engineering, and other technical lead-types, which took place the day before Dev/Nexus. My thanks to Pratik Patel for the guest passes to both conferences!
I took copious notes and photos of all the Advantage talks and will be posting summaries here. This set of notes is from the second talk, AI Architecture for Tech Leaders: Building Blocks for AI Applications, presented by Pratik Patel.
Pratik Patel is VP of Developer Relations at Azul Systems. He's an all-around software and hardware nerd with experience in the healthcare, telecom, financial services, and startup sectors.
And here’s the abstract of his talk:
The AI space is moving incredibly fast; it seems new methodologies and technologies are coming every week. How's a technology leader (whether you're a VP of Engineering, Software Dev Manager, or Team Lead) supposed to understand what the true building blocks for this new class of applications are? How do you scope an AI development project, both in terms of developer time and cloud & AI infrastructure? Should you buy AI hardware or pay for API access to OpenAI, Claude, Gemini, etc.? Do you have sensitive information that you want to keep from leaking out to an external LLM provider? In this session, we'll tackle these issues and also discuss the evolution of applications and the difference between:
- existing applications that have added AI capability as an accessory
- this new class of applications that are built with AI in mind from the start
This session is intended to be interactive – I'll start by laying the foundation for building AI applications today, and we'll discuss the experiences of the tech leaders in the room so everyone can share and learn from each other.
My notes from Pratik’s talk are below.
Note: You can find a more developer-focused version of this talk in an earlier posting, from when Pratik came to Tampa to deliver this talk for the Tampa Bay Java User Group and Tampa Bay AI Meetup.
Skate to where the puck is going

Pratik opened with the AI version of the Wayne Gretzky line: don’t build for where AI is today, build for where it will be in six to twelve months. The pressure many tech leaders currently feel to add AI to everything so the organization can say it’s doing AI is producing a wave of surface-level implementations that won’t hold up. Sprinkling a chatbot on top of an existing application is not a strategy, but a reaction.
The analogy Pratik kept returning to was the shift from manual, infrequent deployments to cloud-native, continuously-delivered software. That transition wasn’t just about adopting new tools. It required a fundamental rethinking of how teams design, build, and release software. Organizations that made that leap early didn’t just move faster; they built a compounding capability advantage. Pratik’s argument is that we’re at a similar inflection point with AI, and the leaders who recognize it now will be the ones whose systems look prescient rather than antiquated in two years.
AI-native vs. AI-augmented: A critical distinction
The conceptual core of Pratik's talk is the difference between bolting AI onto an existing application and building an AI-native one from the ground up. An AI-native application doesn't just use AI as a feature; it is organized around AI's ability to learn, adapt, and act autonomously. Those three verbs matter. Most of what organizations are building today qualifies as AI-augmented at best: an agent that can act, but that doesn't genuinely learn from interactions or adapt its behavior without human intervention.
Pratik illustrated this with a content management system example. A traditional CMS requires humans to manually tag articles. An AI-native CMS handles tagging automatically, continuously improves based on feedback, and integrates that intelligence into the editorial workflow without requiring a separate AI plugin to be configured and maintained. The business value isn’t just efficiency, but that the system gets better over time in a way that a bolted-on tool never will.
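To make the "continuously improves based on feedback" idea concrete, here's a toy sketch (my illustration, not from the talk) of a feedback-driven tagging loop. The keyword lookup stands in for a real model; the point is that editor accept/reject decisions feed back into future suggestions without a human reconfiguring anything.

```python
from collections import defaultdict

class AutoTagger:
    """Toy sketch of an AI-native tagging loop: the tagger suggests tags,
    editors accept or reject them, and that feedback reshapes future scores."""

    def __init__(self, keyword_tags):
        # keyword -> candidate tag (a stand-in for a real model)
        self.keyword_tags = keyword_tags
        # tag -> learned weight, nudged up or down by editor feedback
        self.weights = defaultdict(lambda: 1.0)

    def suggest(self, article_text, threshold=0.5):
        words = article_text.lower().split()
        return sorted(
            {tag for word in words
             if (tag := self.keyword_tags.get(word))
             and self.weights[tag] >= threshold}
        )

    def feedback(self, tag, accepted):
        # Editors' decisions continuously adjust what gets suggested next time.
        self.weights[tag] += 0.2 if accepted else -0.6

tagger = AutoTagger({"java": "programming", "jvm": "programming", "gc": "performance"})
print(tagger.suggest("Tuning the JVM GC for Java services"))  # ['performance', 'programming']
tagger.feedback("performance", accepted=False)
print(tagger.suggest("Tuning the JVM GC for Java services"))  # ['programming']
```

The same shape applies regardless of what replaces the keyword lookup: the feedback path is what makes the system AI-native rather than a static plugin.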
His hotel booking example pushed the concept further. A truly AI-native booking platform wouldn’t just filter hotels by amenities; it would learn individual user preferences from past behavior, weight them against contextual signals, and surface recommendations that reflect both explicit preferences and inferred ones. More importantly, it would adapt its pricing and inventory strategies automatically in response to real-world events (examples: a competitor hotel going offline for renovations, a major sporting event driving demand) without requiring a human to catch the signal and manually adjust rates.
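A minimal sketch of the ranking idea in that example, with hypothetical data: learned per-user preference weights are blended with a contextual demand signal (say, a stadium event near downtown). All names and numbers here are illustrative assumptions, not anything Pratik specified.

```python
def score_hotel(hotel, learned_prefs, context_boosts):
    """Blend learned per-user amenity preferences with contextual
    demand signals into a single ranking score."""
    base = sum(learned_prefs.get(a, 0.0) for a in hotel["amenities"])
    # Contextual signals (e.g. a nearby event) adjust the score dynamically.
    boost = context_boosts.get(hotel["area"], 1.0)
    return base * boost

hotels = [
    {"name": "Downtown Inn", "area": "downtown", "amenities": ["gym", "pool"]},
    {"name": "Airport Suites", "area": "airport", "amenities": ["shuttle", "gym"]},
]
learned_prefs = {"gym": 0.8, "pool": 0.3, "shuttle": 0.5}   # inferred from past bookings
context_boosts = {"downtown": 1.4}                          # e.g. stadium event this weekend

ranked = sorted(hotels, key=lambda h: score_hotel(h, learned_prefs, context_boosts),
                reverse=True)
print([h["name"] for h in ranked])  # ['Downtown Inn', 'Airport Suites']
```

The AI-native part is that both `learned_prefs` and `context_boosts` would be updated continuously by the system itself, not hand-edited when someone notices demand has shifted.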
Foundational data strategy is the real competitive moat
Pratik was clear that all the architectural sophistication in the world collapses without a serious approach to data. The core question every leader should be asking is “Is the data your organization holds actually usable by an AI system?” Not just stored somewhere, but clean, current, structured in ways that a model can reason about, and governed in ways that ensure its quality over time. Most companies, when they’re honest, have to answer that question with “not really.”
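To make "is the data actually usable?" concrete, here's a minimal data-readiness audit sketch (my illustration, not from the talk): flag records that are incomplete or stale before they ever reach a model. The field names and the one-year staleness cutoff are assumptions.

```python
from datetime import date, timedelta

def audit_records(records, required_fields, max_age_days=365):
    """Minimal data-readiness audit: flag records that are incomplete
    or stale before they are fed to an AI system."""
    issues = []
    cutoff = date.today() - timedelta(days=max_age_days)
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
        updated = rec.get("updated")
        if updated and updated < cutoff:
            issues.append((i, "stale: last updated before cutoff"))
    return issues

records = [
    {"id": 1, "text": "valid", "updated": date.today()},
    {"id": 2, "text": "", "updated": date.today() - timedelta(days=800)},
]
print(audit_records(records, ["id", "text"]))
# [(1, "missing fields: ['text']"), (1, 'stale: last updated before cutoff')]
```

The point isn't this particular check; it's that checks like it run continuously in a pipeline, which is what "data quality as an engineering concern, not a cleanup project" looks like in practice.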
The cultural shift required here is moving from a "collect it and figure it out later" mentality to a data-first culture where data quality is treated as a continuous engineering concern, not a cleanup project. Pratik framed this as the AI equivalent of the DevOps automation mindset: just as teams had to stop thinking of deployment as a periodic event and start treating it as a constant, automated process, they now need to think about data not as a byproduct of operations but as the fuel that makes AI systems defensible.
Unstructured data adds another layer of complexity. RAG is the most common approach to incorporating things like PDFs and documents into an AI system, but Pratik was careful to note that “just do RAG” massively undersells the challenge. He’s catalogued over 36 distinct RAG implementation techniques, each with different trade-offs around chunking strategies, retrieval quality, and error rates. Leaders who treat RAG as a checkbox rather than an engineering discipline will find their AI systems returning confidently wrong answers from their own documents.
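To illustrate why "just do RAG" undersells the problem, here's a deliberately naive sketch (mine, not Pratik's) of the two steps every RAG pipeline has to get right: chunking and retrieval. Even in this toy version, the chunk size and overlap visibly determine whether the right context comes back.

```python
def chunk(text, size=40, overlap=10):
    """Fixed-size word chunking with overlap -- just one of many
    chunking strategies, and often the first thing a RAG pipeline has to tune."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def retrieve(query, chunks, k=2):
    """Naive lexical retrieval: rank chunks by word overlap with the query.
    Real systems use embeddings, but the failure mode is the same -- a bad
    chunking or ranking choice hands the model the wrong context."""
    q = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

doc = ("Heap tuning matters. The JVM garbage collector pauses can hurt "
       "latency. Set heap flags carefully.")
chunks = chunk(doc, size=6, overlap=2)
print(retrieve("garbage collector pauses", chunks, k=1))
# ['JVM garbage collector pauses can hurt']
```

Each of the dozens of techniques Pratik alluded to swaps out one of these pieces (semantic chunking, hybrid retrieval, re-ranking, and so on), and each swap changes what "confidently wrong" looks like.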
The AI-native development lifecycle
Building AI-native systems requires updating how teams think about the software development lifecycle itself. Pratik drew a direct parallel to the DevOps transformation: just as continuous integration and deployment automated away the pain of manual releases, AI-native development requires building automation into the AI feedback loop, from code generation assistance to automated testing of non-deterministic outputs.
The trickiest part of this is monitoring. Traditional software testing assumes deterministic behavior: you feed in known inputs and check the outputs against known expected values. AI systems don't work that way; the same prompt can produce different, equally valid responses.
Pratik described two approaches that are gaining traction:
- Human-in-the-loop feedback: the five-star rating prompt that many AI products now show after a response, which feeds real quality signals back into the system.
- “LLM as judge”: using a second AI model (potentially a smaller, cheaper one) to evaluate the outputs of your primary model, essentially automating quality checks at scale.
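The LLM-as-judge pattern is easy to sketch. Here `call_model` is a hypothetical stand-in for whatever client your judge model uses; the rubric prompt and 1–5 scale are my assumptions, not a prescribed format.

```python
def judge(question, answer, call_model):
    """LLM-as-judge sketch: a second (often smaller, cheaper) model grades
    the primary model's answer on a fixed rubric. `call_model` is a
    hypothetical stand-in for a real model client."""
    rubric = (
        "Rate the ANSWER to the QUESTION from 1 (wrong) to 5 (excellent). "
        "Reply with only the number.\n"
        f"QUESTION: {question}\nANSWER: {answer}"
    )
    reply = call_model(rubric)
    try:
        score = int(reply.strip())
    except ValueError:
        return None  # judge gave an unparseable reply; log it and skip
    return score if 1 <= score <= 5 else None

# Stand-in judge model for demonstration; a real one would call an API.
fake_judge_model = lambda prompt: "4"
print(judge("What is the JVM?", "A runtime for Java bytecode.", fake_judge_model))  # 4
```

Note the defensive parsing: a judge model is itself non-deterministic, so the evaluation layer needs the same robustness as the system it's evaluating.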
The practical implication for tech leaders is that shipping an AI-native application is not a one-time event followed by monitoring dashboards. It requires building the infrastructure for continuous retraining, output validation, and drift detection from day one. The underlying model, the data it draws from, and the world it’s reasoning about all change over time. A system that doesn’t account for that will quietly degrade in ways that are hard to detect until users start complaining.
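A crude drift check, sketched under my own assumptions rather than anything from the talk: compare a rolling window of an output-quality metric (say, judge scores or user ratings) against a baseline window, and alert when it slips. Production systems use proper statistical tests, but the principle is the same: measure continuously and compare against a reference.

```python
from statistics import mean

def drift_alert(baseline_scores, recent_scores, tolerance=0.1):
    """Crude drift detection: alert when the mean of a quality metric
    drops more than `tolerance` below its baseline window."""
    drop = mean(baseline_scores) - mean(recent_scores)
    return drop > tolerance

baseline = [0.92, 0.90, 0.91, 0.93]   # quality scores at launch
recent = [0.84, 0.79, 0.81, 0.80]     # scores this week
print(drift_alert(baseline, recent))  # True
```

This is the "quietly degrade" failure mode made visible: without a comparison like this running from day one, nothing in the system tells you the model, the data, or the world has moved.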

