Understanding Data Complexity in Enterprise AI Systems with John Willis
What would Founder, Systems Architect, and Deeptech Strategist John Willis like you to understand? The history of AI and the risks associated with data exhaust. Willis recently published Rebels of Reason, an AI history told through the stories of lesser-known technologists who built the foundation for modern-day AI. One of the biggest threats he’s flagging for enterprise AI users is data exhaust, the byproduct information generated while using digital systems. From manufacturing process details to AI model configurations, this data trail is growing fast and can be collected and exploited.

Also in this episode, you’ll hear about:
- Why legacy systems and tech debt still haunt AI adoption
- How the NORMAL stack embeds AI governance into every layer of the enterprise
- Why RAG isn’t dead

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
--------
1:27:58
Navigating AI Evaluation and Observability with Atin Sanyal
As GenAI tools have become more intelligent and robust, the reliability of their output has decreased. Enter Galileo: an AI reliability platform designed for GenAI applications. Atin Sanyal, the Co-founder and CTO of Galileo, has a background building machine learning tech at Uber and Apple. One innovative technique that sets Galileo apart is ChainPoll, their hallucination detection methodology that uses consensus scoring and prompts the LLM to outline its step-by-step reasoning process.

In this episode, hosts Aaron Fulkerson and Mark Hinkle talk to Atin about:
- What evaluation agents are, and why they get smarter over time
- How Galileo helps enterprises evolve their own AI quality metrics
- Why data quality and confidential computing will become increasingly important to enterprises building AI systems

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
--------
52:49
On the Cutting Edge of Agentic AI with João Moura
João Moura built CrewAI, an open-source agentic AI framework, with a central vision: to make agent creation feel borderline magical. Inspired by the Ruby on Rails philosophy of “convention over configuration,” CrewAI is designed for speed, clarity, and ease of use. What we love about João’s approach to agentic AI is that he’s not just disrupting the market. He’s shifting the way enterprise leaders think about long-term workflows and processes, too.

In this episode, hosts Aaron Fulkerson and Mark Hinkle talk to João about:
- Why interoperability across your stack is essential for scaling AI
- The importance of building secure AI systems
- How managers are uniquely equipped to build effective agents
- The dangers of “agent washing” and how to see through the hype

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
--------
1:13:34
How AI Is Reshaping Enterprise Technology with James Kaplan
James Kaplan has been consulting enterprise clients on how to adopt and extract value from cutting-edge technologies for over 25 years. His dual role as Partner at McKinsey and CTO at McKinsey Technology gives him a unique vantage point into how CIOs and CTOs can harness AI to drive automation, increase productivity, and enhance data security.

In this episode, host Aaron Fulkerson talks to James about:
- How AI is changing the enterprise technology landscape
- What enterprises should be thinking about as they’re adopting and building AI systems
- How to avoid the “IT doom loop”

If you want to learn more about AI and how to benefit from it responsibly, visit opaque.co
--------
1:32:14
Building Transparent, Open-Source AI with Sriram Raghavan
Hosts Aaron Fulkerson and Mark Hinkle sit down with Sriram Raghavan, Vice President of IBM Research AI, at the AllThingsOpen.ai conference. As an experienced industry expert, Sriram leads a global team of over 750 research scientists and engineers focused on advancing AI. Sriram and IBM believe that the future of AI is open source. That’s why IBM recently contributed three AI projects (Docling, Data Prep Kit, and BeeAI) to the Linux Foundation. And while there is a lot of hype around LLMs right now, IBM is leaning into smaller “fit-for-purpose” models that give developers choice.

In the episode, Sriram also digs into:
- IBM’s approach to open-source models
- What innovations he sees coming for enterprise AI
- How his team is approaching responsible, secure AI
AI Confidential is a podcast about AI and how to benefit from it responsibly. We speak with leaders from Microsoft, Accenture, NVIDIA, Google Cloud — and more — about how confidential AI is reshaping business.