
The New Stack Podcast

Latest episodes

381 episodes

  • The New Stack Podcast

    Why long-running AI agents break on HTTP and how Ably is fixing it

    06/05/2026 | 31 min
    In this episode of The New Stack Makers, Matthew O’Riordan, CEO of Ably, explains how infrastructure originally built for human collaboration is now well-suited for long-running AI agents. While Ably initially resisted positioning itself as an AI company, the rise of agents that reason, call tools, and operate over extended periods revealed a natural fit for its real-time communication platform.

    O’Riordan highlights the limitations of HTTP for these use cases. While effective for short, request-response interactions, HTTP struggles with persistent, stateful experiences—such as handling dropped connections, multi-device usage, or mid-task interruptions. To address this, a new “durable session” layer is emerging, enabling continuous synchronization between agents and users through shared state, presence, and recovery mechanisms.

    Ably’s solution, AI Transport, augments existing architectures by keeping HTTP for requests while shifting responses to durable sessions. Features like mutable message streams and “live objects” allow seamless reconnection and collaboration. The goal is to provide a drop-in layer that developers can adopt without rethinking their stack—moving beyond traditional pub/sub models.
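The durable-session idea described above can be sketched in a few lines. This is a minimal illustration, not Ably's AI Transport API: the `DurableSession` class, its method names, and the sequence-number scheme are all hypothetical, chosen only to show how a client could reconnect mid-task without losing agent output.

```python
# Hypothetical sketch of a durable session: the server buffers agent
# response chunks with sequence numbers, so a client that drops its
# connection (or switches devices) can replay what it missed.

class DurableSession:
    def __init__(self, session_id: str):
        self.session_id = session_id
        self.seq = 0                               # last sequence number issued
        self.buffer: list[tuple[int, str]] = []    # full (seq, chunk) history

    def publish(self, chunk: str) -> int:
        """Server side: append an agent response chunk under a new seq number."""
        self.seq += 1
        self.buffer.append((self.seq, chunk))
        return self.seq

    def resume_from(self, last_seen: int) -> list[str]:
        """Client side: on reconnect, replay every chunk published after
        the last sequence number this client acknowledged."""
        return [chunk for seq, chunk in self.buffer if seq > last_seen]

session = DurableSession("agent-task-42")
for part in ["Booking flight...", "Flight booked.", "Emailing receipt..."]:
    session.publish(part)

# A client that disconnected after seq 1 catches up on reconnect:
missed = session.resume_from(1)
```

A plain HTTP response stream has no equivalent of `resume_from`: once the connection drops, the partial response is gone, which is exactly the gap the durable-session layer targets.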

    Learn more from The New Stack about Ably and AI Transport:

    How MCP Uses Streamable HTTP for Real-Time AI Tool Interaction

    Ably Touts Real-Time Starter Kits for Vercel and Netlify

    AI Agents Need Help. Here’s 4 Ways To Ship Software Reliably

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • The New Stack Podcast

    Why the Linux Foundation adopted MCP, with Jim Zemlin and Mazin Gilbert

    06/05/2026 | 32 min
    Agentic AI is advancing rapidly, with open-source projects racing to keep pace with real-world deployment. To accelerate progress, the Linux Foundation consolidated key technologies—Model Context Protocol (MCP), Goose, and AGENTS.md—under the newly formed Agentic AI Foundation (AAIF) in late 2025. At the MCP Dev Summit in New York City, Linux Foundation CEO Jim Zemlin and newly appointed AAIF executive director Mazin Gilbert discussed this transition. Zemlin explained that leading both organizations was unsustainable, prompting a careful search for a leader with both technical expertise and collaborative leadership skills.

    Gilbert now takes on the challenge of guiding AAIF as it shapes the emerging agentic AI ecosystem. While the foundation currently oversees three projects, its broader mission involves defining the future architecture of agent-driven systems—deciding what to build, when, and why. These decisions will influence the trajectory of open-source AI development. The conversation also highlights the importance of open collaboration, funding dynamics, and early adopters in shaping the agentic stack’s evolution.


    Learn more from The New Stack about the latest in open source projects and the Linux Foundation:

    Anthropic Donates the MCP Protocol to the Agentic AI Foundation

    SAFE-MCP, a Community-Built Framework for AI Agent Security

    Google Donates the Agent2Agent Protocol to the Linux Foundation

  • The New Stack Podcast

    Fresh data has us asking, does AI demand Kubernetes?

    01/05/2026 | 23 min
    Kubernetes is rapidly emerging as the de facto operating system for AI, with two-thirds of organizations using it for generative AI inference and 82% adopting it in production. Its ecosystem — including tools like Kubeflow — enables organizations to build, scale, and retain control of AI systems through open, community-driven infrastructure. Bob Killen of CNCF and Liam Bollmann-Dodd of SlashData shared insights from recent reports showing that AI success still hinges on strong engineering fundamentals—especially internal developer platforms and overall developer experience.

    While AI-generated code accelerates development, it shifts bottlenecks to DevOps, reliability, and security, increasing operational complexity. As a result, operator experience and well-defined guardrails have become critical to safely scaling AI. These controls help constrain both human and AI developers, reducing risk while enabling speed. At the same time, organizations are evolving team structures, expanding platform engineering groups to support internal users more effectively. Despite growing complexity, the core lesson remains consistent: open source innovation thrives on people, processes, and collaboration as much as on technology itself.

    Learn more from The New Stack about Kubernetes and its emergence as an operating system for AI:

    Kubernetes and AI: Are They a Fit?

    How AI Is Pushing Kubernetes Storage Beyond Its Limits

    Kubernetes and AI Are Shaping the Next Generation of Platforms

  • The New Stack Podcast

    How SUSE positions itself as the infrastructure layer for the AI era

    30/04/2026 | 26 min
    In this episode of The New Stack Makers, Pete Smails outlines how SUSE is evolving from its Linux roots into an AI-native infrastructure platform. Speaking at KubeCon + CloudNativeCon Europe 2026, Smails explains the company’s strategy to unify AI, containers and virtual machines on a single open, enterprise-ready foundation. Central to this is SUSE Rancher Prime, which enables consistent orchestration across hybrid and multi-cloud environments, alongside SUSE Virtualization for modernizing legacy systems.

    A key innovation is “Liz,” a context-aware AI agent embedded in Rancher Prime that helps engineers identify vulnerabilities, troubleshoot deployments and interact with infrastructure using natural language. Unlike generic AI tools, Liz understands real-time cluster states and uses Model Context Protocol to deliver actionable insights.

    Smails emphasizes developer experience as critical to adoption, highlighting Rancher Developer Access for simplified local Kubernetes workflows. Overall, SUSE aims to deliver secure, automated infrastructure that reduces complexity while accelerating cloud-native and AI adoption.

    Learn more from The New Stack about SUSE:

    SUSE Displays Enhanced Enterprise Linux at SUSECON

    SUSE Launches a Sovereign Premium Support Service for EU Customers

  • The New Stack Podcast

    Cut AI token usage by 96%? Here’s how AWS Strands Agents does it.

    29/04/2026 | 28 min
    In this episode of The New Stack Makers, AWS developer advocate Morgan Willis demonstrates Strands Agents, an open source agentic framework with rapid adoption since its launch. Using a simple accounting API, she walks through three approaches to retrieving a customer’s latest invoice, highlighting how design choices dramatically impact efficiency. The initial method maps each API endpoint to a separate tool, requiring five chained calls and consuming about 52,000 tokens. By shifting to intent-based tools—focused on outcomes rather than individual data operations—the same task is completed in a single call using just 2,000 tokens, improving both efficiency and reasoning.
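The contrast between endpoint-per-tool and intent-based designs can be sketched as follows. This is an illustrative stand-in, not the code Willis demonstrates: the fake accounting backend, the `tool` wrapper, and a three-call chain (the episode's example chains five) are all assumptions made for brevity. The point it shows is real, though: each model-facing tool call is a round trip through the model's context, so collapsing the chain into one intent-based tool cuts both calls and tokens.

```python
# Count model-facing tool calls: each one costs a full round trip
# through the model's context window (and therefore tokens).
MODEL_TOOL_CALLS = {"count": 0}

def tool(fn):
    """Mark a function as a model-facing tool; every invocation is counted."""
    def wrapper(*args, **kwargs):
        MODEL_TOOL_CALLS["count"] += 1
        return fn(*args, **kwargs)
    return wrapper

# --- fake accounting backend (illustrative data only) ---
def _search_customer(name): return "C-1"
def _list_invoices(cid): return ["I-1", "I-2"]
def _get_invoice(iid): return {"id": iid, "total": 120.0}

# Approach 1: one tool per endpoint -> the agent chains them itself.
search_customer = tool(_search_customer)
list_invoices = tool(_list_invoices)
get_invoice = tool(_get_invoice)

def agent_endpoint_style(name):
    cid = search_customer(name)       # model-facing call 1
    iid = list_invoices(cid)[-1]      # model-facing call 2
    return get_invoice(iid)           # model-facing call 3

# Approach 2: one intent-based tool -> the chaining happens inside
# the tool, outside the model's context window.
@tool
def get_latest_invoice(name):
    cid = _search_customer(name)
    return _get_invoice(_list_invoices(cid)[-1])

agent_endpoint_style("Acme")
n_chained = MODEL_TOOL_CALLS["count"]       # 3 model-facing calls

MODEL_TOOL_CALLS["count"] = 0
get_latest_invoice("Acme")
n_intent = MODEL_TOOL_CALLS["count"]        # 1 model-facing call
```

Intermediate results (customer IDs, invoice lists) never enter the model's context in the second approach, which is where the bulk of the token savings comes from.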

    In a third iteration, tools are hosted on a remote MCP server via AWS Agent Core Gateway, with semantic search limiting the agent’s toolset to only what’s relevant per query, further reducing token usage. Willis emphasizes that narrowly scoped agents outperform general-purpose ones, delivering better speed, accuracy, and context efficiency. Designing smaller, specialized agents with tailored tools is key as tool ecosystems expand.
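The per-query tool filtering in the third iteration can be sketched like this. A real deployment would use embedding-based semantic search; the crude word-overlap score, the registry contents, and `select_tools` below are all hypothetical, kept dependency-free only to show the shape of the idea: rank a large tool registry against the query and expose just the top-k tools to the agent.

```python
# Hypothetical sketch: filter a tool registry per query so the agent
# only sees relevant tools, instead of the whole (token-hungry) list.

def relevance(query: str, description: str) -> float:
    """Crude Jaccard word overlap, standing in for embedding similarity."""
    q, d = set(query.lower().split()), set(description.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

TOOL_REGISTRY = {
    "get_latest_invoice": "fetch the most recent invoice for a customer",
    "create_customer":    "create a new customer record",
    "refund_payment":     "issue a refund for a payment",
    "list_products":      "list products in the catalog",
}

def select_tools(query: str, k: int = 2) -> list[str]:
    """Return only the k most relevant tool names for this query."""
    ranked = sorted(TOOL_REGISTRY,
                    key=lambda t: relevance(query, TOOL_REGISTRY[t]),
                    reverse=True)
    return ranked[:k]

picked = select_tools("show me the customer's latest invoice", k=2)
```

Only the descriptions of the selected tools are then injected into the agent's context, so a registry of hundreds of tools costs roughly the same per query as a registry of two.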

    Learn more from The New Stack about Strands Agents and MCP:

    AWS Launches Its Take on an Open Source AI Agents SDK

    What Is MCP? Game Changer or Just More Hype?

    MCP’s biggest growing pains for production use will soon be solved



About The New Stack Podcast

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Podcast website
