The New Stack Podcast

370 episodes

  • The New Stack Podcast

    As agentic AI explodes, Amazon doubles down on MCP

    16/04/2026 | 24 min
    At the MCP Summit in New York City, Clare Liguori of Amazon Web Services discussed the rapid rise of the Model Context Protocol (MCP), now a leading way to connect AI agents with tools and data. Originally developed by Anthropic and later transferred to the Linux Foundation, MCP has seen surging enterprise adoption as agentic AI expands.

    Liguori highlighted her dual role shaping MCP’s evolving specification, including work on integrating webhooks, events, and notifications to support always-on AI agents. AWS has actively contributed features like Tasks and Elicitations and offers managed MCP servers, positioning itself as both contributor and experimental platform for emerging capabilities.
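
    To ground the protocol in code, here is a minimal sketch of an MCP tool server built with FastMCP from the official MCP Python SDK; the `lookup_order` tool and its data are invented for illustration.

    ```python
    # Minimal MCP server sketch (assumes the official `mcp` Python SDK is
    # installed). The tool name and order data below are hypothetical.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("order-lookup")

    @mcp.tool()
    def lookup_order(order_id: str) -> str:
        """Return an order's status so an AI agent can report on it."""
        # A real server would query a database or API here.
        fake_orders = {"A-1001": "shipped", "A-1002": "processing"}
        return fake_orders.get(order_id, "unknown order")

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio to an MCP-capable agent
    ```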

    This collaboration illustrates how corporate involvement can accelerate open-source innovation and adoption. Looking ahead, MCP’s role as connective infrastructure for AI agents is expected to grow, especially as tools become more accessible. With broader adoption of AI development platforms across non-engineering roles, MCP could help extend automation beyond tech teams to businesses of all sizes.

    Learn more from The New Stack about the latest around the Model Context Protocol (MCP):

    MCP: The Missing Link Between AI Agents and APIs

    Beyond the vibe code: The steep mountain MCP must climb to reach production

    MCP is everywhere, but don’t panic. Here’s why your existing APIs still matter.

  • The New Stack Podcast

    A year in, Google wants its Axion processors to feel like a scheduling decision

    15/04/2026 | 22 min
    At KubeCon Europe, Google Cloud’s Jago Macleod and Abdel Sghiouar argued that adopting Arm for Kubernetes workloads has shifted from a complex migration to a practical, low-friction choice. After a year of production use, Google’s custom Arm-based Axion processors—powering C4A and N4A instances—are positioned as broadly viable for most containerized applications, offering strong gains in performance, cost efficiency, and energy usage compared to x86.

    Rather than requiring a full overhaul, moving to Arm typically involves recompiling containers for a multi-architecture target and gradually rolling out via Kubernetes practices like canary deployments. While edge cases exist, they are relatively uncommon.
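
    As a concrete sketch of that path, the example below uses the official Kubernetes Python client to steer a Deployment onto Arm nodes once its image is built for both architectures; the Deployment name `web` is hypothetical.

    ```python
    # Sketch: shift a Deployment onto Arm nodes, assuming its image is
    # already published as a multi-architecture manifest (amd64 + arm64).
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() inside a cluster

    patch = {
        "spec": {
            "template": {
                "spec": {
                    # Well-known label the kubelet sets on every node.
                    "nodeSelector": {"kubernetes.io/arch": "arm64"}
                }
            }
        }
    }

    client.AppsV1Api().patch_namespaced_deployment(
        name="web", namespace="default", body=patch
    )
    ```

    A canary variant of the same idea applies the selector to a small second Deployment first and shifts traffic over gradually.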

    A key enabler is GKE’s compute classes, which allow workloads to express preferences across VM types, turning infrastructure decisions into automated scheduling choices rather than manual provisioning.
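
    A hedged sketch of what such a compute class might look like, created here through the Kubernetes Python client: the field names follow GKE's ComputeClass custom resource as I recall it and should be checked against current docs, and the class name is invented.

    ```python
    # Hypothetical GKE ComputeClass preferring Axion-based N4A nodes with an
    # x86 N4 fallback; workloads would opt in via a
    # `cloud.google.com/compute-class: prefer-arm` nodeSelector.
    from kubernetes import client, config

    config.load_kube_config()

    compute_class = {
        "apiVersion": "cloud.google.com/v1",
        "kind": "ComputeClass",
        "metadata": {"name": "prefer-arm"},
        "spec": {
            # Tried in order: Arm first, x86 as fallback.
            "priorities": [
                {"machineFamily": "n4a"},
                {"machineFamily": "n4"},
            ],
        },
    }

    client.CustomObjectsApi().create_cluster_custom_object(
        group="cloud.google.com", version="v1",
        plural="computeclasses", body=compute_class,
    )
    ```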

    Ultimately, the conversation points to a larger constraint: energy. As AI workloads grow, efficiency—measured in “tokens per watt”—is emerging as the defining metric, with cost savings translating directly into greater compute capacity.
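
    As a back-of-the-envelope illustration of the metric (all numbers invented):

    ```python
    # "Tokens per watt" is throughput divided by power draw, i.e. tokens per
    # joule: at equal throughput, a lower-power node yields more tokens for
    # the same energy budget. The figures below are made up for illustration.
    def tokens_per_watt(tokens_per_second: float, watts: float) -> float:
        return tokens_per_second / watts

    print(tokens_per_watt(900, 300))  # hypothetical x86 node -> 3.0
    print(tokens_per_watt(900, 240))  # hypothetical Arm node -> 3.75
    ```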

    Learn more from The New Stack about the latest developments around Google’s work with Axion: 

    Arm: See a Demo About Migrating a x86-Based App to ARM64 

    Do All Your AI Workloads Actually Require Expensive GPUs? 
  • The New Stack Podcast

    Can you make Kubernetes invisible? Here's why AWS is on a mission to do it.

    14/04/2026 | 23 min
    In this episode of The New Stack Makers, Jesse Butler, principal product manager for AWS Elastic Kubernetes Service, shares his vision for simplifying cloud-native computing. Since joining AWS in 2020, Butler has focused on making Kubernetes easier to use, emphasizing open source as a democratizing force. He highlights the role of the Cloud Native Computing Foundation (CNCF) in standardizing and governing open ecosystems while balancing community-driven innovation with commercial contributions.

    Butler describes Kubernetes as widely adopted—used in production by around 80% of enterprises—yet still overly complex. His goal is to make it “invisible,” much like Linux, by abstracting and consolidating services. He points to projects like Karpenter, which enables real-time node provisioning for efficient scaling; Kro, which simplifies resource orchestration; and Cedar, a flexible policy engine for fine-grained authorization.
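
    To make the Cedar example concrete, here is a hypothetical policy embedded as a Python string; Cedar's general grammar is real, but the entity types, action, and attributes are invented for this sketch rather than taken from any shipped schema.

    ```python
    # A hypothetical Cedar policy: members of the "developers" group may get
    # pods only in their own team's namespace. Entity and action names are
    # invented; a real Cedar deployment defines its own schema.
    CEDAR_POLICY = """
    permit (
        principal in Group::"developers",
        action == Action::"get",
        resource is Pod
    )
    when { resource.namespace == principal.team };
    """
    ```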

    He underscores the importance of open-source contributors, noting their critical yet often underappreciated role. Looking ahead, Butler envisions a future where automation and human collaboration further enhance usability and innovation in open-source software.

    Learn more from The New Stack about the latest around AWS Elastic Kubernetes Service:

    2026 Will Be the Year of Agentic Workloads in Production on Amazon EKS

    Amazon EKS Auto Mode wants to end Kubernetes toil — one node at a time

  • The New Stack Podcast

    The next stages of AI conformance in the cloud-native, open-source world

    09/04/2026 | 25 min
    Running AI models on Kubernetes has historically been inconsistent, with workloads behaving differently across cloud providers due to variations in GPUs, networking, and autoscaling. As organizations move AI from experimentation to production, standardization has become critical. In this episode of The New Stack Makers, Jonathan Bryce, Executive Director of the Cloud Native Computing Foundation, shared that the foundation's Kubernetes AI conformance program aims to solve this by ensuring portability, predictability, and production readiness for AI workloads across environments.

    The initiative reflects a broader industry shift: AI is moving from training-heavy workloads to inference at scale, with inference expected to dominate compute usage by the end of the decade. Unlike batch-based training, inference requires real-time, always-on performance, making Kubernetes an attractive platform due to its elasticity, GPU-aware autoscaling, and observability.

    The conformance program establishes baseline standards for handling accelerators like GPUs and TPUs, reducing vendor lock-in and simplifying deployment. Early adopters include major cloud providers and ecosystem players, while new projects like llm-d aim to bridge orchestration and inference. As requirements evolve, ongoing collaboration and recertification will ensure the standards stay aligned with real-world needs.
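
    One piece of that baseline is how accelerators are requested; the portable pattern today is an extended resource in the pod spec, sketched below with the official Kubernetes Python client (image and names are illustrative).

    ```python
    # Sketch: request one GPU via the extended resource `nvidia.com/gpu`,
    # the name conventionally exposed by NVIDIA's device plugin.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="inference-worker"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="model-server",
                    image="example.com/llm-server:latest",  # hypothetical image
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}
                    ),
                )
            ],
            restart_policy="Never",
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
    ```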

    Learn more from The New Stack about the latest developments around The Cloud Native Computing Foundation’s Kubernetes AI conformance program:

    CNCF: Kubernetes is ‘foundational’ infrastructure for AI

    Kubernetes Gets an AI Conformance Program — and VMware Is Already On Board

  • The New Stack Podcast

    Microsoft wants to make service mesh invisible

    08/04/2026 | 21 min
    At KubeCon EU 2026, Mitch Connors of Microsoft outlined a vision to make service meshes effectively invisible to users. Now working on Azure Kubernetes Application Network, a fully managed service built on Istio’s ambient mode, Connors aims to deliver core capabilities like mTLS without requiring users to engage with the complexity traditionally associated with service meshes. Ambient mode eliminates sidecar upgrade challenges by shifting functionality to node-level and waypoint proxies, though adoption still faces hurdles, including lagging CVE patching.
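
    For a sense of how small the user-facing surface is, enrolling a namespace in ambient mode comes down to a single label; the sketch below uses the official Kubernetes Python client, and the namespace name is hypothetical.

    ```python
    # Sketch: opt a namespace into Istio ambient mode. Pods in it then get
    # mTLS from node-level ztunnel proxies, with no sidecar injection or
    # pod restarts required.
    from kubernetes import client, config

    config.load_kube_config()

    client.CoreV1Api().patch_namespace(
        name="payments",  # hypothetical namespace
        body={"metadata": {"labels": {"istio.io/dataplane-mode": "ambient"}}},
    )
    ```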

    Connors emphasized that AI workloads are reshaping network demands, as request variability in large language models requires smarter routing and resource management. Istio is addressing this through a two-speed model: stable APIs for reliability and experimental integrations like Agent Gateway for emerging AI protocols. Features such as inference-aware routing and policy enforcement for approved LLM endpoints highlight the mesh’s growing role in AI governance.

    With multi-cluster support and GPU scarcity driving workload mobility, Microsoft’s approach bets that simplifying and abstracting the mesh will broaden adoption while meeting the evolving needs of AI-driven systems.

    Learn more from The New Stack about service meshes: 

    The Hidden Costs of Service Meshes

    All the Things a Service Mesh Can Do



About The New Stack Podcast

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack