
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
Latest episode

714 episodes

  • Gemini 3.1, Codex Spark Demo & Apple AI Rumors

    19/2/2026 | 54 min
    Beth Lyons and Karl Yeh open with rumors around Apple exploring multiple AI wearables, including smart glasses, an AI pin/pendant, and AI-enhanced AirPods. They discuss ByteDance’s “Seed Dance” and the practical limits of enforcement once generative model capabilities are widely available. The episode then shifts into workflow and tooling: a Figma + Claude “code to canvas” concept and a Codex Spark speed demo for processing transcripts and producing structured outputs. They close by pointing viewers to try Gemini in AI Studio and tease a follow-up discussion (including Google Lyria) for the next show.

    Key Points Discussed

    00:00:17 Opening + what to expect today
    00:01:31 Apple rumored AI wearables: smart glasses, pin/pendant, AI AirPods
    00:10:29 ByteDance “Seed Dance” safeguards + cease-and-desist discussion
    00:12:19 Access friction for Chinese services + “wait until it lands elsewhere” approach
    00:15:32 Figma + Claude “code to canvas” workflow (dev → design handoff)
    00:35:19 “Finished” cues/notifications for agent workflows (with jokes)
    00:36:41 Codex Spark speed demo begins
    00:38:32 Measuring the run: results in ~10 seconds + what it’s doing
    00:48:56 A 5-stage workflow framing: brainstorming → planning → work → review → compound
    00:50:45 Gemini 3.1 in Google/AI Studio + staying current vs. slower on-prem timelines
    00:53:48 Wrap-up: “go try Gemini,” tease Google Lyria for tomorrow, goodbye

    The Daily AI Show Co Hosts: Beth Lyons, Karl Yeh
  • AI Firefighting, Sonnet 4.6, and RNA Breakthroughs

    18/2/2026 | 1 h 4 min
    This episode covers a wide range of AI developments, starting with an AI-powered firefighting robot swarm achieving high simulated success rates. The hosts examine Claude Sonnet 4.6 outperforming Opus 4.6 in certain benchmarks, pricing differences, and the broader model competition landscape including Alibaba’s Qwen 3.5. They discuss Ethan Mollick’s framework for understanding the agentic AI era and explore Meta’s patent for posthumous digital personas. The show concludes with an AI in Science segment highlighting DRFOLD-II, a new deep learning system for RNA structure prediction.

    Key Points Discussed

    00:00:00 AI Firefighting Robot Swarm Achieves 99.67% Success
    00:15:52 Claude Sonnet 4.6 vs Opus 4.6: Benchmarks and Pricing Debate
    00:26:41 Prompt Repetition Improves Non-Reasoning Models
    00:29:06 Alibaba Qwen 3.5 and Open-Source Agentic Competition
    00:32:48 Ethan Mollick’s Agentic AI Framework (Models, Apps, Harnesses)
    00:39:57 Meta’s Patent for AI That Posts After You Die
    00:44:18 NotebookLM Adds Prompt-Based Slide Revisions and PowerPoint Export
    00:46:03 AI in Science: Neuromorphic Computing Advances
    00:48:03 DRFOLD-II: AI-Powered RNA Structure Prediction
    01:05:47 What the Hosts and Community Are Building
  • Grok 4.2, Robot Dancers, and the China Acceleration

    17/2/2026 | 1 h
    Tuesday’s show covered a wide sweep of AI infrastructure and competitive dynamics. The crew discussed Grok 4.2’s quiet release, rapid advances in humanoid robotics from China, the OpenAI–DeepSeek distillation dispute, and the fast-moving OpenClaw ecosystem. The conversation then widened into WebMCP, the future of websites in an agent-driven world, data center politics, and new AI science breakthroughs in physics and bioacoustics. The throughline was clear: agents are shifting from experiments to infrastructure.

    Key Points Discussed

    00:00:18 👋 Opening, OpenClaw follow-up and Peter’s comments about joining OpenAI
    00:04:25 🤖 Grok 4.2 beta release and “for agents” confusion
    00:07:35 🦾 China’s Unitree humanoid robot dance comparison, 2025 vs 2026
    00:12:10 🎢 Entertainment implications, Orlando, theme parks, and robotics
    00:15:50 ⚔️ Anthropic Pentagon contract tension and autonomous weapons ethics
    00:20:45 🧠 Moonshot launches Kimi Claw, browser-based OpenClaw deployment
    00:26:30 📱 Telegram, Slack, and why agents connect to messaging platforms
    00:31:10 🏗️ WebMCP discussion, how agents interact with websites structurally
    00:36:20 🌐 The future of websites in an agent-first world
    00:41:15 🏭 New York Times data center story, local politics and infrastructure strain
    00:45:30 🔬 AI science segment, novel theoretical physics result via GPT-VI
    00:49:40 🐋 DeepMind bioacoustic model, bird-trained system classifying whale sounds
    00:53:10 🕵️ OpenAI accuses DeepSeek of model distillation and output extraction
    00:57:20 🏁 Wrap-up, 4,000 subscriber milestone, sign-off

    The Daily AI Show Co Hosts: Andy Halliday, Beth Lyons, Brian Maucere, and Karl Yeh
  • WebMCP, A Standard for Agents to Use the Web

    16/2/2026 | 55 min
    Monday’s episode focused on agent infrastructure becoming real infrastructure. The crew covered the OpenClaw creator joining OpenAI, why persistent agents change cost and workflow design, Google’s WebMCP standard for structured website actions, Cloudflare’s Markdown for Agents, and a Wharton discussion on “cognitive surrender” as people offload more thinking to AI.

    Key Points Discussed

    00:00:18 👋 Opening, Presidents Day context
    00:02:17 🧩 OpenClaw introduced, why it matters now
    00:04:53 🏢 OpenAI hiring angle, why the OpenClaw creator move matters
    00:09:15 💾 MyClaw and persistent memory, token costs and tradeoffs
    00:14:49 🧱 Early agent infrastructure, Mac Mini builds, skill hubs
    00:16:30 💬 WhatsApp access and why messaging channels matter
    00:20:02 🔁 “Joining OpenAI” referenced directly, implications discussed
    00:25:11 🌐 Google WebMCP, what it is and why it reduces brittle browsing
    00:29:12 📝 Cloudflare Markdown for Agents, token reduction and structured pages
    00:38:01 🧍 Human-in-the-loop tension, efficiency vs control
    00:42:28 🎓 Wharton segment begins, Thinking Fast, Slow, and Artificial discussed
    00:44:28 🧠 Cognitive surrender, what it means and why it is risky
    00:54:51 🐱 KatGPT mention and closing items
    00:55:03 🏁 Wrap-up and sign-off

    The Daily AI Show Co Hosts: Brian Maucere, Beth Lyons, and Andy Halliday
  • The Sorting or Shaping Conundrum

    14/2/2026 | 21 min
    College has always sold two products at once, even if we only talk about one. The first is shaping. You learn, you practice, you get feedback, you improve, and you leave more capable than when you arrived.
    The second is sorting. You prove you can survive a long system, hit deadlines, work with others, navigate bureaucracy, and keep going when it gets tedious. Employers have used the degree as a shortcut for both.
    AI puts pressure on each product in a different way. Agents make “shaping” cheaper and faster outside school. A motivated person can learn, build, and iterate at a pace that no syllabus can match. At the same time, agents flood the world with output. When everyone can generate a report, a slide deck, a prototype, or a legal draft in hours, output stops signaling competence. That makes sorting feel more valuable, not less, because organizations still need a defensible way to pick humans for roles that carry responsibility.
    So college faces a quiet identity crisis. If the shaping part no longer differentiates students, and the sorting part becomes the main value, the degree shifts from education to gatekeeping. People already worry that college costs too much for what it teaches. AI adds a sharper edge to that worry. If the most important skill becomes judgment, responsibility, and the ability to direct and verify agent work, then the question becomes whether college can shape that, or whether it only sorts for people who can endure the system.
    The Conundrum:
    In an agent-driven economy, does college become more valuable because sorting is the scarce function, a trusted filter for who gets access to opportunity and decision rights when output is cheap and abundant, or does college become less valuable because shaping is the scarce function, and the market stops paying for filters that do not reliably produce better judgment, better accountability, and better real-world performance? If AI keeps compressing skill-building outside institutions, should a degree be treated as proof of capability, or as proof you fit the system, even if that proves the wrong thing?

About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.