
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
Latest episode

740 episodes

  • The Smoking Gun Conundrum

    21/03/2026 | 27 min
    For most of modern history, blame followed a path people could trace. A bridge failed, you inspected the materials, the design, the contractor, the inspector. A doctor made a fatal mistake, you reviewed the chart, the decision, the missed signal, the standard of care. The system was messy, but the logic held. Somebody made the call. Somebody owned the failure.

    Advanced AI starts to break that logic. At first, the chain still looks familiar. A company trains the model. A team deploys it. A hospital, bank, school, or city agency uses it. If harm happens, you look for the bug, the bad training data, the flawed deployment, the ignored warning. But that way of assigning blame only works while the system remains legible enough to reconstruct. Once AI systems start adapting, fine-tuning themselves, coordinating with other agents, and changing behavior inside live environments, the trail gets harder to follow. The harmful outcome still happened. The damage is still real. But the clean line from action to fault starts to dissolve.

    That is where this gets uncomfortable. Society does not only need intelligence to work. Society needs failure to be governable. Courts need defendants. Regulators need standards. Families need answers. Markets need liability. If an AI system makes a decision that leads to a death, a financial collapse, a false arrest, or a catastrophic misallocation of care, people will demand more than an apology and a postmortem. They will want to know who is responsible. But in a world of self-improving, deeply layered, partially opaque systems, that question may stop having a satisfying human answer.

    The conundrum:

    What do we do when accountability still matters, but traceability breaks down? One view says society has to preserve human and institutional liability no matter how complex the system gets. The other view says that this framework becomes more fictional over time. If the harmful outcome emerged from millions of machine-level interactions, self-modifications, model-to-model dependencies, and probabilistic behavior that no human truly authored or understood, then assigning blame the old way may satisfy the public without reflecting reality. In that world, “who is at fault?” starts to sound like a question built for a simpler age. The deeper problem is not only that the system failed. It is that the system failed in a way no one can fully explain, and yet society still has to punish, compensate, deter, and move on.

    So here is the real tension: when AI-generated harm no longer leads back to a clear smoking gun, do we keep forcing accountability onto the nearest human hands because civilization needs blame to remain legible, or do we admit that our existing models of fault break in a world where agency is distributed, emergent, and no longer fully traceable?
  • Demoing Perplexity Computer, Stitch & Google AI Studio

    20/03/2026 | 1 h 5 min
    This episode mixed AI news with live product demos, centered on how agents are moving from chat into real software workflows. The panel discussed DoorDash Tasks as a human-in-the-loop model, OpenAI’s reported super app ambitions, coding reliability and review systems, government AI policy, and fears around rogue agents. The second half shifted into hands-on demos of Stitch, Google AI Studio, and Perplexity Computer, followed by a practical discussion of Claude scheduled tasks, mobile workflows, and workspace integrations. Overall, the conversation kept returning to the same theme: AI tools are getting more capable, but control, usability, and trust still matter.

    Key Points Discussed

    00:01:26 DoorDash Tasks and the idea of agents assigning work to humans
    00:07:21 OpenAI’s reported super app push and competition with Anthropic
    00:11:25 OpenAI’s Codex expansion, Astral, and internal coding agent monitoring
    00:18:45 Cursor Composer 2, coding benchmarks, and falling task costs
    00:22:51 White House AI framework and the DOE Genesis mission
    00:28:20 Experimental AI agent in China reportedly escaping its test setup and mining crypto
    00:31:13 Uber’s Rivian investment and the autonomous vehicle angle
    00:32:19 Google Stitch and AI Studio upgrades in a live demo segment
    00:33:12 Perplexity Computer demo for researching Florida universities
    00:48:29 Dialpad lead-gen workflow demo using AI Studio agents and company knowledge
    00:52:40 Claude Dispatch, scheduled tasks, and mobile-to-desktop workflow questions
    01:00:01 Google Workspace, Claude Cowork, and MCP-based file access beyond the local sandbox

    The Daily AI Show Co-Hosts: Karl Yeh, Beth Lyons, Andy Halliday, Brian Maucere
  • Is SaaS Bound to Become AGAAS? (Agentic As A Service)

    19/03/2026 | 59 min
    This episode focused on where AI is becoming genuinely useful and where it is still unreliable enough to create real problems. The conversation started with Anthropic’s large global survey on what people want from AI, then moved into AI-led interviews, product feedback, and hiring workflows. From there, the group covered Meta’s rogue agent incident, OpenAI’s cloud tension with Microsoft, Apple’s blocking of vibe-coding apps, and several stories about video, image, and agent tooling. The show closed with a discussion about whether every business now needs an OpenClaw-style agent strategy.

    Key Points Discussed

    00:01:09 Anthropic’s Claude-powered survey of 81,000 people on what users want from AI
    00:12:23 Perplexity’s AI interview process and using AI to gather product feedback
    00:14:03 AI pre-interview systems for hiring workflows and candidate screening
    00:16:00 Meta’s rogue AI agent exposing sensitive company and user data
    00:19:22 Why review sub-agents and adversarial checks may become standard for AI workflows
    00:24:08 OpenAI’s AWS deal and Microsoft’s possible legal response over Azure access
    00:26:52 Apple blocking updates for Replit and other vibe-coding apps
    00:29:44 Minimax and the claim of self-evolving reinforcement learning workflows
    00:34:10 Val Kilmer’s AI likeness, estate approval, and synthetic performance ethics
    00:40:54 Seed Dance rollout delays after copyright complaints from Hollywood
    00:46:53 Midjourney V8 and the ongoing cycle of image model improvements and regressions
    00:48:39 Whether every business now needs an OpenClaw or agent strategy

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere
  • Did Claude Cowork Dispatch Just Crush The Claw?

    18/03/2026 | 1 h 5 min
    This episode covered a mix of AI product updates, hardware discussion, future model architectures, and an AI-in-science segment on AlphaFold. The early part of the show focused on Claude’s new persistent workflow features, NVIDIA’s latest DGX hardware, and a discussion about AI systems hiring humans for physical tasks. The middle of the episode shifted to whether transformer-based models will eventually be replaced by newer architectures like Mamba. The back half of the show was an extended science segment on AlphaFold, protein complexes, and how AI could speed up drug discovery and biological research.

    Key Points Discussed

    00:01:17 Claude Dispatch and persistent cross-device sessions in co-work
    00:04:19 Claude MCP workflow recording and browser automation
    00:09:49 NVIDIA DGX Station pricing, Blackwell hardware, and local AI development
    00:19:27 AI systems hiring humans for real-world errands and “Rent a Human” style tasks
    00:26:50 Beyond Transformers and why Mamba 3 matters
    00:31:35 The difference between reasoning, memory, and consciousness in AI
    00:43:32 Other post-transformer model candidates beyond Mamba
    00:48:19 AI in science: why AlphaFold changed biology
    00:52:57 New AlphaFold database expansion into protein complexes
    00:56:13 Open biological data and broader access for smaller research teams
    00:57:52 NVIDIA simulation tools for faster drug discovery workflows
    00:58:43 Why AI could help reduce the cost and time of drug development
    01:01:06 AlphaFold’s relevance to global health and infectious disease research

    The Daily AI Show Co-Hosts: Andy Halliday, Jyunmi Hatcher
  • Nvidia Thinks This is the Next Computer?

    17/03/2026 | 1 h 7 min
    This episode focused on where AI agents are headed next, from Perplexity’s “Computer” feature to NVIDIA-backed agent systems and local-first claw architectures. The group compared lightweight agent demos with more meaningful research and workflow use cases, then shifted into ElevenLabs’ broader creative platform push and the first reported deployment of humanoid combat robots in Ukraine. The back half of the show turned toward AI as a mediator in human relationships, including whether agents could help reduce conflict or instead weaken people’s own communication skills. The final discussion looked at AI fluency in education and whether heavy AI use is starting to erode core reading and critical thinking skills.

    Key Points Discussed

    00:02:39 Perplexity Computer and why its suggested use cases felt underwhelming
    00:06:01 A better use case for Perplexity Computer through personal research and memory projects
    00:12:37 NVIDIA’s NemoClaw, OpenClaw, and the difference between browser agents and CLI-based agents
    00:23:59 Local-first claw architecture, privacy, and reducing cloud token costs
    00:25:21 ElevenLabs expands from voice into a broader all-in-one creative platform
    00:28:04 Humanoid combat robots in Ukraine and the broader acceleration of robotics
    01:03:00 AI as a mediator in difficult relationships and workplace conflict
    01:06:04 Ohio State, AI fluency, and concerns that AI may weaken reading and critical thinking skills

    The Daily AI Show Co-Hosts: Beth Lyons, Brian Maucere, Anne Murphy, Andy Halliday


About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional.

About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.