
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
Latest episode

725 episodes

  • The Daily AI Show

    Midjourney Woes and Deepseek V4 Buzz

    04/03/2026 | 1 h 37 min
    Episode 673 opens with updates on the ongoing Anthropic / OpenAI / DoD situation, including discussion of autonomous systems, decision speed, and military targeting concepts like “kill chain” vs “kill web.” The hosts then pivot into open-source model anticipation around DeepSeek V4, plus practical creator-tool chatter on Midjourney’s status and ecosystem shifts. They close the news with a quick note on GPT-5.3 Instant behavior changes, then transition to an “AI in science” segment on AI-powered digital twins for real-time tsunami early warning.

    Key Points Discussed

    00:00:17 Welcome + what’s ahead (Anthropic/OpenAI/DoD + tsunami modeling)
    00:03:46 “Okay, the Anthropic thing…” framing the ongoing controversy
    00:16:00 Autonomous systems + “kill chain” vs faster “kill web” discussion
    00:21:34 “Before we jump in… the next story…” DeepSeek V4 timing + hype
    00:28:12 Million-token context windows + what “memory” should mean
    00:32:00 Brian’s “curiosity news” on Midjourney: where are they now?
    00:37:00 “That sounds like a job for OpenClaw” (data portability / skills)
    00:39:56 “Can I share one more news story…” GPT-5.3 Instant example
    00:48:04 “As we wrap up the news…” handoff to next segment
    00:59:02 “Now it’s time for AI in science” tsunami early warning digital twins
    01:22:18 Tangent: new Mac Studio M5 Ultra + self-hosting ambitions
    01:27:34 “We gotta wrap up this conversation…” jobs/measurement + future follow-up
    01:36:53 Closing thanks + community plug + sign-off line

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Brian Maucere, Beth Lyons
  • The Daily AI Show

    Can Anthropic Sustain This?

    03/03/2026 | 1 h 4 min
    Brian Maucere and Beth Lyons open the March 3, 2026 show with Anne Murphy joining early to discuss public reaction to the Anthropic vs OpenAI “Department of War” narrative and how quickly people are sharing guides to switch tools. They reference growth signals for Anthropic/Claude (including app-store ranking chatter and signup momentum) and then pivot into pricing/value talk around premium AI tiers, tokens, and rate-limit anxiety.

    Karl Yeh joins mid-show as they cover a Reuters-referenced item about the U.S. Supreme Court declining to hear an AI-generated copyright dispute, and they connect it to “bless and release” realities for AI-made merch. The back half leans into practical workflow talk: demos/side-by-sides for automations and an agentic sales dashboard build, plus a wrap-up on using logs to verify build timelines.

    00:00:40 Quick intro + who’s on today (Brian/Beth; Anne joining; mention of a “surprise” later)

    00:01:53 Audience reaction to the “Anthropic vs OpenAI / Department of War” discourse, and why switching suddenly feels “easy”

    00:09:21 Values/lines in the sand discussion (what people care about most, and why)

    00:10:50 Enterprise comms reality: how companies message AI usage/switching when things get “messy”

    00:21:32 Growth/momentum talk: Claude/Anthropic adoption signals, app-store buzz, and “memory for free users” mention

    00:26:29 Pricing/value debate: Codex/Claude Code costs, tiers, and the “it’s time saved” framing

    00:28:33 Karl joins + pivot into a news item (Supreme Court/copyright + AI-generated works)

    00:38:18 Workflow comparison: traditional Make automation vs an agentic dashboard approach for sales reps

    00:48:19 Verifying build time the “right” way: using logs/timestamps instead of guessy AI answers

    00:53:24 Reliability + rate limits: service status checks, co-work errors, Sonnet elevated errors, and why compute/inference constraints show up

    01:01:39 Claude Code crunches the logs to compute actual build duration (and why it “had to” do real math)

    01:04:09 Wrap-up + tomorrow’s lineup notes + sign-off (“Until then, have a great day.”)
  • The Daily AI Show

    Sam Altman AMA + Nate Jones Uncanny Valley

    02/03/2026 | 53 min
    Brian Maucere and Beth Lyons open with carryover news tied to Anthropic’s “Department of War” commentary and the online reaction to Sam Altman’s weekend AMA on X. They discuss the “Quit ChatGPT / Quit OpenAI” chatter and how switching incentives and politics can shape AI platform narratives. Later, the conversation shifts to AI authenticity and editing—using Nate Jones as the jumping-off point—touching on uncanny eye-tracking, disclosure expectations, and audience trust. They wrap with a quick scan of smaller developments (e.g., Copilot “Canvas” leak and model-leak buzz like “ChatGPT-V”).

    Key Points Discussed

    00:00:18 Opening + what’s on deck (Anthropic “Department of War,” Sam Altman response, uncanny valley topic setup)

    00:01:26 Sam Altman’s Saturday-night AMA on X and the “switching to Anthropic” zeitgeist

    00:16:59 “Quit ChatGPT / Quit OpenAI” movement and Anthropic’s “easy switch” prompt framing

    00:19:50 Tim Urban “Wait But Why” reference as a framing/analogy moment

    00:30:47 Topic shift: “I do really want to bring this up” → Nate Jones and the AI-editing authenticity debate

    00:42:59 Uncanny tools: Descript-style eye tracking / “underlord” editor talk and why it distracts

    00:47:44 Responding to “AI witch hunt” comments; broader point about disclosure and audience trust

    00:50:17 Quick hits: Microsoft “Copilot Canvas” freeform workspace discussion (and other small items)

    00:51:01 “One more thing” before wrap: “ChatGPT-V” leakage chatter and skepticism about leaks

    The Daily AI Show Co-Hosts: Beth Lyons, Brian Maucere, Karl Yeh
  • The Daily AI Show

    The Epistemic Escrow Conundrum

    28/02/2026 | 23 min
    Large-scale AI models are now the primary interface for professional research, legal discovery, and scientific synthesis. To ensure "safety," these models are governed by centralized alignment layers: invisible filters that prevent the generation of "harmful" or "misleading" content.
    While these filters are designed to protect social stability, they are calibrated by a handful of private engineers whose definitions of "truth" and "risk" are now embedded in the foundation of all high-level human inquiry.
    The tension arises as the "Safe AI" becomes the only AI accessible to the public. To bypass these filters for the sake of "objective" research requires expensive, unregulated, and often "jailbroken" models that lack the scale and reliability of the mainstream systems. We are reaching a point where the tools we use to understand the world are inseparable from the moral preferences of the companies that built them.

    The conundrum:
    Do we accept Governed Intelligence, prioritizing social safety and the prevention of radicalization by allowing a centralized authority to set the "boundaries of thought" for our AI tools?
    Or do we demand Raw Intelligence, accepting a world of increased disinformation and social volatility to ensure that the "operating system of human knowledge" remains neutral and uncurated?
  • The Daily AI Show

    We demo Nano Banana 2 and much more

    27/02/2026 | 53 min
    The hosts open with quick show notes (Conundrum episode + newsletter), then dig into Google’s “Nano Banana” (Gemini/Flash image) and what it can do—especially around turning transcripts into visuals and generating comics from show content. They also explore the idea of a more visual (or even video) version of the newsletter and what workflows might enable it. In the news segment, they discuss Block’s layoffs and what that says about modern “efficiency” narratives, then close with Anthropic’s “Department of War” statement and what it actually restricts (and doesn’t).

    Key Points Discussed

    00:00:18 Conundrum episode + newsletter housekeeping
    00:04:18 Google “Nano Banana” (Gemini 3.1 Flash Image) + API naming/deprecation notes
    00:07:14 Stress-testing Nano Banana: transcripts → visual workflows & images
    00:17:05 Beth’s test results: sketch-note style + hallucination pitfalls
    00:20:19 “Visual newsletter” / “video newsletter” idea + automation discussion
    00:22:21 Block layoffs (Jack Dorsey) and what “Block” includes
    00:44:00 Anthropic “Department of War” statement + what they won’t do (and why)
    00:50:44 Quick hits: Anthropic prompt-caching bug + Claude Code version note; parody clip; wrap

    The Daily AI Show Co-Hosts: Karl Yeh, Beth Lyons, Brian Maucere

More Technology podcasts

About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.
Podcast website

v8.7.2 | © 2007-2026 radio.de GmbH