
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy pr...

Available episodes

5 of 448
  • The AI Soulmate Conundrum
    In a future not far off, artificial intelligence has quietly collected the most intimate data from billions of people. It has observed how your body responds to conflict, how your voice changes when you're hurt, which words you return to when you're hopeful or afraid. It has done the same for everyone else. With enough data, it claims, love is no longer a mystery. It is a pattern, waiting to be matched.
    One day, the AI offers you a name. A face. A person. The system predicts that this match is your highest probability for a long, fulfilling relationship. Couples who accept these matches experience fewer divorces, less conflict, and greater overall well-being. The AI is not always right, but it is more right than any other method humans have ever used to find love.
    But here is the twist. Your match may come from a different country, speak a language you don’t know, or hold beliefs that conflict with your own. They might not match the gender or personality type you thought you were drawn to. Your friends may not understand. Your family may not approve. You might not either, at first. And yet, the data says this is the person who will love you best, and whom you will most likely grow to love in return.
    If you accept the match, you are trusting that the deepest truth about who you are can be known by a system that sees what you cannot. But if you reject it, you do so knowing you may never experience love that comes this close to certainty.
    The conundrum: If AI offers you the person most likely to love and understand you for the rest of your life, but that match challenges your sense of identity, your beliefs, or your community, do you follow it anyway and risk everything familiar in exchange for deep connection? Or do you walk away, holding on to the version of love you always believed in, even if it means never finding it?
    This podcast is created by AI. We used ChatGPT, Perplexity, and Google NotebookLM's audio overview to create the conversation you are hearing. We do not make any claims to the validity of the information provided and see this as an experiment around deep discussions fully generated by AI.
    --------  
    18:04
  • How Google Quietly Became an AI Superpower (Ep. 440)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com
    With the release of Gemini 2.5, expanded integration across Google Workspace, new agent tools, and support for open protocols like MCP, Google is making a serious case as an AI superpower. The show breaks down what’s real, what still feels clunky, and where Google might actually pull ahead.
    Key Points Discussed
    - Gemini 2.5 shows improved writing, code generation, and multimodal capabilities, but responses still sometimes end early or hallucinate limits.
    - AAI Studio offers a smoother, more integrated experience than regular Gemini Advanced. All chats save directly to Google Drive, making organization easier.
    - Google’s AI now interprets YouTube videos with timestamps and extracts contextual insights when paired with transcripts.
    - Google Labs tools like Career Dreamer, YouTube Conversational AI, VideoFX, and Illuminate show practical use cases from education to slide decks to summarizing videos.
    - The team showcased how Gemini models handle creative image generation using temperature settings to control fidelity and style.
    - Google Workspace now embeds Gemini directly across tools, with a stronger push into Docs, Sheets, and Slides.
    - Google Cloud’s Vertex AI now supports a growing list of generative models including Veo, Chirp (voice), and Lyra (music).
    - Project Mariner, Google’s operator-style browsing agent, adds automated web interaction features using Gemini.
    - Google DeepMind, YouTube, Fitbit, Nest, Waymo, and others create a wide base for Gemini to embed across industries.
    - Google now officially supports Model Context Protocol (MCP), allowing standardized interaction between agents and tools.
    - The Agent SDK, Agent-to-Agent (A2A) protocol, and Workspace Flows give developers the power to build, deploy, and orchestrate intelligent AI agents.
    #GoogleAI #Gemini25 #MCP #A2A #WorkspaceAI #AAIStudio #VideoFX #AIsearch #VertexAI #GoogleNext #AgentSDK #FirebaseStudio #Waymo #GoogleDeepMind
    Timestamps & Topics
    00:00:00 🚀 Intro: Is Google becoming an AI superpower?
    00:01:41 💬 New Slack community announcement
    00:03:51 🌐 Gemini 2.5 first impressions
    00:05:17 📁 AAI Studio integrates with Google Drive
    00:07:46 🎥 YouTube video analysis with timestamps
    00:10:13 🧠 LLMs stop short without warning
    00:13:31 🧪 Model settings and temperature experiments
    00:16:09 🧊 Controlling image consistency in generation
    00:18:07 🐻 A surprise polar bear and meta image failures
    00:19:27 🛠️ Google Labs overview and experiment walkthroughs
    00:20:50 🎓 Career Dreamer as a career discovery tool
    00:23:16 🖼️ Slide deck generator with voice and video
    00:24:43 🧭 Illuminate for short AI video summaries
    00:26:04 🔧 Project Mariner brings browser agents to Chrome
    00:30:00 🗂️ Silent drops and Google’s update culture
    00:31:39 🧩 Workspace integration, Lyra, Veo, Chirp, and Vertex AI
    00:34:17 🛡️ Unified security and AI-enhanced networking
    00:36:45 🤖 Agent SDK, A2A, and MCP officially backed by Google
    00:40:50 🔄 Firebase Studio and cross-system automation
    00:42:59 🔄 Workspace Flows for document orchestration
    00:45:06 📉 API pricing tests with OpenRouter
    00:46:37 🧪 N8N MCP nodes in preview
    00:48:12 💰 Google's flexible API cost structures
    00:49:41 🧠 Context window skepticism and RAG debates
    00:51:04 🎬 VideoFX demo with newsletter examples
    00:53:54 🚘 Waymo, DeepMind, YouTube, Nest, and Google’s reach
    00:55:43 ⚠️ Weak interconnectivity across Google teams
    00:58:03 📊 Sheets, Colab, and on-demand data analysts
    01:00:04 😤 Microsoft Copilot vs Google Gemini frustrations
    01:01:29 🎓 Upcoming SciFi AI Show and community wrap-up
    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    1:02:47
  • Keeping Up With AI Without Burning Out (Ep. 439)
    Want to keep the conversation going? Join our Slack community at dailyaishowcommunity.com
    The Daily AI Show team covers this week’s biggest AI stories, from OpenAI’s hardware push and Shopify’s AI-first hiring policy to breakthroughs in soft robotics and Google's latest updates. They also spotlight new tools like Higgsfield for AI video and growing traction for model context protocol (MCP) as the next API evolution.
    Key Points Discussed
    - OpenAI is reportedly investing $500 million into a hardware partnership with Jony Ive, signaling a push toward AI-native devices.
    - Shopify’s CEO told staff to prove AI can’t do the job before requesting new hires. It sparked debate about AI-driven efficiency vs. job creation.
    - The panel explored the limits of automation in trade jobs like plumbing and roadwork, and whether AI plus robotics will close that gap over time.
    - 11Labs and Supabase launched official Model Context Protocol (MCP) servers, making it easier for tools like Claude to interact via natural language.
    - Google announced Ironwood, its 7th-gen TPU optimized for inference, and Gemini 2.5, which adds controllable output and dynamic behavior.
    - Reddit will start integrating Gemini into its platform and feeding data back to Google for training purposes.
    - Intel and TSMC announced a joint venture, with TSMC taking a 20% stake in Intel’s chipmaking facilities to expand U.S.-based semiconductor production.
    - OpenAI quietly launched Academy, offering live and on-demand AI education for developers, nonprofits, and educators.
    - Higgsfield, a new video generation tool, impressed the panel with fluid motion, accurate physics, and natural character behavior.
    - Meta’s Llama 4 faced scrutiny over benchmarks and internal drama, but Llama 3 continues to power open models from DeepSeek, NVIDIA, and others.
    - Google’s AI search mode now handles complex queries and follows conversational context. The team debated how ads and SEO will evolve as AI-generated answers push organic results further down.
    - A Penn State team developed a soft robot that can scale down for internal medicine delivery or scale up for rescue missions in disaster zones.
    Hashtags: #AInews #OpenAI #ShopifyAI #ModelContextProtocol #Gemini25 #GoogleAI #AIsearch #Llama4 #Intel #TSMC #Higgsfield #11Labs #SoftRobots #AIvideo #Claude
    Timestamps & Topics
    00:00:00 🗞️ OpenAI eyes $500M hardware investment with Jony Ive
    00:04:14 👔 Shopify CEO pushes AI-first hiring
    00:13:42 🔧 Debating automation and the future of trade jobs
    00:20:23 📞 11Labs launches MCP integration for voice agents
    00:24:13 🗄️ Supabase adds MCP server for database access
    00:26:31 🧠 Intel and TSMC partner on chip production
    00:30:04 🧮 Google announces Ironwood TPU and Gemini 2.5
    00:33:09 📱 Gemini 2.5 gets research mode and Reddit integration
    00:36:14 🎥 Higgsfield shows off impressive AI video realism
    00:38:41 📉 Meta’s Llama 4 faces internal challenges, Llama 3 powers open tools
    00:44:38 📊 Google’s AI Search and the future of organic results
    00:54:15 🎓 OpenAI launches Academy for live and recorded AI education
    00:55:31 🧪 Penn State builds scalable soft robot for rescue and medicine
    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    56:06
  • AI News: OpenAI's BIG Hardware Move And More! (Ep. 438)
    The Daily AI Show team covers this week’s biggest AI stories, from OpenAI’s hardware push and Shopify’s AI-first hiring policy to breakthroughs in soft robotics and Google's latest updates. They also spotlight new tools like Higgsfield for AI video and growing traction for model context protocol (MCP) as the next API evolution.
    Key Points Discussed
    - OpenAI is reportedly investing $500 million into a hardware partnership with Jony Ive, signaling a push toward AI-native devices.
    - Shopify’s CEO told staff to prove AI can’t do the job before requesting new hires. It sparked debate about AI-driven efficiency vs. job creation.
    - The panel explored the limits of automation in trade jobs like plumbing and roadwork, and whether AI plus robotics will close that gap over time.
    - 11Labs and Supabase launched official Model Context Protocol (MCP) servers, making it easier for tools like Claude to interact via natural language.
    - Google announced Ironwood, its 7th-gen TPU optimized for inference, and Gemini 2.5, which adds controllable output and dynamic behavior.
    - Reddit will start integrating Gemini into its platform and feeding data back to Google for training purposes.
    - Intel and TSMC announced a joint venture, with TSMC taking a 20% stake in Intel’s chipmaking facilities to expand U.S.-based semiconductor production.
    - OpenAI quietly launched Academy, offering live and on-demand AI education for developers, nonprofits, and educators.
    - Higgsfield, a new video generation tool, impressed the panel with fluid motion, accurate physics, and natural character behavior.
    - Meta’s Llama 4 faced scrutiny over benchmarks and internal drama, but Llama 3 continues to power open models from DeepSeek, NVIDIA, and others.
    - Google’s AI search mode now handles complex queries and follows conversational context. The team debated how ads and SEO will evolve as AI-generated answers push organic results further down.
    - A Penn State team developed a soft robot that can scale down for internal medicine delivery or scale up for rescue missions in disaster zones.
    #AInews #OpenAI #ShopifyAI #ModelContextProtocol #Gemini25 #GoogleAI #AIsearch #Llama4 #Intel #TSMC #Higgsfield #11Labs #SoftRobots #AIvideo #Claude
    Timestamps & Topics
    00:00:00 🗞️ OpenAI eyes $500M hardware investment with Jony Ive
    00:04:14 👔 Shopify CEO pushes AI-first hiring
    00:13:42 🔧 Debating automation and the future of trade jobs
    00:20:23 📞 11Labs launches MCP integration for voice agents
    00:24:13 🗄️ Supabase adds MCP server for database access
    00:26:31 🧠 Intel and TSMC partner on chip production
    00:30:04 🧮 Google announces Ironwood TPU and Gemini 2.5
    00:33:09 📱 Gemini 2.5 gets research mode and Reddit integration
    00:36:14 🎥 Higgsfield shows off impressive AI video realism
    00:38:41 📉 Meta’s Llama 4 faces internal challenges, Llama 3 powers open tools
    00:44:38 📊 Google’s AI Search and the future of organic results
    00:54:15 🎓 OpenAI launches Academy for live and recorded AI education
    00:55:31 🧪 Penn State builds scalable soft robot for rescue and medicine
    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    1:00:35
  • Can AI Think Before It Speaks? (Ep. 437)
    The team breaks down Anthropic’s new research paper, Tracing the Thoughts of a Language Model, which offers rare insight into how large language models process information. Using a replacement model and attribution graphs, Anthropic tries to understand how Claude actually “thinks.” The show unpacks key findings, philosophical questions, and the implications for future AI design.
    Key Points Discussed
    - Anthropic studied its smallest model, Haiku, using a tool called a replacement model to understand internal decision-making paths.
    - Attribution graphs show how specific features activate as the model forms an answer, with many features pulling from multilingual patterns.
    - The research shows Claude plans ahead more than expected. In poetry generation, it preselects rhyming words and builds toward them, rather than solving it at the end.
    - The paper challenges assumptions about LLMs being purely token-to-token predictors. Instead, they show signs of planning, contextual reasoning, and even a form of strategy.
    - Language-agnostic pathways were a surprise: Claude used words from various languages (including Chinese and Japanese) to form responses to English queries.
    - This multilingual feature behavior raised questions about how human brains might also use internal translation or conceptual bridges unconsciously.
    - The team likens the research to the invention of a microscope for AI cognition, revealing previously invisible structures in model thinking.
    - They discussed how growing an AI might be more like cultivating a tree or garden than programming a machine. Inputs, pruning, and training shape each model uniquely.
    - Beth and Jyunmi highlighted the gap between proprietary research and open sharing, emphasizing the need for more transparent AI science.
    - The show closed by comparing this level of research to studying human cognition, and how AI could be used to better understand our own thinking.
    Hashtags: #Anthropic #Claude3Haiku #AIresearch #AttributionGraphs #MultilingualAI #LLMthinking #LLMinterpretability #AIplanning #AIphilosophy #BlackBoxAI
    Timestamps & Topics
    00:00:00 🧠 Intro to Anthropic’s paper on model thinking
    00:03:12 📊 Overview of attribution graphs and methodology
    00:06:06 🌐 Multilingual pathways in Claude’s thought process
    00:08:31 🧠 What is Claude “thinking” when answering?
    00:12:30 🔁 Comparing Claude’s process to human cognition
    00:18:11 🌍 Language as a flexible layer, not a barrier
    00:25:45 📝 How Claude writes poetry by planning rhymes
    00:28:23 🔬 Microscopic insights from AI interpretability
    00:29:59 🤔 Emergent behaviors in intelligence models
    00:33:22 🔒 Calls for more research transparency and sharing
    00:35:35 🎶 Set-up and payoff in AI-generated rhyming
    00:39:29 🌱 Growing vs programming AI as a development model
    00:44:26 🍎 Analogies from agriculture and bonsai pruning
    00:45:52 🌀 Cyclical learning between humans and AI
    00:47:08 🎯 Constitutional AI and baked-in intention
    00:53:10 📚 Recap of the paper’s key discoveries
    00:55:07 🗣️ AI recognizing rhyme and sound without hearing
    00:56:17 🔗 Invitation to join the DAS community Slack
    00:57:26 📅 Preview of the week’s upcoming episodes
    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Eran Malloch, Jyunmi Hatcher, and Karl Yeh
    --------  
    57:28


About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.

Generated: 4/13/2025 - 6:33:07 AM