Deep Papers

Arize AI

Available episodes

5 of 58
  • Meta AI Researcher Explains ARE and Gaia2: Scaling Up Agent Environments and Evaluations
    In our latest paper reading, we had the pleasure of hosting Grégoire Mialon — Research Scientist at Meta Superintelligence Labs — to walk us through Meta AI’s groundbreaking paper “ARE: Scaling Up Agent Environments and Evaluations” and the new ARE and Gaia2 frameworks. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
    22:34
  • Georgia Tech's Santosh Vempala Explains Why Language Models Hallucinate, His Research With OpenAI
    Santosh Vempala, Frederick Storey II Chair of Computing and Distinguished Professor in the School of Computer Science at Georgia Tech, explains his paper co-authored with OpenAI's Adam Tauman Kalai, Ofir Nachum, and Edwin Zhang. Read the paper. Sign up for future AI research paper readings and author office hours. See LLM hallucination examples here for context. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
    31:24
  • Atropos Health’s Arjun Mukerji, PhD, Explains RWESummary: A Framework and Test for Choosing LLMs to Summarize Real-World Evidence (RWE) Studies
    Large language models are increasingly used to turn complex study output into plain-English summaries. But how do we know which models are safest and most reliable for healthcare? In our most recent community AI research paper reading, Arjun Mukerji, PhD – Staff Data Scientist at Atropos Health – walks us through RWESummary, a new benchmark designed to evaluate LLMs on summarizing real-world evidence from structured study output — an important but often under-tested scenario compared to the typical “summarize this PDF” task. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
    26:22
  • Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon, Walks Us Through His New Paper
    This episode dives into "Category-Theoretic Analysis of Inter-Agent Communication and Mutual Understanding Metric in Recursive Consciousness." The paper extends the Recursive Consciousness framework to analyze communication between agents and the inevitable loss of meaning in translation. We're thrilled to feature the paper's author, Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon, to walk us through the research and its implications. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
    48:11
  • Small Language Models are the Future of Agentic AI
    We had the privilege of hosting Peter Belcak – an AI researcher working on the reliability and efficiency of agentic systems at NVIDIA – who walked us through his new paper making the rounds in AI circles, titled “Small Language Models are the Future of Agentic AI.” The paper posits that small language models (SLMs) are sufficiently powerful, inherently more suitable, and necessarily more economical for many invocations in agentic systems, and are therefore the future of agentic AI. The authors ground this argument in the current level of capabilities exhibited by SLMs, the common architectures of agentic systems, and the economics of LM deployment. They further argue that in situations where general-purpose conversational abilities are essential, heterogeneous agentic systems (i.e., agents invoking multiple different models) are the natural choice. They also discuss potential barriers to the adoption of SLMs in agentic systems and outline a general LLM-to-SLM agent conversion algorithm. Learn more about AI observability and evaluation, join the Arize AI Slack community, or get the latest on LinkedIn and X.
    31:15

About Deep Papers

Deep Papers is a podcast series featuring deep dives on today’s most important AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.
Podcast website
