
Future of Life Institute Podcast

Future of Life Institute

Available episodes

5 of 482
  • What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
    Karl Koch is founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and the challenges of maintaining transparency as AI development accelerates.

    LINKS:
    - About the AI Whistleblower Initiative
    - Karl Koch

    PRODUCED BY: https://aipodcast.ing

    CHAPTERS:
    (00:00) Episode Preview
    (00:55) Starting the Whistleblower Initiative
    (05:43) Current State of Protections
    (13:04) Path to Optimal Policies
    (23:28) A Whistleblower's First Steps
    (32:29) Life After Whistleblowing
    (39:24) Evaluating Company Policies
    (48:19) Alternatives to Whistleblowing
    (55:24) High-Stakes Future Scenarios
    (01:02:27) AI and National Security

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

    DISCLAIMERS:
    - AIWI does not request, encourage or counsel potential whistleblowers or listeners of this podcast to act unlawfully.
    - This is not legal advice. If you, the listener, find yourself needing legal counsel, please visit https://aiwi.org/contact-hub/ for detailed profiles of the world's leading whistleblower support organizations.
    Duration: 1:08:16
  • Can Machines Be Truly Creative? (with Maya Ackerman)
    Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities.

    LINKS:
    - Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman
    - Creative Machines: AI, Art & Us: https://maya-ackerman.com/creative-machines-book/

    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Defining Human Creativity
    (02:58) Machine and AI Creativity
    (06:25) Measuring Subjective Creativity
    (10:07) Creativity in Animals
    (13:43) Alignment Damages Creativity
    (19:09) Creativity is Hallucination
    (26:13) Humble Creative Machines
    (30:50) Incentives and Replacement
    (40:36) Analogies for the Future
    (43:57) Collaborating with AI
    (52:20) Reinforcement Learning & Slop
    (55:59) AI in Education
    Duration: 1:01:51
  • From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)
    Parmy Olson is a technology columnist at Bloomberg and the author of Supremacy, which won the 2024 Financial Times Business Book of the Year. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and power consolidation in the industry.

    LINKS:
    - Parmy Olson on X (Twitter): https://x.com/parmy
    - Parmy Olson's Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson
    - Supremacy (book): https://www.panmacmillan.com/authors/parmy-olson/supremacy/9781035038244

    CHAPTERS:
    (00:00) Episode Preview
    (01:18) Introducing Parmy Olson
    (02:37) Personalities Driving AI
    (06:45) From Research to Products
    (12:45) Has the Mission Changed?
    (19:43) The Role of Regulators
    (21:44) Skepticism of AI Utopia
    (28:00) The Human Cost
    (33:48) Embracing Controversy
    (40:51) The Role of Journalism
    (41:40) Big Tech's Influence
    Duration: 46:37
  • Can Defense in Depth Work for AI? (with Adam Gleave)
    Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he discusses post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI's vertically integrated approach spanning technical research, policy advocacy, and field-building.

    LINKS:
    - Adam Gleave: https://www.gleave.me
    - FAR.AI: https://www.far.ai
    - The Cognitive Revolution Podcast: https://www.cognitiverevolution.ai

    CHAPTERS:
    (00:00) A Positive Post-AGI Vision
    (10:07) Surviving Gradual Disempowerment
    (16:34) Defining Powerful AIs
    (27:02) Solving Continual Learning
    (35:49) The Just-in-Time Safety Problem
    (42:14) Can Defense-in-Depth Work?
    (49:18) Fixing Alignment Problems
    (58:03) Safer Training Formulas
    (01:02:24) The Role of Interpretability
    (01:09:25) FAR.AI's Vertically Integrated Approach
    (01:14:14) Hiring at FAR.AI
    (01:16:02) The Future of Governance
    Duration: 1:18:35
  • How We Keep Humans in Control of AI (with Beatrice Erkers)
    Beatrice Erkers works at the Foresight Institute, where she runs their Existential Hope program. She joins the podcast to discuss the AI Pathways project, which explores two alternative scenarios to the default race toward AGI. We examine tool AI, which prioritizes human oversight and democratic control, and d/acc, which emphasizes decentralized, defensive development. The conversation covers trade-offs between safety and speed, how these pathways could be combined, and what different stakeholders can do to steer toward more positive AI futures.

    LINKS:
    - AI Pathways: https://ai-pathways.existentialhope.com
    - Beatrice Erkers: https://www.existentialhope.com/team/beatrice-erkers

    CHAPTERS:
    (00:00) Episode Preview
    (01:10) Introduction and Background
    (05:40) AI Pathways Project
    (11:10) Defining Tool AI
    (17:40) Tool AI Benefits
    (23:10) D/acc Pathway Explained
    (29:10) Decentralization Trade-offs
    (35:10) Combining Both Pathways
    (40:10) Uncertainties and Concerns
    (45:10) Future Evolution
    (01:01:21) Funding Pilots
    Duration: 1:06:45

About the Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Podcast website: https://podcast.futureoflife.org
