
AI Safety Fundamentals

BlueDot Impact

Available episodes

5 of 171
  • AI Emergency Preparedness: Examining the Federal Government's Ability to Detect and Respond to AI-Related National Security Threats
    By Akash Wasil et al. This paper uses scenario planning to show how governments could prepare for AI emergencies. The authors examine three plausible disasters: losing control of AI, AI model theft, and bioweapon creation. They then expose gaps in current preparedness systems and propose specific government reforms, including embedding auditors inside AI companies and creating emergency response units.
    Source: https://arxiv.org/pdf/2407.17347
    A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
    Duration: 9:44
  • A Playbook for Securing AI Model Weights
    By Sella Nevo et al. In this report, RAND researchers identify real-world attack methods that malicious actors could use to steal AI model weights. They propose a five-level security framework that AI companies could implement to defend against different threats, from amateur hackers to nation-state operations.
    Source: https://www.rand.org/pubs/research_briefs/RBA2849-1.html
    A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
    Duration: 19:56
  • Introduction to AI Control
    By Sarah Hastings-Woodhouse. AI Control is a research agenda that aims to prevent misaligned AI systems from causing harm. It is different from AI alignment, which aims to ensure that systems act in the best interests of their users. Put simply, aligned AIs do not want to harm humans, whereas controlled AIs can't harm humans, even if they want to.
    Source: https://bluedot.org/blog/ai-control
    A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
    Duration: 10:19
  • Resilience and Adaptation to Advanced AI
    By Jamie Bernardi. He argues that we can't rely solely on model safeguards to ensure AI safety. Instead, he proposes "AI resilience": building society's capacity to detect misuse, defend against harmful AI applications, and reduce the damage caused when dangerous AI capabilities spread beyond a government's or company's control.
    Source: https://airesilience.substack.com/p/resilience-and-adaptation-to-advanced?utm_source=bluedot-impact
    A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
    Duration: 13:42
  • The Project: Situational Awareness
    By Leopold Aschenbrenner. A former OpenAI researcher argues that private AI companies cannot safely develop superintelligence, because security vulnerabilities and competitive pressures override safety. He argues that a government-led 'AGI Project' is inevitable and necessary, both to prevent adversaries from stealing AI systems and to avoid losing human control over the technology.
    Source: https://situational-awareness.ai/the-project/?utm_source=bluedot-impact
    A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
    Duration: 32:04


About AI Safety Fundamentals

Listen to resources from the AI Safety Fundamentals courses!
Podcast website: https://aisafetyfundamentals.com/

