By Samuel Hammond
Source: https://www.secondbest.ca/p/ai-and-leviathan-part-i
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
14:43
--------
d/acc: One Year Later
By Vitalik Buterin
Ethereum founder Vitalik Buterin describes how democratic, defensive and decentralised technologies could distribute AI's power across society rather than concentrating it, offering a middle path between unchecked technical acceleration and authoritarian control.
Source: https://vitalik.eth.limo/general/2025/01/05/dacc2.html
43:12
--------
AI Emergency Preparedness: Examining the Federal Government's Ability to Detect and Respond to AI-Related National Security Threats
By Akash Wasil et al.
This paper uses scenario planning to show how governments could prepare for AI emergencies. The authors examine three plausible disasters: loss of control over AI, AI model theft, and bioweapon creation. They then expose gaps in current preparedness systems and propose specific government reforms, including embedding auditors inside AI companies and creating emergency response units.
Source: https://arxiv.org/pdf/2407.17347
9:44
--------
A Playbook for Securing AI Model Weights
By Sella Nevo et al.
In this report, RAND researchers identify real-world attack methods that malicious actors could use to steal AI model weights. They propose a five-level security framework that AI companies could implement to defend against different threats, from amateur hackers to nation-state operations.
Source: https://www.rand.org/pubs/research_briefs/RBA2849-1.html
19:56
--------
Resilience and Adaptation to Advanced AI
By Jamie Bernardi
Jamie Bernardi argues that we can't rely solely on model safeguards to ensure AI safety. Instead, he proposes "AI resilience": building society's capacity to detect misuse, defend against harmful AI applications, and reduce the damage caused when dangerous AI capabilities spread beyond a government or company's control.
Source: https://airesilience.substack.com/p/resilience-and-adaptation-to-advanced?utm_source=bluedot-impact