
AI Safety Newsletter

Center for AI Safety

Available episodes

5 of 64
  • AISN #58: Senate Removes State AI Regulation Moratorium
    Plus: Judges Split on Whether Training AI on Copyrighted Material is Fair Use. In this edition: The Senate removes a provision from the Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI; two federal judges split on whether training AI on copyrighted books is fair use. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    Senate Removes State AI Regulation Moratorium. The Senate removed a provision from the Republicans' “Big Beautiful Bill” aimed at restricting states from regulating AI. The moratorium would have prohibited states from receiving federal broadband expansion funds if they regulated AI; however, it faced procedural and political challenges in the Senate and was ultimately removed in a vote of 99-1. Here's what happened. A watered-down moratorium cleared the Byrd Rule. In an attempt to bypass the Byrd Rule, which prohibits policy provisions in budget bills, the Senate Commerce Committee revised the [...]
    Outline: (00:35) Senate Removes State AI Regulation Moratorium; (03:04) Judges Split on Whether Training AI on Copyrighted Material is Fair Use; (07:19) In Other News.
    First published: July 3rd, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-58-senate-removes
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Duration: 9:04
  • AISN #57: The RAISE Act
    In this edition: The New York Legislature passes an act regulating frontier AI, but it may not be signed into law for some time. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    The RAISE Act. New York may soon become the first state to regulate frontier AI systems. On June 12, the state's legislature passed the Responsible AI Safety and Education (RAISE) Act. If New York Governor Kathy Hochul signs it into law, the RAISE Act will be the most significant state AI legislation in the U.S. New York's RAISE Act imposes four guardrails on frontier labs: developers must publish a safety plan, hold back unreasonably risky models, disclose major incidents, and face penalties for non-compliance. Publish and maintain a safety plan. Before deployment, developers must post a redacted “safety and security protocol,” transmit the plan to both the attorney general and the [...]
    Outline: (00:21) The RAISE Act; (04:43) In Other News.
    First published: June 17th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-57-the-raise
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Duration: 7:12
  • AISN #56: Google Releases Veo 3
    Plus, Opus 4 Demonstrates the Fragility of Voluntary Governance. In this edition: Google released a frontier video generation model at its annual developer conference; Anthropic's Claude Opus 4 demonstrates the danger of relying on voluntary governance. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    Google Releases Veo 3. Last week, Google made several AI announcements at I/O 2025, its annual developer conference. An announcement of particular note is Veo 3, Google's newest video generation model. Frontier video and audio generation. Veo 3 outperforms other models on human preference benchmarks, and generates both audio and video. [Image: Google showcasing a video generated with Veo 3.] If you just look at benchmarks, Veo 3 is a substantial improvement over other systems. But relative benchmark improvement only tells part of the story; the absolute capabilities of systems ultimately determine their usefulness. Veo 3 looks like a marked qualitative [...]
    Outline: (00:33) Google Releases Veo 3; (03:25) Opus 4 Demonstrates the Fragility of Voluntary Governance.
    First published: May 28th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-56-google-releases
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Duration: 8:37
  • AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States
    Plus, Bills on Whistleblower Protections, Chip Location Verification, and State Preemption. In this edition: The Trump Administration rescinds the Biden-era AI diffusion rule and allows AI chip sales to the UAE and Saudi Arabia; federal lawmakers propose legislation on AI whistleblower protections, location verification for AI chips, and prohibiting states from regulating AI. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    The Center for AI Safety is also excited to announce the Summer session of our AI Safety, Ethics, and Society course, running from June 23 to September 14. The course, based on our recently published textbook, is open to participants from all disciplines and countries, and is designed to accommodate full-time work or study. Applications for the Summer 2025 course are now open. The final application deadline is May 30th. Visit the course website to learn more and apply.
    Trump Administration Rescinds AI Diffusion [...]
    Outline: (01:12) Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States; (04:14) Bills on Whistleblower Protections, Chip Location Verification, and State Preemption; (06:56) In Other News.
    First published: May 20th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-55-trump-administration
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Duration: 9:18
  • AISN #54: OpenAI Updates Restructure Plan
    Plus, AI Safety Collaboration in Singapore. In this edition: OpenAI claims an updated restructure plan would preserve nonprofit control; a global coalition meets in Singapore to propose a research agenda for AI safety. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
    OpenAI Updates Restructure Plan. On May 5th, OpenAI announced a new restructure plan. The announcement walks back a December 2024 proposal that would have had OpenAI's nonprofit, which oversees the company's for-profit operations, sell its controlling shares to the for-profit side of the company. That plan drew sharp criticism from former employees and civil-society groups and prompted a lawsuit from co-founder Elon Musk, who argued OpenAI was abandoning its charitable mission. OpenAI claims the new plan preserves nonprofit control, but is light on specifics. Like the original plan, OpenAI's new plan would have OpenAI Global LLC become a public-benefit corporation (PBC). However, instead of the nonprofit selling its [...]
    Outline: (00:31) OpenAI Updates Restructure Plan; (03:19) AI Safety Collaboration in Singapore; (05:42) In Other News.
    First published: May 13th, 2025. Source: https://newsletter.safe.ai/p/ai-safety-newsletter-54-openai-updates
    Want more? Check out our ML Safety Newsletter for technical safety research. Narrated by TYPE III AUDIO.
    Duration: 8:40


About AI Safety Newsletter

Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

ABOUT US: The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai