Legislative Process
EU maintains AI Act timeline despite pressure: According to Foo Yun Chee from Reuters, the European Commission has rejected calls from some companies and countries to delay implementation of the AI Act, maintaining the legislation's original schedule. Commission spokesperson Thomas Regnier explicitly stated there would be "no stop the clock", "no grace period", and "no pause" in response to recent requests from companies including Google's Alphabet, Meta, and European firms such as Mistral and ASML seeking years-long delays. The regulatory timeline remains unchanged: the first provisions began applying in February, general-purpose AI model obligations commence in August 2025, and high-risk AI requirements take effect in August 2026. The Commission acknowledged industry concerns by planning to propose simplification measures for digital rules later this year, particularly reducing reporting obligations for smaller companies. [...]
---
Outline:
(00:35) Legislative Process
(03:41) Analyses
---
First published:
July 8th, 2025
Source:
https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-81-pause
---
Narrated by TYPE III AUDIO.
---
The EU AI Act Newsletter #80: Commission Seeks Experts for AI Scientific Panel
Legislative Process
Commission seeks experts for AI Scientific Panel: The European Commission is establishing a scientific panel of independent experts to aid in implementing and enforcing the AI Act. The panel's mandate centres on general-purpose AI (GPAI) models and systems. It will advise the EU AI Office and national authorities on systemic risks, model classification, evaluation methods, and cross-border market surveillance, as well as alert the AI Office to emerging risks. The Commission seeks 60 members for a two-year renewable term. Candidates must have expertise in GPAI, AI impacts, or related fields, including model evaluation, risk assessment and mitigation, cybersecurity, systemic risks, and compute measurements. A PhD or equivalent experience is required, and experts must maintain independence from AI providers. The selection will ensure gender balance and representation [...]
---
Outline:
(00:37) Legislative Process
(03:12) Analyses
---
First published:
June 26th, 2025
Source:
https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-80-commission
---
Narrated by TYPE III AUDIO.
---
The EU AI Act Newsletter #79: Consultation on High-Risk AI
Legislative Process
Commission launches public consultation on high-risk AI systems: The European Commission has initiated a public consultation on the implementation of the AI Act's rules for high-risk AI systems. The consultation aims to gather practical examples and clarify issues surrounding high-risk AI systems. This information will inform forthcoming Commission guidelines on the classification of high-risk AI systems and their associated requirements. It will also examine responsibilities throughout the AI value chain. The Act defines high-risk AI systems in two categories: those relevant to product safety under EU harmonised product safety legislation, and those that could significantly impact people's health, safety, or fundamental rights in specific scenarios outlined in the Act. The Commission welcomes input from a broad range of stakeholders, including providers and developers [...]
---
Outline:
(00:37) Legislative Process
(03:13) Analyses
---
First published:
June 11th, 2025
Source:
https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-79-consultation
---
Narrated by TYPE III AUDIO.
---
The EU AI Act Newsletter #78: Cutting Red Tape
Risto Uuk and Sten Tamkivi argue that Europe's path to AI competitiveness lies in cutting actual bureaucratic red tape, not in removing AI safeguards.
Legislative Process
Stakeholder feedback on AI definitions and prohibited practices: The European Commission published a report prepared by the Centre for European Policy Studies (CEPS) for the EU AI Office, analysing stakeholder feedback from two public consultations on AI Act regulatory obligations. These consultations examined the definition of AI systems and prohibited AI practices, both of which have been applicable since 2 February 2025. The report analyses responses to 88 consultation questions across nine sections. Industry stakeholders dominated participation with 47.2% of nearly 400 replies, whilst citizen engagement remained limited at 5.74%. Respondents requested clearer definitions of technical terms such as "adaptiveness" and "autonomy", warning against inadvertently regulating conventional software. The report also highlights significant concerns regarding prohibited practices, including emotion recognition, social scoring, and real-time biometric identification. [...]
---
Outline:
(00:37) Legislative Process
(03:57) Analyses
(08:24) Feedback
---
First published:
May 27th, 2025
Source:
https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-78-cutting
---
Narrated by TYPE III AUDIO.
---
The EU AI Act Newsletter #77: AI Office Tender
Legislative Process
AI Office AI safety tender: The AI Office will soon be looking for third-party contractors to provide technical assistance supporting its monitoring of compliance, in particular its assessment of risks posed by general-purpose AI models at Union level, as authorised by Articles 89, 92 and 93 of the AI Act. The €9,080,000 tender is divided into six lots. Five lots address specific systemic risks: 1) CBRN, 2) cyber offence, 3) loss of control, 4) harmful manipulation, and 5) sociotechnical risks. These lots involve risk modelling workshops, the development of evaluation tools, the creation of a reference procedure and reporting template for risk assessment, and ongoing risk monitoring services. The sixth lot focuses on an agentic evaluation interface, providing software and infrastructure [...]
---
Outline:
(00:36) Legislative Process
(01:45) Analyses
---
First published:
May 13th, 2025
Source:
https://artificialintelligenceact.substack.com/p/the-eu-ai-act-newsletter-77-ai-office
---
Narrated by TYPE III AUDIO.
---
Up-to-date developments and analyses of the EU AI Act.
Narrations of the “EU AI Act Newsletter”, a biweekly newsletter by Risto Uuk and The Future of Life Institute.
ABOUT US
The Future of Life Institute (FLI) is an independent non-profit working to reduce large-scale, extreme risks from transformative technologies. We also aim for the future development and use of these technologies to be beneficial to all. Our work includes grantmaking, educational outreach, and policy engagement. Our EU transparency register number is 787064543128-10.
In Europe, FLI has two key priorities: i) promote the beneficial development of artificial intelligence and ii) regulate lethal autonomous weapons. FLI works closely with leading AI developers to prepare its policy positions, funds research through recurring grant programs and regularly organises global AI conferences. FLI created one of the earliest sets of AI governance principles – the Asilomar AI principles. The Institute, alongside the governments of France and Finland, is also the civil society champion of the recommendations on AI in the UN Secretary General’s Digital Cooperation Roadmap.