The European Commission has published the first draft of a voluntary Code of Practice to guide the marking and labeling of AI-generated content, including rules for detecting AI content and labeling deepfakes, with finalization expected by June 2026 and rules becoming effective in August 2026.
Meta has decided not to sign the EU's voluntary AI code of practice, citing concerns that the guidelines could hinder the development and deployment of advanced AI models in Europe, in contrast with other companies, such as OpenAI, that plan to sign. The EU's AI Act aims to regulate general-purpose AI models, but Meta argues the code may introduce legal uncertainties and stifle innovation, especially with the rules set to take effect soon.
Meta has announced it will not sign the EU's voluntary AI code of practice, citing concerns over overreach and legal uncertainties, amid ongoing tensions between US tech companies and European regulators over AI regulation.
Meta has declined to sign the EU's voluntary AI code of practice, criticizing it as overreaching and harmful to AI development in Europe, just weeks before the EU's AI rules take effect. The EU's AI Act aims to regulate high-risk AI applications, but major tech companies are opposing the legislation, under which providers of general-purpose AI models already on the market must be in full compliance by August 2027.
The European Commission has announced it will proceed with the scheduled implementation of the EU's AI Act, despite calls from some tech companies for a delay, emphasizing its commitment to the original deadlines and pledging to address industry concerns through support measures.
The European Union will proceed with the scheduled rollout of its AI legislation, despite efforts by major tech companies to delay it, emphasizing that there will be no pause or grace period for implementation, with the full rules expected to apply by mid-2026.
European lawmakers have passed the AI Act, the world's most comprehensive legislation on artificial intelligence, which sets out rules for developers and restrictions on AI use. The law bans certain AI uses, introduces transparency rules, and requires risk assessments for high-risk AI systems. It applies to AI products in the EU market, backed by fines of up to 7% of a company's worldwide revenue, and is expected to have a global impact. The legislation still needs final approval from EU member states, though that step is expected to be a formality. Other jurisdictions may use the law as a model for their own AI regulations.
European Union lawmakers have given final approval to the world-leading Artificial Intelligence Act, which is set to take effect later this year. The AI Act takes a risk-based approach, with low-risk AI systems facing voluntary requirements and high-risk uses, such as in medical devices or critical infrastructure, facing tougher regulations. The law also addresses generative AI models, requiring developers to provide detailed summaries of training data and follow EU copyright law. The rules are expected to influence global AI governance, with provisions set to take effect in stages, and violations could result in fines of up to 35 million euros or 7% of a company's global revenue, whichever is higher.
The ambassadors of the 27 EU countries have unanimously approved the world's first comprehensive rulebook for artificial intelligence, following a political agreement reached in December. The AI Act aims to regulate AI based on its potential to cause harm, with a tiered approach for different AI models. Despite initial opposition from countries like France, Germany, and Italy, a compromise was reached, and the AI Act is set to be adopted by the European Parliament and to enter into force following the remaining procedural steps and implementation timelines.
The EU's AI Act is set to impose regulations on AI systems, with different compliance timelines for different categories of AI. Providers of general-purpose AI may have only six to 12 months to prepare, while providers of high-risk systems will have longer. The act prohibits practices such as biometric categorization systems and the untargeted scraping of facial images. Critics argue that the act may burden AI research and development, but regulatory sandboxes are proposed to allow development outside the strict provisions of the legislation.
This week in AI news, the European Union passed the AI Act, a significant piece of regulation aimed at addressing the potential harms of the AI industry. French AI startup Mistral released a new large language model, while Grimes, Elon Musk's partner, is reportedly working on a plush AI toy named "Grok." In the UK, the judiciary issued guidance cautiously permitting judges to use AI tools in producing rulings, and researchers suggest that ChatGPT may suffer from "seasonal depression." Additionally, Microsoft announced an alliance with the AFL-CIO to ensure AI serves the interests of workers.
The passage of the EU AI Act was a challenging process due to contentious issues surrounding facial recognition and powerful "foundation" models. Policymakers had to adapt the legislation to address rapidly evolving AI technology, including general-purpose systems like OpenAI's ChatGPT. France, Germany, and Italy sought compromises to protect innovation and startups developing foundation models, while other EU lawmakers pushed for tighter regulations. Delays and disagreements also arose over the ban on facial recognition systems and the use of AI-assisted surveillance. The final agreement subjects general-purpose AI systems (GPAIs) to a two-tier system, allowing some flexibility for companies, and includes exceptions for limited use of facial recognition under certain conditions. The full text of the AI Act is not yet available, and the legislation is expected to become law by mid-2024, with provisions gradually coming into force over the following two years.
French President Emmanuel Macron has warned that the European Union's new AI Act could hinder innovation, expressing concern that the proposed regulations may stifle technological advancement and undermine Europe's ability to compete globally in artificial intelligence.
The European Union has reached an agreement on landmark legislation, known as the AI Act, to regulate artificial intelligence. The law aims to establish comprehensive standards for the use of AI while safeguarding the rights of individuals and businesses. It requires tech companies to disclose data used to train AI systems and conduct testing, particularly for high-risk applications like self-driving vehicles and healthcare. The legislation prohibits indiscriminate scraping of images for facial recognition databases but allows exemptions for law enforcement purposes. Violators may face fines of up to seven percent of global revenue. The EU law is considered the most comprehensive effort to regulate AI, contrasting with the hands-off approaches of other countries like the UK and Japan.
The European Union has reached an agreement on the AI Act, one of the world's first comprehensive laws on artificial intelligence. The legislation aims to promote AI development while addressing potential risks and banning harmful AI practices that threaten people's safety and rights. The law classifies AI uses by risk level and imposes increased regulation on higher-risk systems. Limited-risk systems, such as chatbots and content-generating technology, are subject to new transparency obligations. The EU hopes that the AI Act will set a global standard and foster innovation in the region.