China's Central Cyberspace Affairs Commission has proposed rules for anthropomorphic AI systems. The draft emphasizes alignment with 'core socialist values,' transparency, and user data protection, and it restricts harmful behaviors such as endangering security, spreading misinformation, and encouraging self-harm, with measures intended to protect user well-being and safety.
The Trump White House plans to eliminate AI regulations it sees as hindering innovation, aiming to put the US ahead in the global AI race by reducing red tape, changing procurement rules, and streamlining permits. The plan would also boost AI exports and infrastructure, despite concerns about ideological bias and regulatory risks.
Meta announced it will not sign the EU's voluntary AI code of practice, criticizing it as overreaching and unnecessary, amid ongoing tensions over AI regulation in Europe. The company argues the guidelines introduce legal uncertainty and go beyond the scope of the AI Act, continuing its opposition to EU AI rules that it says hinder innovation. Although signing the code would offer legal protections and reduced regulatory scrutiny, Meta prefers to avoid commitments that could invite stricter oversight and penalties.
The article discusses various European Union initiatives and issues, including plans to provide affordable communication for Ukrainians in the EU from 2026, potential delays in AI regulation implementation, EU efforts to strengthen cybersecurity defenses, and legal challenges against Hungary's anti-LGBTQ+ laws, highlighting the EU's multifaceted approach to regional stability and governance.
The 2023 AI Index report, released by the Stanford Institute for Human-Centered Artificial Intelligence, highlights the rise of multimodal foundation models, major investments in generative AI, and new performance benchmarks. It finds that industry dominates AI development, with Google releasing the most foundation models and the United States leading in private AI investment. The report also notes a shift toward open-source models, concerns about job displacement due to AI, and a growing body of regulation governing the use of AI tools and data.
The Biden administration has announced new artificial intelligence (AI) regulations for federal agencies, requiring mandatory risk reporting, transparency rules, and the appointment of a chief AI officer to oversee each agency's use of the technology. The regulations aim to ensure the safe, secure, and responsible use of AI, with a focus on preventing biased diagnoses and discrimination. The administration has been taking steps to address the potential dangers of AI, including requiring AI developers to share safety-test results with the federal government. However, a coalition of state attorneys general has expressed concern that the executive order could centralize government control over AI and be used for political purposes.
Negotiations over the European Union's AI Act, which aims to establish comprehensive regulations for artificial intelligence, are at a critical moment as negotiators grapple with the rise of generative AI. The EU's proposed regulations have been delayed by debates over how to govern general-purpose AI systems like OpenAI's ChatGPT and Google's Bard chatbot. Big tech companies worry that overregulation will stifle innovation, while European lawmakers seek added safeguards. The US, UK, China, and other global coalitions are also racing to establish AI regulations. Whether the AI Act can be finalized before the European Parliament elections next year is uncertain, potentially delaying the establishment of global AI standards.
The United States, Britain, and several other countries have unveiled the first detailed international agreement aimed at ensuring the safety of artificial intelligence (AI) systems. The agreement emphasizes the need for companies to develop and deploy AI in a way that prioritizes security and protects customers and the wider public from misuse. While non-binding, it includes recommendations such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers. The guidelines represent a significant step toward recognizing the importance of security in AI design. The rise of AI has raised concerns about potential misuse, including disruption of the democratic process and job losses. Europe is ahead of the United States on AI regulation, with EU lawmakers drafting dedicated rules while the Biden administration pushes for AI regulation at home.
SAG-AFTRA has released a summary of its strike-ending agreement with the AMPTP, which includes details about a streaming revenue-sharing plan and AI protections. The revenue sharing will be split between cast members and a new fund co-run by SAG-AFTRA and the AMPTP. However, concerns have been raised about the broad use of AI in post-production, as consent is not required for certain changes. The full agreement has yet to be made public, but SAG-AFTRA membership voting will begin tomorrow.
US policymakers are considering various approaches to regulate artificial intelligence (AI). The proposals can be categorized into four main areas: rules, institutions, money, and people. In terms of rules, there are debates between those who advocate for minimal government intervention and those who want extensive regulation on liability, copyright, privacy, and algorithmic bias. Licensing requirements and compute regulation are also being discussed. New institutions, such as a dedicated AI regulator or a National Artificial Intelligence Research Resource, are being considered. The goal is to ensure the responsible development and deployment of AI while addressing ethical concerns and potential risks.
The Federal Trade Commission's (FTC) aggressive stance against tech companies, including its recent investigation into OpenAI, is criticized here as an abuse of power and an attempt to scapegoat big tech. The FTC's claims against OpenAI regarding data scraping and publishing false information are questionable, the argument goes, because ChatGPT primarily uses publicly available information and clearly states its limitations. The FTC's actions undermine its credibility and raise concerns that they are motivated by a partisan agenda. Its aggressive approach risks stifling innovation, harming small businesses, and hampering America's position as an AI leader while China seeks to dominate the AI space with its own agenda. AI is projected to double America's productivity growth rate and boost global GDP by $7 trillion over the next decade.
Elon Musk said that during his recent trip to China, he had productive discussions with senior leadership on the risks of artificial intelligence and the need for oversight and regulation. Musk stated that China will initiate AI regulations, but did not provide further details. The Chinese government has previously unveiled draft measures for managing generative AI services, requiring firms to submit security assessments to authorities before launching their offerings to the public.
OpenAI CEO Sam Altman said the company might consider leaving Europe if it cannot comply with the European Union's upcoming AI regulations. The EU is working on the first set of rules globally to govern AI; companies deploying generative AI tools such as ChatGPT would have to disclose any copyrighted material used to develop their systems. Altman said OpenAI will first try to comply with the regulation once it is finalized before considering a pullout.
OpenAI's ChatGPT is available again in Italy after the company met the demands of regulators who temporarily blocked it over privacy concerns. The Italian data protection authority had ordered OpenAI to temporarily stop processing Italian users’ personal information while it investigated a possible data breach. OpenAI said it fulfilled a raft of conditions that the Italian data protection authority wanted satisfied by an April 30 deadline to have the ban on the AI software lifted.