Tag

AI Governance

All articles tagged with #AI governance

US Agencies Accelerate AI and Data Reform Amid Tech Trends for 2026

Originally Published 9 days ago — by Federal News Network


Artificial intelligence may not be the dominant buzzword in federal IT for 2026. Instead, experts expect the focus to shift to AI governance, workforce transformation, acquisition reform, and resilient innovation, emphasizing the importance of managing AI effectively, modernizing security clearances, and improving government procurement and workforce strategies.

Tim Berners-Lee Reflects on Giving Away the Web and Its Future

Originally Published 3 months ago — by The Guardian


Tim Berners-Lee, the inventor of the web, explains his original vision of a free, open, and accessible internet for everyone, and discusses how current platforms have diverged from this vision by exploiting user data. He advocates for empowering individuals with control over their data through standards like Solid and calls for international, non-profit governance of AI to ensure it benefits society, not just corporations.

UN Addresses AI Challenges and Governance at Global Summit

Originally Published 3 months ago — by AP News


AI has been added to the list of major global challenges at the UN, with efforts underway to establish international governance through new bodies and forums. The aim is to regulate AI's development and mitigate risks such as misinformation and existential threats, though concerns remain about the effectiveness of these measures.

Singapore's Ambitious Investment Plans for AI, Tax Schemes, and Clean Energy Transition

Originally Published 1 year ago — by CNBC


Singapore plans to invest over $743 million in artificial intelligence over the next five years to strengthen its position as a global business and innovation hub. The investment aims to boost the country's AI capabilities, secure access to advanced chips crucial for AI development, and set up AI centers of excellence. The initiative also focuses on promoting responsible AI use through the AI Verify framework and software toolkit, with tech executives emphasizing the importance of reassuring consumers about data safety and ethical technology use.

OpenAI's Plan to Control Superhuman AI: Demos and Alignment

Originally Published 2 years ago — by TechCrunch


OpenAI's Superalignment team is working on developing ways to control and govern "superintelligent" AI systems that surpass human intelligence. Led by OpenAI co-founder and chief scientist Ilya Sutskever, the team aims to build governance and control frameworks for future powerful AI systems. They are using a weaker AI model to guide a more advanced model in desirable directions. OpenAI is also launching a $10 million grant program to support research on superintelligent alignment and plans to host an academic conference on superalignment in 2025. The involvement of former Google CEO Eric Schmidt, who is funding a portion of the grant, raises questions about the availability and use of OpenAI's research.

C3.ai's Q2 Revenue Affected by AI Governance Reviews

Originally Published 2 years ago — by Yahoo Finance


C3.ai, a data management and analysis software company, reported lower-than-expected quarterly revenue and projected a worse-than-expected operating loss for the fiscal year. The company attributed the revenue shortfall to new AI governance departments set up by customers, which slowed down the sales process. C3.ai plans to invest more in generative AI products, leading to short-term pressure on profitability. Despite challenges, the company remains optimistic about its position in the AI market and expects positive cash flow by 2025. However, some investors remain skeptical, and the company recently implemented cost-saving measures and reorganized its European sales teams.

Sam Altman Returns as OpenAI CEO, Overhauls Board in Wake of Controversy

Originally Published 2 years ago — by NDTV


Sam Altman has returned as the CEO of OpenAI after being fired, leading to a major reshuffle of the board. The only surviving member from the previous board is Adam D'Angelo, CEO of Quora. Altman will be joined by Bret Taylor, former co-CEO of Salesforce, and Larry Summers, former US Treasury Secretary and president of Harvard University. Altman's return highlights the growing influence of Microsoft, which has invested heavily in OpenAI. The reasons for Altman's initial firing remain unclear, but concerns over OpenAI's mission and AI governance have been raised.

Elon Musk and Rishi Sunak Collaborate on UK AI Summit with China

Originally Published 2 years ago — by POLITICO Europe


Elon Musk praises British Prime Minister Rishi Sunak's decision to invite China to the UK AI summit, describing it as "essential." Musk believes that AI has the potential to create a future of abundance and predicts a world where no job is needed, provoking nervous laughter from Sunak. Musk emphasizes the need for government regulation in AI governance to safeguard the interests of the public, comparing it to the role of a referee in a sports game. The interview took place at the world's first AI Safety Summit, where an international agreement was reached to monitor large language models.

Biden's AI Executive Order Addresses Divisions, Guidelines, and National Security Risks

Originally Published 2 years ago — by POLITICO


President Joe Biden signed an executive order on artificial intelligence (AI) that addresses various concerns related to the technology, including cybersecurity, global competition, discrimination, and technical oversight. The order has garnered support from both the tech industry and its critics, reflecting an attempt by the White House to appease different factions within AI governance. These factions include progressives concerned about job security and civil rights, "longtermists" worried about existential risks, and AI hawks focused on national security. While the order's breadth aims to please multiple groups, there are concerns about its implementation and the ability of agencies to handle the workload. The order includes provisions to address real-world problems caused by AI, require reporting on advanced AI systems, and enhance national security and global competitiveness. Despite the positive reception, all factions agree that congressional action is still necessary.

Google Cautions Employees Against Using Bard's Generated Code and AI Chatbots

Originally Published 2 years ago — by The Register


Google has warned its employees not to use code generated by its AI chatbot, Bard, due to privacy and security risks. In other news: Nuance, a voice recognition software developer acquired by Microsoft, has been accused in an amended lawsuit filed last week of recording and using people's voices without permission; Google's DeepMind AI lab does not want the US government to set up an agency singularly focused on regulating AI; and OpenAI reportedly cautioned Microsoft against releasing its GPT-4-powered Bing chatbot too quickly, warning it could generate false information and inappropriate language.

Microsoft's Activision Deal Faces UK Hurdles and Call of Duty Concerns

Originally Published 2 years ago — by Reuters


Microsoft's president, Brad Smith, met with UK finance minister Jeremy Hunt to discuss the company's $69 billion purchase of Activision Blizzard, which was blocked by British competition authorities in April. Smith said he would work with regulators to seek UK approval for the deal and address any concerns they may have. Smith also spoke about the need for AI governance and regulation, saying it should not be the sole responsibility of tech companies.

OpenAI Predicts AI Will Surpass Human Expertise in Most Domains Within a Decade, Calls for Regulation and Collective Wisdom

Originally Published 2 years ago — by Decrypt


OpenAI's CEO, Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever have co-authored a blog post warning that AI development needs heavy regulation to prevent potentially catastrophic scenarios. They believe that within the next ten years, AI systems will exceed expert skill levels in most domains and carry out as much productive activity as one of today's largest corporations. OpenAI has outlined three pillars crucial for strategic future planning: a balance between control and innovation, an international authority tasked with system inspections, and technical capability to maintain control over superintelligence and keep it safe.

Global AI Regulation and Collaboration: Insights and Warnings

Originally Published 2 years ago — by TechCrunch


OpenAI's leadership proposes an international regulatory body for AI, similar to the one governing nuclear power, to ensure safety and help integrate AI systems with society. The proposed body would inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, and track compute power and energy usage dedicated to AI research. While the proposal is a conversation starter, there are no plans to slow down AI development just yet.