AI Governance

All articles tagged with #ai-governance

Flagged but ignored: the Tumbler Ridge case exposes Canada’s AI governance gaps
technology · 1 day ago

Eight people were killed in the Tumbler Ridge shooting after OpenAI's automated review system had flagged the shooter's ChatGPT account months earlier for violent discussions; OpenAI banned the account but did not refer the case to police because it fell short of the company's referral threshold at the time. The incident highlights a broader Canadian AI governance vacuum: no binding national framework requires flagged AI interactions to be referred to authorities, no independent triage body exists, and privacy laws are ill-suited to probabilistic threat indicators. With Bill C-27 (AI Act) and Bill C-63 (Online Harms) stalled, Canada relies on voluntary codes and faces legal ambiguity about when disclosure is permitted. The piece calls for a binding, multidisciplinary framework, an independent digital safety commission, modernized privacy rules, and renewed international AI-regulation efforts to prevent future tragedies.

Treasury, Industry Unveil Practical AI Cybersecurity Toolkit for Banking
technology · 7 days ago

The U.S. Treasury, in support of the AI Action Plan, led a public-private collaboration that released six resources in February through the Artificial Intelligence Executive Oversight Group, aimed at strengthening governance, data practices, transparency, fraud prevention, and digital identity for AI in the financial system. The tools prioritize practical, non-prescriptive guidance to help financial institutions, especially small and mid-sized ones, adopt AI securely and resiliently while promoting innovation.

Delhi AI Summit Shifts from Safety to Dealmaking
technology · 8 days ago

The Delhi AI summit has grown from a narrow safety-focused discussion to a vast dealmaking marketplace, with a draft final declaration reportedly omitting the word “safety.” Major powers and tech leaders are there to attract talent and investment rather than to forge binding safeguards, reflecting a geopolitical and commercial shift that could fragment global AI governance and push discussions into alternative forums like COP- or G7-style formats.

Balancing openness and safety in AI biology data
technology · 9 days ago

More than 100 researchers back a framework that would treat certain biological data like sensitive health records, arguing that most data should remain open while a narrow subset that could enable misuse, such as data linking viral genetics to real-world traits, needs protection. They warn that training AI models on such data could lower the barrier to designing dangerous pathogens; legitimate researchers should retain access, but the data should not be uploadable anonymously or browsable on the open web. The aim is to balance scientific progress with biosecurity, with restrictions reassessed regularly as the science evolves to prevent worst-case scenarios.

AI Governance in a Global Race: Balancing Security, Jobs, and Innovation
policy · 11 days ago

Foreign Affairs argues that governing AI requires navigating a three-way tradeoff among national security, economic competitiveness, and societal safety. It rejects the idea of a rapid “singularity” and urges deliberate policy that weighs practical tradeoffs, not idealized extremes. The piece proposes two main compromises: (1) a modest AI safety “risk tax” that nudges private labs to invest in safety research, funded in part by tax credits and bolstered by public–academic collaboration; and (2) a stronger government data and oversight framework (CAISI) with the power to veto dangerous releases and to curate public data, enhancing societal safety while limiting short-term economic costs. It also argues for a targeted approach to open-weight models and envisions a possible global nonproliferation path for AI, saying policymakers should embrace tradeoffs rather than the do-nothing option.

Unsealed evidence reveals boardroom battles behind Musk v. OpenAI
ai · 1 month ago

Unsealed depositions in Elon Musk's lawsuit against OpenAI reveal a fractious shift from nonprofit roots to aggressive commercialization. Sutskever's early open-source concerns, Nadella's push to accelerate products, Altman's leadership clashes, and Microsoft's heavy investment all shaped governance and strategy, according to thousands of pages of evidence surfacing ahead of a jury trial in Northern California.

US Agencies Accelerate AI and Data Reform Amid Tech Trends for 2026
government-and-technology · 1 month ago

Artificial intelligence may not be the dominant buzzword in federal IT for 2026; instead, focus will shift to AI governance, workforce transformation, acquisition reform, and resilient innovation, with experts emphasizing the importance of managing AI effectively, modernizing security clearances, and improving government procurement and workforce strategies.

Tim Berners-Lee Reflects on Giving Away the Web and Its Future
technology · 5 months ago

Tim Berners-Lee, the inventor of the web, explains his original vision of a free, open, and accessible internet for everyone, and discusses how current platforms have diverged from this vision by exploiting user data. He advocates for empowering individuals with control over their data through standards like Solid and calls for international, non-profit governance of AI to ensure it benefits society, not just corporations.

Singapore's Ambitious Investment Plans for AI, Tax Schemes, and Clean Energy Transition
technology · 2 years ago

Singapore plans to invest over $743 million in artificial intelligence over the next five years to strengthen its position as a global business and innovation hub. The investment aims to boost the country's AI capabilities, secure access to advanced chips crucial for AI development, and set up AI centers of excellence. The initiative also focuses on promoting responsible AI use through the AI Verify framework and software toolkit, with tech executives emphasizing the importance of reassuring consumers about data safety and ethical technology use.

OpenAI's Plan to Control Superhuman AI: Demos and Alignment
artificial-intelligence · 2 years ago

OpenAI's Superalignment team is working on developing ways to control and govern "superintelligent" AI systems that surpass human intelligence. Led by OpenAI co-founder and chief scientist Ilya Sutskever, the team aims to build governance and control frameworks for future powerful AI systems. They are using a weaker AI model to guide a more advanced model in desirable directions. OpenAI is also launching a $10 million grant program to support research on superintelligent alignment and plans to host an academic conference on superalignment in 2025. The involvement of former Google CEO Eric Schmidt, who is funding a portion of the grant, raises questions about the availability and use of OpenAI's research.

C3.ai's Q2 Revenue Affected by AI Governance Reviews
business · 2 years ago

C3.ai, a data management and analysis software company, reported lower-than-expected quarterly revenue and projected a worse-than-expected operating loss for the fiscal year. The company attributed the revenue shortfall to new AI governance departments set up by customers, which slowed down the sales process. C3.ai plans to invest more in generative AI products, leading to short-term pressure on profitability. Despite challenges, the company remains optimistic about its position in the AI market and expects positive cash flow by 2025. However, some investors remain skeptical, and the company recently implemented cost-saving measures and reorganized its European sales teams.

Sam Altman Returns as OpenAI CEO, Overhauls Board in Wake of Controversy
technology · 2 years ago

Sam Altman has returned as the CEO of OpenAI after being fired, leading to a major reshuffle of the board. The only surviving member from the previous board is Adam D'Angelo, CEO of Quora. Altman will be joined by Bret Taylor, former co-CEO of Salesforce, and Larry Summers, former US Treasury Secretary and president of Harvard University. Altman's return highlights the growing influence of Microsoft, which has invested heavily in OpenAI. The reasons for Altman's initial firing remain unclear, but concerns over OpenAI's mission and AI governance have been raised.