Tag

AI Transparency

All articles tagged with #ai transparency

California's 2026 Laws: Banning Plastic Bags, Changing Streaming, and Regulating Chatbots

Originally Published 25 days ago — by ABC7 San Francisco


Starting in 2026, California will implement numerous new laws across sectors, including environmental regulations such as a plastic bag ban, consumer protections for food delivery, limits on streaming ad volume, and transparency requirements for artificial intelligence, among others.

California's 2026 Laws: Banning Plastic Bags, Changing Streaming, and Regulating Chatbots

Originally Published 25 days ago — by ABC30 Fresno


Starting in 2026, California will implement numerous new laws across sectors, including environmental policies such as a plastic bag ban, consumer protections for food delivery, limits on streaming ad volume, and new rules for AI transparency, healthcare, pets, and housing, among others.

California's 2026 Legislation: Environmental, Workplace, and Digital Policy Changes

Originally Published 25 days ago — by ABC7 Los Angeles


Starting in 2026, California will implement numerous new laws across sectors, including environmental policies such as a plastic bag ban, consumer protections for food delivery, limits on streaming ad volume, and transparency requirements for artificial intelligence, affecting residents, businesses, and technology use.

Leading AI Experts Warn of Growing Transparency Concerns in Advanced Models

Originally Published 5 months ago — by Fortune


Researchers from leading AI labs warn that as AI reasoning models become more advanced, humans may lose the ability to understand how these models make decisions, raising safety concerns. They emphasize the importance of monitoring models' 'chain-of-thought' process for transparency and safety, urging further research to preserve this visibility amid rapid AI development.

Stanford's Call for AI Transparency: Urging Tech Companies to Reveal More

Originally Published 2 years ago — by The New York Times


Stanford researchers have developed a scoring system called the Foundation Model Transparency Index to rate the transparency of 10 major AI language models, including OpenAI's GPT-4, Google's PaLM 2, and Meta's LLaMA 2. The rankings evaluate criteria such as disclosure of training data sources, hardware information, labor involved, and downstream indicators. The most transparent model was LLaMA 2, scoring 54%, followed by GPT-4 and PaLM 2 at 40%. The researchers argue that increased transparency is crucial as AI models become more powerful and widespread, enabling regulators, researchers, and users to better understand their capabilities and potential risks.

The Urgent Need to Address Biased and Deceptive AI Development

Originally Published 2 years ago — by The Register


The Center for AI and Digital Policy has filed a complaint with the Federal Trade Commission (FTC) urging an investigation into OpenAI's GPT-4, claiming it deceives and endangers consumers in violation of commerce laws. The non-profit research organization has called for independent oversight and evaluation of commercial AI products in the US, and for the FTC to halt all further commercial deployment of OpenAI's GPT-based products until they comply with the FTC's rules. OpenAI has acknowledged that GPT-4 can perpetuate biases, generate harmful text, and spread false information that misleads users.