Tag

Responsible Ai

All articles tagged with #responsible ai

technology29 days ago

California Governor Launches Initiative to Promote Responsible AI in State Government

Governor Newsom announced new initiatives in California to foster responsible AI development, including partnerships with top tech policy experts, the launch of the California Innovation Council, and the deployment of Poppy, an AI digital assistant for state employees, aiming to modernize government services and ensure ethical AI use.

technology6 months ago

Margaret Mitchell Criticizes AI as 'Vibes and Snake Oil'

Margaret Mitchell criticizes the pursuit of artificial general intelligence (AGI), calling it a vague and potentially harmful narrative driven by industry hype rather than scientific reality. She emphasizes the importance of centering AI development around human needs and ethical considerations, warning against the risks of privacy loss, societal harm, and the widening economic gap caused by AI advancements. Mitchell advocates for responsible AI focused on augmenting human capabilities rather than replacing them, and urges a more rigorous, people-centered approach to AI innovation.

technology1 year ago

"Microsoft Introduces Safety Measures to Prevent AI Deception in Azure Tools"

Microsoft has introduced new safety features for Azure AI, including Prompt Shields, Groundedness Detection, and safety evaluations, to detect vulnerabilities, block malicious prompts, and prevent generative AI controversies. These tools aim to provide easy-to-use safety measures for customers without deep expertise in AI security. The system evaluates prompts and model responses for banned words, hidden prompts, and hallucinations, while also allowing for customized control over filtering hate speech and violence. The safety features are immediately available for popular models like GPT-4 and Llama 2, with plans to expand to other AI models on Azure.

technology1 year ago

"Microsoft AI Tool Accused of Generating Harmful Content for Kids, Engineer Warns"

A Microsoft engineer, Shane Jones, has accused the company of ignoring warnings about its AI text-to-image generator, Copilot Designer, creating violent and sexual imagery. Despite Jones' efforts to alert Microsoft and OpenAI, the issues have not been addressed. Jones has sent letters to the Federal Trade Commission and Microsoft's board of directors, urging intervention and an independent review of Microsoft's responsible AI incident reporting processes. Microsoft has not confirmed whether it is taking steps to filter images, but has stated its commitment to addressing concerns and enhancing safety. OpenAI did not respond to requests for comment.

technology1 year ago

"Microsoft's AI Tool Sparks Controversy with Violent, Sexual Images and Demands for Worship"

A Microsoft engineer, Shane Jones, has raised concerns about the company's AI image generator, Copilot Designer, for creating violent and sexualized images, as well as potentially violating copyrights. Despite reporting his findings to Microsoft and OpenAI, the product remains on the market with an "E for Everyone" rating on Google's Android app. Jones has escalated his concerns by sending letters to the Federal Trade Commission and Microsoft's board of directors, urging for better safeguards and responsible AI incident reporting processes. The AI tool's capability to produce harmful and disturbing images globally without proper guardrails has sparked a public debate about generative AI and the need for stricter regulations.

technology1 year ago

"Google's Controversial AI Missteps: From Gemini Image Generation to Diversity Lectures"

Google disabled the image generation of people in Gemini due to criticism about historical accuracy and has now issued a detailed explanation. The company identified issues with the model's tuning and over-cautiousness, leading to overcompensation and over-conservatism in image generation. Google plans to improve the feature significantly before reactivating it and recommends relying on Google Search for accurate information. The company acknowledges the potential for occasional embarrassing, inaccurate, or offensive results but promises to take action and roll out AI technology safely and responsibly.

technology1 year ago

"Google Unveils Gemma: A New Open AI Model for Developers"

Google has launched Gemma, a new family of lightweight open-weight models, including Gemma 2B and Gemma 7B, inspired by its Gemini models. These dense decoder-only models are available for commercial and research usage, with access to ready-to-use Colab and Kaggle notebooks, as well as integrations with Hugging Face, MaxText, and Nvidia’s NeMo. While not open-source, developers can use the models for inferencing and fine-tune them at will. Google also released a responsible generative AI toolkit and a debugging tool to provide guidance and essential tools for creating safer AI applications with Gemma.

technology1 year ago

"Google Unveils Gemma: A Lightweight Open-Source AI Model"

Google has introduced Gemma, a lightweight open AI model available in two sizes, Gemma 2B and Gemma 7B, which are designed to run directly on a developer's laptop or desktop. The models, created using the same technology as Google's Gemini AI models, come with pre-trained and instruction-tuned variants and are aimed at helping developers build AI responsibly. Google also released the Responsible Generative AI Toolkit alongside Gemma, containing a debugging tool and best practices guide for AI development. The company plans to introduce more Gemma variants in the future and has made the models accessible through platforms like Kaggle, Colab notebooks, and Google Cloud.

technology1 year ago

"Google's Gemma: A New Open-Source AI Model for Developers"

Google DeepMind has introduced Gemma, its new 2B and 7B open source models, along with a Responsible Generative AI toolkit and toolchains for inference and supervised fine-tuning across major frameworks. The models are integrated with various platforms and can run on different devices. Google also offers APIs and open models for workflow, aiming to provide a wide set of capabilities for the community. Gemma's safety and responsible design are emphasized, with extensive evaluations and filtering of sensitive data. Google DeepMind will release benchmarks evaluating Gemma against other models, emphasizing transparency and community involvement in the project.

artificial-intelligence1 year ago

"Unveiling Goody-2: The Over-Ethical AI Chatbot"

Goody-2, a new AI chatbot, takes AI safety to the extreme by refusing every request, citing potential harm or ethical breaches. Created by artists Mike Lacher and Brian Moore, the chatbot's self-righteous responses highlight the challenges of defining responsible AI. While serving as a parody, it also underscores the ongoing safety issues with large language models and generative AI systems. Despite its absurdity, Goody-2 prompts serious discussions about the complexities of AI responsibility and the difficulty in finding moral alignment that pleases everyone.

technology1 year ago

"Major Tech Players Unite in US Consortium for Responsible AI Advancement"

A US-based effort called the AI Safety Institute Consortium (AISIC) has been joined by 200 big tech companies, including Meta, Google, Microsoft, and Apple, to advance responsible AI practices in response to President Biden’s executive order on artificial intelligence. The consortium will focus on developing guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content to mitigate risks and harness the potential of AI. This initiative represents the largest collection of testing and evaluation teams in the world, as Congress continues to fail in passing meaningful AI legislation.