Ethical AI

All articles tagged with #ethical ai

OpenAI Invests in AI Ethics Research

Originally Published 1 year ago — by TechCrunch

OpenAI is funding research at Duke University to develop algorithms that can predict human moral judgments, part of a $1 million grant running through 2025. The project, led by ethics professor Walter Sinnott-Armstrong, aims to create a "moral GPS" for AI, but it faces challenges due to the subjective nature of morality and biases in AI training data. Previous attempts, such as the Allen Institute's Ask Delphi, struggled to make consistent ethical decisions, highlighting the complexity of encoding morality into AI systems.
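
For flavor, here is a toy sketch of how "predicting human moral judgments" is typically framed as a machine learning problem: scenarios labeled by human annotators, and a classifier trained to predict the majority judgment. This illustrates the general framing only, not the Duke project's actual method; all scenarios and labels below are hypothetical.

```python
# Toy illustration: moral judgment prediction as text classification.
# Scenarios and labels are hypothetical; real research uses large
# annotated corpora and far more capable models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

scenarios = [
    "returning a lost wallet to its owner",
    "lying to a friend for personal gain",
    "donating blood to help a stranger",
    "taking credit for a coworker's idea",
]
judgments = ["acceptable", "wrong", "acceptable", "wrong"]

# TF-IDF features plus logistic regression: the simplest possible
# stand-in for a "moral GPS".
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(scenarios, judgments)

print(model.predict(["keeping a wallet you found on the street"]))
```

The inconsistency Ask Delphi exhibited follows naturally from this framing: such a model only interpolates over annotator labels, so rephrasing a scenario can flip the prediction.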

"Google Unveils Gemini 1.5 Pro: The Next-Gen AI Model"

Originally Published 1 year ago — by Engadget

Google has unveiled Gemini 1.5 Pro, a more efficient AI model that delivers results comparable to its predecessor, Gemini 1.0 Ultra, while requiring less computation. The model emphasizes safety and ethical AI, handles context windows of up to one million tokens, and can learn new skills from long prompts without additional fine-tuning. Google plans to launch it in early access for developers and enterprise customers, with wider availability to follow.

"GTA 5 Actor Condemns AI Chatbot Using His Voice for Racist Rants"

Originally Published 2 years ago — by Slashdot

Actor Ned Luke, known for his role in GTA 5, called out an AI company for creating an unlicensed voice chatbot based on his character, leading to its removal from the internet. Luke criticized the company for using his voice without permission and expressed frustration with the weak response from the SAG-AFTRA union. The incident has sparked discussions about the ethical use of AI technology and the rights of voice actors and creators.

OpenAI Urged to Diversify Board and Address Inequality in Tech Access

Originally Published 2 years ago — by CNN

OpenAI, the powerful tech startup behind ChatGPT and valued at up to $90 billion, is facing criticism for its lack of diversity within its governing body. The company's board of directors, consisting of three White men, does not reflect OpenAI's mission of ensuring that artificial general intelligence benefits all of humanity. Lawmakers and industry experts are calling for greater diversity in AI governance, highlighting the importance of including people with diverse backgrounds to avoid bias and discrimination in AI systems. As AI tools continue to infiltrate various aspects of everyday life, the need for diverse perspectives and responsible AI governance becomes increasingly crucial.

"Unlearning the Past: Teaching AI to Forget for Critical Machine Learning"

Originally Published 2 years ago — by VentureBeat

Machine unlearning is a nascent field of AI focused on teaching models to forget specific data that is outdated, incorrect, or private. Because ML models cannot easily forget information once trained, the field has significant implications for privacy, security, and ethics. Progress has been made on unlearning algorithms, but challenges remain around efficiency, standardization, efficacy, privacy, compatibility, and scalability; Google's machine unlearning challenge aims to unify and standardize the evaluation metrics used to compare them. Businesses training AI models on large datasets should monitor the research, implement clear data-handling rules, consider interdisciplinary teams of AI experts, data-privacy lawyers, and ethicists, and budget for retraining costs. Machine unlearning is crucial for responsible AI, prioritizing transparency, accountability, and user privacy.
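
For readers unfamiliar with the concept, the baseline form of unlearning is exact: delete the records and retrain from scratch. The following minimal Python sketch (synthetic data with scikit-learn; an illustration, not any particular paper's algorithm) shows why the research focuses on cheaper approximations:

```python
# Exact machine unlearning by full retraining. Correct but expensive,
# which is why approximate unlearning algorithms and standardized
# evaluation metrics are active research areas.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# A deletion request arrives for these training rows (e.g., a user
# invokes a right-to-be-forgotten claim).
forget = {3, 7, 42}
keep = [i for i in range(len(X)) if i not in forget]

# Retrain on the retained data only; the old model is discarded.
# For a model that took weeks on thousands of GPUs, this step is
# exactly the "retraining cost" businesses are advised to budget for.
unlearned = LogisticRegression().fit(X[keep], y[keep])
```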

Salesforce Doubles Down on Generative AI With $500M Investment Pledge

Originally Published 2 years ago — by TechCrunch

Salesforce is doubling the size of its Generative AI Fund from $250 million to $500 million, with the aim of investing in startups developing "responsible generative AI". The fund has already invested in several firms, including Cohere, Anthropic, You.com, Hearth.AI, Humane, and Tribble. Salesforce is prioritizing "ethical" AI technologies, such as those that redact personally identifiable information before sending it through OpenAI's ChatGPT chatbot. The expansion coincides with the debut of Salesforce's AI for Impact Accelerator, which will grant $2 million to a cohort of education, workforce, and climate organizations to "advance the equitable and ethical use of trusted AI".
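
The redaction step mentioned above is straightforward to prototype: scrub personally identifiable information from a prompt before it leaves your infrastructure. A minimal sketch, assuming simple regex patterns (illustrative only; production systems typically rely on dedicated PII-detection services rather than hand-written patterns):

```python
# Redact common PII patterns from text before sending it to a
# third-party model API. Patterns here are illustrative and far from
# exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 555-867-5309 about the case."
print(redact(prompt))  # Email [EMAIL] or call [PHONE] about the case.
```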

"AI Chatbots: Worries, Usage, and Comparison"

Originally Published 2 years ago — by ZDNet

Journalist David Gewirtz interviewed AI chatbots ChatGPT, Google Bard, and Microsoft Bing to ask what worries them. ChatGPT expressed concerns about ethical AI development, biased algorithms, job displacement, climate change, and social and political polarization. Google Bard worried about conflicts between humans and AI, misuse of AI, and AI surpassing human intelligence. Microsoft Bing refused to answer. While the AIs are not sentient, their ability to construct answers that appear intelligent can be disconcerting. The possibility of AI surpassing human intelligence is a matter of debate among experts, but it is important to be aware of the potential risks and challenges posed by AI.

White House Takes Action to Promote Ethical AI and Reduce Risks

Originally Published 2 years ago — by The Verge

The White House has announced a $140 million investment to launch seven new National AI Research Institutes, increasing the total number of AI-dedicated facilities to 25 nationwide. Google, Microsoft, Nvidia, OpenAI and other companies have also agreed to allow their language models to be publicly evaluated during this year’s Def Con. The Office of Management and Budget (OMB) will publish draft rules this summer for how the federal government should use AI technology. The announcement comes ahead of a Thursday White House meeting with industry executives to discuss AI’s potential risks.

The Risks and Concerns of AI Development and Implementation

Originally Published 2 years ago — by WIRED

The recent call for a six-month moratorium on "dangerous" AI research is unrealistic and unnecessary. Instead, we should focus on improving transparency and accountability while developing guidelines for the deployment of AI systems. Regulatory authorities around the world are already drafting laws and protocols to manage the use and development of new AI technologies. Companies developing AI models must also allow external audits of their systems and be held accountable for addressing risks and shortcomings when they are identified. AI developers and researchers can start establishing norms and guidelines for AI practice by listening to the many individuals who have advocated for more ethical AI for years.

Adobe's Firefly: The New Ethical AI Tool for Creative Image Editing

Originally Published 2 years ago — by Ars Technica

Adobe has launched Firefly, an AI image generator that can create new images from text descriptions. Unlike other AI art models, Firefly has been trained solely on legal and ethical sources, making its output safe for commercial use. Adobe's commitment to ethical AI includes a "Do Not Train" tag for creators who do not want their content used in model training. Firefly's first model is trained on Adobe Stock images, openly licensed content, and public-domain content whose copyright has expired. Future models will leverage a variety of assets, technology, and training data from Adobe and others.