AI Ethics News

The latest AI ethics stories, summarized by AI

The Dark Side of AI: Controversial Creations and Threats to Creative Freedom

ai-ethics · 6.665 min read · 2 years ago · Source: Gizmodo
The Risks and Rewards of Open-Source AI for Sexualized Chatbots

ai-ethics · 3.26 min read · 2 years ago

Users are already using Meta's open-source large language model (LLM), LLaMA, to create graphic, AI-powered sexbots. The trend highlights the growing tensions between those who support keeping the code behind LLMs open-source and those who advocate for a more careful, closed-source approach. The report also examines the growing trend of users turning to generative AI systems to play out their sexual fantasies, which worryingly also include violent and illegal ones. While having a safe and nonjudgemental space to explore your sexuality isn't inherently bad, having an unchecked space to engage in more violent fantasies with lifelike chatbots isn't exactly great, either.

More AI Ethics Stories

ai-ethics · 2 years ago

Sam Altman's Thoughts on A.I. and Its Hype

OpenAI CEO Sam Altman expressed concern over the risks posed by ChatGPT, a chatbot released in November, and said he worries about having "done something really bad" by creating it. Altman called for a better system to audit and regulate AI development, rather than a blanket ban. He was among a group of over 350 scientists and tech leaders who signed a statement expressing concern about the risks of AI.

ai-ethics · 2 years ago

Marc Andreessen's Vision for A.I.: Saving the World and Revolutionizing Education.

Marc Andreessen believes that AI can "make everything we care about better" and that AI companies should be able to build fast and aggressively without regulation, in order to maximize its gains for economic productivity and human potential. He disagrees with the idea of regulating AI and believes that the future of AI should be decided by the free market. He argues that open source AI should be allowed to spread freely and compete with commercial AI companies and startups. To offset the risks of AI being used for nefarious purposes and to block China from becoming an AI superpower, the private sector needs to work with governments to come up with solutions.

ai-ethics · 2 years ago

AI experts debate the potential for human extinction.

Top AI researchers are pushing back on the current ‘doomer’ narrative focused on existential future risk from runaway artificial general intelligence (AGI). They argue that this focus on existential risk is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications, and cybersecurity. Many say that the bombastic views around existential risk may be “more sexy,” but it hurts researchers’ ability to deal with things like hallucinations, factual grounding, training models to update, making models serve other parts of the world, and access to compute.

ai-ethics · 2 years ago

The Limitations of ChatGPT: Typo Mishap Sparks Debate.

OpenAI's AI chatbot, ChatGPT, has been generating false information, leading to serious consequences. While chatbots are being presented as a new type of technology, people use them as search engines. OpenAI needs to recognize this and warn users in advance. Although chatbots are useful, they need to be more factually grounded. OpenAI could help by cautioning users to check sources and recognize when it's being asked to generate factual citations. A disclaimer like "May occasionally generate incorrect information" is not enough to override the priming of chatbots.

ai-ethics · 2 years ago

AI Poses Risk of Extinction, Warn Top Researchers and CEOs.

A group of top AI researchers, engineers, and CEOs has issued a 22-word statement warning about the existential threat they believe AI poses to humanity. The statement calls for mitigating the risk of extinction from AI as a global priority alongside other societal-scale risks such as pandemics and nuclear war. It is the latest high-profile intervention in the complicated and controversial debate over AI safety.

ai-ethics · 2 years ago

OpenAI and Google pledge to comply with EU regulations on AI development.

OpenAI has promised not to leave the EU, which has taken the lead in AI regulation with its proposed AI Act. OpenAI has also created a grant program to fund groups that could decide rules around AI, offering ten $100,000 grants to groups willing to create "proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow." However, there are ethical questions that OpenAI is incentivized to leave out of the conversation, particularly in how it decides to release the training data for its AI models.

ai-ethics · 2 years ago

Sam Altman's Insights on AI and Innovation in Africa

OpenAI CEO Sam Altman is on a world tour to calm fears about AI and get ahead of conversations about AI regulation. At a recent talk in London, Altman repeated familiar talking points, noting that people are right to be worried about the effects of AI but that its potential benefits, in his opinion, are much greater. He welcomed the prospect of regulation but only the right kind, stressing that too many rules could harm smaller companies and the open source movement. Altman also addressed the topic of misinformation and the challenge of keeping increasingly powerful AI systems under control through "alignment." However, protestors outside the talk called for OpenAI and companies like it to stop developing advanced AI systems before they have the chance to harm humanity.

ai-ethics · 2 years ago

AI pioneers warn of biases and unchecked development.

Geoffrey Hinton, a deep learning pioneer, has raised concerns about the rapid advancements in artificial intelligence and how they will affect humans. Hinton is worried about the increasingly powerful machines’ ability to outperform humans in ways that are not in the best interest of humanity, and the likely inability to limit AI development. His concern with this burgeoning power centers around the alignment problem — how to ensure that AI is doing what humans want it to do.

ai-ethics · 2 years ago

Urgent Action Needed to Address Social Issues Amplified by AI Boom, Warns Ethicist.

AI developers must address algorithmic bias and ensure diverse representation in data sets and user research to create fair and unbiased AI systems. Transparency, accountability, and privacy protection are also crucial. Cross-industry collaboration, like the model used by NIST, can help develop robust and safe AI systems that benefit everyone. Focusing on present harms and investing in ethical AI practices can create a safer, more inclusive future for AI technology.

ai-ethics · 2 years ago

AI experts urge diverse regulation to secure future of humanity.

Lawmakers express concerns about the potential risks of AI technology, including the possibility of authoritarian governments using it for global domination and the loss of control over the technology. OpenAI CEO Sam Altman testified before a Senate subcommittee and expressed concerns about the impact of AI on jobs and the creation of "one-on-one interactive disinformation." Some lawmakers also raised concerns about political bias in OpenAI's ChatGPT. While some believe that the federal government should regulate AI, others fear that it could threaten American AI dominance and suggest alternative options.