MIT scientists have developed a technique called "distribution matching distillation" (DMD) that accelerates popular AI image generators by condensing a 100-stage process into one step, making them up to 30 times faster. This advancement results in smaller, leaner AI models that can generate high-quality images more quickly, reducing computational time and costs. The new approach, detailed in a study uploaded to arXiv, has the potential to significantly impact industries where efficient image generation is crucial.
Midjourney has introduced a new "Character Reference" feature, --cref, for its AI image generator to address the issue of consistency across images. By adding a reference image URL as a parameter, users can generate characters with more consistent facial features, body shapes, and clothing across a series of images. This development could potentially extend to other subjects in AI image generators, offering a solution to the challenge of maintaining consistency in generated content.
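In practice, the reference image is supplied as a trailing parameter on the prompt. A minimal illustration is below; the URL is a placeholder, and the companion --cw ("character weight") parameter, which controls how strictly the reference is followed, may vary by version:

```
/imagine prompt: a medieval knight exploring a rainy forest --cref https://example.com/my-character.png --cw 100
```

Higher --cw values ask the model to match the reference character more closely, while lower values borrow only the face.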
Stanford researchers have discovered over 1,000 images of child sexual abuse in a popular open-source database used to train AI image-generating models. The presence of these illegal images raises concerns that AI tools may be learning to create hyper-realistic fake images of child exploitation. AI image generators have been increasingly promoted on pedophile forums, enabling the creation of uncensored explicit images of children. The inclusion of child abuse photos in training data allows AI models to produce content resembling real-life child exploitation. The researchers suggest implementing protocols to screen and remove abusive content from databases, increasing transparency in training data sets, and teaching image models to "forget" how to create explicit imagery. The illegal images are in the process of being removed from the training database.
A new report by the Stanford Internet Observatory reveals that popular artificial intelligence (AI) image-generators, including Stable Diffusion, have been trained on thousands of explicit images of child sexual abuse. These images have enabled AI systems to produce realistic and explicit imagery of fake children, as well as transform clothed photos of real teens into nudes. The report highlights the need for companies to address this harmful flaw in their technology and take action to prevent the generation of abusive content. LAION, the AI database containing the illegal material, has temporarily removed its datasets, but the report emphasizes the need for more rigorous attention and filtering in the development of AI models to prevent the misuse of such technology.
OpenAI is not disclosing how many artists have opted out of having their work used to train its AI systems. Artists have expressed frustration with the opt-out process, which they feel is time-consuming and ineffective. OpenAI says it is collecting feedback to improve the experience, as new tools like Glaze and Nightshade emerge to disrupt AI image generators.
Black artists are raising concerns about racial bias in artificial intelligence (AI), particularly in image generators. Many artists have found evidence of bias in the large data sets used to train AI algorithms, resulting in distorted or stereotypical depictions of Black people. Companies like OpenAI, Stability AI, and Midjourney have acknowledged the problem and pledged to improve their tools. However, artists argue that the biases are deeply embedded in these systems and call for a more nuanced understanding of Black culture and history. The issue of bias in AI algorithms goes beyond data sets and is rooted in the history of machine learning, which was developed by predominantly white male scientists. Some companies have attempted to address bias by banning certain words from text prompts, but experts argue that this approach avoids the fundamental issues of bias in the underlying technology. Despite the challenges, Black artists continue to explore and utilize AI in their work, albeit with skepticism.
AI image generators require high-quality source photos to produce good results, which means photographers may actually be needed to train AI models. Laypeople's photos tend to be of lower quality, making it difficult for AI to learn new objects and people without photographers. The AI and photography spaces may overlap, with photographers becoming AI technicians and finding new economic opportunities.
AI-generated content is becoming more prevalent, but it can be difficult to distinguish from real content. While some tools can help detect AI-generated content, they are not always reliable. Experts recommend using media literacy techniques, such as investigating the source and finding better coverage, to assess what you're looking at. Additionally, chatbots can produce text that sounds highly plausible but may not be accurate, so it's important to fact-check any important information before sharing it. When experimenting with generative AI, it's important to consider privacy, ethics, consent, disclosure, and fact-checking.
AI art has developed rapidly in the past year, with image generators creating some freaky and convincing creations. Examples include an AI-generated photo of the Pope, AI stock photography of a woman eating salad, glitchy AI-generated video of Will Smith eating spaghetti, and an AI live-action South Park video that creates an uncanny valley feeling. The technology still has some way to go before creating truly convincing video, but it's clear that AI art is here to stay.
AI-generated images have become increasingly common on the internet, and there are now many AI image generators available, from free to paid and simple to complex. Midjourney, DALL-E, and Stable Diffusion Online are among the best options, each with their own unique features and capabilities. Midjourney is a popular choice for its photorealistic results, while DALL-E offers high-quality images and the ability to edit existing images. Stable Diffusion Online is completely free and open-source, but requires a powerful computer with a dedicated graphics card. DreamStudio offers a highly customizable experience, but comes with a cost. Finally, Bing Image Creator, which uses DALL-E under the hood, is a free option that grants users 100 "boosts" per week to generate new images.
Artificial intelligence image generators have been able to create realistic images except for human hands, until now. Midjourney, a popular image maker, released a software update that fixed the problem, but it has also sparked a debate about the danger of generated content that is indistinguishable from authentic images. The improved technology could put artists out of work and make deepfake campaigns more plausible, absent glaring clues that an image is fabricated. However, there are still clues that can be used to detect deepfakes, such as disfigured tree branches in the background.
Midjourney has released version 5 of its commercial AI image synthesis service, which can produce photorealistic images at a quality level that some AI art fans are calling creepy and "too perfect." Midjourney v5 generates images based on text descriptions called "prompts" using an AI model trained on millions of works of human-made art. The latest version offers improvements in skin textures, lighting, reflections, and shadows, as well as more expressive angles and overviews of a scene. It can also generate realistic human hands with five fingers.
The US Copyright Office has stated that an image generated solely from a text prompt does not qualify for human authorship, but did not rule out recognizing copyright for AI-generated material that has been modified to meet the standard for copyright protection. The Copyright Office will apply its human authorship requirement on a case-by-case basis, considering how the AI tool operates and how it was used to create the final work. The rise of AI image generators has given birth to a new strand of photography, with images that look like real photos but actually started life from a text prompt.