A new AI model has been used to enhance images of Sagittarius A*, the black hole at the center of our galaxy, revealing that it spins at close to its maximum possible rate. Experts caution, however, that both the data quality and the AI's reliability need further validation before definitive conclusions can be drawn.
Computational biologists are harnessing the power of deep learning algorithms to improve the segmentation of cellular and subcellular features in biological imaging experiments. Algorithms such as U-Net have been transformative in identifying cell nuclei, while other approaches like StarDist and Cellpose take a more holistic strategy by segmenting the entire cell. Training these algorithms requires large and diverse annotated datasets, and researchers are exploring strategies such as bulk annotation and human-in-the-loop approaches to streamline the process. While challenges remain, such as interoperability across imaging platforms and the analysis of 3D volumes, the field is making rapid progress and researchers are already exploring more advanced applications of these tools.
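To make the segmentation task concrete, here is a minimal classical baseline, assuming numpy and scipy are available: thresholding a synthetic micrograph and labeling connected components. This is the kind of simple pipeline that learned models like U-Net, StarDist, and Cellpose are designed to outperform on real, noisy, crowded images; the synthetic image and threshold value here are illustrative choices, not part of any of those tools.

```python
import numpy as np
from scipy import ndimage

# Synthetic "micrograph": dark background with three bright, nucleus-like
# Gaussian blobs at well-separated locations (illustrative data only).
image = np.zeros((100, 100))
yy, xx = np.ogrid[:100, :100]
for cy, cx in [(20, 20), (50, 70), (80, 40)]:
    image += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 5.0 ** 2))

# Classical baseline: global threshold, then connected-component labeling.
mask = image > 0.5
labels, n_objects = ndimage.label(mask)
print(n_objects)  # the three separated blobs form 3 components
```

This baseline breaks down as soon as nuclei touch or overlap, since touching objects merge into one connected component; instance-segmentation models exist precisely to resolve such cases.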
OpenAI has added new features to ChatGPT, including AI voice options and image analysis. Users can upload photos through the mobile app or a desktop browser, and the chatbot attempts to identify objects in them. The image analysis feature has limitations, however, and can make mistakes, so OpenAI advises users not to upload personal or sensitive photos. The feature also raises privacy concerns about potential misuse if adequate protections are not in place.
OpenAI's GPT-4 with vision (GPT-4V), which combines text and image analysis, has documented flaws in a technical paper published by the company. While OpenAI has implemented safeguards to prevent misuse and mitigate bias, the model still draws inaccurate inferences, hallucinates, and misses text or objects in images. It is not suitable for identifying dangerous substances or chemicals, and it misidentifies certain hate symbols. With safeguards disabled, GPT-4V also discriminates against certain sexes and body types. OpenAI acknowledges that the model is a work in progress and says it is expanding its capabilities in a safe manner.
The Hubble Space Telescope's images are increasingly being spoiled by passing satellites, but researchers at the Space Telescope Science Institute have developed new software to mitigate the issue and remove the troublesome trails from photos. The tool identifies satellite trails in images captured by Hubble's Advanced Camera for Surveys using an image analysis technique called the Radon transform. The software is up to ten times more sensitive than its predecessor and identifies roughly twice as many trails as earlier studies did.
Eight years after a controversy over Black people being mislabeled as gorillas by image analysis software, Google's and Apple's photo apps still cannot reliably find images of gorillas and most other primates. Fearing a repeat of the offensive mistake of labeling a person as an animal, the tech giants have disabled the ability to visually search for primates. The issue raises larger questions about other unfixed, or unfixable, flaws lurking in services that rely on computer vision and artificial intelligence.