The article discusses the nature of unprocessed photos, emphasizing that all images undergo some form of processing, whether in sensor signal interpretation, demosaicing, or post-capture editing. It highlights the importance of understanding how raw sensor data is transformed into final images, the role of green in luminance, and the subjective nature of 'processing' and 'fakeness' in digital photography.
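As a rough illustration (not drawn from the article itself) of why the green channel carries most of the luminance signal, the sketch below uses the standard Rec. 709 relative-luminance weights; the exact coefficients a given camera pipeline uses may differ.

```python
# Minimal sketch: why green dominates perceived brightness.
# Uses the standard Rec. 709 relative-luminance weights for linear RGB;
# a real camera pipeline may use different coefficients.
def rec709_luminance(r: float, g: float, b: float) -> float:
    """Relative luminance for linear RGB values in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Green alone contributes over 70% of luminance, which is also why Bayer
# sensors devote half of their photosites to green before demosaicing.
print(rec709_luminance(1.0, 0.0, 0.0))  # ~0.21
print(rec709_luminance(0.0, 1.0, 0.0))  # ~0.72
print(rec709_luminance(0.0, 0.0, 1.0))  # ~0.07
```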
Apple researchers have developed DarkDiff, an AI model designed to fit into the camera's image processing pipeline and significantly enhance extremely low-light photos by recovering detail from raw sensor data. It outperforms previous methods, though it currently requires substantial computational power and is not yet available on iPhones.
Lux, the company behind the Halide camera app, reviewed the iPhone 17 Pro, praising its new 4x and 8x telephoto zoom capabilities, significant upgrades to the 2x zoom, and improvements in image processing. The review highlights the phone's advanced hardware stabilization and software enhancements for professional-quality photography.
Researchers at Penn State have developed a metasurface, an optical element that mimics the image-processing capabilities of the human eye and transforms images instantaneously, before a camera ever digitizes them. This innovation has the potential to significantly reduce the computing power and energy required for artificial intelligence systems to process images and identify objects, making it easier to recognize objects across different scales and orientations. The metasurface uses nanostructures to bend light and can be applied in various fields, including target tracking, surveillance, and satellite imaging.
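For intuition only, and not a model of the Penn State metasurface, the sketch below shows a digital log-polar remapping: one example of a transformation that turns changes in scale and orientation into simple shifts, which is the kind of computation an optical front end could in principle perform before the image is ever digitized.

```python
import numpy as np

# Illustrative only: a log-polar remapping computed digitally. In log-polar
# coordinates, rotations about the image center become shifts along the angle
# axis and scalings become shifts along the log-radius axis, which simplifies
# recognizing an object at different sizes and orientations. The mapping and
# sizes here are assumptions, not the metasurface's actual transformation.
def log_polar(image: np.ndarray, out_shape=(64, 64)) -> np.ndarray:
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = np.hypot(cy, cx)
    rows, cols = out_shape
    out = np.zeros(out_shape)
    for i in range(rows):              # log-radius axis
        for j in range(cols):          # angle axis
            r = np.exp(i / rows * np.log(max_r))
            theta = 2.0 * np.pi * j / cols
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]
    return out

demo = np.zeros((128, 128))
demo[48:80, 48:80] = 1.0          # a bright square at the center
mapped = log_polar(demo)          # enlarging the square shifts this output, rather than reshaping it
```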
The Event Horizon Telescope (EHT) Collaboration has released the sharpest image of the M87 black hole yet, using an additional telescope and independent data from 2018. The new image displays the chaotic nature of the black hole's accretion disk and confirms the accuracy of the imaging technique. It also shows Doppler/Einstein effects, with the brightest spot shifting to the right between the two observations. Scientists plan to continue advancing the science with new observations set for the first half of 2024, aiming to capture multiple images to create the first "video" of a black hole.
The upcoming OnePlus 12 is expected to finally address the long-standing stigma of underperforming cameras in OnePlus phones. With a triple camera system including a 64MP telephoto lens, improved image processing techniques, and video features such as 8K recording and AI-assisted capabilities, the OnePlus 12 aims to compete with top flagship phones from Apple, Samsung, and Google. The use of AI and the Snapdragon 8 Gen 3 chip are anticipated to enhance the camera's performance, potentially positioning the OnePlus 12 as a strong contender for the title of best camera phone.
The upcoming Samsung Galaxy S24 Ultra is rumored to feature significant camera upgrades, including improved image processing that aims to deliver more realistic results. The phone is expected to rival Google's Pixel lineup in photo capture ability. Additionally, leaks suggest that the Galaxy S24 Ultra will have a titanium frame, said to be 56% stronger than aluminum and rumored to look "far better" than the iPhone 15 Pro. However, storage configurations are expected to remain unchanged from its predecessor, the S23 Ultra.
Researchers have developed a deep-learning method that simplifies the creation of holograms, allowing 3D images to be generated directly from 2D photos captured with standard cameras. This technique outperforms current high-end graphics processing units in speed and doesn't require expensive equipment like RGB-D cameras, making it cost-effective. The approach involves three deep neural networks that transform a 2D color image into data that can be used to display a 3D scene or object as a hologram. This breakthrough has potential applications in high-fidelity 3D displays and in-vehicle holographic systems, revolutionizing holographic technology.
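As a minimal sketch only: the article describes three deep neural networks applied in sequence, and the toy pipeline below assumes they roughly correspond to depth estimation, hologram synthesis, and refinement. The layer counts, channel sizes, stage names, and phase-only output are illustrative assumptions, not the published architectures.

```python
import math
import torch
import torch.nn as nn

# Toy three-network pipeline from a 2D color photo to hologram data.
# Every architectural detail here is an assumption for illustration.
def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

depth_net = nn.Sequential(conv_block(3, 16), conv_block(16, 16),
                          nn.Conv2d(16, 1, 3, padding=1))      # RGB -> depth map
hologram_net = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                             nn.Conv2d(32, 2, 3, padding=1))   # RGB + depth -> complex field (re, im)
refine_net = nn.Sequential(conv_block(2, 16),
                           nn.Conv2d(16, 1, 3, padding=1))     # field -> phase-only hologram

rgb = torch.rand(1, 3, 128, 128)                  # a 2D photo from an ordinary camera
depth = depth_net(rgb)                            # stage 1: infer scene depth
field = hologram_net(torch.cat([rgb, depth], 1))  # stage 2: predict a complex wave field
phase = torch.tanh(refine_net(field)) * math.pi   # stage 3: refine to a displayable phase pattern
```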
Bing Image Creator, powered by DALL-E 3, allows users with a Microsoft account to create AI-generated images for free. While the tool has improved, it still has limitations and raises concerns about deepfakes and copyright infringement. Users can access Image Creator through Bing Chat or the Image Creator website, and boosts can be earned through Microsoft Rewards. Specific prompts can yield whimsical or terrifying results, but accuracy may vary.
OpenAI has announced an update to its ChatGPT chatbot, allowing it to understand spoken words, respond with synthetic voices, and process images. Users can now opt into voice conversations on the mobile app and choose from five different synthetic voices for the bot's responses. Additionally, users can share images with ChatGPT and highlight areas of focus or analysis. The update will be rolled out to paying users in the next two weeks, with voice functionality limited to iOS and Android apps, while image processing capabilities will be available on all platforms. The move comes as tech giants race to launch new chatbot apps and features, raising concerns about the potential misuse of AI-generated synthetic voices. OpenAI has addressed these concerns by stating that the synthetic voices were created with voice actors they have directly worked with.
Scientists at the Space Telescope Science Institute (STScI) explain how they transform black-and-white image data from the James Webb Space Telescope into vibrant, full-color composites. By assigning colors based on chromatic ordering and using layers from different filters, the resulting images are visually appealing while still conveying scientific information. The balance between art and science is crucial in creating these images, ensuring both aesthetic appeal and scientific accuracy. The processed images are made available to the public, allowing anyone to explore the wonders of space using freely available datasets and software.
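A minimal sketch of chromatic ordering, assuming three calibrated, aligned grayscale filter exposures: the longest-wavelength image goes to the red channel, the shortest to blue, and a middle filter to green. The filter names and the asinh stretch below are illustrative stand-ins; STScI's real composites tune each layer by hand.

```python
import numpy as np

# Sketch of "chromatic ordering": stack three filter exposures into an RGB
# composite by wavelength. The stretch curve and filter names are assumptions.
def stretch(img):
    """Compress a high-dynamic-range exposure into [0, 1] with an asinh curve."""
    img = np.clip(img, 0, None)
    return np.arcsinh(img / img.max() * 10) / np.arcsinh(10)

def chromatic_composite(short_wave, mid_wave, long_wave):
    """Combine three grayscale filter exposures into one RGB image."""
    return np.dstack([stretch(long_wave),    # longest wavelength  -> red
                      stretch(mid_wave),     # middle wavelength   -> green
                      stretch(short_wave)])  # shortest wavelength -> blue

# Synthetic data standing in for three Webb filter exposures (e.g. F090W, F200W, F444W).
f090w, f200w, f444w = (np.random.rand(256, 256) * 1000 for _ in range(3))
rgb = chromatic_composite(f090w, f200w, f444w)   # shape (256, 256, 3), values in [0, 1]
```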
The James Webb Space Telescope has captured a stunning image of Herbig-Haro 46/47, showcasing a pair of actively forming stars in vibrant detail and color. The image was processed using the scientific principle of "chromatic ordering," which assigns color channels to filtered images. The composite demonstrates Webb's ability to peer through thick dust and gas, providing insights into star formation, and also highlights distant galaxies and stars in the background.
NASA's Juno mission captured stunning images of Jupiter's cloud tops during its 49th close flyby of the planet. The images show complex structures in the cloud tops of the planet's atmosphere, including bands of high-altitude haze forming above cyclones in an area known as Jet N7. Another image shows a vortex near Jupiter's north pole with the glow from a bolt of lightning. The raw images from JunoCam are publicly available for image processing enthusiasts.
A blind camera comparison between the Xiaomi 13 Ultra and the iPhone 14 Pro Max was conducted in low light conditions to determine which smartphone has the better camera system. The shots were taken using the main camera and the results will be unveiled on May 9th.
The rumored "close to 1-inch" primary camera sensor on the iPhone 15 Pro Max may be a custom-developed Sony IMX903, which is about 2x more expensive than the 50MP IMX989 in the Xiaomi 13 Ultra. The new periscope zoom lens is also expected to make its debut on the iPhone 15 Pro Max. While a larger sensor is a hardware upgrade, Apple's image processing needs to improve to compete with the likes of the Xiaomi 13 Ultra.