Instagram head Adam Mosseri predicts that AI-generated content will dominate feeds by 2026. He suggests that fingerprinting real media at capture, possibly through cryptographic signatures from camera manufacturers, may be a more practical way to verify authenticity than trying to detect AI fakes, reflecting a shift in how social media platforms may handle media authenticity in the future.
The article explains how to identify AI-generated videos, particularly those made with OpenAI's Sora app: look for watermarks, check metadata, and treat AI labels on social media with caution. It stresses the importance of vigilance in discerning real from fake content.
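One concrete way to check metadata, shown here as a hedged illustration rather than a guaranteed Sora check: provenance standards such as C2PA typically embed their manifests in a top-level `uuid` box of an MP4 file. The Go sketch below walks the top-level boxes of an ISO BMFF (MP4) byte stream (box layout per the ISO BMFF spec; the sample bytes in `main` are synthetic), so a reader can see whether such a box is present. Presence of a `uuid` box is a hint that provenance metadata exists, not proof of anything on its own.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// listTopLevelBoxes walks the top-level boxes of an ISO BMFF (MP4) byte
// stream and returns their four-character types in order. Each box starts
// with a 4-byte big-endian size and a 4-byte type; size == 1 means a 64-bit
// size follows, and size == 0 means the box runs to the end of the file.
func listTopLevelBoxes(data []byte) []string {
	var types []string
	off := uint64(0)
	n := uint64(len(data))
	for off+8 <= n {
		size := uint64(binary.BigEndian.Uint32(data[off : off+4]))
		boxType := string(data[off+4 : off+8])
		header := uint64(8)
		if size == 1 { // 64-bit "largesize" follows the type field
			if off+16 > n {
				break
			}
			size = binary.BigEndian.Uint64(data[off+8 : off+16])
			header = 16
		} else if size == 0 { // box extends to end of file
			size = n - off
		}
		if size < header || off+size > n {
			break // malformed input; stop rather than over-read
		}
		types = append(types, boxType)
		off += size
	}
	return types
}

func main() {
	// Synthetic example: an `ftyp` box followed by a `uuid` box
	// (empty here for brevity; real uuid boxes carry a 16-byte
	// identifier plus a payload such as a C2PA manifest).
	var buf []byte
	buf = append(buf, 0, 0, 0, 16)
	buf = append(buf, []byte("ftypisom")...)
	buf = append(buf, 0, 0, 0, 1) // minor version
	buf = append(buf, 0, 0, 0, 8)
	buf = append(buf, []byte("uuid")...)
	fmt.Println(listTopLevelBoxes(buf)) // prints: [ftyp uuid]
}
```

A real check would additionally match the uuid box's 16-byte identifier against the C2PA-registered UUID and parse the manifest; this sketch only locates the container.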
Adobe Photoshop has introduced new generative AI features, including automatic object removal, AI image upscaling, and a 'Harmonize' tool that blends newly added objects into photos by adjusting color, lighting, and shadows, making edits easier and more realistic-looking. The features are available in beta for web, desktop, and iOS, with safeguards intended to prevent misuse, though concerns about nefarious applications remain.
Adobe has unveiled its new generative AI tools for video editing, showcasing the capabilities of its Adobe Firefly video model, which can generate and manipulate objects in videos, extend scenes, and create backdrops. While the company emphasizes transparency in disclosing the use of AI, concerns arise about the potential for misleading content as technology advances.
Meta, the parent company of Facebook and Instagram, announced plans to label AI-generated images on its platforms, in collaboration with industry partners, to address the spread of fake content. The labels are meant to help users distinguish real from AI-generated content, with a particular focus on upcoming elections. The initiative may be effective at flagging content from commercial AI generators, but concerns remain that it cannot catch everything and could lull users into a false sense of security. Other tech industry collaborations and a U.S. executive order are also pushing for standards for labeling AI-generated content.
Meta will begin labeling AI-generated media and penalizing users who fail to disclose it on its platforms, including Facebook, Instagram, and Threads, as election season ramps up. The company is building tools to detect synthetic media and will require users to disclose when realistic video or audio posts were made with AI, with penalties ranging from warnings to post removal. Meta is collaborating with industry groups to combat the spread of AI-generated content, and is internally testing large language models trained on its Community Standards to assist human moderators.
Sony and the Associated Press (AP) have completed testing of advanced in-camera authenticity technology, aiming to combat the proliferation of fake images and provide tools for verifying photos. Sony's solution creates a digital signature at the time of capture, attesting to an image's authenticity without requiring specialized hardware. The technology has been tested in a real-world photojournalism production workflow and will arrive in Sony's flagship Alpha series cameras via a firmware update next spring, addressing the erosion of trust in factual journalism caused by manipulated and altered imagery.
More than a dozen companies now offer tools to identify whether something was made with artificial intelligence, among them Sensity AI (deepfake detection), Fictitious.AI (plagiarism detection), and Originality.AI (also plagiarism detection). The overall generative A.I. market is expected to exceed $109 billion by 2030, growing 35.6 percent a year on average until then. Despite the constant game of catch-up with generators, many of these companies report demand for A.I. detection from schools and educators. Separating real from fake can also require digital-forensics tactics such as reverse image searches and IP address tracking. The Content Authenticity Initiative, a consortium of 1,000 companies and organizations, is one group trying to make generative content identifiable from the outset.