Increasingly convincing and accessible AI image generators are being used to fabricate receipts for expense fraud, prompting companies to adopt AI-based detection methods that analyze image metadata and contextual data to identify fakes.
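Metadata checks like the ones described can start simple: some consumer image generators leave provenance clues in the file itself, for example by writing the generation prompt into a PNG text chunk. A minimal stdlib-only sketch follows; the marker keywords are illustrative assumptions, and the absence of markers proves nothing, since metadata is trivially stripped.

```python
import struct

# Keyword markers some generators are known to leave in PNG tEXt chunks.
# This list is illustrative, not exhaustive; a clean result proves nothing.
SUSPECT_KEYS = {"parameters", "prompt", "Software"}

def png_text_chunks(data: bytes) -> dict:
    """Parse uncompressed tEXt chunks out of raw PNG bytes."""
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    chunks, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, val = data[pos + 8:pos + 8 + length].partition(b"\x00")
            chunks[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
        if ctype == b"IEND":
            break
    return chunks

def metadata_flags(data: bytes) -> list:
    """Return any suspicious generator-related metadata keys found."""
    return sorted(k for k in png_text_chunks(data) if k in SUSPECT_KEYS)
```

Real detection systems pair weak file-level signals like this with the contextual checks the article mentions, such as duplicate amounts or merchant mismatches.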
YouTube is rolling out a new AI 'likeness detection' tool for creators in its Partner Program to identify and report unauthorized uploads using their likeness, including deepfakes, with the feature currently in early access and expanding over the coming months.
A new report indicates that over 50% of recent internet articles are now AI-generated, with the share plateauing around this level since late 2024, suggesting a stabilization in AI content production and potential shifts in content creation practices.
AI-generated articles briefly outnumbered human-written ones online but are now roughly equal, with AI content still comprising a minority of search rankings and user trust remaining low. Researchers highlight the difficulty in distinguishing AI from human content and note that humans prefer human-written material, though AI's role in content creation continues to grow.
Deezer reports that nearly 28% of daily uploaded tracks are fully AI-generated, with over 30,000 such tracks received daily, though they constitute only 0.5% of streams. The platform is actively filtering out AI content from recommendations and playlists to minimize its impact, amid concerns about fraudulent activity and copyright issues. Deezer's efforts include AI detection tools and transparent tagging, as the industry grapples with the rise of generative AI in music.
A team of computer scientists developed an AI-based classifier to identify questionable open access scientific journals, which often prioritize fees over editorial quality, aiming to combat the proliferation of predatory publishing and improve scientific integrity.
A new AI framework called CTCAIT can detect neurological disorders like Parkinson's, Huntington's, and Wilson disease with over 90% accuracy by analyzing speech patterns, offering a promising non-invasive tool for early diagnosis and monitoring across multiple languages.
Scientists observed a unique supernova, SN 2023zkd, which appeared to interact with a nearby black hole in an unprecedented way, including a prolonged brightening and signs of material hitting a disk near the black hole, suggesting a star trying to 'eat' a black hole before exploding. This discovery, aided by AI algorithms, offers new insights into the complex interactions between stars and black holes during stellar death.
Astronomers have discovered a new type of supernova, SN 2023zkd, likely triggered by a massive star interacting with a black hole companion, with evidence suggesting the star was under extreme gravitational stress before exploding, and this discovery highlights the importance of studying binary star interactions in stellar evolution.
People are increasingly using AI tools like ChatGPT to craft personal messages, raising concerns about authenticity and detection. While AI-generated texts can be identified by their tone, style, and lack of personal references, advanced chatbots make detection more challenging. Some users feel guilty about relying on AI for sensitive communication, and public figures have faced backlash for promoting AI-assisted messages. Detection methods include looking for overly polished language and specific punctuation like em dashes, but as AI evolves, distinguishing human from machine writing becomes more difficult.
Professor Mark Massaro discusses the challenge of detecting AI-generated essays in education. The telltale signs he describes include excessive em dashes, missing indents, perfect grammar paired with shallow content, no drafting history, impersonal writing, leftover prompt text, and fabricated citations; instructors use these cues to identify AI-written student papers and to address the impact on student development and academic integrity.
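The surface cues described in these two summaries, such as em-dash density, missing first-person references, and leftover prompt text, can be turned into a toy scoring function. This is a naive illustration of the heuristics the articles report, not a reliable detector; the feature choices are assumptions and are easily fooled.

```python
import re

def ai_style_signals(text: str) -> dict:
    """Score a passage on crude surface cues associated with AI prose.

    These heuristics produce false positives and are trivially evaded;
    they only illustrate the signals teachers say they look for.
    """
    n_words = max(len(text.split()), 1)
    return {
        # Heavy em-dash use is one commonly cited cue.
        "em_dashes_per_100_words": round(100 * text.count("\u2014") / n_words, 2),
        # Student writing usually contains first-person references.
        "has_first_person": bool(re.search(r"\b(I|my|me)\b", text, re.IGNORECASE)),
        # Leftover prompt residue, e.g. "As an AI language model...".
        "has_prompt_residue": "as an ai" in text.lower(),
    }
```

For example, `ai_style_signals("As an AI language model, I cannot speculate.")` flags the prompt residue, while an unpolished first-person sentence scores clean on that cue.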
The music industry is developing infrastructure to detect and trace AI-generated songs early in the production and distribution process, focusing on licensing and control rather than enforcement, with tools that analyze tracks for synthetic elements, attribute creative influence, and regulate training data to prevent misuse.
Science journals are implementing AI-powered software to detect image manipulation in research papers, addressing the issue of research fraud. The software, Proofig, will help identify some of the most blatant cases of image fraud, although it may not catch all manipulations, especially if fraudsters understand how the software works. This step is crucial in maintaining the integrity of scientific publications, as digital data has made it easier to commit fraud by altering images, such as those in western blots, to misrepresent experimental results.
Researchers at the University of Kansas have developed an AI text detector that can accurately distinguish between human-written and computer-generated content in scientific essays, specifically in the field of chemistry. The detector, trained on journals published by the American Chemical Society, achieved almost 100% accuracy in identifying human-authored passages and reports generated by AI models like ChatGPT. Existing AI detectors for general content performed poorly in detecting AI-generated content in scientific papers. The tool aims to help academic publishers assess the infiltration of AI-generated text, mitigate potential problems, and ensure the integrity of scientific literature.
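The Kansas detector's internals are not given here, but its general recipe, a supervised classifier trained on labeled human versus AI passages, can be sketched with a minimal multinomial Naive Bayes over word counts. The training snippets in the usage example are made-up stand-ins, not ACS data, and a real detector would use far richer stylistic features and much more training data.

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text: str) -> list:
    """Lowercase word tokens; a real system would use richer features."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesDetector:
    """Multinomial Naive Bayes with Laplace smoothing over word counts."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.token_totals = Counter()            # label -> total tokens seen
        self.doc_counts = Counter(labels)        # label -> number of documents
        self.vocab = set()
        for doc, label in zip(docs, labels):
            toks = tokenize(doc)
            self.word_counts[label].update(toks)
            self.token_totals[label] += len(toks)
            self.vocab.update(toks)
        return self

    def predict(self, doc):
        toks = tokenize(doc)
        n_docs = sum(self.doc_counts.values())
        v = len(self.vocab)
        best_label, best_logp = None, float("-inf")
        for label in self.doc_counts:
            # Log prior plus smoothed log likelihood of each token.
            logp = math.log(self.doc_counts[label] / n_docs)
            for t in toks:
                logp += math.log(
                    (self.word_counts[label][t] + 1)
                    / (self.token_totals[label] + v)
                )
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label
```

Training a domain-specific model on in-domain journals, as the Kansas team did, is what lets a simple classifier beat general-purpose detectors on scientific text.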
Google's DeepMind team has developed SynthID, a tool that watermarks AI-generated images in a way that is imperceptible to the human eye but easily detectable by AI detection tools. The watermark is embedded in the pixels of the image and remains intact even after cropping or resizing. The tool aims to address concerns about deepfakes and provide a means to identify AI imagery. SynthID is currently rolling out to Google Cloud customers, and Google hopes to eventually make it an internet-wide standard. Its launch opens an arms race between AI detection tools and those seeking to bypass them.
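SynthID's embedding scheme is proprietary, so the "watermark in the pixels" idea can only be illustrated here with a deliberately naive stand-in: least-significant-bit embedding. Unlike SynthID, an LSB mark does not survive cropping, resizing, or re-encoding; this sketch only shows the basic concept of hiding bits in pixel values without visibly changing them.

```python
def embed_bits(pixels: bytes, bits: str) -> bytearray:
    """Hide a bit string in the least significant bit of each pixel byte."""
    if len(bits) > len(pixels):
        raise ValueError("payload too large for image")
    marked = bytearray(pixels)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | int(bit)  # overwrite lowest bit
    return marked

def extract_bits(pixels: bytes, n_bits: int) -> str:
    """Read back the first n_bits least-significant bits."""
    return "".join(str(b & 1) for b in pixels[:n_bits])
```

Each pixel byte changes by at most 1, which is invisible to the eye, yet the payload is fully recoverable: that combination of imperceptibility and machine readability is the property robust schemes like SynthID pursue under far harsher transformations.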