Preprint servers such as PsyArXiv are combating suspicious submissions, including AI-generated content and paper-mill output, by removing non-compliant manuscripts and stepping up moderation. The effort comes amid rising concern over the quality and authenticity of scientific publications influenced by AI tools.
The scientific journal Frontiers in Cell and Developmental Biology published a paper featuring bogus figures created with the AI image generator Midjourney, including grotesque, anatomically inaccurate depictions of rat testes, signaling pathways, and stem cells. Although the paper's written content appeared legitimate, the AI-generated images were glaringly wrong and slipped through peer review, raising concerns that AI can pass off nonsensical content as real in scientific publications. The incident highlights the difficulty of ensuring scientific accuracy in AI-generated imagery and the need for best practices governing its use.
Researchers at the University of Kansas have developed an AI text detector that can accurately distinguish human-written from computer-generated content in scientific writing, specifically in chemistry. Trained on journals published by the American Chemical Society, the detector achieved nearly 100% accuracy in separating human-authored passages from reports generated by AI models such as ChatGPT, whereas existing general-purpose AI detectors performed poorly on scientific text. The tool aims to help academic publishers gauge the infiltration of AI-generated text, mitigate potential problems, and protect the integrity of the scientific literature.
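To make the classification task concrete, here is a minimal sketch of a binary human-vs-AI text classifier built with scikit-learn. It illustrates the general technique only and is not the University of Kansas detector; the passages, labels, and feature choices below are placeholder assumptions, and a real detector would be trained on a large labeled corpus of journal text and AI-generated text.

```python
# Minimal sketch of a human-vs-AI text classifier (illustrative only; not the
# Kansas team's published method). The passages and labels are placeholders;
# a real detector needs thousands of labeled paragraphs from journal articles
# and from AI models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled corpus: 1 = human-authored, 0 = AI-generated.
texts = [
    "We synthesized the ligand under an inert atmosphere and characterized it by NMR.",
    "Yields varied between batches, so we report the median of three runs.",
    "The results clearly demonstrate the significant importance of these findings.",
    "In conclusion, this study provides valuable insights into the topic at hand.",
]
labels = [1, 1, 0, 0]

# Word-level n-gram features feeding a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score a new passage: estimated probability that it is human-written.
new_passage = "The crude product was purified by column chromatography."
print(model.predict_proba([new_passage])[0][1])
```

In practice, the choice of features and training data matters far more than the classifier itself, which is why a detector trained specifically on chemistry journal prose can outperform general-purpose tools on that domain.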