Tag: Research Integrity

All articles tagged with #research integrity

AI Slop Tests the Limits of Computer Science Publishing
technology · 13 days ago

Nature reports that a surge of low-quality, AI-generated submissions, dubbed 'AI slop', is flooding computer science journals and conferences: ICML 2026 received over 24,000 papers, and arXiv submissions are up more than 50% since ChatGPT's release. Some papers are wholly AI-generated or contain fabrications, prompting arXiv and conference policy changes, expanded reviewer pools, and debates about moving to rolling-journal models to preserve research integrity.

Stolen study, sold authorship, and a plagiarism trap for the victim
science · 1 month ago

A Bengaluru economist discovers that her study was stolen and published by others, with authorship slots allegedly sold on Telegram for roughly $165–$200. The misappropriated paper, later indexed by a different journal, then triggers a plagiarism flag against the economist herself when editors find the published version nearly identical to her rejected draft. The incident underscores how paper mills operate, the push for retractions, and ongoing publisher investigations, even as some listed authors deny involvement.

AI Flood Threatens Trust in Scientific Publishing
artificial-intelligence · 1 month ago

A Gizmodo piece argues that AI-generated or AI-augmented papers are flooding arXiv, undermining traditional signals of quality and threatening the reliability of scientific publishing. While AI can help researchers overcome language barriers, analyses show that authors of AI-assisted submissions are unusually prolific, and standard quality indicators are becoming less reliable as publication volume rises; incidents such as a Nature report on a German researcher who misused ChatGPT and AI-generated data in cancer research illustrate the potential for fraud. The article warns that this flood could overwhelm scholarly communication unless reviewers and repositories tighten safeguards.

AI-suspected technobabble prompts Springer Nature inquiry into prolific editor
science · 1 month ago

A Turkish associate professor and journal editor, Eren Öğüt, faces a Springer Nature investigation after reviewers flagged multiple 2025 papers that read like technobabble, include irrelevant MATLAB code, and lack reproducible data or overlaid brain images. His unusually high volume of peer reviews (about 650 in one year) and editorial roles across several journals raise concerns about editorial bias and integrity, with critics noting signs of AI-assisted writing and a pattern of single-authored papers that resemble prior templates. The investigation focuses on methodological gaps, data sharing, and potential misrepresentation of results in Neuroinformatics and related journals.

Clarivate’s Highly Cited Researchers List Faces Criticism and Changes
science · 3 months ago

For 2025, the Highly Cited Researchers list updated its methodology to exclude scientists associated with ethical breaches such as excessive self-citation, leading to the re-inclusion of mathematicians after a two-year exclusion prompted by suspicious citation patterns. The new rules aim to improve the list's integrity by discounting papers linked to prior research misconduct, although they may inadvertently exclude some deserving scientists. The list now recognizes 6,868 researchers across fields, with particular attention to curbing the gaming of citation metrics, especially in mathematics.

NEJM Launches MMWR Rival Amidst Scientific and Ethical Concerns
science-and-medicine · 4 months ago

The article rounds up recent developments in scientific publishing: NEJM launching a public health report to rival MMWR, an expression of concern placed on a former NIH official's paper, and a study finding that 1 in 5 chemists admit to having intentionally added errors during peer review, underscoring ongoing issues of misconduct and integrity in research.

AI Identifies Over 1,000 Questionable Science Journals to Protect Research Integrity
science-and-technology · 6 months ago

A new AI system developed by the University of Colorado automatically screens open-access journals to identify potentially predatory publications, flagging over 1,000 suspicious journals out of 15,200 analyzed. While not perfect, it serves as a crucial first filter to help protect scientific credibility, with human experts making the final decisions.

AI Identifies Over 1,000 Questionable Science Journals
science-and-technology · 6 months ago

A team led by the University of Colorado Boulder developed an AI tool to identify questionable scientific journals, particularly predatory ones that lack proper peer review, aiming to protect the integrity of scientific research. The AI screens journal websites for suspicious features, assisting human experts in vetting publications, and has identified over 1,000 potentially problematic journals among nearly 15,200 analyzed.

PLOS One Flags Four Papers Overlapping Control Data
science · 6 months ago

Four papers from Japanese researchers received expressions of concern from PLOS One due to overlapping control data and issues with microarray study design and analysis, raising questions about the reliability of their results, though some supporting data remain valid. The publisher concluded the investigation and published permanent notices, with related investigations ongoing at other journals.

Peer reviewers favor articles citing their own research
science · 6 months ago

A study analyzing 18,400 articles suggests that peer reviewers are more likely to approve manuscripts that already cite their own work, raising concerns about citation bias in peer review. Conversely, reviewers who request citations to their own work tend to be less likely to approve the article, and the language of their comments can signal coercion, pointing to potential problems in the peer review process.