A CNN report reveals that the FDA's AI tool Elsa is fabricating studies and misrepresenting research, raising concerns about the reliability of AI in drug approvals and public health decision-making, amid broader skepticism about AI's accuracy in scientific contexts.
The FDA has launched its generative AI tool, Elsa, ahead of schedule to improve efficiency in scientific reviews and operations. The tool is hosted securely in GovCloud, and the agency emphasizes human oversight to mitigate AI hallucinations. Elsa is already helping accelerate reviews and identify inspection targets, with broader use planned.
The FDA's new AI tool, CDRH-GPT, still in beta, suffers from bugs, connectivity problems, and inaccuracies, raising concerns about rushing AI into medical device reviews amid staff layoffs and safety considerations.
The FDA launched Elsa, a secure AI platform that significantly reduces task times and aims to modernize agency workflows, marking the beginning of several AI initiatives to enhance internal operations and better serve the public.
The FDA has launched its agency-wide AI tool, Elsa, ahead of schedule, aiming to streamline clinical protocol reviews and speed up regulatory processes, as highlighted by Commissioner Marty Makary.