Federal Guidance
This FDA news release announces the completion of the FDA's first AI-assisted scientific review pilot, which reduced review tasks that previously took days to minutes. The FDA plans to deploy a secure generative AI tool called Elsa across all centers by June 30, 2025. The initiative highlights AI's transformative role in increasing regulatory efficiency for drug, biologic, and device reviews, signaling a shift toward faster approvals. However, as AI streamlines review processes, it also raises critical questions about maintaining scientific integrity, securing confidential data, keeping human judgment central, and protecting participant safety. For human participant research, the rollout underscores the importance of transparency and human oversight in AI-enhanced regulatory decisions to preserve trust and equitable outcomes.
Author: FDA
Artificial Intelligence in Research: Policy Considerations and Guidance
This resource from the National Institutes of Health (NIH) outlines policies and guidelines for the use of AI in NIH-funded research. It provides a framework for integrating AI into research projects, with a focus on maintaining ethical standards and ensuring responsible use. These policies offer essential guidance for incorporating AI into research involving human participants, ensuring that ethical considerations, data security, and regulatory compliance are prioritized throughout the research process.
Author: National Institutes of Health (NIH)
FDA Guidelines on AI/ML in Medical Devices
This guidance from the FDA outlines the agency's evolving regulatory framework for AI and machine learning (AI/ML) in medical devices. It ensures that AI-driven medical devices undergo rigorous testing, including clinical trials with human participants, to confirm their safety and efficacy before these technologies are widely adopted.
Author: U.S. Food and Drug Administration (FDA)
IRB Considerations for AI in Human Subjects Research
This resource from the Office for Human Research Protections (OHRP) outlines IRBs' role in overseeing AI-driven research, ensuring that human subjects are protected and ethical principles are followed, particularly with respect to informed consent, data privacy, and the potential risks associated with AI systems in research.
Author: Office for Human Research Protections (OHRP)