Editorial Resources

AI in Cardiology: A Call for Robust Validation, Regulatory Labeling and Security of Data

This Healio article discusses the growing role of AI in cardiology, emphasizing the need for robust validation, regulatory oversight, and data security. These considerations are crucial for human participant research to ensure AI tools used in clinical trials and healthcare practices are safe, effective, and ethically sound. Proper validation and regulatory measures can enhance trust in AI technologies, especially when they directly affect patient care and research outcomes.

Author: Scott Buzby

Five Protein-Design Questions that Still Challenge AI

This Nature article discusses advancements in AI and their implications in healthcare and research. In the context of human participant research, the article highlights the critical need for ethical AI applications to avoid biases, ensure data privacy, and improve clinical decision-making. These developments aim to make research involving human participants more accurate, equitable, and efficient.

Author: Sara Reardon

Should Race be Used in Clinical Algorithms? How One Doctor’s Research is Helping Shape Policy

This STAT article discusses research on the role of race in clinical algorithms and advocates for the development of more inclusive medical technologies. The physician profiled is recognized for his efforts to identify and address biases in medical algorithms that may disproportionately affect certain racial groups. This work is essential to human participant research, as it shapes how health data is analyzed, ensuring that research involving human participants is fair, ethical, and accurately reflects diverse populations.

Author: Katie Palmer

How AI Might Help Men Fighting Prostate Cancer

This article in HealthDay discusses a study on AI’s ability to identify aggressive prostate cancer tumors. The technology can improve precision medicine by helping doctors predict cancer progression and tailor treatments faster. It illustrates how AI tools can be used in human participant research to enhance diagnostic accuracy, improve treatment strategies, and expedite clinical decision-making in oncology.

Author: Dennis Thompson

Removing Bias from Devices and Diagnostics Can Save Lives

This article from Nature addresses the removal of race-based equations in medical diagnostics. These changes aim to eliminate biased algorithms that may unfairly affect human participants based on race, ensuring more accurate health assessments and better outcomes in medical research. Such shifts highlight the importance of eliminating systemic biases and promoting fairness in clinical trials and healthcare practices.

Author: Cassandra Willyard

Scientific Papers that Mention AI get a Citation Boost

This article in Nature explains that papers that mention AI are more likely to be highly cited and highlights potential inequalities, as groups underrepresented in science don’t experience the same citation boost. The increasing integration of AI tools with human participant research, from supporting data analysis to improving study outcomes, could impact how research is conducted, analyzed, and cited, raising ethical concerns around fairness and equity.

Author: Mariana Lenharo

AI Tool Helps People with Opposing Views Find Common Ground

This Nature article discusses an AI tool designed to facilitate consensus-building among individuals with opposing views. The tool generates statements that are clearer and more neutral, enabling participants to better understand and reconcile their differences. In the context of human participant research, AI can support collaborative decision-making processes, often integral to behavioral studies and social science research. The article highlights the potential for AI to enhance participant interactions and streamline data collection in studies involving group dynamics and conflict resolution.

Author: Helena Kudiabor

Did AI Solve the Protein-Folding Problem?

This Harvard Medicine Magazine article discusses AI’s role in solving the protein-folding problem, a major milestone in understanding biological processes. AI models like AlphaFold have helped predict protein structures, which is essential for drug development and disease research. This breakthrough could improve our understanding of diseases at the molecular level and ultimately accelerate the translation of research findings to medical treatments.

Author: Molly McDonough

How Generative AI Is Transforming Medical Education

This Harvard Medicine Magazine article discusses the integration of generative AI into medical education. It highlights Harvard Medical School’s efforts to prepare future physicians with AI skills through courses and PhD programs dedicated to AI in healthcare and emphasizes how AI is reshaping teaching, clinical training, and administrative tasks. Tools like AI-driven standardized patients and grading systems give medical students hands-on experience with AI, while aiming to improve both patient care and medical education.

Author: Elizabeth Gehrman

Can AI Make Medicine More Human?

This Harvard Medicine Magazine article explores how AI can enhance human-centered medical care, emphasizing the balance between technology and empathy. In human participant research, AI tools must align with patient needs, ensuring ethical and compassionate conduct in clinical trials. Properly used, AI can improve patient outcomes and research efficiency while respecting human dignity.

Author: Adam Rodman

Health Care, AI, and the Law: An Emerging Regulatory Landscape in California

This article by the Harvard Law School Petrie-Flom Center discusses California’s emerging regulatory landscape for AI in healthcare, emphasizing the need for legal frameworks that ensure patient safety, privacy, and fairness. This is crucial for human participant research, as AI tools become more integrated into clinical trials. Proper regulation helps ensure that AI technologies are applied ethically and transparently, safeguarding research participants from potential biases or risks.

Author: Rebeka Ninan

Past Editorial Resources

Future of AI: Streamlined Clinical Care, Better Patient Connection?

This article from Healio’s Endocrine Today emphasizes AI’s potential to improve clinical care through streamlined processes and enhanced doctor-patient interactions. It highlights how human participant research is critical in developing and testing AI systems to ensure the models accurately reflect patient needs, improve outcomes, and maintain the human element in healthcare.

Author: Michael Monostra

The Next Pandemic Virus Could Be Built Using AI

This article from The Hill discusses AI’s capabilities in synthetic biology, which has implications for human participant research, particularly in pandemic preparedness. AI could be used to predict and mitigate pandemics, but it also raises ethical concerns about the potential creation of harmful viruses or organisms. Research with human participants would be essential for testing AI-driven models designed for outbreak response or vaccine development.

Authors: Arya Rao, Al Ozonoff, and Pardis Sabeti

New AI Tool for Cancer Research

This article from HMS News highlights how AI is being used to identify cancer subtypes and improve treatment outcomes. In human participant research, AI tools help process and analyze large datasets, enhancing understanding of cancer biology. Researchers can use AI to develop more targeted therapies that are tested in clinical trials involving human participants, making it a critical tool in oncology research.

Author: Ekaterina Pesheva

AI in Research and Publishing Workflows: A New Paradigm

This article from Research Information discusses how AI is revolutionizing the research and publication process. For human participant research, AI can streamline the management of research data, improve literature reviews, and help in generating hypotheses or identifying gaps in research. It also brings up questions of transparency and reproducibility, crucial for ethical research with human participants.

Author: Dave Flanagan

Embedded Bias: Clinical Algorithms and Health Equity Implications

This article explores doctors’ use of race-based clinical algorithms. AI-driven tools used in healthcare may reflect underlying biases from the data they are trained on, potentially leading to health disparities. Human participant research must critically evaluate these tools to ensure they do not perpetuate inequities and that they are safely and fairly implemented in clinical settings.

Authors: Katie Palmer and Usha Lee McFarling

Using AI to Repurpose Drugs and Treat Rare Diseases

This article from Harvard Medical School explains how AI is helping researchers repurpose existing drugs to treat rare diseases. AI can analyze previous drug data to find new therapeutic applications, which then need to be validated through human participant research, including clinical trials. This accelerates treatment development but introduces ethical considerations for patient safety, data privacy, and informed consent.

Author: Ekaterina Pesheva

Early AI Models Exhibit Human-Like Errors but ChatGPT-4 Outperforms Humans in Cognitive Reflection Tests

This resource from PsyPost explores how advanced AI models, like ChatGPT-4, can both mirror human cognitive errors and surpass human performance in certain cognitive tasks, highlighting the potential for AI to influence and improve research methodologies and participant interactions. The article shows that AI can be useful in varied settings but must also be monitored carefully.

Author: Eric W. Dolan

Too Much Trust in AI Poses Unexpected Threats to the Scientific Process

This resource from Scientific American discusses the complexities and risks of integrating AI into scientific research, particularly when it involves human participants. It emphasizes the importance of maintaining trust in scientific processes and ensuring that AI tools are used ethically and responsibly. The article calls for a balanced approach to integrating AI into research to safeguard the integrity of studies involving human participants and to maintain public trust in scientific practices.

Author: Lauren Leffer

Can AI Replace Human Research Participants? These Scientists See Risks

This resource from Scientific American explores the potential and risks of using AI to simulate or replace human participants in research studies. It examines the arguments from scientists who are concerned about the implications of such practices. The article underscores the need for careful consideration when integrating AI into research methodologies, ensuring that such practices do not compromise the quality or ethical standards of studies involving human participants.

Author: Chris Stokel-Walker

AI Outperforms Humans in Creativity Tests

This resource from Earth.com reports on studies where AI systems have demonstrated superior performance in tasks traditionally considered creative, such as generating novel ideas or artworks. The article highlights the evolving landscape of creativity research, where AI’s capabilities challenge existing paradigms and necessitate thoughtful consideration of how human participants are integrated into these studies.

Author: Eric Ralls

Understanding Artificial Intelligence with the IRB: Impacts in Research

This resource from Columbia University explores the implications of integrating AI into research from an IRB perspective, focusing on how AI influences research ethics and regulatory considerations. The article highlights the critical role of IRBs in adapting to the challenges posed by AI in research, ensuring that the integration of AI into studies involving human participants is ethical, transparent, and respects participants’ rights.

Authors: Diana Bae and Jooyoung Jeon

The Derek Bok Center for Teaching and Learning

This resource from the Derek Bok Center for Teaching and Learning at Harvard University discusses the implications and applications of AI in education and research. It explores how AI can be used to enhance learning, teaching, and research methodologies, as well as the ethical considerations involved, highlighting the importance of maintaining ethical standards and protecting human participants in these processes.

Generative Artificial Intelligence (AI)

This resource from Harvard University Information Technology provides an overview of how AI is being integrated into various systems and processes at Harvard, including its applications in research and education. The article underscores the potential benefits of AI in research while also emphasizing the need for careful consideration of ethical and practical issues to ensure that the use of AI in studies involving human participants is responsible and effective.

New AI Tool Captures How Proteins Behave in Context

This resource from Harvard Medical School describes an AI tool that models protein behavior in various contexts, which is crucial for advancing personalized medicine by tailoring treatments to individual patients’ protein profiles. The tool also accelerates drug development by providing precise insights into protein-drug interactions, leading to faster identification of effective therapies. Furthermore, it enhances basic research into protein functions and disease mechanisms, improving our understanding of health conditions and treatment strategies. Ethical considerations and data accuracy are vital to ensure that such advanced tools are used responsibly in research involving human participants.

Author: Mark Gaige