
Topics: Clinical Trials, Regulatory Guidance

Puzzling It Out

Symposium examines cluster randomized trials.

Suppose you’d like to determine if a new hand sanitization product is a better choice for hospitals than the product in standard use. A randomized trial is in order. But what is the best way to randomize the interventions?

Randomizing the patients, those whose outcomes will be measured to see if, say, infection rates fall because of the new product, is not a good choice; they won’t be using the products.

The hospital staff members who will be using it seem a better choice, but there is no practical way to be sure they use the appropriate sanitizer as they move around the hospital.

Enter the cluster randomized trial (CRT), a study design that randomizes clusters of individuals to receive one intervention or another. In this case, a CRT could randomize hospitals to use one or the other product.

The simplicity ends there, said Barbara Bierer, Harvard Medical School professor of medicine (pediatrics) at Brigham and Women’s Hospital, who spoke at “Cluster Randomized Trials: Ethics, Regulations, Statistics and Design,” a Nov. 3 symposium examining the issues.

Held at the Joseph B. Martin Conference Center, the event brought together experts to discuss the complexities of CRTs with an audience equally split among investigators, statisticians and institutional review board (IRB) members.

The event, sponsored by Harvard Catalyst | The Harvard Clinical and Translational Science Center, kicked off what Bierer described as an ongoing conversation.

“We’re here to figure out the issues and spend time together now and in the future addressing them,” Bierer said.

Subject Matters

Among the big questions were concerns about ethics, particularly who counts as a research subject and who must give consent to participate in a cluster randomized trial. Take the hand sanitizer example, said Holly Fernandez Lynch, executive director of the Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics at Harvard Law School.

“Do hospital staff members need to consent? Are patients involved in the research?” she asked.

Answers to these questions vary depending on the individuals in the cluster, the intervention in question and how the two intersect. It may not be necessary to obtain consent from patients who are visiting doctors participating in hand sanitation research, for example. In contrast, an intervention in an emergency room that directly affects every patient who visits that ER would require patient consent.

But ER patients cannot give consent in advance, since anyone can walk through the doors of an ER. Nor can they consent if they are incapacitated by a medical emergency. In this case, consent should be obtained as early as possible.

“It’s important to consider the public trust in research,” said Michele Russell-Einhorn, vice president of oncology services for the Central Oncology Review division of Schulman IRB.

“When you’re randomizing health professionals or facilities and patients don’t know they’re involved, you run into concerns: ‘No one told me; you used my data without permission.’ It’s important to think about this in advance,” she said.

Both Lynch and Russell-Einhorn referred to the “but for” test as a way to determine who needs to be asked for consent.

“If you’re doing something to someone that you wouldn’t do but for the research, you get consent,” said Lynch. “We really have to home in on who counts as a subject because this is who we have to get consent from.”

Lynch and Russell-Einhorn also directed researchers to recent recommendations from advisory groups focused on the ethics and regulation of CRTs. The Ottawa Statement on the Ethical Design and Conduct of Cluster Randomized Trials, developed by bioethicists funded by the Canadian government, lists 15 principles for the ethical design and conduct of CRTs.

The U.S. Department of Health and Human Services Secretary’s Advisory Committee on Human Research Protections (SACHRP) also recently published recommendations regarding CRTs.

Calculating Concerns

The design of a CRT also raises important and complicated questions about statistical analysis. In one example, Michael Hughes, professor of biostatistics at the Harvard T. H. Chan School of Public Health, presented the design of a cluster randomized tuberculosis prevention trial called PHOENIx (Protecting Households On Exposure to Newly diagnosed Index multidrug-resistant tuberculosis patients). In the trial, individuals at high risk of contracting tuberculosis because they shared a household with someone with multidrug-resistant TB were given one of two preventive drugs. Each household forms a cluster because study subjects in the same household receive the same drug.

One design challenge Hughes grappled with was intra-cluster correlation — the idea that people within a cluster tend to be more similar to one another than those in different clusters.

This is possible in any cluster, but in this trial, the similarities stem from the fact that people in the same household will be frequently exposed to the same pathogen, which could be more or less virulent than in other clusters.

They are also exposed to the same household conditions, which could be more or less conducive to infection. In addition, household members could influence one another to take the drug or to skip it.

The likelihood that outcomes will be more similar within a cluster must be factored into both the statistical analysis and the study design.

“With intra-cluster correlation, you have less information compared to a trial with randomized individuals, so you need more clusters to get the same power as an individually randomized clinical trial,” said Hughes. “There is an inflation factor.”
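
The “inflation factor” Hughes describes is commonly formalized as the design effect, which says how much a cluster randomized trial must grow to match the power of an individually randomized one. The sketch below is illustrative only; the cluster size, intra-cluster correlation and sample size are assumed values, not figures from PHOENIx.

```python
import math

# Minimal sketch of variance inflation in a CRT. For equal-sized clusters the
# standard design effect is DEFF = 1 + (m - 1) * ICC, where m is the cluster
# size and ICC is the intra-cluster correlation. All numbers below are assumed
# for illustration and are not taken from the PHOENIx trial.

def design_effect(cluster_size: float, icc: float) -> float:
    """Factor by which the required sample size inflates under clustering."""
    return 1 + (cluster_size - 1) * icc

def clusters_per_arm(n_individual: int, cluster_size: float, icc: float) -> int:
    """Clusters per arm needed to match an individually randomized trial
    that would need n_individual subjects per arm."""
    inflated_n = n_individual * design_effect(cluster_size, icc)
    return math.ceil(inflated_n / cluster_size)

# Example: 400 subjects per arm if individually randomized, households of
# about 4 people, and an assumed ICC of 0.05.
print(design_effect(4, 0.05))          # 1.15
print(clusters_per_arm(400, 4, 0.05))  # 115 households per arm
```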

Such considerations add time to the design process.

“PHOENIx took five years to develop, with cluster issues contributing to a significant amount of the complexity,” said Hughes. “My idea of fun is getting the study designed nicely so that the data is super simple. It’s important to get statisticians involved early.”

“From the beginning,” added Rebecca Betensky, professor of biostatistics at the Harvard Chan School and a symposium moderator. “It’s better science to involve statisticians from the beginning to help with design.”

Investigators must also worry about clusters interfering with one another, a phenomenon called cross-contamination.

For instance, Rui Wang, HMS assistant professor of medicine at Brigham and Women’s, presented a community-randomized HIV prevention trial in Botswana in which she randomized villages to receive either an HIV prevention intervention or standard care. She found, however, that sexual networks crossed between villages and, therefore, between her clusters.

“The randomized effect shrinks when participants have partners outside the cluster,” she said.

Wang, who is also assistant professor in the Department of Biostatistics at Harvard Chan School, addressed this by increasing the number of clusters and also by analyzing sexual networks of individuals in the study to get a better sense of their statistical impact.
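
A rough way to see why more clusters were needed is to treat cross-cluster partnerships like crossover in a drug trial: contamination dilutes the observable effect, so the trial must grow to compensate. The sketch below is a simplified, textbook-style approximation offered for illustration, not the network analysis Wang actually performed; the contamination fraction and effect size are made up.

```python
# Simplified sketch of how contamination attenuates a cluster randomized
# trial's observable effect. If a fraction `c` of exposure crosses cluster
# boundaries, the observable effect shrinks by (1 - c) and the required
# sample size grows by roughly 1 / (1 - c)**2. This is a back-of-the-envelope
# approximation, not the method used in the Botswana trial.

def diluted_effect(true_effect: float, contamination: float) -> float:
    """Effect size still observable after cross-cluster contamination."""
    return true_effect * (1 - contamination)

def sample_size_multiplier(contamination: float) -> float:
    """Approximate multiplier on the number of clusters needed."""
    return 1 / (1 - contamination) ** 2

# Example: a true 30 percent reduction with 20 percent of partnerships
# crossing cluster boundaries.
print(diluted_effect(0.30, 0.20))    # 0.24 -> looks like a 24 percent reduction
print(sample_size_multiplier(0.20))  # ~1.56x more clusters needed
```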

These examples demonstrate the power of CRTs to enable research on interventions that cannot be delivered to individually selected participants. Yet they also illustrate the complexities involved in conducting such trials ethically and with confidence that they will produce reliable, statistically powerful results.

Even with guidelines and regulations, each study will come with its own puzzles.

“We’re trying to design these studies—and our statisticians are trying to figure out how to analyze them—on the fly,” said Bierer. “It’s an evolution.”
