Biostatistics Continuing Education

Talks focused on translating recent advances in biostatistics into practice.

These are typical research seminars, on topics of interest to Harvard Catalyst statisticians and other quantitative researchers. We often organize a series of seminars on a single theme that takes place over a period of a month or so. We hold these in the late afternoon.

Past Biostatistics Seminars

Applications of Independent Component Analysis for MRI Data: Denoising, Brain Functional Connectivity, and Multi-Modal Data Fusion
November 13, 2018

Lisa Nickerson, PhD; Assistant Professor of Psychiatry, HMS; Director of Applied Neuroimaging Statistics Laboratory, McLean Hospital

FXB, Room G13
Harvard T.H. Chan School of Public Health

Abstract: Independent component analysis (ICA) is a purely data-driven multivariate statistical method that has found numerous applications in MRI data analysis. The ability of ICA to separate meaningful signals of interest from confounding noise signals in a data-driven fashion makes it a powerful dual-purpose technique, both for addressing noise problems in MRI data and for investigating brain function. Dr. Nickerson will present several such applications of ICA, including single-subject ICA for denoising fMRI data and group ICA for investigating functional connectivity of brain networks, highlighting work on dual regression and findings on brain network connectivity in individuals with alcohol use disorder. She will also discuss a new analytic method that uses linked ICA to remove scanner effects from multi-study MRI data, and findings from using linked ICA for multi-modal data fusion to investigate brain structure and function in heavy chronic cannabis users.
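As a rough, self-contained illustration of the core idea (not Dr. Nickerson's pipeline), the sketch below implements a minimal deflationary FastICA in NumPy and uses it to unmix two artificial signals. The signals, mixing matrix, and parameters are all hypothetical choices for demonstration.

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal deflationary FastICA with a tanh nonlinearity.

    X: (n_sources, n_samples) array of mixed signals.
    Returns an array of unmixed component estimates (up to sign/order)."""
    rng = np.random.default_rng(seed)
    # Center and whiten the data so it has identity covariance.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    n = Xw.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ Xw
            g, g_prime = np.tanh(wx), 1 - np.tanh(wx) ** 2
            # Fixed-point update: E[x g(w'x)] - E[g'(w'x)] w
            w_new = (Xw * g).mean(axis=1) - g_prime.mean() * w
            # Deflation: stay orthogonal to previously found components.
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-10
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Xw

# Demo: unmix a sine and a square wave combined by a hypothetical mixing matrix.
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(2 * t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.5], [0.4, 1.0]])
recovered = fast_ica(A @ S)
```

Each recovered component should correlate strongly (up to sign) with one of the true sources, which is the sense in which ICA "separates" signal from structured noise.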

Vetting and Advancing Individualized Medicine: Issues and Strategies
May 17, 2018

Nicholas J. Schork, PhD
TGen and TGen/City of Hope IMPACT Center, Phoenix, AZ, and Duarte, CA
J. Craig Venter Institute (JCVI) and Adjunct Professor of Psychiatry and Family and Preventive Medicine (Division of Biostatistics) at the University of California, San Diego (UCSD), La Jolla, CA

Abstract: A great deal of attention is being given to 'personalized,' 'individualized,' and 'precision' medicine. This is not without justification, as applications of contemporary technologies such as DNA sequencing, proteomics, induced pluripotent stem cells, imaging protocols, and wireless health monitoring devices have identified nuanced and often unique features of individuals at the genomic, physiologic, environmental, and behavioral levels that may impact their risk for disease and treatment response. However, unless it can be shown that individualized medicine approaches and strategies result in better outcomes than alternative approaches to treating disease, their adoption will be in question. Vetting individualized medicine strategies is not trivial, although a few strategies are emerging. These include aggregated N-of-1 trials, drug matching trials, and the development of clinical learning systems. Recent trends in regulatory oversight perspectives may accommodate these and related vetting strategies for a number of reasons. This talk discusses relevant issues in vetting individualized medicine and provides examples, analytical results related to specific study designs, and simulation study results, with an eye toward the need to vet and test more futuristic technologies, such as individualized digital therapeutics and therapeutic 'companion' technologies.

Statistical Learning of Dynamic Systems - a Direct Approach
October 16, 2017

Itai Dattner, PhD
Lecturer, Department of Statistics
University of Haifa

3:30pm - 5:30pm
Ballard Room 503
HMS Countway Library

Abstract: Dynamic systems are ubiquitous in nature and are used to model many processes in biology, chemistry, physics, medicine, and engineering. In particular, systems of (deterministic or stochastic) differential equations as well as discrete models are commonly used for the mathematical modeling of dynamic processes. These systems describe the interrelationships between the variables involved, and depend in a complicated way on unknown quantities (e.g., initial values, constants, or time dependent parameters). Modern dynamic systems are typically very complex: nonlinear, high dimensional, and only partly measured. Moreover, data may be sparse and noisy. Thus, statistical learning (inference, prediction) of dynamical systems is not a trivial task in practice.

In the first part of the talk we will present the direct integral method, a novel approach for estimating the parameters of systems of ordinary differential equations. We will discuss some theoretical results, such as identifiability and consistency, for both fully and partially observed systems.

The second part of the talk will be concerned with applications of the direct method. We will consider examples from infectious diseases and biology. In particular, we will present a recent study in which we experimentally monitored the temporal dynamics of a predator-prey system and demonstrated the ability to obtain realistic parameter estimates given sparse and noisy data. Next, we will discuss the statistical learning of age-dependent dynamics, an important characteristic of many infectious diseases. We examine the estimation of the so-called next-generation matrix using incidence data on influenza-like illness. Unlike previous studies, our estimation method does not require any constraints on the structure of the matrix.
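As a toy, self-contained sketch of the integral-criterion idea behind such direct methods (not the estimator from the talk; the model, noise level, and smoother are illustrative assumptions): for the one-parameter ODE x'(t) = θx(t), one can rewrite the equation as x(t) = x(0) + θ ∫₀ᵗ x(s) ds, approximate the integral from smoothed observations, and solve for θ by ordinary least squares, avoiding repeated numerical integration of the system.

```python
import numpy as np

# Simulate noisy observations of x'(t) = theta * x(t), x(0) = 1.
rng = np.random.default_rng(1)
theta_true = 0.5
t = np.linspace(0, 2, 201)
x = np.exp(theta_true * t)
y = x + rng.normal(0, 0.02, t.size)

# Light smoothing (moving average) before integrating; keep raw edges.
k = 5
y_s = np.convolve(y, np.ones(k) / k, mode="same")
y_s[:k] = y[:k]
y_s[-k:] = y[-k:]

# Cumulative trapezoidal integral of the smoothed trajectory.
I = np.concatenate([[0.0], np.cumsum((y_s[1:] + y_s[:-1]) / 2 * np.diff(t))])

# Least squares: minimize sum_i (y_i - y_0 - theta * I_i)^2 in closed form.
theta_hat = np.sum(I * (y_s - y_s[0])) / np.sum(I ** 2)
```

Because the criterion is linear in θ here, the estimate is a one-line least-squares solve; the appeal of integral-based criteria in general is that they avoid differentiating noisy data.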

Slides from Dr. Dattner's presentation.

Single Institution Studies: Specific Aims, Study Design and Sample Size
June 14, 2017

David A. Schoenfeld, PhD
Professor of Medicine, Harvard Medical School, MGH
Professor in the Department of Biostatistics, Harvard T.H. Chan School of Public Health

June 14, 2017
11:30am - 1:00pm
Room 3130, Simches Research Center, MGH
185 Cambridge Street, Boston, MA 02114


This talk is designed to summarize what I have learned in the past 30 years planning studies with MGH investigators. It will focus on pilot studies and other studies that are feasible for a single investigator. The primary purpose of a pilot study is to make sure that a larger study is feasible. This includes making sure the instructions can be followed, the treatment is tolerated, the relatively common adverse events are anticipated, and the larger study appears to be worthwhile. I will show how these considerations affect the sample size of the study. Academic researchers need to include a Clinical Development Plan in any proposal for an initial study, one that follows the treatment from their proposed study through to a change in medical practice.

Pilot studies can help the investigator learn the properties of their primary measurement, but they are not very good at estimating the effect size for a future study. I will discuss how to approach sample size considerations. Issues arise because most measurements used in small studies don't have an obvious clinical meaning. In this setting, there are several ways that sample size can be approached. Finally, I will address the question of whether to have a control group.
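One common feasibility calculation of this kind — how large a pilot must be to have a good chance of observing a relatively common adverse event — follows directly from the binomial: with a true per-patient event rate p, the probability of seeing at least one event in n patients is 1 − (1 − p)ⁿ. The sketch below is a generic illustration, not taken from Dr. Schoenfeld's slides.

```python
import math

def n_to_observe_event(p, confidence=0.95):
    """Smallest n such that P(at least one event) = 1 - (1 - p)^n >= confidence,
    for an adverse event with true per-patient rate p (0 < p < 1)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# A pilot of 29 patients gives a 95% chance of observing at least one
# occurrence of any adverse event whose true rate is 10% or higher.
print(n_to_observe_event(0.10))  # -> 29
```

The same formula underlies the familiar "rule of three": if no events are seen in n patients, a rough 95% upper bound on the event rate is 3/n.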

Slides from Dr. Schoenfeld's presentation.

Reproducible Research in Biostatistics: Some Concepts and Tools
March 30, 2017

Vincent J. Carey, PhD
Channing Division of Network Medicine, Brigham and Women's Hospital
Professor of Medicine, Harvard Medical School


A workshop on "Statistical Challenges in Assessing and Fostering the Reproducibility of Scientific Results" was recently sponsored by the National Academy of Sciences and the National Science Foundation. I will start with a brief sketch of some key concepts of the workshop report. I will then review a number of my own efforts to implement reproducible research principles for projects in cancer genomics, vaccine evaluation, and statistical software design. For practical work in biostatistics, essential tools include software version control, verification via formal testing with "continuous integration," careful attention to data provenance, and "computable documents."

After the material of the talk is presented, the meeting will transition to an interactive mode, in which I will guide the audience in mobilizing GitHub, RStudio, and Travis CI to understand what is involved in embedding reproducibility practice in daily work. Attendees interested in hands-on experience should have an account and should bring a laptop with a recent version of RStudio. Talk materials and links to relevant resources are in development, viewable now at

Slides from Dr. Carey's presentation.


The Placebo Effect in Clinical Trials
January 26, 2017
February 8, 2017
February 23, 2017

Kathryn T. Hall, PhD, MPH
The Pharmacogenetics of the Placebo Response
January 26, 2017
3:30pm-5:00pm, Harvard T.H. Chan School of Public Health, Kresge G1

Maurizio Fava, MD
The Challenge of the Placebo Response in Depression Trials
February 8, 2017
3:30pm-5:00pm, Harvard T.H. Chan School of Public Health, Kresge G2

Ted J. Kaptchuk and Roger B. Davis, ScD
Placebo Effects in Medicine and Clinical Trial Design Considerations
February 23, 2017
3:30pm-5:00pm, Harvard T.H. Chan School of Public Health, Kresge G3

Patient-reported outcomes and clinical and translational research
February 17, 2017

Rochelle E. Tractenberg, PhD
Associate Professor of Neurology, Biostatistics, Bioinformatics & Biomathematics
Georgetown University Medical Center
12:30pm-1:30pm, Harvard T.H. Chan School of Public Health, Room 202B Kresge

The difference between a patient-reported outcome and a patient-centered outcome is often blurred. In fact, "patient-reported" and "patient-centered" are not synonymous; a patient-centered outcome is a subtype of patient-reported outcomes (PROs). A new model for creating a patient-centered outcome was developed: it starts with a focus on what patients experience, or what they prioritize about their experience, rather than on the patients' report of what clinicians or investigators prioritize about the patients' experiences. This approach results in a qualitatively different patient-reported outcome than one that does not follow the new framework. The recent emphasis on patient-centeredness and the inclusion of PROs in clinical and comparative effectiveness research has important implications for translation from the "bench" to the bedside, while the differences between patient-centered and non-patient-centered PROs further complicate the translation from bedside to community. These problems adversely affect the analysis and interpretation of results, as well as decision-making that relies on those results. This talk will explore the relationship(s) between PROs and the types of outcomes used in more basic science along the translational continuum, to promote effective translation and the integration of the patient's perspective throughout the research continuum.

Slides from Dr. Tractenberg's presentation.

Staged-informed consent in the cohort multiple randomized controlled trial design
May 20, 2015

Rieke van der Graaf, PhD
May 20, 2015
4:00pm-5:00pm, Harvard T.H. Chan School of Public Health, FXB G11

The "cohort multiple randomized controlled trial" (cmRCT), a new design for pragmatic trials, embeds multiple trials within a cohort. The cmRCT is an attractive alternative to conventional RCTs in fields where recruitment is difficult, where multiple interventions have to be tested, where experimental interventions are highly preferred by physicians and patients, and where the risk of disappointment bias, crossover, and contamination is considerable. In order to prevent these unwanted effects, the cmRCT design uses pre-randomization, which, according to some, is ethically problematic. Pre-randomization in cmRCT can be avoided by adopting a staged-informed consent procedure. In the first stage, at entry into the cohort, all potential participants are asked for their informed consent to participate in a cohort study and for broad consent to be either randomly selected or to serve as controls. In the second stage, informed consent to receive the experimental intervention is sought only from those randomly selected for the intervention arm. In the third stage, after an RCT has been completed, all cohort participants who have opted in receive aggregate disclosure of the results of the RCTs. This staged-informed consent procedure aims to keep participants actively engaged. Moreover, this model appears to have improved recruitment rates in our cancer center at the UMC Utrecht.

CER with Administrative Data: Methods for Confounding Uncertainty and Heterogeneous Treatment Effects
May 19, 2015

Cory Zigler, PhD
May 19, 2015
3:30pm-5:30pm, Harvard T.H. Chan School of Public Health, FXB G12

Comparative effectiveness research depends heavily on the analysis of a rapidly expanding universe of observational data made possible by the integration of health care delivery, the availability of electronic medical records, and the development of clinical registries. Despite extraordinary opportunities for research aimed at improving value in health care, a critical barrier to progress is the lack of sound statistical methods that can address the multiple facets of estimating treatment effects in large, process-of-care databases with little a priori knowledge about confounding and treatment effect heterogeneity. When attempting to make causal inferences with such large observational data, researchers are frequently confronted with decisions regarding which covariates from a high-dimensional set are necessary to properly adjust for confounding, or which define subgroups experiencing heterogeneous treatment effects. To address these barriers, we discuss methods for estimating treatment effects that account for uncertainty in: 1) which of a high-dimensional set of observed covariates are confounders required to estimate causal effects; 2) which (if any) subgroups of the study population experience treatment effects that are heterogeneous with respect to observed factors. We outline two methods rooted in the tenets of Bayesian model averaging. The first prioritizes relevant variables to include in a propensity score model for confounding adjustment while acknowledging uncertainty in the propensity score specification. The second characterizes heterogeneous treatment effects by estimating subgroup-specific causal effects while accounting for uncertainty in the subgroup identification. Causal effects are averaged across multiple model specifications according to empirical support for confounding adjustment and existence of heterogeneous effects. We illustrate with a comparative effectiveness investigation of treatment strategies for brain tumor patients.


Research Practices and Reproducible Research
January 6, 2015

John Ioannidis, DSc, MD
January 6, 2015
10:30am-12:00pm, Harvard T.H. Chan School of Public Health, FXB G12

The way research is selected for funding, designed, conducted, analyzed, and published can have a substantial impact on the reproducibility of scientific results. Empirical evidence suggests that the efficiency of many currently applied research practices is suboptimal, and there is wide variability across different scientific fields in this regard. This leads to a high prevalence of biased results. Dr. Ioannidis will survey the current landscape and discuss different possibilities that have been proposed for improving the adoption and implementation of research practices that could lead to more reliable, accurate, and translatable results in a reproducible manner.

Risk Prediction with Biomarkers under Complex Study Designs
May 5, 2014

Tianxi Cai, ScD
Professor in the Department of Biostatistics, Harvard T.H. Chan School of Public Health

May 5, 2014
Harvard T.H. Chan School of Public Health, FXB G-12

To evaluate the clinical utility of new biomarkers for risk prediction, a crucial step is to measure their predictive accuracy with prospective studies. However, it is often infeasible to obtain marker values for all study participants. The nested case-control (NCC) design is a useful cost-effective strategy for such settings. Under the NCC design, markers are ascertained only for cases and a fraction of controls sampled randomly from the risk sets. The outcome-dependent sampling generates a complex data structure and therefore poses a challenge for analysis. Existing methods for analyzing NCC studies focus primarily on association measures. In this talk, I will discuss several approaches to evaluating risk prediction models with NCC studies. The new procedures will be illustrated with data from the Nurses' Health Study to evaluate the accuracy of biomarkers and genetic markers for predicting the risk of developing rheumatoid arthritis.

Cardiovascular Risk Assessment and Guidelines for Statin Therapy
April 9, 2014

Nancy Cook, ScD
Professor in the Department of Epidemiology, Harvard T.H. Chan School of Public Health
Professor of Medicine, Harvard Medical School

April 9, 2014
Harvard T.H. Chan School of Public Health, FXB G-12

The Framingham risk score for cardiovascular disease (CVD) has long been used in recommendations for cholesterol-lowering therapy. There is continuing controversy over whether novel biomarkers can improve risk prediction. Various CVD risk models will be described, with particular attention to the development and validation of the Reynolds risk score, as well as the new AHA/ACC risk model. Metrics for evaluating and comparing models will be discussed. Several approaches to setting treatment guidelines will be considered, including risk-based guidelines. The implications of these approaches for statin use in the prevention of cardiovascular disease will be illustrated in the NHANES population.

Comparison of Dependent Deattenuated Correlation Coefficients
November 13, 2013

Bernard Rosner, PhD
Professor in the Department of Biostatistics, Harvard T.H. Chan School of Public Health
Professor of Medicine (Biostatistics), Harvard Medical School

November 13, 2013
Harvard T.H. Chan School of Public Health, FXB G-13

A marker of validity of a dietary instrument is how well it correlates with a relevant biomarker. For example, there are different methods of assessing dietary beta-carotene (e.g., 24-hour recall, food frequency questionnaire, diet record), and a natural question is which measure correlates most strongly with plasma beta-carotene. Indeed, there already exist methods for comparing dependent correlation coefficients (e.g., Wolfe, 1976; Steiger, 1980; Meng, 1992). However, each of these instruments has associated random error, and a related question is, after correcting for random error, which instrument correlates most highly with plasma beta-carotene. Since these correlations are assessed from the same subjects, the more general question is how we can compare dependent deattenuated correlation coefficients. This is a generalization of previous work for obtaining confidence limits for a single deattenuated correlation coefficient (Rosner and Willett, 1988). In addition, we extend this work to the comparison of dependent Spearman correlation coefficients, which to our knowledge has never been done before. These methods are illustrated with two examples: (a) the comparison of the validity of different methods for measuring dietary beta-carotene, and (b) the comparison of the validity of different storage protocols in processing plasma samples for the determination of HbA1c.
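The single-coefficient correction that this work generalizes is Spearman's classic deattenuation formula: the correlation between the error-free "true" measures is the observed correlation divided by the square root of the product of the two instruments' reliabilities. A hedged sketch (the numbers below are hypothetical, and this is not the paper's method for comparing dependent coefficients):

```python
import math

def deattenuated_corr(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: estimated correlation between
    the error-free measures, given observed correlation r_xy and the
    reliability coefficients r_xx and r_yy of the two instruments."""
    return r_xy / math.sqrt(r_xx * r_yy)

# A hypothetical observed instrument-biomarker correlation of 0.36 rises
# to about 0.6 after correcting for reliabilities of 0.60 for each measure.
print(deattenuated_corr(0.36, 0.60, 0.60))  # ~0.6
```

The difficulty the talk addresses is not this formula itself but comparing two such corrected coefficients when they are estimated on the same subjects and are therefore dependent.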

Slides from Dr. Rosner's presentation.

Clinical Trials Disclosure
October 29, 2013

Pearl O'Rourke, MD, Director, Mass General Brigham Human Research Affairs
Sarah White, MPH, Director, Mass General Brigham Human Research Quality Improvement Program
David Schoenfeld, PhD, Biostatistician, MGH Biostatistics Center
Nancy Ringwood, RN, Project Manager, ARDSNet CCC MGH Biostatistics Center

Tuesday, October 29, 2013, 3:00pm-5:00pm
Harvard T.H. Chan School of Public Health
Kresge G2

Since 2005, investigators have been required to register their clinical trials with the federal government. This includes studies published in journals as well as academic research. While the process of registering is relatively simple, reporting results is more complex. This seminar will provide guidance on registration and will also:

  • Provide background on rationale for clinical trials registration and disclosure
  • Review the current requirements for clinical trials registration and reporting, including federal and journal editor requirements
  • Discuss specific information required within the results reporting database
  • Provide a forum for discussion regarding how biostatisticians can help clinical researchers

Slides from Dr. O'Rourke's presentation.

Slides from Ms. Ringwood's and Dr. Schoenfeld's presentation.

Integrating Effectiveness and Safety Outcomes in the Assessment of Treatments
May 1, 2013

Jessica M. Franklin, Ph.D., Instructor of Medicine, Harvard Medical School, biostatistician,
Division of Pharmacoepidemiology and Pharmacoeconomics, Brigham and Women's Hospital

Wednesday, May 1, 2013, 3:30-5:00pm
Reception 5:00-5:30pm
Location: Harvard T.H. Chan School of Public Health, FXB G12

Slides from Dr. Franklin's presentation.

Comparative safety and effectiveness monitoring of newly marketed drugs
April 10, 2013

Joshua Gagne, PharmD, ScD, Instructor in Medicine, Harvard Medical School;
Pharmacoepidemiologist, Division of Pharmacoepidemiology and Pharmacoeconomics,
Brigham and Women's Hospital

Wednesday, April 10, 2013, 3:30-5:00pm
Reception 5:00-5:30pm
Location: Harvard T.H. Chan School of Public Health, FXB G12

Sequential Analysis Applied to Comparative Effectiveness
March 13, 2013

Martin Kulldorff, Ph.D., Professor and biostatistician, Department of Population Medicine,
Harvard Medical School and Harvard Pilgrim Health Care Institute

Wednesday, March 13, 2013, 3:30-5:00pm
Reception 5:00-5:30pm
Location: Harvard T.H. Chan School of Public Health, FXB G12

Slides from Dr. Kulldorff's presentation.

Current Issues in the Design and Analysis of Cluster Randomization Trials
March 6, 2013

(co-sponsored with the Dept. of Global Health and Population)

Allan Donner, Ph.D., FRSC
Professor, Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, The University of Western Ontario; Director, Biometrics, Robarts Research Institute
Wednesday, March 6, 2013
Harvard T.H. Chan School of Public Health, FXB G12

Cluster randomization trials are those that randomize intact social units or clusters of individuals to different intervention groups. Such trials are particularly widespread in the evaluation of educational programs and innovations in the provision of healthcare. This talk will discuss current issues in the design and analysis of such trials that have proved controversial among practitioners. They include the choice of unit of inference, potential threats to trial validity, factors influencing the selection of a design, assuring the power of a cluster randomized trial, and recent work focusing on ethical issues.
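One of the design issues mentioned above — assuring the power of a cluster randomized trial — is usually handled by inflating the individually randomized sample size by the design effect, DE = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation. The sketch below is a generic illustration, not taken from Dr. Donner's talk; the cluster size and ICC values are hypothetical.

```python
import math

def cluster_sample_size(n_individual, m, icc):
    """Inflate a sample size computed under individual randomization by the
    design effect DE = 1 + (m - 1) * ICC for clusters of average size m."""
    deff = 1 + (m - 1) * icc
    return math.ceil(n_individual * deff)

# A trial needing 400 subjects under individual randomization needs 988
# when randomizing clinics of 50 patients with a within-clinic ICC of 0.03.
print(cluster_sample_size(400, 50, 0.03))  # -> 988
```

Even a small ICC can more than double the required sample size when clusters are large, which is why the unit of inference and cluster size choices discussed in the talk matter so much in practice.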

Slides from Dr. Donner's presentation.

Lessons Learned in a Multi-tissue Genomewide Methylation Pilot Study
February 27, 2013

Murray A. Mittleman, MD, DrPH, Associate Professor of Medicine, HMS-BIDMC;
Associate Professor in the Department of Epidemiology, Harvard T.H. Chan School of Public Health

Elissa H. Wilker, ScD, Research Associate in the Exposure, Epidemiology and Risk Program, Harvard T.H. Chan School of Public Health;
Instructor in Medicine, HMS-BIDMC

Oliver Hofmann, PhD, Senior Research Scientist, Harvard T.H. Chan School of Public Health

Wednesday, February 27, 2013, 3:30-5:00pm
Reception 5:00-5:30pm
Location: BIDMC, Kirstein Living Room

Atrial fibrillation (AF) is a common arrhythmia responsible for a high morbidity burden, but established risk factors account for only a portion of incident AF. Epigenetic mechanisms such as methylation of CpG islands in the DNA alter gene expression or phenotypic profile without modification to the DNA sequence and may influence atrial function, but this association has not been explored. There is some evidence that methylation patterns are more highly correlated across tissues within an individual than within a given tissue across individuals; however, this has not been extensively studied. To address these issues, we conducted a pilot study and collected samples of the atrial appendage, left internal mammary artery, and peripheral white blood cells in 18 subjects at the time of elective coronary artery bypass surgery. The tissue samples were processed and sent to a core laboratory facility for evaluation with the Illumina Infinium HumanMethylation450 array, which covers over 480,000 methylation sites across the genome. In this presentation, we will discuss approaches to evaluating data quality and integrity, including the appropriate use of bioinformatics tools, and how these assessments may inform subsequent analyses of data arising in this setting.

Slides from Dr. Mittleman's presentation.

Slides from Dr. Wilker's presentation.

Slides from Dr. Hofmann's presentation.

Methodological issues in observational studies of comparative effectiveness
Choice of comparators, target population, and composite endpoints
January 9, 2013

Robert Glynn, Sc.D., Professor in the Department of Biostatistics, Harvard T.H. Chan School of Public Health
Department of Biostatistics, Brigham & Women's Hospital

Wednesday, January 9, 2013, 3:30-5:00pm
Reception 5:00-5:30pm
Location: Harvard T.H. Chan School of Public Health, FXB G12

This first of four scheduled talks on methodological issues in observational studies of comparative effectiveness will begin with a focus on time scales and comparator treatments, and argue for the value of new user designs with active (vs. non-user) referents. A new user perspective for all compared treatments allows for consideration of the indications and barriers to treatment at the time of initiation, as well as the possibly evolving determinants of persistence, on a parallel time scale. We next consider overlap in propensity score distributions across treatment groups, and trimming the tails of the propensity score distribution, as an approach to identify a target population with reasonable treatment equipoise and in whom more stable treatment effects are estimable. In addition to the propensity score, an appropriately developed disease risk score can provide an important tool to enhance comparability across treatment groups, with particular value in the setting of newly introduced treatments, for which the propensity score may be evolving in early use. Lastly, we discuss the value of composite endpoints as a way to summarize effects across multiple outcomes, with consideration of possible heterogeneity in effects across components.
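To make the trimming idea concrete, the following sketch (a generic illustration, not Dr. Glynn's method) restricts an analysis to the region of common propensity score support: units whose estimated score falls outside the overlap of the treated and control score ranges are dropped before effect estimation. The scores and treatment indicators below are hypothetical.

```python
import numpy as np

def trim_to_overlap(ps, treated):
    """Keep only units whose estimated propensity score lies in the region
    of common support: [max of the two group minima, min of the two group
    maxima]. ps: array of scores; treated: boolean treatment indicator."""
    lo = max(ps[treated].min(), ps[~treated].min())
    hi = min(ps[treated].max(), ps[~treated].max())
    return (ps >= lo) & (ps <= hi)

# Hypothetical scores: treated units concentrated at high scores, controls
# at low scores; only the middle region with units of both kinds survives.
ps = np.array([0.05, 0.2, 0.4, 0.6, 0.3, 0.5, 0.8, 0.95])
treated = np.array([False, False, False, False, True, True, True, True])
keep = trim_to_overlap(ps, treated)
```

In practice trimming rules vary (fixed cutoffs such as [0.1, 0.9] are also common); the shared goal is a target population with reasonable treatment equipoise, as described above.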

Slides from Dr. Glynn's presentation.

Capture-recapture methods, occupational health surveillance, and population-level translational research
March 21, 2012

Al Ozonoff, PhD, Director, Design and Analysis Core - Clinical Research Center, Boston Children's Hospital

Les Boden, PhD, Professor and Associate Chair of Environmental Health, Boston University School of Public Health

Wednesday, March 21, 2012, 3:00pm-5:00pm (reception to follow)
Boston Children's Hospital - Auditorium A - 1 Autumn St.

The methodology of capture-recapture (sometimes referred to as mark-and-release or capture-mark-release-recapture) has its earliest roots in demography, from an application by Laplace more than two centuries ago. The modern treatment of the subject began in the late 19th century with ecological applications, and has extended to applications in human health over the past fifty years. In this talk, we will use capture-recapture methods as a jumping-off point to explore the deep historical connections between statistics and public health surveillance. We will discuss applied methodological research in this context as an exemplar of translation to population health - T4 on the clinical and translational research spectrum. Dr. Ozonoff will present the history of capture-recapture and its applications to problems in public health, describe the statistical principles and foundations of the methodology, and explore questions that may arise during the design and implementation of capture-recapture studies. Dr. Boden will follow with a brief introduction to occupational health, in particular his use of capture-recapture methods to estimate the population-level burden of workplace injuries, and will share some practical lessons learned over 30 years of research in the field. We will conclude by engaging the audience in an open discussion centered on how to bring important statistical advances 'to bedside,' i.e., making useful and effective tools available to those who will have the greatest impact on problems of public health.
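The two-sample estimator at the heart of these applications is simple enough to state in a few lines. Below is a hedged sketch using Chapman's small-sample correction of the Lincoln-Petersen estimator; the counts are hypothetical, not from Dr. Boden's surveillance work.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased version of the Lincoln-Petersen
    capture-recapture estimator: n1 cases captured by the first source,
    n2 by the second, of which m were captured by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Two surveillance sources each identify 200 injury cases, with 40 appearing
# in both lists; the naive Lincoln-Petersen estimate n1*n2/m would be 1000.
print(round(chapman_estimate(200, 200, 40)))  # -> 984
```

In surveillance settings the key (and often questionable) assumption is that the two sources capture cases independently, which is part of what makes the design and implementation questions in the talk nontrivial.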

Slides from Dr. Ozonoff's presentation.

Slides from Dr. Boden's presentation.

Principles and Challenges for Ethical Biostatistical Practice in Clinical and Translational Research: An Illustrated Panel Discussion
January 11, 2012

Shelley Hurwitz, PhD
Director of Biostatistics, Center for Clinical Investigation, Brigham and Women's Hospital
Assistant Professor (Biostatistics), Harvard Medical School
Chair, American Statistical Association Committee on Professional Ethics

Jonathan Gelfond, MD, PhD
Assistant Professor, Department of Epidemiology & Biostatistics
University of Texas Health Science Center at San Antonio

Peter Imrey, PhD
Professor, Cleveland Clinic Lerner College of Medicine, Case Western Reserve University
Past President, International Biometric Society Eastern North American Region
Member, American Statistical Association Committee on Professional Ethics

Wednesday, January 11, 2012
Time: 3:00pm-5:00pm
Reception: 5:00pm-5:30pm
Brigham and Women's Hospital
Carrie Hall
Peter Bent Brigham building
15 Francis Street
Boston, MA

Three panelists and the audience will interactively explore ethical issues in biostatistical practice. Dr. Hurwitz will introduce the topic and present the historical context of ethics in statistical practice. Dr. Gelfond will review ethical principles recently proposed specifically to guide data analysis by clinical and translational researchers (Stat Med. 2011, 30:2785-92), with examples that will ring true for many biostatisticians. Dr. Imrey will add examples, comment from a systems perspective on 21st-century medical research integrity concerns and the statistical profession's responses to medical society reform efforts, and moderate audience discussion, including comments, questions, suggestions, and objections.

Issues in the Analysis of Progression-Free Survival from a Cancer Clinical Trial
November 16, 2011

Dianne M. Finkelstein, PhD, Massachusetts General Hospital
David Schoenfeld, PhD, Massachusetts General Hospital
Paul Goss, MD, Massachusetts General Hospital

Wednesday, November 16, 2011, 3:00pm-5:00pm
Massachusetts General Hospital
Room: Thier 101

This seminar will be geared to a statistical audience. Dr. Finkelstein will introduce the issues involved in PFS analysis in a cancer trial; Dr. Goss will describe breast cancer in general, the specific issues in early-stage breast cancer, and one specific trial (MA-17) that highlights the issues discussed in this talk; and Dr. Schoenfeld will present some of our ongoing methodological work on the topic. There will then be an open discussion with the audience.

The use of PFS as a primary endpoint in cancer trials must address several issues to ensure the validity of this outcome as a surrogate for survival. First, although a trial is designed to evaluate progression at regularly prescribed time points, progression can be recorded outside these times because visits are missed, resulting in interval-censored data. Second, the patient may die of the disease before progression is recorded. Third, patients may go off (or change) therapy or withdraw from the study, which could bias the analysis. In addition, the real endpoint of interest in a trial is cancer mortality, but for early-stage patients it is not feasible to design a trial on this endpoint, and showing a treatment is superior on PFS is sometimes not sufficient to change practice. We will discuss these issues and methodology that can be used to refine the analysis of PFS in cancer clinical trials.
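
The interval-censoring problem described above can be illustrated with a small simulation: when progression is only detectable at scheduled assessments, and some visits are missed, the event time is known only up to an interval. This is a sketch in Python with an illustrative 90-day visit schedule and miss probability, not data from the trial discussed.

```python
import random

random.seed(42)
VISIT_GAP = 90      # scheduled assessment every 90 days (illustrative)
MISS_PROB = 0.2     # probability that any given visit is missed (illustrative)
HORIZON = 720       # end of follow-up, in days

def observed_interval(true_progression_day):
    """Return the (left, right] interval in which progression is known to
    have occurred, given a visit schedule with randomly missed visits."""
    last_clear = 0.0
    visit = VISIT_GAP
    while visit <= HORIZON:
        if random.random() > MISS_PROB:           # visit attended
            if visit >= true_progression_day:
                return (last_clear, float(visit)) # progression first detected here
            last_clear = float(visit)
        visit += VISIT_GAP
    return (last_clear, float("inf"))             # never observed: right-censored

# A patient progressing on day 200 is only known to have progressed somewhere
# in an interval whose width grows with each missed visit.
lo, hi = observed_interval(200)
print(lo, hi)
```

Analyzing such data as if the exact progression day were observed understates uncertainty; interval-censored methods work directly with these (left, right] pairs.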

Neuropsychological Profiles in Alzheimer's Disease and Cerebral Infarction: A Longitudinal MIMIC Model
October 5, 2011

Read more for details.

Frances Yang, PhD, Hebrew SeniorLife
Richard Jones, ScD, Hebrew SeniorLife
Alex Grigorenko, Department of Biostatistics

Wednesday, October 5, 2011, 3:30-5:30 PM, reception to follow
Harvard T.H. Chan School of Public Health
Kresge G2

This seminar will describe a longitudinal extension of the Multiple Indicators Multiple Causes (MIMIC) model to characterize associations between cognitive decline and findings of Alzheimer's disease (AD) or cerebral infarction at death. The data come from the Religious Orders Study, a longitudinal study of priests, monks, and nuns who agreed prospectively to autopsy.

The speakers will describe statistical methods for identifying a specific neuropsychological profile characteristic of emerging AD and cerebral infarction. They hypothesized that specific neuropsychological functions are preferentially impaired in the presence of AD and vascular neuropathology. The seminar will cover three topics: (1) background, (2) an extension of the MIMIC model to the longitudinal setting and its implementation in Mplus, and (3) results.

The study used data from the Religious Orders Study (ROS), a large prospective study of cognitive aging and neuropathology. The sample included 502 ROS participants followed from enrollment to death with an annual neuropsychological battery and brain autopsy. The analytic approach involved the use of Mplus software to estimate a measurement model for neuropsychological performance assessed with 17 neuropsychological tests, extended to accommodate repeated assessments over 10 years. Preliminary results will be presented describing the general pattern of cognitive decline and impairments specific to individual tests in the presence of AD neuropathology or cerebrovascular infarction.
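
As a rough illustration of the MIMIC structure (a latent factor driven by an observed "cause" and measured through several indicators), the following Python sketch simulates such data. The loadings, effect size, and composite-score shortcut are all illustrative assumptions; this is not the authors' Mplus model.

```python
import random, statistics

random.seed(1)
N = 5000
LOADINGS = [0.8, 0.7, 0.6]   # indicator loadings (illustrative)
GAMMA = -0.5                 # effect of the "cause" on the latent factor (illustrative)

xs, composites = [], []
for _ in range(N):
    x = random.random()                       # observed cause, e.g. pathology burden
    eta = GAMMA * x + random.gauss(0, 1)      # latent trait, e.g. cognition
    ys = [lam * eta + random.gauss(0, 0.5) for lam in LOADINGS]  # indicators
    xs.append(x)
    composites.append(sum(ys) / len(ys))

# Regressing the indicator composite on x recovers GAMMA scaled by the
# average loading (0.7 here), i.e. roughly -0.35.
mx, mc = statistics.mean(xs), statistics.mean(composites)
cov = sum((a - mx) * (b - mc) for a, b in zip(xs, composites)) / (N - 1)
slope = cov / statistics.variance(xs)
print(round(slope, 2))
```

A full MIMIC analysis estimates the loadings and the covariate effect jointly in a structural equation framework rather than via a composite score; the sketch only conveys the data-generating structure.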

Slides from this presentation.

Comparative Effectiveness Clinical Trials: Methodological and Practical Considerations
May 4, 2011

Read more for details.

Peter Peduzzi, PhD
Professor, Yale School of Public Health
Director, Yale Center for Analytical Sciences
VA Cooperative Studies Program

Wednesday, May 4, 2011, 3:30-4:45 PM
Beth Israel Deaconess Medical Center
Kirstein Living Room, Kirstein Building - 1st Floor
330 Brookline Avenue

Comparative effectiveness clinical trials are comparisons of treatments (usually randomized) designed to determine which treatment options are superior in order to better inform decision makers. Treatment options could be similar, such as a comparison of different drugs or surgical techniques, or could be very different, such as comparisons of open surgery versus a device or behavioral therapy versus pharmacological therapy. Comparative effectiveness studies have a long history in clinical trials. Schwartz and Lellouch (1967) made the distinction between pragmatic (effectiveness) and explanatory (efficacy) trials. The distinction between effectiveness and efficacy is not always clear; many trials have elements of both types of studies, i.e., hybrid designs. In this talk, some methodological and practical considerations for designing and conducting these types of studies will be presented and illustrated with data from actual clinical trials. Some of these considerations relate to managing risk factors, maintaining studywide clinical equipoise, accounting for patient preferences, accommodating evolving technology, and using usual care as a comparator. Future directions will also be discussed, including making comparative effectiveness clinical trials more efficient and generalizable and strengthening the research infrastructure.

The Biostatistician's Role in Managing Clinical Translational Research Data
March 30, 2011

Read more for details.

Brad H. Pollock, MPH, PhD
Department of Epidemiology and Biostatistics
School of Medicine
University of Texas Health Science Center at San Antonio

Wednesday, March 30, 2011, 3:30-4:45 PM
Dana-Farber Cancer Institute
CLSB Building 11081, 3 Blackfan Circle, 11th Floor

Computation has played a pivotal role in modern biostatistical practice with a major emphasis on the development and application of new analytic methods. Less computational attention has been focused on the data management component of biostatistics units. Biomedical informatics has an increasingly prominent role in the clinical translational research enterprise, especially with the growth of the Clinical Translational Science Award program; however, interactions between biostatistics and those in computational disciplines have not been fully exploited. Clinical translational research is likely to be strengthened through synergistic interaction between biostatisticians, informaticians, and information technology experts.

For research data operations, we will discuss: 1) infrastructure and technologies; 2) personnel responsibilities and oversight; 3) human subjects and security considerations; and 4) opportunities to promote interactions between disciplines. Examples will be given of systems and projects that bring biostatistics together with other experts in order to optimize biostatistical core operations.

Slides from Dr. Pollock's presentation.

Return to top

Dense longitudinal data analysis for counts of self-reported events
February 8, 2011

Read more for details.

Ronald Thisted, PhD
Department of Health Studies
The University of Chicago

Tuesday, February 8, 2011, 3:30-4:45 PM
Brigham & Women's Hospital
OBC Room 4-002B

Pseudobulbar affect is a condition manifested by socially debilitating outbursts of uncontrollable laughing or crying that can occur multiple times per day. An effective drug for treating this condition will reduce the number of reported episodes, but making that simple idea operational is surprisingly difficult. We examine challenges (and some approaches) that arise in describing, modeling, and making inferences in the context of a randomized clinical trial for which the outcome consists of dense longitudinal observations of daily episode counts. We illustrate these ideas using data from a recently completed clinical trial (Pioro, et al., Ann Neurol 2010; 68: 693-702).
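
One core difficulty with daily self-reported episode counts is overdispersion: patient-to-patient heterogeneity in the underlying rate makes the variance of the counts exceed the Poisson mean, so a naive Poisson model understates uncertainty. A Python sketch of this (a gamma-Poisson, i.e. negative binomial, mixture with illustrative parameters; not the trial's data):

```python
import math, random, statistics

random.seed(7)

def draw_poisson(rate):
    """Poisson draw via Knuth's multiplication method."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def patient_counts(days=84):
    # Patient-specific latent daily rate: gamma frailty with mean 2.0 * 1.5 = 3.0
    rate = random.gammavariate(2.0, 1.5)
    return [draw_poisson(rate) for _ in range(days)]

# Pool 200 patients' daily counts: the marginal variance is well above the
# marginal mean (for this mixture, roughly 7.5 vs. 3.0), which a plain
# Poisson model would miss.
all_counts = [c for _ in range(200) for c in patient_counts()]
m = statistics.mean(all_counts)
v = statistics.variance(all_counts)
print(round(m, 2), round(v, 2))
```

Models with patient-level random effects or a negative binomial marginal distribution are natural candidates for such dense count outcomes.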

The Design and Analysis of Studies of Diagnostic Tests: The Methodologist as Medical Decision Maker
November 9, 2010

Read more for details.

Christopher Lindsell, PhD
Associate Professor and Director of Research, Department of Emergency Medicine
University of Cincinnati

Tuesday, November 9, 2010, 3:30-4:45 PM
Boston Children's Hospital
CLSB, 3 Blackfan Circle, 12th Floor, Room 12007

Physicians rely on diagnostic tests to help with their decision making, and must weigh the strength of information provided against the risks of making an incorrect diagnosis. In designing the research surrounding a diagnostic test, the methodologist has significant impact on subsequent medical decision making. A poorly designed study, or a well-designed study that is poorly analyzed and reported, can compromise a physician's diagnosis and may have a direct effect on patients' lives. It is imperative that every methodologist facilitating research in academic health centers have a thorough understanding of the fundamental components of diagnostic test research and be able to apply sound statistical principles to the analysis of these studies. This talk will discuss the primary considerations for the design of diagnostic test research, including the choice of index tests and criterion standards, avoiding work-up bias, and eliminating circular reasoning. The statistical methods used to analyze diagnostic studies will be reviewed. Approaches that the methodologist can use to extend the analysis to provide information useful to the clinical decision maker evaluating an individual patient will be explored.
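
The basic quantities from a diagnostic 2x2 table, and the odds-form Bayes update that carries a pre-test probability to a post-test probability, can be sketched as follows (the counts are hypothetical, chosen only to illustrate the arithmetic):

```python
def diagnostic_summary(tp, fp, fn, tn):
    """Standard diagnostic-test measures from a 2x2 table
    (rows = test result, columns = true disease status)."""
    sens = tp / (tp + fn)          # sensitivity: P(test+ | disease)
    spec = tn / (tn + fp)          # specificity: P(test- | no disease)
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    return sens, spec, lr_pos, lr_neg

def posttest_prob(pretest, lr):
    """Update a pre-test probability with a likelihood ratio via odds."""
    odds = pretest / (1 - pretest)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical table: 90 true positives, 50 false positives,
# 10 false negatives, 850 true negatives.
sens, spec, lr_pos, lr_neg = diagnostic_summary(tp=90, fp=50, fn=10, tn=850)
print(sens, round(spec, 3), round(lr_pos, 1))
# With a 10% pre-test probability, a positive result raises the
# probability of disease to about 64%.
print(round(posttest_prob(0.10, lr_pos), 3))
```

Reporting likelihood ratios alongside sensitivity and specificity gives the clinician exactly the quantity needed for this patient-level update.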

Statistical Strategies for Comparative Effectiveness Research Using Observational Data
October 19, 2010

Read more for details.

Sharon-Lise Normand, PhD
Professor of Health Care Policy (Biostatistics)
Harvard Medical School
Professor, Department of Biostatistics
Harvard T.H. Chan School of Public Health

Tuesday, October 19, 2010, 3:30-4:45 PM
Kresge G2
Harvard T.H. Chan School of Public Health

Substantial attention is focused on comparative effectiveness research, that is, "comparing different medical interventions and strategies to prevent, diagnose, treat, and monitor health conditions" in order to improve health outcomes. The assumption that comparative effectiveness research will provide timely, relevant evidence rests on changing the current framework for assembling evidence. Divergence from this framework involves the recognition that randomized trials that often serve as the basis for new technology approval are small and short-term, and post-market studies are often voluntary and difficult to implement. These problems have become increasingly important over the last decade because technology is changing at a rapid pace, therapies are utilized outside their intended populations, and more representative groups of patients are likely to have differential responses to the same therapy.

In this talk, I will discuss three questions: (1) why use observational data analysis for comparative effectiveness research; (2) how to use observational data for comparative effectiveness research; and (3) what new statistical methodologies will be required for comparative effectiveness research. Key statistical issues, such as defining causal effects, justification for lack of randomization, as well as design and analytical strategies (use of multiple control groups and matching strategies), will be discussed. Examples involving the safety and effectiveness of direct thrombin inhibitors compared to heparin, and of metal-on-metal total hip replacement systems compared to other bearing surface hips, illustrate methods.
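
As a minimal illustration of why crude observational comparisons mislead, and how conditioning on a confounder (a simple stand-in for the matching and propensity-score strategies discussed) recovers the effect, consider this simulated sketch. All numbers are illustrative assumptions, not data from the talk's examples.

```python
import random, statistics

random.seed(3)
# Simulated observational data: sicker patients (severity = True) are more
# likely to receive treatment AND have worse outcomes, confounding the
# crude treated-vs-untreated comparison.
TRUE_EFFECT = -1.0   # treatment lowers the outcome by 1 unit

rows = []
for _ in range(20000):
    severity = random.random() < 0.5
    p_treat = 0.8 if severity else 0.2          # confounding by indication
    treated = random.random() < p_treat
    outcome = 2.0 * severity + TRUE_EFFECT * treated + random.gauss(0, 1)
    rows.append((severity, treated, outcome))

def mean_outcome(sev=None, treated=None):
    vals = [y for s, t, y in rows
            if (sev is None or s == sev) and (treated is None or t == treated)]
    return statistics.mean(vals)

crude = mean_outcome(treated=True) - mean_outcome(treated=False)
adjusted = statistics.mean(
    [mean_outcome(sev=s, treated=True) - mean_outcome(sev=s, treated=False)
     for s in (False, True)])
# The crude contrast is badly biased; the severity-stratified contrast
# recovers the true effect of about -1.0.
print(round(crude, 2), round(adjusted, 2))
```

Propensity-score matching and multiple control groups generalize this idea to many confounders, where direct stratification becomes infeasible.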

Data Monitoring of Clinical Trials Using Prediction
April 14, 2010

Read more for details.

Scott Evans, PhD
Senior Research Scientist in the Department of Biostatistics at the Harvard T.H. Chan School of Public Health
(with Lingling Li, Hajime Uno, and LJ Wei)

Wednesday, April 14, 2010, 3:30-5:00 PM
East Campus, Reisman Lecture Hall, Feldberg/Reisman Complex
Beth Israel Deaconess Medical Center

We present the use of prediction as an informative and flexible tool for quantitative monitoring of clinical trials (CTs). Prediction can be used to assist in evaluating efficacy or futility, or to evaluate design assumptions. It provides information regarding effect size estimates and their associated precision if the trial continues; can be used in superiority or noninferiority CTs; can be used for binary, continuous, and time-to-event endpoints; provides flexibility in the decision-making process; and can be used in tandem with repeated confidence intervals to control error rates. We will describe predicted interval plots (PIPs) and show examples from CTs in which these methods have been used in decision-making by protocol teams, Data Safety and Monitoring Boards (DSMBs), and planners of development programs.
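
The idea behind prediction-based monitoring can be sketched as follows: at an interim look, simulate the remaining enrollment under the current estimates and compute the confidence interval that would result at full enrollment; repeating this shows the spread of final results compatible with continuing the trial. This simplified Python sketch (normal outcomes, illustrative numbers) is not the authors' PIP implementation.

```python
import math, random, statistics

random.seed(11)
# Interim data for a two-arm trial: 100 patients per arm observed so far
# (true mean difference 1.0, SD 2.0 -- illustrative).
interim_a = [random.gauss(1.0, 2.0) for _ in range(100)]
interim_b = [random.gauss(0.0, 2.0) for _ in range(100)]
N_FINAL = 250   # planned final per-arm sample size

def predicted_final_ci():
    """Simulate the remaining enrollment under current estimates, then
    compute the 95% CI for the mean difference at full enrollment."""
    ma, mb = statistics.mean(interim_a), statistics.mean(interim_b)
    sa, sb = statistics.stdev(interim_a), statistics.stdev(interim_b)
    full_a = interim_a + [random.gauss(ma, sa) for _ in range(N_FINAL - len(interim_a))]
    full_b = interim_b + [random.gauss(mb, sb) for _ in range(N_FINAL - len(interim_b))]
    diff = statistics.mean(full_a) - statistics.mean(full_b)
    se = math.sqrt(statistics.variance(full_a) / N_FINAL
                   + statistics.variance(full_b) / N_FINAL)
    return diff - 1.96 * se, diff + 1.96 * se

# The distribution of these predicted final CIs is the information a
# predicted interval plot conveys to a DSMB.
cis = [predicted_final_ci() for _ in range(200)]
excludes_zero = sum(lo > 0 for lo, hi in cis) / len(cis)
print(round(excludes_zero, 2))
```

Simulating under alternative assumed effects (rather than only the current estimate) extends the same machinery to futility assessment.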

Slides from Dr. Evans's presentation.

Return to top

Adaptive Designs for Clinical Trials: Insightfully Innovative or Irrelevantly Impractical
March 10, 2010

Read more for details.

Stuart Pocock, PhD
Professor of Medical Statistics
London School of Hygiene & Tropical Medicine

Wednesday, March 10, 2010, 3:30-5:00 PM
Dana-Farber Cancer Institute
CLSB Building, 3 Blackfan Circle, 11th Floor, Room 11081A & B

There is much interest in the role of adaptive designs in major Phase III trials with a view to making the development of new treatments more flexible and efficient. This talk will review the main types of Adaptive Design (e.g., unblinded sample size re-estimation, seamless Phase II/III trials) from both statistical and practical perspectives. The methodology will be illustrated by several real examples of adaptive designs.
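
Unblinded sample size re-estimation, one of the adaptive designs mentioned, can be sketched with the standard two-sample formula for comparing means, recomputed with an interim SD estimate. This is a simplified illustration with assumed design numbers; it ignores the type I error adjustments such re-estimation may require in practice.

```python
import math
from statistics import NormalDist

def reestimated_n(sd_interim, delta, alpha=0.05, power=0.90):
    """Per-arm sample size for a two-arm comparison of means, recomputed
    with an interim SD estimate:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sd_interim ** 2 / delta ** 2)

# The design assumed SD = 10 for a target difference of 5; an interim SD
# estimate of 13 implies the trial must enlarge to preserve 90% power.
print(reestimated_n(10, 5), reestimated_n(13, 5))   # 85 143
```

The statistical subtlety, discussed in the talk's literature, is that letting interim data drive the final sample size can inflate the type I error unless the final test is adjusted accordingly.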

Slides from Dr. Pocock's presentation.

Overview of methods for analyzing cluster-correlated data
January 27, 2010

Read more for details.

Garrett Fitzmaurice, ScD
Professor in the Department of Biostatistics
Harvard T.H. Chan School of Public Health

Wednesday, January 27, 2010, 3:30-5:00pm
(Please note different time)

Kresge G2
Harvard T.H. Chan School of Public Health

Many studies in the health sciences give rise to data that are clustered or cluster-correlated. For example, clustered data commonly arise when intact groups are randomized to health interventions or when naturally occurring groups in the population are randomly sampled. With clustered data, we might reasonably expect that measurements on units within a cluster are more similar than measurements on units in different clusters. The degree of clustering can be expressed in terms of correlation, and this correlation invalidates the crucial assumption of independence that is the cornerstone of so many standard statistical techniques. In this seminar we (i) present numerous examples where cluster-correlated data arise, (ii) discuss the consequences of clustering for statistical analysis, (iii) review some of the dominant approaches for analyzing cluster-correlated data, and (iv) illustrate, via two case studies, the application of methods for analyzing cluster-correlated data.
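
The consequence of clustering for analysis is usually summarized by the intraclass correlation (ICC) and the design effect, 1 + (m - 1) * ICC for clusters of size m, which is the factor by which clustering inflates the variance of a mean relative to independent sampling. A Python sketch using the classical one-way ANOVA estimator of the ICC (cluster sizes and variance components are illustrative):

```python
import random, statistics

random.seed(5)
N_CLUSTERS, M = 40, 25          # clusters and units per cluster (illustrative)
SIGMA_B, SIGMA_W = 1.0, 2.0     # between- and within-cluster SDs (illustrative)

# Shared cluster effects induce within-cluster correlation:
# true ICC = 1.0 / (1.0 + 4.0) = 0.2.
clusters = []
for _ in range(N_CLUSTERS):
    u = random.gauss(0, SIGMA_B)
    clusters.append([u + random.gauss(0, SIGMA_W) for _ in range(M)])

# One-way ANOVA estimate of the ICC from between- and within-cluster
# mean squares.
grand = statistics.mean(y for c in clusters for y in c)
msb = M * sum((statistics.mean(c) - grand) ** 2 for c in clusters) / (N_CLUSTERS - 1)
msw = (sum((y - statistics.mean(c)) ** 2 for c in clusters for y in c)
       / (N_CLUSTERS * (M - 1)))
icc = (msb - msw) / (msb + (M - 1) * msw)
deff = 1 + (M - 1) * icc        # variance inflation vs. independent sampling
print(round(icc, 2), round(deff, 1))
```

Even a modest ICC yields a large design effect when clusters are big, which is why analyses that ignore clustering can badly overstate precision.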

Slides from Dr. Fitzmaurice's presentation.

Return to top