Evaluating Public Health Interventions: 2. Stepping Up to Routine Public Health Evaluation with the Stepped Wedge Design
Citation: Spiegelman, Donna. 2016. "Evaluating Public Health Interventions: 2. Stepping Up to Routine Public Health Evaluation With the Stepped Wedge Design." American Journal of Public Health 106 (3): 453–57. https://doi.org/10.2105/ajph.2016.303068.
Abstract: In a stepped wedge design (SWD), an intervention is rolled out in a staggered manner over time, in groups of experimental units, so that by the end, all units experience the intervention. For example, in the MaxART study, the date at which to offer universal antiretroviral therapy to otherwise ineligible clients is being randomly assigned in nine "steps" of four months' duration, so that after three years, all 14 facilities in northern and central Swaziland will be offering early treatment. In the common alternative, the cluster randomized trial (CRT), experimental units are randomly allocated on a single common start date to the interventions to be compared. Often, the SWD is more feasible than the CRT, for both practical and ethical reasons, but takes longer to complete. The SWD permits both within- and between-unit comparisons, while the CRT only allows between-unit comparisons. Thus, confounding bias with respect to time-invariant factors tends to be lower in an SWD than in a CRT, but the SWD cannot as readily control for confounding by time-varying factors. SWDs generally have more statistical power than CRTs, especially as the intraunit correlation and the number of participants within each unit increase. Software for both designs is available, although for a more limited set of SWD scenarios.
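The staggered rollout the abstract describes can be pictured as a binary cluster-by-period exposure matrix: each cluster is randomly assigned a crossover step, contributes control periods before that step and intervention periods from it onward, and every cluster is exposed by the final period. The sketch below is an illustrative assumption, not code from the article; the function name and the roughly balanced assignment scheme are hypothetical, with the dimensions (14 facilities, 9 steps) taken from the MaxART example.

```python
import random

def stepped_wedge_schedule(n_clusters, n_steps, seed=None):
    """Randomly assign clusters to crossover steps (group sizes
    differing by at most one), then build the binary exposure matrix:
    rows are clusters, columns are periods 0..n_steps; a cluster
    assigned step s is under the intervention (1) in every period >= s."""
    rng = random.Random(seed)
    # Cycle through steps 1..n_steps so the groups stay balanced, then shuffle.
    steps = [(i % n_steps) + 1 for i in range(n_clusters)]
    rng.shuffle(steps)
    schedule = [[1 if period >= s else 0 for period in range(n_steps + 1)]
                for s in steps]
    return steps, schedule

# MaxART-like dimensions from the abstract: 14 facilities, 9 steps.
steps, schedule = stepped_wedge_schedule(14, 9, seed=1)
```

Each row is monotone non-decreasing, starts at 0 (the all-control baseline period) and ends at 1, which is exactly the "wedge" pattern: within-unit comparisons come from a cluster's own before/after periods, between-unit comparisons from clusters at different steps in the same period.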
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:41384813