Publication:
Making Every Study Count: Learning From Replication Failure to Improve Intervention Research


Date

2019-12

Authors

Kim, James S.

Publisher

American Educational Research Association (AERA)
The Harvard community has made this article openly available.

Citation

Kim, J. S. (2019). Making Every Study Count: Learning From Replication Failure to Improve Intervention Research. Educational Researcher, 48(9), 599–607.

Abstract

Why, when so many educational interventions demonstrate positive impact in tightly controlled efficacy trials, are null results common in follow-up effectiveness trials? Using case studies from literacy, this article suggests that replication failure can surface hidden moderators (contextual differences between an efficacy trial and an effectiveness trial) and generate new hypotheses and questions to guide future research. First, replication failure can reveal systemic barriers to program implementation. Second, it can highlight for whom and in what contexts a program's theory of change works best. Third, it suggests that a fidelity-first, adaptation-second model of program implementation can enhance the effectiveness of evidence-based interventions and improve student outcomes. Ultimately, researchers can make every study count by learning from both replication success and failure to improve the rigor, relevance, and reproducibility of intervention research.

Keywords

Education

Terms of Use

This article is made available under the terms and conditions applicable to Open Access Policy Articles (OAP), as set forth in the Terms of Service.
