Using Single-Case Experiments to Support Evidence-Based Decisions: How Much Is Enough?
Published in: Behavior Modification, Vol. 40, No. 3, pp. 377-395
Format: Journal Article
Language: English
Published: Los Angeles, CA: SAGE Publications, 01.05.2016
ISSN: 0145-4455, 1552-4167
DOI: 10.1177/0145445515613584
Summary: For practitioners, the use of single-case experimental designs (SCEDs) in the research literature raises an important question: How many single-case experiments are enough to have sufficient confidence that an intervention will be effective with an individual from a given population? Although standards have been proposed to address this question, current guidelines do not appear to be strongly grounded in theory or empirical research. Our article addresses this issue by presenting guidelines that facilitate evidence-based decisions through a simple statistical approach to quantifying the support for interventions validated using SCEDs. Specifically, we propose the use of success rates as a supplement to support evidence-based decisions. The proposed methodology allows practitioners to aggregate the results from single-case experiments to estimate the probability that a given intervention will produce a successful outcome. We also discuss considerations and limitations associated with this approach.
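The success-rate idea described in the summary can be illustrated with a short sketch. This is a hypothetical example of aggregating dichotomous outcomes across experiments, not the authors' exact procedure: each single-case experiment is coded as a success or failure against a predefined criterion, the aggregate success rate is computed, and a Wilson score interval quantifies the uncertainty that comes with a small number of experiments. The function names and the example data are illustrative assumptions.

```python
from math import sqrt

def success_rate(outcomes):
    """Proportion of single-case experiments judged successful.
    `outcomes` is a list of booleans (hypothetical coded results)."""
    return sum(outcomes) / len(outcomes)

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion.
    Better behaved than the normal approximation when n is small,
    as is typical for a body of single-case experiments."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical literature: 9 of 12 single-case experiments met
# a predefined success criterion for the intervention.
outcomes = [True] * 9 + [False] * 3
rate = success_rate(outcomes)    # 0.75
lo, hi = wilson_interval(9, 12)  # roughly (0.47, 0.91)
```

With only 12 experiments, the wide interval makes the practical point of the article concrete: a point estimate of 0.75 alone overstates how much the evidence base tells a practitioner about the probability of success with a new individual.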