An Empirical Investigation of Variance Design Parameters for Planning Cluster-Randomized Trials of Science Achievement

Bibliographic Details
Published in: Evaluation Review, Vol. 37, No. 6, pp. 490-519
Main Authors: Westine, Carl D.; Spybrook, Jessaca; Taylor, Joseph A.
Format: Journal Article
Language: English
Published: Los Angeles, CA: SAGE Publications, 01.12.2013
ISSN: 0193-841X, 1552-3926
DOI: 10.1177/0193841X14531584


Summary:
Background: Prior research has focused primarily on empirically estimating design parameters for cluster-randomized trials (CRTs) of mathematics and reading achievement. Little is known about how design parameters compare across other educational outcomes.
Objectives: This article presents empirical estimates of design parameters that can be used to appropriately power CRTs in science education and compares them to estimates for mathematics and reading.
Research Design: Estimates of intraclass correlations (ICCs) are computed for unconditional two-level (students in schools) and three-level (students in schools in districts) hierarchical linear models of science achievement. Relevant student- and school-level pretest and demographic covariates are then considered, and estimates of variance explained are computed.
Subjects: Five consecutive years of Texas student-level data for Grades 5, 8, 10, and 11.
Measures: Science, mathematics, and reading achievement raw scores as measured by the Texas Assessment of Knowledge and Skills.
Results: Findings show that ICCs in science range from .172 to .196 across grades and are generally higher than comparable statistics in mathematics, .163-.172, and reading, .099-.156. When available, a 1-year lagged student-level science pretest explains the most variability in the outcome. The 1-year lagged school-level science pretest is the best alternative in the absence of a 1-year lagged student-level science pretest.
Conclusion: Science education researchers should use design parameters derived from science achievement outcomes.
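The ICC and variance-explained estimates summarized above are the inputs to standard power calculations for two-level CRTs. As an illustration (not taken from the article), the following Python sketch computes a minimum detectable effect size (MDES) using the standard approximation for a school-randomized design with a normal multiplier. The ICC of .18 falls within the science range (.172-.196) reported above; the R-squared values, number of schools, and students per school are hypothetical assumptions.

```python
from math import sqrt
from statistics import NormalDist

def mdes_two_level(rho, j, n, r2_school=0.0, r2_student=0.0,
                   p_treat=0.5, alpha=0.05, power=0.80):
    """MDES for a two-level CRT (students nested in schools, schools
    randomized), using a normal-approximation multiplier.

    rho        -- intraclass correlation (ICC)
    j, n       -- number of schools, students per school
    r2_school  -- proportion of school-level variance explained by covariates
    r2_student -- proportion of student-level variance explained by covariates
    p_treat    -- proportion of schools assigned to treatment
    """
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)  # ~2.80
    denom = p_treat * (1 - p_treat) * j
    var_between = rho * (1 - r2_school) / denom
    var_within = (1 - rho) * (1 - r2_student) / (denom * n)
    return multiplier * sqrt(var_between + var_within)

# Hypothetical design: 40 schools, 60 students each, with a science
# pretest explaining 70% of school-level and 50% of student-level variance.
print(round(mdes_two_level(rho=0.18, j=40, n=60,
                           r2_school=0.7, r2_student=0.5), 3))
```

As the article's results suggest, a strong lagged science pretest (large r2_school and r2_student) substantially shrinks the MDES for a fixed number of schools, which is why outcome-specific design parameters matter when planning a CRT.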