What Cognitive Interviewing Reveals about a New Measure of Undergraduate Biology Reasoning

Bibliographic Details
Published in: The Journal of Experimental Education, Vol. 89, No. 1, pp. 145–168
Main Authors: Cromley, Jennifer G.; Dai, Ting; Fechter, Tia; Van Boekel, Martin; Nelson, Frank E.; Dane, Aygul
Format: Journal Article
Language: English
Published: Washington: Routledge, Taylor & Francis, 02.01.2021
ISSN: 0022-0973, 1940-0683
DOI: 10.1080/00220973.2019.1613338

Summary: Reasoning skills have been clearly related to achievement in introductory undergraduate biology, a course with a high failure rate that may contribute to dropout among undergraduate STEM majors. Existing measures focus on the experimental method, such as generating hypotheses, choosing a research method, controlling variables other than those manipulated in an experiment, analyzing data (e.g., naming independent and dependent variables), and drawing conclusions from results. We developed a new measure, called Inference Making and Reasoning in Biology (IMRB), that tests deductive reasoning in biology outside the context of the experimental method, using biology content that examinees have not previously been taught. We present results from coded cognitive interviews with 86 undergraduate biology students completing the IMRB, using within-subjects comparisons of verbalizations when questions are answered correctly versus incorrectly. Results suggest that the IMRB taps local and global inferences but not knowledge acquired before study or elaborative inferences that require such knowledge. For the most part, reading comprehension/study strategies do not help examinees answer IMRB questions correctly, with the exceptions of recalling information learned earlier in the measure, summarizing, paraphrasing, skimming, and noting text structure. Likewise, test-taking strategies do not help examinees answer IMRB questions correctly, except for noting that a passage had not mentioned specific information. Similarly, vocabulary did not help examinees answer IMRB questions correctly. With regard to metacognitive monitoring, examinees more often noted a lack of understanding when questions were answered incorrectly. Thus, we present strong validity evidence for the IMRB, which is available to STEM researchers and measurement experts.