Interrater agreement with a standard scheme for classifying medication errors
| Published in | American Journal of Health-System Pharmacy, Vol. 64, No. 2, pp. 175-181 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | England; American Society of Health-System Pharmacists; Oxford University Press; 15.01.2007 |
| Subjects | |
| ISSN | 1079-2082; 1535-2900 |
| DOI | 10.2146/ajhp060109 |
| Summary: | The interrater agreement for and reliability of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) index for categorizing medication errors were determined. A letter was sent by the U.S. Pharmacopeia to all 550 contacts in the MEDMARX system user database. Participants were asked to categorize 27 medication scenarios using the NCC MERP index and were randomly assigned to one of three tools (the index alone, a paper-based algorithm, or a computer-based algorithm) to assist in categorization. Because the NCC MERP index accounts for harm and cost, and because categories could be interpreted as substantially similar, study results were also analyzed after the nine error categories were collapsed to six. Interrater agreement was measured using Cohen's kappa. Of 119 positive responses, 101 completed surveys were returned, for a response rate of 85%. There were no significant differences in baseline demographics among the three groups. Overall interrater agreement, regardless of group assignment, was substantial, with a kappa of 0.61 (95% confidence interval [CI], 0.41-0.81). Kappa values did not differ among the three study groups or the tools used to aid in medication error classification. When the index was condensed from nine categories to six, interrater agreement increased, with a kappa of 0.74 (95% CI, 0.56-0.90). Overall interrater agreement for the NCC MERP index for categorizing medication errors was substantial. The tool provided to assist with categorization did not influence overall categorization. Further refinement of the scale could improve the usefulness and validity of medication error categorization. |
|---|---|
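The abstract reports agreement as Cohen's kappa, a chance-corrected agreement statistic. The sketch below illustrates how that statistic is computed for two hypothetical raters assigning NCC MERP categories to the same set of scenarios; the raters, scenarios, and category assignments are invented for illustration, and the study's multi-rater design and its specific collapse of nine categories into six are not reproduced here.

```python
# Minimal illustrative sketch of Cohen's kappa (not the study's actual analysis).
from collections import Counter

def cohens_kappa(rater1, rater2):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert rater1 and len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of scenarios on which the two raters assign the same category.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical categorizations of nine scenarios (NCC MERP categories run from A to I).
rater1 = ["C", "D", "E", "C", "B", "F", "D", "C", "E"]
rater2 = ["C", "D", "E", "D", "B", "F", "D", "C", "F"]
print(round(cohens_kappa(rater1, rater2), 2))  # -> 0.72
```

Collapsing similar categories before computing kappa, as the study did in reducing nine categories to six, converts any disagreement that falls entirely within a merged group into an agreement, which is consistent with the higher kappa reported for the condensed index.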