Seigerman D, Lutsky K, Banner L, Fletcher D, Leinberry C, Lucenti L, et al. (2020). The Reliability of Determining the Presence of Surgical Site Infection Based on Retrospective Chart Review. Journal of Hand Surgery, 45(12), 1181.e1-1181.e4. doi: 10.1016/j.jhsa.2020.05.016.
The Reliability of Determining the Presence of Surgical Site Infection Based on Retrospective Chart Review
Lucenti L
2020-07-18
Abstract
Purpose: Surgical site infection (SSI) can be a challenging complication after hand surgery. Retrospective studies often rely on chart review to determine the presence of an SSI. The purpose of this study was to assess the reliability of Centers for Disease Control and Prevention (CDC) criteria for determining an SSI as applied to a chart review. We hypothesized that interobserver and intraobserver reliability for determining an SSI using these criteria while reviewing medical record documentation would be none to minimal (k < 0.39), based on an interpretation of Cohen's k statistics.

Methods: We created and used a database of 782 patients, 48 of whom received antibiotics within 3 months of a surgical procedure of the hand. Three fellowship-trained orthopedic hand surgeons then evaluated the charts of those 48 patients; each reviewer determined whether an SSI was present or absent based on CDC criteria provided to the reviewers. Patients' charts were then reassessed 1 month later by the same reviewers. Kappa statistics were calculated for each round of assessment and averaged to determine intraobserver and interobserver reliability.

Results: The overall k value was 0.22 (standard error, 0.13), indicating fair reliability. The average k value between reviewers was 0.26 (standard error, 0.13). On average, intrarater reliability was 68.7%.

Conclusions: We found poor interobserver and intraobserver reliability when using CDC criteria to determine whether a patient had an SSI based on chart review.
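Cohen's kappa, the statistic reported above, compares the observed agreement between two reviewers with the agreement expected by chance from each reviewer's marginal rating frequencies. A minimal sketch of the calculation for binary present/absent calls is shown below; the ratings are hypothetical illustrations, not the study's data:

```python
# Minimal sketch of Cohen's kappa for two reviewers' binary SSI calls.
# The reviewer ratings below are hypothetical, not the study's data.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of cases where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequency per category.
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical SSI determinations (1 = infection present, 0 = absent).
reviewer_1 = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
reviewer_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # -> 0.4
```

Because kappa subtracts chance agreement, two reviewers can agree on most charts yet still produce a low kappa — which is why the study reports kappa values rather than raw percentage agreement alone.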