Bender RH, Lenderking WR. Nuances of assessing clinician agreement in clinician-reported outcomes (ClinROs). Poster presented at the ISPOR 22nd Annual International Meeting; May 2017; Boston, MA. [abstract] Value Health. 2017;20:A345-6.

Pharmaceutical companies' increased interest in using ClinROs for clinical trial endpoints and FDA submissions has renewed attention to the standards for their validation. ClinROs require somewhat different approaches to validation than Patient-Reported Outcomes (PROs). For example, ClinROs can be based on readings, ratings, or even performance, which adds variability and introduces nuances into their validation. One key component of validating a ClinRO is establishing its reliability; for most ClinROs, this means inter- and intra-rater reliability. Although often considered straightforward, reliability assessment can be the most technically challenging part of the validation analysis. An overview will be given of the primary competing statistics for assessing reliability (Pearson r, kappa, etc.), including a brief rationale and some pros and cons for each. We will focus on the intraclass correlation coefficient (ICC) as the most useful statistic for rating scale data, discussing its many forms and how to choose correctly among them. In particular, we will consider the distinction between aiming to demonstrate consistency versus agreement when choosing an ICC. We will also discuss key design considerations that can seriously impact ICCs and undermine the meaningfulness of the validation, for example sample size and how much flexibility to allow around departures from the intended test-retest interval. Also included will be a discussion of framing or identifying ICC standards or criteria for ClinRO validation. Throughout, we will share noteworthy insights from our experience with actual assessments, illustrating how key elements of this discussion may play out in real ClinRO validation work in the FDA context.
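The abstract does not present formulas, but the consistency-versus-agreement distinction it highlights can be made concrete with a small sketch. Assuming the standard two-way, single-rater ICC definitions (McGraw & Wong's ICC(C,1) and ICC(A,1)), the function below (a hypothetical helper, not from the poster) computes both from the two-way ANOVA mean squares of a subjects-by-raters matrix:

```python
import numpy as np

def icc_consistency_agreement(ratings):
    """Two-way, single-rater ICCs from an n_subjects x k_raters matrix:
    ICC(C,1) for consistency, ICC(A,1) for absolute agreement
    (McGraw & Wong notation; sketch, not a validated implementation)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                         # residual
    icc_c = (msr - mse) / (msr + (k - 1) * mse)
    # Agreement additionally penalizes systematic rater differences (MSC term)
    icc_a = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    return icc_c, icc_a

# Two raters who rank subjects identically but differ by a constant offset:
ratings = [[4, 6], [5, 7], [6, 8], [7, 9]]
icc_c, icc_a = icc_consistency_agreement(ratings)
# Consistency is perfect (1.0); agreement is much lower (5/11),
# because the raters' absolute scores differ by a systematic shift.
```

The example shows why the choice matters for a ClinRO: two clinicians with a constant scoring offset look perfectly reliable under a consistency ICC but not under an agreement ICC.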
