%0 Journal Article
%@ 1438-8871
%I JMIR Publications
%V 25
%N
%P e41870
%T The Development, Deployment, and Evaluation of the CLEFT-Q Computerized Adaptive Test: A Multimethods Approach Contributing to Personalized, Person-Centered Health Assessments in Plastic Surgery
%A Harrison,Conrad
%A Apon,Inge
%A Ardouin,Kenny
%A Sidey-Gibbons,Chris
%A Klassen,Anne
%A Cano,Stefan
%A Wong Riff,Karen
%A Pusic,Andrea
%A Versnel,Sarah
%A Koudstaal,Maarten
%A Allori,Alexander C
%A Rogers-Vizena,Carolyn
%A Swan,Marc C
%A Furniss,Dominic
%A Rodrigues,Jeremy
%+ Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, The Botnar Research Centre, Oxford, OX3 7LD, United Kingdom, 44 1865227374, conrad.harrison@medsci.ox.ac.uk
%K cleft lip
%K cleft palate
%K patient-reported outcome measures
%K outcome assessment
%K CLEFT-Q
%K computerized adaptive test
%K CAT
%D 2023
%7 27.4.2023
%9 Original Paper
%J J Med Internet Res
%G English
%X Background: Routine use of patient-reported outcome measures (PROMs) and computerized adaptive tests (CATs) may improve care in a range of surgical conditions. However, most available CATs are neither condition-specific nor coproduced with patients and lack clinically relevant score interpretation. Recently, a PROM called the CLEFT-Q has been developed for use in the treatment of cleft lip or palate (CL/P), but the assessment burden may be limiting its uptake into clinical practice. Objective: We aimed to develop a CAT for the CLEFT-Q, which could facilitate the uptake of the CLEFT-Q PROM internationally. We aimed to conduct this work with a novel patient-centered approach and make source code available as an open-source framework for CAT development in other surgical conditions. Methods: CATs were developed with the Rasch measurement theory, using full-length CLEFT-Q responses collected during the CLEFT-Q field test (this included 2434 patients across 12 countries). These algorithms were validated in Monte Carlo simulations involving full-length CLEFT-Q responses collected from 536 patients. In these simulations, the CAT algorithms approximated full-length CLEFT-Q scores iteratively, using progressively fewer items from the full-length PROM. Agreement between full-length CLEFT-Q score and CAT score at different assessment lengths was measured using the Pearson correlation coefficient, root-mean-square error (RMSE), and 95% limits of agreement. CAT settings, including the number of items to be included in the final assessments, were determined in a multistakeholder workshop that included patients and health care professionals. A user interface was developed for the platform, and it was prospectively piloted in the United Kingdom and the Netherlands. Interviews were conducted with 6 patients and 4 clinicians to explore end-user experience. Results: The length of all 8 CLEFT-Q scales in the International Consortium for Health Outcomes Measurement (ICHOM) Standard Set combined was reduced from 76 to 59 items, and at this length, CAT assessments reproduced full-length CLEFT-Q scores accurately (with correlations between full-length CLEFT-Q score and CAT score exceeding 0.97, and the RMSE ranging from 2 to 5 out of 100). Workshop stakeholders considered this the optimal balance between accuracy and assessment burden. The platform was perceived to improve clinical communication and facilitate shared decision-making. Conclusions: Our platform is likely to facilitate routine CLEFT-Q uptake, and this may have a positive impact on clinical care. Our free source code enables other researchers to rapidly and economically reproduce this work for other PROMs.
%M 37104031
%R 10.2196/41870
%U https://www.jmir.org/2023/1/e41870
%U https://doi.org/10.2196/41870
%U http://www.ncbi.nlm.nih.gov/pubmed/37104031