Published on 14.01.2016 in Vol 18, No 1 (2016): January

Crowdsourcing Diagnosis for Patients With Undiagnosed Illnesses: An Evaluation of CrowdMed


Original Paper

1Houston Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety, Health Services Research and Development, Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, United States

2Section of Health Services Research, Department of Medicine, Baylor College of Medicine, Houston, TX, United States

3Department of Biomedical Informatics, University of California San Diego, La Jolla, CA, United States

Corresponding Author:

Hardeep Singh, MD, MPH

Houston Veterans Affairs Center for Innovations in Quality, Effectiveness and Safety

Michael E. DeBakey Veterans Affairs Medical Center

VA HSR&D Center of Innovation (152)

2002 Holcombe Boulevard

Houston, TX, 77030

United States

Phone: 1 713 794 8601

Fax: 1 713 748 7359

Email: hardeeps@bcm.edu


Abstract

Background: Despite visits to multiple physicians, many patients remain undiagnosed. A new online program, CrowdMed, aims to leverage the “wisdom of the crowd” by giving patients an opportunity to submit their cases and interact with case solvers to obtain diagnostic possibilities.

Objective: To describe CrowdMed and provide an independent assessment of its impact.

Methods: Patients submit their cases online to CrowdMed and case solvers sign up to help diagnose patients. Case solvers attempt to solve patients’ diagnostic dilemmas and often have an interactive online discussion with patients, including an exchange of additional diagnostic details. At the end, patients receive detailed reports containing diagnostic suggestions to discuss with their physicians and fill out surveys about their outcomes. We independently analyzed data collected from cases between May 2013 and April 2015 to determine patient and case solver characteristics and case outcomes.

Results: During the study period, 397 cases were completed. These patients had previously visited a median of 5 physicians, incurred a median of US $10,000 in medical expenses, spent a median of 50 hours researching their illnesses online, and had symptoms for a median of 2.6 years. During this period, 357 active case solvers participated, of whom 37.9% (132/348) were male and 58.3% (208/357) worked or studied in the medical industry. About half of patients (50.9%, 202/397) were likely to recommend CrowdMed to a friend, 59.6% (233/391) reported that the process gave them insights that led them closer to the correct diagnoses, 57% (52/92) reported estimated decreases in medical expenses, and 38% (29/77) reported estimated improvement in school or work productivity.

Conclusions: Some patients with undiagnosed illnesses reported receiving helpful guidance from crowdsourcing their diagnoses during their difficult diagnostic journeys. However, further development and use of crowdsourcing methods to facilitate diagnosis requires long-term evaluation as well as validation to account for patients’ ultimate correct diagnoses.

J Med Internet Res 2016;18(1):e12

doi:10.2196/jmir.4887




Introduction

Errors of clinical diagnosis affect at least 5% of US adults every year, and approximately half of these errors could result in serious harm to patients [1]. To address the extent and severity of this problem, both systems-based and cognitive solutions have been proposed. However, only a few of these have been tested, and only a fraction of those tested have been shown to improve diagnostic outcomes [2-4]. Patients with difficult-to-diagnose conditions often seek care from several physicians and institutions before obtaining a diagnosis. One intervention that could benefit such patients is the use of second opinions [5-7], which has been shown to catch previously missed diagnoses, at least in radiology and pathology [6]. Several formal programs currently exist to provide second opinions to patients [7]. For example, in the NIH Undiagnosed Diseases Network, based at several centers across the United States [8], medical experts evaluate individuals with undiagnosed or rare diseases. The program, however, has strict eligibility requirements for patients and requires a clinician referral. Additional programs include Best Doctors’ second-opinion program, which is open only to employee beneficiaries, and Cleveland Clinic’s MyConsult program [5,9]; both involve comprehensive review of patients’ medical records but no dynamic interaction with patients.

A recently developed software platform, CrowdMed [10], aims to overcome some limitations of the aforementioned programs (strict eligibility requirements, required referrals, and limited interaction with patients) by leveraging the “wisdom of the crowd,” or crowdsourcing, to help undiagnosed or misdiagnosed patients. Crowdsourcing is a “participative online activity” in which a group of individuals of varying knowledge, heterogeneity, and number comes together to solve a problem [11]. It has been used for a variety of problems in different fields, ranging from simple text translation to more complicated tasks, such as developing solutions to the BP oil spill disaster in the Gulf of Mexico [12]. In medicine, it has been used for health and medical research, such as estimating flu prevalence [13]; for informatics solutions, such as establishing related problem-medication pairs [14]; and for examining specific diseases through image analysis. In the latter situation, crowdsourcing has been used to inspect blood samples to determine the presence or absence of malarial infection [15-17] and to categorize colorectal polyps [18,19] or diabetic retinopathy [20]. However, until now, crowdsourcing had not been used to generate a diagnosis from the full range of possible diagnoses a patient might have. Of note, this platform allows laypersons without health care training or experience to participate. Although patients have been “googling for a diagnosis” for more than a decade and even using online symptom checkers [21,22], this is the first description of a crowd of people working together online toward a more accurate diagnosis. We conducted an independent evaluation of this untested approach to determine whether it could be beneficial to patient care.


Methods

A Description of CrowdMed

For a small fee, the CrowdMed website allows undiagnosed patients to submit their clinical information and obtain potential diagnoses expeditiously. Patients anonymously answer a comprehensive set of medical questions and upload relevant test results and images related to their cases (Figure 1).

Patients also decide how long they want their cases open and whether they wish to compensate the case solvers. Anyone (including nonmedical persons) can sign up to be a case solver and select cases they think they can help solve (Figure 2).
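To make this workflow concrete, the sketch below shows one way a submitted case could be represented in code. It is purely illustrative: the field names and default values are our assumptions and do not reflect CrowdMed’s actual data model.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SubmittedCase:
    """Hypothetical representation of a patient-submitted case (illustration only)."""
    case_id: str
    questionnaire_answers: Dict[str, object]                   # answers to the structured medical questions
    uploaded_files: List[str] = field(default_factory=list)    # test results, images
    days_open: int = 60                                        # how long the patient keeps the case open
    solver_compensation_usd: float = 0.0                       # optional reward offered to case solvers

# Example case, loosely based on the median values reported in the Results.
case = SubmittedCase(
    case_id="example-001",
    questionnaire_answers={"chief_complaint": "chronic fatigue", "symptom_duration_years": 2.6},
    uploaded_files=["cbc_panel.pdf", "mri_brain.png"],
    days_open=60,
    solver_compensation_usd=100.0,
)
print(case)
```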

While a case is open, patients and case solvers can discuss its details online, including potential diagnoses, further work-up that should be done, and newly obtained test results or appointments completed with the patient’s physicians. Thus, case details can unfold online while the case is still open. All diagnostic suggestions and case discussions are visible to all case solvers as they are made throughout the open period, enabling the entire group of case solvers to work in concert on each case.

When a patient’s case is closed, the patient receives a detailed report containing the entire list of diagnostic suggestions made by the case solvers, along with suggested next steps, to discuss with their physicians. Diagnoses are ranked in decreasing order of “relative popularity.” The relative popularity of each diagnosis is determined by case solvers’ “bets” on that diagnosis, reflecting their belief that it is the most specific, accurate root cause of the symptoms presented. CrowdMed takes these bets and assigns points to each diagnosis using a prediction market algorithm, thereby determining the relative popularity of each suggested diagnosis. Finally, patients are provided with case solvers’ reasoning for choosing particular diagnoses. Patients choose which case solver(s) to compensate based on whose answers they found helpful; if a patient decides to reward multiple solvers, the patient also decides how to divide the compensation among them. Afterward, patients are invited to fill out surveys about their outcomes.
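CrowdMed’s prediction market algorithm is not described in detail here, so the sketch below only illustrates the general idea of turning case solvers’ bets into a “relative popularity” ranking. The bet format and the simple point-share aggregation are our assumptions for illustration; the actual algorithm likely weights bets in a more elaborate way (eg, by solver track record).

```python
from collections import defaultdict
from typing import List, Tuple

def rank_diagnoses(bets: List[Tuple[str, str, float]]) -> List[Tuple[str, float]]:
    """Rank suggested diagnoses by an illustrative "relative popularity" score.

    `bets` holds (solver_id, diagnosis, points) tuples, where points reflect how
    strongly a solver believes that diagnosis is the most specific, accurate root
    cause of the presented symptoms. This is a simplified stand-in for CrowdMed's
    prediction market algorithm, which is not specified in the paper.
    """
    totals = defaultdict(float)
    for _solver, diagnosis, points in bets:
        totals[diagnosis] += points
    grand_total = sum(totals.values()) or 1.0
    # Relative popularity = share of all points bet on each diagnosis, sorted descending.
    return sorted(((dx, pts / grand_total) for dx, pts in totals.items()),
                  key=lambda item: item[1], reverse=True)

# Example: three solvers betting on two candidate diagnoses.
example_bets = [("solver_1", "Lyme disease", 60),
                ("solver_2", "Lyme disease", 40),
                ("solver_3", "Dysautonomia", 50)]
for diagnosis, popularity in rank_diagnoses(example_bets):
    print(f"{diagnosis}: {popularity:.2f}")
```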

Figure 1. Screenshot of case submission.
Figure 2. Screenshot of case selection for solvers (names are fictitious).

Independent Evaluation

We independently analyzed all CrowdMed data collected from May 2013 to April 2015. Specifically, we analyzed data on patients’ demographic and case characteristics, case solvers’ demographic and performance characteristics, and preliminary case outcomes. Outcomes included whether patients would recommend CrowdMed, whether the program provided insights leading them closer to correct diagnoses, and estimated improvements in patients’ productivity and medical expenses. Data were summarized with descriptive statistics, and independent samples t tests were performed using IBM SPSS Statistics 22.
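For readers who want to reproduce this style of analysis without SPSS, the sketch below shows a conceptually equivalent descriptive summary and independent samples t test in Python. The data frame and its column names are hypothetical; they are not the study data.

```python
import pandas as pd
from scipy import stats

# Hypothetical case-level data for illustration; not the CrowdMed dataset.
cases = pd.DataFrame({
    "physicians_seen": [3, 5, 12, 7, 4, 20, 6, 9],
    "helpful_insights": [1, 1, 0, 1, 0, 0, 1, 0],  # 1 = patient reported helpful insights
})

# Descriptive statistics (the paper reports medians and interquartile ranges).
print("Median:", cases["physicians_seen"].median())
print("IQR:", cases["physicians_seen"].quantile([0.25, 0.75]).tolist())

# Independent samples t test comparing physicians seen between the two groups
# (the study ran these tests in IBM SPSS Statistics 22).
helped = cases.loc[cases["helpful_insights"] == 1, "physicians_seen"]
not_helped = cases.loc[cases["helpful_insights"] == 0, "physicians_seen"]
t_stat, p_value = stats.ttest_ind(helped, not_helped)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```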


Results

Patients and Cases

During the study period, 397 cases were completed (350 from the United States). Patients’ self-reported mean (SD) age was 47.8 (18.8) years (range 2-90 years), and 182 (45.8%) were male.

Before case submission, patients reported visiting a median of 5 physicians (interquartile range [IQR] 3-10; range 0-99), incurred a median of US $10,000 in medical expenses (IQR US $2500-US $50,000; range US $0-US $5,000,000) including payments by both patients and payers, spent a median of 50 hours (IQR 15-150; range 0-12,000) researching their illnesses online, and had symptoms for a median of 2.6 years (IQR 1.1-6.9; range 0.0-70.6). Online case activity lasted a median of 60 days (IQR 30-90; range 2-150) and case solvers were offered a median of US $100 in compensation (IQR US $0-US $200; range US $0-US $2700) for diagnostic suggestions. A total of 59.7% (237/397) of the cases were compensated with a median compensation of US $200 (IQR US $100-US $300; range US $15-US $2700).

Case Solvers

During the study period, CrowdMed had 357 active case solvers, of whom 37.9% (132/348) were male, 76.7% (264/344) were from the United States, and 58.3% (208/357) worked or studied in the medical industry, including 36 physicians and 56 medical students. Mean (SD) age was 39.6 (13.8) years (range 17-77 years).

Solvers participated in a median of 3 cases (IQR 1.0-12.8; range 0-415) and earned a median of US $0 (IQR US $0-US $1.18; range US $0-US $3952) and a mean (SD) of US $93.97 (US $364.72); the majority earned US $0. The median solver rating was 3 of 10 (IQR 3-6; range 1-10), and ratings were significantly higher (P=.006) for medical industry-based solvers (mean [SD] 4.8 [2.5]; range 1-10) than for others (mean [SD] 4.1 [2.2]; range 1-10).

Outcomes

At completion, 50.9% (202/397) of patients were likely to recommend CrowdMed to a friend, 59.6% (233/391) reported that the process gave insights leading them closer to correct diagnoses, 57% (52/92) reported estimated decreases in medical expenses, and 38% (29/77) reported estimated improvements in school or work productivity (Table 1).

Table 1. Case outcomes as assessed in a postcase survey.
Case outcomes, n (%)

On a scale of 1-5, how likely are you to recommend CrowdMed to a friend (with 5 being most likely)? (391/397 surveyed answered; 98.5% response rate)
1: 39 (10.0)
2: 43 (11.0)
3: 107 (27.4)
4: 76 (19.4)
5: 126 (32.2)

Did the CrowdMed Medical Detective community provide insights that led you closer to a correct diagnosis or cure? (391/397 surveyed answered; 98.5% response rate)
No: 158 (40.4)
Yes: 233 (59.6)

How much do you estimate that your CrowdMed results will reduce the cost of your medical case going forward? (92/147 surveyed answered; 62.6% response rate)a
1-20%: 25 (27.2)
21-50%: 15 (16.3)
51-80%: 10 (10.9)
>80%: 2 (2.2)
Not at all: 40 (43.5)

How much lost work or school productivity do you estimate that your CrowdMed results will help you regain going forward? (77/147 surveyed answered; 52.4% response rate)a
1-20%: 12 (15.6)
21-50%: 8 (10.4)
51-80%: 7 (9.1)
81-99%: 1 (1.3)
All: 1 (1.3)
None: 48 (62.3)

aThese questions were added to the postcase survey later.

Patients reporting helpful insights from CrowdMed saw fewer doctors (mean [SD] 7.2 [7.3]; range 0-99) before participating than those who did not report receiving helpful insights (mean [SD] 9.2 [10.7]; range 0-50), P=.047. The 14 most common diagnoses suggested as the most popular diagnosis for a case are presented in Table 2.

Table 2. The 14 most common diagnoses suggested as the most popular diagnosis across 397 cases.
Diagnosis, n (%)
Lyme disease: 8 (2.0)
Dysautonomia: 7 (1.8)
Chronic fatigue syndrome: 6 (1.5)
Irritable bowel syndrome: 6 (1.5)
Mast cell activation disorder: 6 (1.5)
Postural orthostatic tachycardia syndrome: 5 (1.3)
Ehlers-Danlos syndrome: 4 (1.0)
Sjögren’s syndrome: 4 (1.0)
Abdominal cutaneous nerve entrapment syndrome: 3 (0.8)
Gastroesophageal reflux disease: 3 (0.8)
Hypothyroidism: 3 (0.8)
Multiple sclerosis: 3 (0.8)
Myasthenia gravis: 3 (0.8)

In addition, some patients informally reported to CrowdMed that the program helped them find diagnoses that their physicians previously were unable to determine, including Sjögren’s syndrome and chorda tympani dysfunction.


Discussion

Main Findings

Our independent evaluation suggests that at least some patients with undiagnosed illnesses reported receiving helpful guidance from crowdsourcing their diagnoses during their difficult diagnostic journeys. Several of the conditions most commonly suggested by case solvers are conditions well known to represent diagnostic challenges. The crowdsourcing strategy enabled dynamic interaction between patients and case solvers as more case details unfolded over time.

Novel approaches are needed to help patients who experience difficulties in obtaining a correct and timely diagnosis. In that regard, advantages of using “wisdom of the crowd” could include low cost, increased program accessibility for patients, and relatively quick opinions. Although the data we obtained were useful for understanding this program, there were several limitations of our study. The postparticipation survey was rather limited in scope as it was designed for business purposes and not for research. In addition, there was no way to verify patient-reported data and some patient-reported data might be outside of realistic boundaries (eg, 1 patient reported spending 12,000 hours researching illnesses online). Furthermore, downstream outcomes of patients were not systematically collected, so it is not known what their eventual diagnoses were or if the program identified them accurately. Further development and use of crowdsourcing methods to facilitate diagnosis requires long-term evaluation as well as validation to account for patients’ ultimate correct diagnoses.

Although crowdsourcing appears to have potential, it is important to identify the factors that make crowdsourcing successful so that the process, and ultimately patient care, can be improved. Multidisciplinary research is needed to gain both technical and nontechnical insights into how this can be done. For example, previous researchers have identified the importance of both finding crowd members whose skills match the problem at hand and providing adequate motivation to the crowd for crowdsourcing to be used successfully for problem solving [23]. Finally, the potential legal ramifications of giving individuals without medical degrees (who make up a substantial portion of the case solvers) the ability to render diagnostic opinions would need to be considered [24].

Conclusions

In conclusion, our independent evaluation suggests that some patients with undiagnosed illnesses report receiving helpful guidance from crowdsourcing their diagnosis. Further development and use of crowdsourcing methods to facilitate diagnosis require multidisciplinary research and long-term evaluation that includes validation to account for patients’ ultimate correct diagnoses.

Acknowledgments

We thank Jared Heyman and CrowdMed for providing us access to their data and for help in verifying the details of their program’s process. Drs Meyer and Singh are supported in part by the Houston VA Center for Innovations in Quality, Effectiveness and Safety (Grant No CIN 13-413). Dr Singh is additionally supported by the VA Health Services Research and Development Service (Grant No CRE 12-033; Presidential Early Career Award for Scientists and Engineers USA 14-274), the VA National Center for Patient Safety, and the Agency for Health Care Research and Quality (Grant Nos R01HS022087 and R21HS023602). CrowdMed provided the details of the CrowdMed process and the raw data for our analysis, but otherwise did not have input on the analysis, conclusions reached, or manuscript preparation; and did not commission this report or provide funding for it. The views expressed in this article are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs or any other funding agency.

Conflicts of Interest

None declared.

  1. Singh H, Meyer AN, Thomas EJ. The frequency of diagnostic errors in outpatient care: Estimations from three large observational studies involving US adult populations. BMJ Qual Saf 2014 Sep;23(9):727-731 [FREE Full text] [CrossRef] [Medline]
  2. Graber ML, Kissam S, Payne VL, Meyer AN, Sorensen A, Lenfestey N, et al. Cognitive interventions to reduce diagnostic error: A narrative review. BMJ Qual Saf 2012 Jul;21(7):535-557. [CrossRef] [Medline]
  3. Singh H, Graber ML, Kissam SM, Sorensen AV, Lenfestey NF, Tant EM, et al. System-related interventions to reduce diagnostic errors: A narrative review. BMJ Qual Saf 2012 Feb;21(2):160-170 [FREE Full text] [CrossRef] [Medline]
  4. National Academies of Sciences Engineering and Medicine. Improving Diagnosis in Health Care. Washington, DC: The National Academies Press; Sep 22, 2015.
  5. Meyer AN, Singh H, Graber ML. Evaluation of outcomes from a national patient-initiated second-opinion program. Am J Med 2015 Oct;128(10):1138.e25-1138.e33. [CrossRef] [Medline]
  6. Payne VL, Singh H, Meyer AN, Levy L, Harrison D, Graber ML. Patient-initiated second opinions: Systematic review of characteristics and impact on diagnosis, treatment, and satisfaction. Mayo Clin Proc 2014 May;89(5):687-696. [CrossRef] [Medline]
  7. Reddy S. The Wall Street Journal. 2015 Aug 24. New ways for patients to get a second opinion: Online services from established medical centers and independent businesses   URL: http://www.wsj.com/articles/new-ways-to-get-a-second-opinion-1440437584 [accessed 2015-10-15] [WebCite Cache]
  8. Undiagnosed Diseases Network. The Undiagnosed Diseases Network. 2015.   URL: http://undiagnosed.hms.harvard.edu/ [accessed 2015-12-22] [WebCite Cache]
  9. MyConsult Online Expert Opinion. Cleveland Clinic. 2015.   URL: http://www.eclevelandclinic.org/aboutMyConsultHome [accessed 2015-10-15] [WebCite Cache]
  10. CrowdMed. CrowdMed Inc. 2015.   URL: https://www.crowdmed.com/ [accessed 2015-06-26] [WebCite Cache]
  11. Estellés-Arolas E, González-Ladrón-de-Guevara F. Towards an integrated crowdsourcing definition. J Infor Sci 2012 Mar 09;38(2):189-200. [CrossRef]
  12. Schenk E, Guittard C. Towards a characterization of crowdsourcing practices. J Innov Econ Manage 2011;7(1):93-107 [FREE Full text] [CrossRef]
  13. Ranard BL, Ha YP, Meisel ZF, Asch DA, Hill SS, Becker LB, et al. Crowdsourcing—Harnessing the masses to advance health and medicine, a systematic review. J Gen Intern Med 2014 Jan;29(1):187-203 [FREE Full text] [CrossRef] [Medline]
  14. McCoy AB, Wright A, Krousel-Wood M, Thomas EJ, McCoy JA, Sittig DF. Validation of a crowdsourcing methodology for developing a knowledge base of related problem-medication pairs. Appl Clin Inform 2015;6(2):334-344. [CrossRef] [Medline]
  15. Luengo-Oroz MA, Arranz A, Frean J. Crowdsourcing malaria parasite quantification: An online game for analyzing images of infected thick blood smears. J Med Internet Res 2012;14(6):e167 [FREE Full text] [CrossRef] [Medline]
  16. Mavandadi S, Dimitrov S, Feng S, Yu F, Sikora U, Yaglidere O, et al. Distributed medical image analysis and diagnosis through crowd-sourced games: A malaria case study. PLoS One 2012;7(5):e37245 [FREE Full text] [CrossRef] [Medline]
  17. Mavandadi S, Dimitrov S, Feng S, Yu F, Yu R, Sikora U, et al. Crowd-sourced BioGames: Managing the big data problem for next-generation lab-on-a-chip platforms. Lab Chip 2012 Oct 21;12(20):4102-4106 [FREE Full text] [CrossRef] [Medline]
  18. McKenna MT, Wang S, Nguyen TB, Burns JE, Petrick N, Summers RM. Strategies for improved interpretation of computer-aided detections for CT colonography utilizing distributed human intelligence. Med Image Anal 2012 Aug;16(6):1280-1292 [FREE Full text] [CrossRef] [Medline]
  19. Nguyen TB, Wang S, Anugu V, Rose N, McKenna M, Petrick N, et al. Distributed human intelligence for colonic polyp classification in computer-aided detection for CT colonography. Radiology 2012 Mar;262(3):824-833 [FREE Full text] [CrossRef] [Medline]
  20. Brady CJ, Villanti AC, Pearson JL, Kirchner TR, Gupta OP, Shah CP. Rapid grading of fundus photographs for diabetic retinopathy using crowdsourcing. J Med Internet Res 2014;16(10):e233 [FREE Full text] [CrossRef] [Medline]
  21. Semigran HL, Linder JA, Gidengil C, Mehrotra A. Evaluation of symptom checkers for self diagnosis and triage: Audit study. BMJ 2015;351:h3480 [FREE Full text] [Medline]
  22. Tang H, Ng JH. Googling for a diagnosis—Use of Google as a diagnostic aid: Internet based study. BMJ 2006 Dec 2;333(7579):1143-1145 [FREE Full text] [CrossRef] [Medline]
  23. Parvanta C, Roth Y, Keller H. Crowdsourcing 101: A few basics to make you the leader of the pack. Health Promot Pract 2013 Mar;14(2):163-167. [CrossRef] [Medline]
  24. Zettler P. The Health Care Blog. 2015 Jun 19. Do you need a medical degree to crowdsource medicine?   URL: http://thehealthcareblog.com/blog/2015/06/19/do-you-need-a-medical-degree-to-crowdsource-medicine/ [accessed 2015-06-30] [WebCite Cache]

Edited by D Giordano; submitted 07.07.15; peer-reviewed by Q Zhang, Y Wang, M Graber; comments to author 16.08.15; revised version received 15.10.15; accepted 30.11.15; published 14.01.16

Copyright

©Ashley N.D. Meyer, Christopher A. Longhurst, Hardeep Singh. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 14.01.2016.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.