Published on 08.10.2021 in Vol 23, No 10 (2021): October

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/29406.
Questioning the Yelp Effect: Mixed Methods Analysis of Web-Based Reviews of Urgent Cares


Original Paper

1Department of Engineering Management and Systems Engineering, School of Engineering and Applied Science, George Washington University, Washington, DC, United States

2Department of Environmental and Occupational Health, Milken Institute School of Public Health, George Washington University School of Medicine and Health Sciences, Washington, DC, United States

3Division of Infectious Diseases, Children's National Hospital, Washington, DC, United States

4Department of Pediatrics, George Washington University, Washington, DC, United States

5Department of Health Policy and Management, Milken Institute School of Public Health, George Washington University School of Medicine and Health Sciences, Washington, DC, United States

6Department of Communication, College of Communication, Arts, and Sciences, Michigan State University, East Lansing, MI, United States

Corresponding Author:

Dian Hu, BSc

Department of Engineering Management and Systems Engineering

School of Engineering and Applied Science

George Washington University

B1800, Science and Engineering Hall 2700

800 22nd St NW

Washington, DC, 20052

United States

Phone: 1 2027251564

Email: hudian@gwmail.gwu.edu


Background: Providers of on-demand care, such as those in urgent care centers, may prescribe antibiotics unnecessarily because they fear receiving negative reviews on web-based platforms from unsatisfied patients—the so-called Yelp effect. This effect is hypothesized to be a significant driver of inappropriate antibiotic prescribing, which exacerbates antibiotic resistance.

Objective: In this study, we aimed to determine the frequency with which patients left negative reviews on web-based platforms after they expected to receive antibiotics in an urgent care setting but did not.

Methods: We obtained a list of 8662 urgent care facilities from the Yelp application programming interface. By using this list, we automatically collected 481,825 web-based reviews from Google Maps between January 21 and February 10, 2019. We used machine learning algorithms to summarize the contents of these reviews. Additionally, 200 randomly sampled reviews were analyzed by 4 annotators to verify the types of messages present and whether they were consistent with the Yelp effect.

Results: We collected 481,825 reviews, of which 1696 (95% CI 1240-2152) exhibited the Yelp effect. Negative reviews primarily identified operations issues regarding wait times, rude staff, billing, and communication.

Conclusions: Urgent care patients rarely express expectations for antibiotics in negative web-based reviews. Thus, our findings do not support an association between a lack of antibiotic prescriptions and negative web-based reviews. Rather, patients’ dissatisfaction with urgent care was most strongly linked to operations issues that were not related to the clinical management plan.

J Med Internet Res 2021;23(10):e29406

doi:10.2196/29406

Keywords



Introduction

The World Health Organization has deemed antibiotic resistance, which is primarily caused by antibiotic overuse, to be one of the world's most pressing health problems [1]. Antibiotic overuse is widespread; approximately one-third of all antibiotics prescribed in US outpatient settings are unnecessary [2]. Care providers have admitted to prescribing antibiotics—even when antibiotics are unnecessary—when they assume that patients will be unsatisfied without an antibiotic prescription [3,4]. However, care providers' assumptions about a patient's expectations frequently do not match patients' actual expectations for an antibiotic [5]. Furthermore, prior literature on patient satisfaction and provider-patient communication has suggested that other factors drive patient satisfaction [6]. For example, Welschen et al [7] found that receiving information or reassurance was more strongly associated with satisfaction than receiving an antibiotic prescription in primary care. Ong et al [8] found that patient satisfaction was not related to the receipt of antibiotics but was related to the belief that patients had a better understanding of their illness. Stearns et al [9] found that patients generally had equal levels of visit satisfaction regardless of their antibiotic treatment status.

Despite findings that do not support a link between patient satisfaction and antibiotics, care providers still report prescribing antibiotics unnecessarily because they fear that dissatisfied patients will leave negative reviews [10]. Authors in several media outlets [11,12] have coined the term Yelp effect; they propose that providers of on-demand care, such as those in urgent care centers, may prescribe antibiotics unnecessarily to prevent patients from leaving negative web-based reviews. There is a perception among urgent care providers that many of their patients expect to receive antibiotics, even when they are clinically unnecessary, and that patients will leave negative reviews on web-based platforms if these expectations are not met. To avoid negative web-based reviews, which could impact care providers’ pay and the performance of urgent care centers, care providers are driven to prescribe antibiotics, even when they are clinically unnecessary.

Concerns about the Yelp effect are magnified by the surging popularity of review websites such as Google Pages, Yelp, and Healthgrades [11]. Even though most physician ratings on web-based platforms are positive [13,14], at least 1 study has demonstrated that in web-based physician reviews, the words medication and prescription are mentioned in more negative contexts [15]. This perceived connection between negative web-based reviews and antibiotics is hypothesized to be a significant driver of inappropriate antibiotic prescribing and is evident in at least 1 petition that has received 40,000 signatures supporting the removal of web-based doctor reviews [16].

Concerns about the impact of negative reviews appear to be valid; patients often use web-based reviews when deciding whether and where to seek treatment [17]. Furthermore, these reviews may not accurately reflect the quality of medical care. For example, Daskivich et al [18] conducted an analysis of 5 popular web-based platforms and showed that there was no significant association between web-based consumer review scores and standard quality guidelines. Likewise, Yaraghi et al [19] found that consumers tend to perceive care provider ratings from nonclinical websites to be as important as the ratings from government websites. Thus, web-based reviews can drive care providers’ behaviors in ways that negatively impact public health.

At present, the extent to which patients’ expectations for antibiotic treatment drive negative web-based reviews is unknown. Even though web-based reviews have been mined for other health topics [20,21], to our knowledge, ours is the first study to evaluate web-based reviews and antibiotic prescribing. Thus, we sought to determine whether there truly is a Yelp effect by evaluating how frequently patients leave negative web-based reviews regarding a lack of antibiotic prescriptions.

To determine the prevalence of the Yelp effect, we analyzed a large sample of web-based reviews of urgent care centers in the US by calculating the proportion of negative reviews that exhibit a message regarding a lack of antibiotic prescriptions. Specifically, we sought to (1) quantify the proportion of negative reviews that were posted due to patients (reviewers) not receiving an antibiotic and (2) evaluate the content of negative reviews of urgent care centers.


Methods

Data Collection

We used the Yelp application programming interface (API) between October 1 and October 20, 2018, to obtain a list of facilities that were tagged as “urgent care” in the United States. We also retrieved the star ratings for all reviews that were associated with these facilities. We removed all facilities that did not have a US zip code. We were unable to obtain the full texts of these reviews—Yelp's terms of service prohibit web scraping, and Yelp declined to provide us with permission to use the text. Thus, by using the list of facilities that was obtained from the Yelp API, we ran the data collection algorithm between January 21 and February 10, 2019, and obtained all of the Google Maps reviews of each urgent care facility that were posted before January 21, 2019. Unlike Yelp, Google's terms of service permit the collection of their reviews as long as it does not put undue burdens on Google's servers. To collect data from Google Maps reviews, we designed a computer program (Multimedia Appendix 1) for automatically collecting these reviews based on their Google Maps URLs. The Google Maps URLs of the urgent care facilities were collected after we presented their titles and addresses (obtained from the Yelp API) to 20 workers on Amazon Mechanical Turk, a crowdsourcing marketplace. These 20 workers were selected based on their prior demonstrated ability to successfully collect a set of 100 known URLs. Workers received US $0.09 for each URL collected.

To summarize the contents of web-based reviews, we used a machine learning algorithm that was designed to summarize text—the latent Dirichlet allocation (LDA) topic model [22]. This algorithm was implemented in the LDA python package [23] (default settings were used) to fit the model to the Google Maps data set. Topic models identify review topics automatically without human intervention by examining the word co-occurrence statistics within each review [24]. For each topic, we calculated the total number of word tokens that were found in positive reviews (4 or 5 stars) and negative reviews (1 or 2 stars). We fitted 3 topic models (one model with 10 topics, another with 20 topics, and another with 50 topics) to the data. We selected the model that generated the most coherent topics without a large increase in perplexity (a measure of model goodness of fit that is commonly used in natural language processing; Multimedia Appendix 2).

Afterward, we extracted antibiotic-related reviews by using a list of keywords that was generated by one of the authors (RH)—a pediatrician who specializes in antibiotic stewardship (see Multimedia Appendix 3 for a keyword list). For each review, we examined the proportion of words in each topic. We applied the same procedure to the subset of reviews containing antibiotic-related keywords. We then developed a qualitative codebook to determine the content and sentiment of 200 reviews that were sampled at random from all reviews containing these keywords. This study was approved by The George Washington University Committee on Human Research Institutional Review Board (Federal Wide Assurance number: FWA00005945; institutional review board registration number: 180804).
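The topic modeling step can be sketched as follows. This is a minimal illustration rather than the authors' exact pipeline: the input file name, preprocessing choices (stop word removal, minimum document frequency), and iteration count are assumptions, and the sentiment split uses per-review topic proportions as a soft approximation of word-level topic assignments.

```python
# Minimal sketch of the topic modeling step (not the authors' exact pipeline).
# The input file, preprocessing settings, and iteration count are assumptions.
import numpy as np
import pandas as pd
import lda
from sklearn.feature_extraction.text import CountVectorizer

reviews = pd.read_csv("google_maps_reviews.csv")   # hypothetical file with "text" and "stars" columns
texts = reviews["text"].fillna("").tolist()
stars = reviews["stars"].to_numpy()

# Document-term matrix of raw counts (the lda package expects integer counts).
vectorizer = CountVectorizer(stop_words="english", min_df=10)
X = vectorizer.fit_transform(texts)
vocab = np.array(vectorizer.get_feature_names_out())
total_tokens = X.sum()

# Fit candidate models with 10, 20, and 50 topics and compare perplexity.
models = {}
for k in (10, 20, 50):
    model = lda.LDA(n_topics=k, n_iter=1500, random_state=1)
    model.fit(X)
    perplexity = np.exp(-model.loglikelihood() / total_tokens)  # lower is better
    models[k] = model
    print(f"k={k}: perplexity={perplexity:.1f}")

# Inspect the top words of each topic in the selected model (here, 20 topics).
model = models[20]
for t, word_dist in enumerate(model.topic_word_):
    top_words = vocab[np.argsort(word_dist)][-10:][::-1]
    print(f"topic {t}:", " ".join(top_words))

# Split each topic's word mass by review sentiment (4-5 stars positive, 1-2 negative),
# using per-review topic proportions as a soft approximation of word-level assignments.
doc_topic = model.doc_topic_
doc_tokens = np.asarray(X.sum(axis=1)).ravel()
for t in range(doc_topic.shape[1]):
    topic_tokens = doc_topic[:, t] * doc_tokens
    print(f"topic {t}: positive≈{topic_tokens[stars >= 4].sum():.0f}, "
          f"negative≈{topic_tokens[stars <= 2].sum():.0f} word tokens")
```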

Data Annotation

Four authors (LR, MF, MC, and SD) affiliated with The George Washington University Antibiotic Resistance Action Center collectively reviewed a subset of the 200 randomly sampled reviews to determine the types of messages that were present in the reviews [25]. After this initial review and the development of inductive codes, the reviews were categorized into one of the following categories in the codebook: (1) the Yelp effect category (the patient wanted antibiotics but did not receive them); (2) the opposite of the Yelp effect category (the patient received antibiotics but did not want them); (3) the convenience, inconvenience, and wait times category; (4) the staff competence or incompetence, courtesy and attitude, and satisfaction of care category; (5) the cost and price of drugs per visit (including sticker shock) category; (6) the other prescription-related complaints category; and (7) the other or none of the above category. Additionally, all reviews were annotated as “positive” (eg, the patient was satisfied with their care, and the review had 4 or 5 stars) or “negative” (eg, the patient was dissatisfied with care, and the review had fewer than 4 stars).

The four annotators then independently reviewed the same 200 randomly sampled reviews to assign them to 1 of the 7 categories. The final categories for the reviews were assigned based on the majority category among annotators. If there was no majority category, disagreements were resolved through discussion until a consensus category was agreed upon.
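A minimal sketch of this majority-vote step is shown below. The label dictionary and category codes are hypothetical, and "majority" is interpreted as more than half of the four annotators.

```python
# Minimal sketch of the majority-vote assignment; the label dictionary and
# category codes are hypothetical, and "majority" is taken to mean more than
# half of the four annotators.
from collections import Counter

def assign_category(labels):
    """Return the majority category among annotators, or None if there is no majority."""
    category, count = Counter(labels).most_common(1)[0]
    return category if count > len(labels) / 2 else None

annotations = {
    "review_001": ["wait_times", "wait_times", "staff", "wait_times"],
    "review_002": ["staff", "cost", "yelp_effect", "other"],
}

for review_id, labels in annotations.items():
    category = assign_category(labels)
    print(review_id, category if category else "no majority; resolve by discussion")
```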

Some reviews mentioned that the reviewer did not receive an antibiotic when it was expected, even if that was not the main message. Thus, after assigning each message a primary category, the same four annotators revisited all 200 reviews to determine if they mentioned the Yelp effect in passing (ie, whether the review mentioned an unfulfilled expectation for antibiotics). A code for mentioning the presence or absence of the Yelp effect—even if it was mentioned in passing—was then assigned to each review as a secondary code. By using these 200 annotated samples, we inferred population proportions for each category in our codebook and calculated 95% CIs.


Results

Distribution of Google and Yelp Data

By using the Yelp API, we identified 8662 unique urgent care facilities that had 84,127 unique reviews. We collected 481,825 US-based reviews from Google Maps. Of these, 340,328 (70.63%) contained some text. The average star rating in Yelp reviews was significantly lower than the average star rating in Google Maps reviews (t(565,950)=82.38; P<.001).
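This comparison is consistent with a pooled-variance two-sample t test, since 84,127 Yelp reviews plus 481,825 Google Maps reviews minus 2 gives the reported 565,950 degrees of freedom. A minimal sketch, with hypothetical input files:

```python
# Minimal sketch of the rating comparison, assuming a pooled-variance two-sample
# t test (equal_var=True gives df = n1 + n2 - 2, matching the reported 565,950).
# The input files are hypothetical.
import numpy as np
from scipy import stats

yelp_stars = np.load("yelp_star_ratings.npy")        # 84,127 Yelp star ratings
google_stars = np.load("google_star_ratings.npy")    # 481,825 Google Maps star ratings

t_stat, p_value = stats.ttest_ind(yelp_stars, google_stars, equal_var=True)
df = len(yelp_stars) + len(google_stars) - 2
print(f"t({df}) = {t_stat:.2f}, P = {p_value:.3g}")
```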

Figures 1-4 display the distributions of the number of reviews and mean review stars from both Yelp and Google Maps in a 2-by-2 grid of county-level maps. The distributions show apparent differences between geographical regions of the United States. This uneven distribution is intriguing and prompts many further research questions. Therefore, all state-by-state maps featuring the same information are hosted on The George Washington University cloud and can be made available to other researchers upon request.

Figure 1. Total number of reviews in US counties (Yelp data).
Figure 2. Mean review stars in US counties (Yelp data).
Figure 3. Total number of reviews in US counties (Google Maps data).
Figure 4. Mean review stars in US counties (Google Maps data).

We categorized the 481,825 reviews from Google Maps into the following three groups: (1) reviews without any text (n=141,497; star rating: mean 4.53); (2) reviews with text that did not mention antibiotic-related keywords (n=332,566; star rating: mean 3.94); and (3) reviews with text that mentioned antibiotic-related keywords (n=7762; star rating: mean 2.40; Figure 5). We found significant differences in the average star ratings across these three groups; a post hoc Tukey honestly significant difference test for multiple comparisons showed that reviews with antibiotic-related keywords had lower ratings than both reviews without text (mean difference=−2.13; P<.001) and reviews with text but no keywords (mean difference=−1.54; P<.001). Reviews without text were significantly more positive than reviews with text but no keywords (mean difference=0.59; P<.001; all descriptive statistics are in Multimedia Appendix 4).
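A minimal sketch of this three-group comparison, assuming a one-way comparison of star ratings followed by a post hoc Tukey honestly significant difference test; the input file and column names are hypothetical.

```python
# Minimal sketch of the three-group comparison, assuming a one-way ANOVA followed
# by a post hoc Tukey HSD test; the input file and column names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

reviews = pd.read_csv("google_reviews_grouped.csv")   # columns: "stars", "group"
groups = ["no_text", "text_no_keywords", "text_with_keywords"]

f_stat, p_value = stats.f_oneway(
    *[reviews.loc[reviews["group"] == g, "stars"] for g in groups]
)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.3g}")

# Pairwise mean differences with family-wise error control.
tukey = pairwise_tukeyhsd(endog=reviews["stars"], groups=reviews["group"], alpha=0.05)
print(tukey.summary())
```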

Figure 5. Distribution of Google reviews. Reviews with text but no antibiotic-related keywords are shown in green. Reviews with antibiotic-related keywords are shown in red. Reviews without text are shown in black.

Topic Modeling Results of Google Data

The LDA topic modeling analysis (Figure 6; Multimedia Appendix 2) yielded 20 topics. The most negative topics pertained to rude staff, wait times, billing, callbacks, and other aspects of customer experience. Although the “infections and symptoms” topic was also predominantly negative, this topic did not make up a plurality of the reviews (481,760/6,795,468, 7.09%). Among the reviews containing antibiotic-related keywords, Figure 7 shows that the most common topic pertained to infections and symptoms and was predominantly negative; however, several other topics pertaining to customer experience were also predominantly negative.

Figure 6. Topics generated by the latent Dirichlet allocation algorithm based on the text of all Google reviews in our data set. The size of each topic is proportional to the number of words that were assigned to each topic by the algorithm. Words are further segmented according to the sentiment of each review. Reviews with 4-5 stars are positive, reviews with 1-2 stars are negative, and reviews with 3 stars are neutral.
Figure 7. Subset of topic modeling results for reviews containing antibiotic-related keywords. The size of each topic is proportional to the number of words that were assigned to each topic by the algorithm. Words are further segmented according to the sentiment of each review. Reviews with 4-5 stars are positive, reviews with 1-2 stars are negative, and reviews with 3 stars are neutral.

Annotation Results

The annotators who labeled the seven inductive content categories achieved moderate reliability (Fleiss κ=0.42) with the first set of 100 reviews. All annotators agreed on sentiment. After disagreements were resolved, a second round of annotation for the next set of 100 reviews yielded substantial agreement (Fleiss κ=0.65), and disagreements were again resolved by consensus among reviewers. Table 1 summarizes the results of these annotations. Of a total of 200 reviews, we found that only 5 reviews (2.5%; 95% CI 0.3%-4.7%) exhibited the Yelp effect as the primary message category. By applying CIs to the full set of 8078 reviews containing antibiotic-related keywords, we expected that between 27 and 377 reviews would exhibit the Yelp effect as the primary message with 95% confidence. Thus, in our data set of 481,825 reviews, at most, 377 (0.08%) were expected to exhibit the Yelp effect.
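A minimal sketch of the inter-annotator agreement calculation, assuming the labels are stored as a reviews-by-annotators array of integer category codes; the input file is hypothetical.

```python
# Minimal sketch of the inter-annotator agreement calculation, assuming the labels
# are stored as a (200 reviews x 4 annotators) array of integer category codes;
# the input file is hypothetical.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.load("annotator_labels.npy")       # shape (200, 4)

# aggregate_raters converts rater-level labels into per-review category counts.
table, _ = aggregate_raters(ratings)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss kappa = {kappa:.2f}")
```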

Table 1. The seven primary message categories of the 200 messages.

Primary message | Reviews (N=200), n (%)
Yelp effect (the patient expected to receive antibiotics but did not receive an antibiotic) | 5 (2.5)
Counter Yelp effect (the patient received antibiotics but did not want them) | 1 (0.5)
Convenience, inconvenience, and wait times (positive and negative sentiment) | 36 (18)
Staff competence or incompetence, courtesy and attitude, and satisfaction of care (positive and negative sentiment) | 138 (69)
Cost and price of drugs per visit (including sticker shock; positive and negative sentiment) | 13 (6.5)
Other prescription-related complaints (positive and negative sentiment) | 3 (1.5)
Other or none of the above | 4 (2)

The annotators also reexamined all reviews to determine if they mentioned a Yelp effect in passing. We found that of a total of 200 reviews, 42 (21%) had some mention of reviewers not having received antibiotics when they were expected (Fleiss κ=0.75). Thus, with 95% confidence, between 1240 and 2152 of our 8078 reviews with antibiotic-related keywords exhibited the Yelp effect, even if it was only mentioned in passing. In our data set of 481,825 reviews, at most, 2152 (0.45%) were expected to exhibit the Yelp effect.
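These interval estimates are consistent with a normal-approximation (Wald) confidence interval on the annotated sample proportion, projected onto the 8078 keyword-matched reviews. A minimal sketch that reproduces the reported ranges:

```python
# Minimal sketch reproducing the interval estimates above, assuming a
# normal-approximation (Wald) 95% CI on the annotated sample proportion,
# projected onto the 8078 keyword-matched reviews.
import math

def proportion_ci(successes, n, z=1.96):
    """Wald 95% CI for a binomial proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half_width), min(1.0, p + half_width)

N_KEYWORD_REVIEWS = 8078

# Yelp effect as the primary message: 5 of 200 annotated reviews.
lo, hi = proportion_ci(5, 200)
print(f"primary message: {lo * N_KEYWORD_REVIEWS:.0f}-{hi * N_KEYWORD_REVIEWS:.0f}")   # ~27-377

# Yelp effect mentioned at all (primary or in passing): 42 of 200.
lo, hi = proportion_ci(42, 200)
print(f"any mention:     {lo * N_KEYWORD_REVIEWS:.0f}-{hi * N_KEYWORD_REVIEWS:.0f}")   # ~1240-2152
```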


Discussion

Principal Findings

Our data suggest that the Yelp effect is quite rare. Out of a set of almost half a million reviews (N=481,825), fewer than 1 in 1250 (0.08%) seemed to contain the Yelp effect as the primary message (with 95% confidence). Furthermore, with 95% confidence, fewer than 1 in 225 (0.45%) reviews seemed to contain the Yelp effect as the primary or secondary message.

In contrast, we found that 69% (138/200) of the reviews in our annotated data set focused primarily on assessments of staff competence and the quality of personal interactions. This suggests that, to the extent that a Yelp effect exists, patients express it by questioning the expertise or personal qualities of urgent care staff. This may put urgent care providers in a bind; although they should not prescribe antibiotics inappropriately, a failure to explain to patients why their preferred treatment is ineffective may lead to reviews that are designed to undermine care providers' credibility, expertise, and personal qualities. Thus, it is of paramount importance that both care providers and urgent care staff provide high-quality care and leave patients with a meaningful understanding of why they received the treatment that they did. For example, prior work has shown that patients' expectations for antibiotics are associated with categorical gist perceptions of the risks and benefits of antibiotics [26-28] and that patients are more likely to be satisfied when they understand the gist of appropriate prescribing. This underscores the need to better communicate the rationale for prescribing decisions in a manner that enhances patients' insight into why decisions are made and, by extension, their assessments of care providers' competence. Naturally, care providers' attitudes toward patient care are also important.

We aimed to answer the following question: is there a Yelp effect? The 2.28% (7762/340,328) of Google Maps reviews that mentioned antibiotics were indeed significantly more negative than those without antibiotic-related keywords (P<.001). Furthermore, our results show that reviews of urgent care centers on Yelp are significantly more negative than those on Google Maps (P<.001). Thus, we cannot rule out the existence of a Yelp effect on either Yelp or Google Maps. However, our results show that antibiotic prescription is merely one of many potentially addressable issues in doctor-patient communication and may not be the primary source of negative web-based reviews. Indeed, patient satisfaction seems to have been most strongly linked to customer service issues (eg, wait times, rude staff, and billing practices). Thus, we must question whether claims regarding the impact of antibiotic prescriptions on negative reviews of urgent care centers are exaggerated. In recent years, some authors have suggested the presence of an effect that is similar to the Yelp effect in the context of opioid prescription [29,30]. However, similar to our findings, other studies have shown that these negative reviews are primarily comments on physicians' attributes or administrative attributes [31].

Limitations

The limitations of our work include our inability to hand-annotate all of the 481,825 reviews in our data set. Instead, we annotated 200 of the 7762 (2.58%) messages that were identified to have antibiotic-related keywords. This limitation was mitigated by the fact that these 200 messages were selected uniformly at random, meaning that they are likely to be representative of messages with antibiotic-related keywords. It is possible that our choice of keywords might have introduced selection bias; specifically, we assumed that patients who expected to (but did not) receive antibiotics would have said so in their reviews. Thus, we cannot rule out the possibility that patients were insincere when providing their reasons for negative reviews. However, our findings clearly indicate that patients were willing to express dissatisfaction with several other topics that do not directly pertain to antibiotics. Web-based reviews also often lack key patient information (eg, visit reason, medical history, and demographics). Finally, we do not claim that our results generalize beyond urgent care settings.

Conclusion

Our analysis shows that the Yelp effect may not be a major driver of negative sentiments in web-based reviews. Rather than compromise medical and public health recommendations by acceding to potentially faulty perceptions of patients' desires, urgent care facilities should instead invest in improving patients' overall experience, such as by reducing wait times, making billing practices transparent, and training staff members to adhere to the best standards of customer service. Although these steps may not prevent all negative reviews, our analysis suggests that antibiotic prescribing need not be the focal point for patient satisfaction in urgent care settings.

Acknowledgments

This work was supported by a grant from The George Washington University's Cross-Disciplinary Research Fund.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Google Maps review collection algorithm.

DOCX File, 28 KB

Multimedia Appendix 2

Results and interpretations of 3 latent Dirichlet allocation models.

XLSX File (Microsoft Excel File), 49 KB

Multimedia Appendix 3

Antibiotic-related keywords.

DOCX File, 17 KB

Multimedia Appendix 4

Descriptive statistics.

XLSX File (Microsoft Excel File), 29 KB

References

  1. Antibiotic resistance: Multi-country public awareness survey. World Health Organization. 2015. URL: https://apps.who.int/iris/bitstream/handle/10665/194460/9789241509817_eng.pdf?sequence=1&isAllowed=y [accessed 2021-09-01]
  2. Fleming-Dutra KE, Hersh AL, Shapiro DJ, Bartoces M, Enns EA, File TM, et al. Prevalence of inappropriate antibiotic prescriptions among US ambulatory care visits, 2010-2011. JAMA 2016 May 03;315(17):1864-1873. [CrossRef] [Medline]
  3. Mangione-Smith R, McGlynn EA, Elliott MN, Krogstad P, Brook RH. The relationship between perceived parental expectations and pediatrician antimicrobial prescribing behavior. Pediatrics 1999 Apr;103(4 Pt 1):711-718. [CrossRef] [Medline]
  4. May L, Gudger G, Armstrong P, Brooks G, Hinds P, Bhat R, et al. Multisite exploration of clinical decision making for antibiotic use by emergency medicine providers using quantitative and qualitative methods. Infect Control Hosp Epidemiol 2014 Sep;35(9):1114-1125 [FREE Full text] [CrossRef] [Medline]
  5. Stivers T, Mangione-Smith R, Elliott MN, McDonald L, Heritage J. Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. J Fam Pract 2003 Feb;52(2):140-148. [Medline]
  6. What drives inappropriate antibiotic use in outpatient care? The Pew Charitable Trusts. 2017 Jun 28. URL: https://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2017/06/what-drives-inappropriate-antibiotic-use-in-outpatient-care/ [accessed 2021-09-01]
  7. Welschen I, Kuyvenhoven M, Hoes A, Verheij T. Antibiotics for acute respiratory tract symptoms: patients' expectations, GPs' management and patient satisfaction. Fam Pract 2004 Jun;21(3):234-237. [CrossRef] [Medline]
  8. Ong S, Nakase J, Moran GJ, Karras DJ, Kuehnert MJ, Talan DA, EMERGEncy ID NET Study Group. Antibiotic use for emergency department patients with upper respiratory infections: prescribing practices, patient expectations, and patient satisfaction. Ann Emerg Med 2007 Sep;50(3):213-220. [CrossRef] [Medline]
  9. Stearns CR, Gonzales R, Camargo Jr CA, Maselli J, Metlay JP. Antibiotic prescriptions are associated with increased patient satisfaction with emergency department visits for acute respiratory tract infections. Acad Emerg Med 2009 Oct;16(10):934-941 [FREE Full text] [CrossRef] [Medline]
  10. Zgierska A, Rabago D, Miller MM. Impact of patient satisfaction ratings on physicians and clinical care. Patient Prefer Adherence 2014 Apr 03;8:437-446. [CrossRef] [Medline]
  11. Mckenna M. The Yelping of the American doctor. Wired. 2018 Mar 22.   URL: https://www.wired.com/story/the-yelping-of-the-american-doctor/ [accessed 2021-09-01]
  12. Lagasse J. Doctors' online reviews show more polarization, have more impact than those of other industries. Healthcare Finance. 2019 Dec 20. URL: https://www.healthcarefinancenews.com/news/doctors-online-reviews-show-more-polarization-have-more-impact-those-other-industries/ [accessed 2021-09-01]
  13. López L, Weissman JS, Schneider EC, Weingart SN, Cohen AP, Epstein AM. Disclosure of hospital adverse events and its association with patients' ratings of the quality of care. Arch Intern Med 2009 Nov 09;169(20):1888-1894. [CrossRef] [Medline]
  14. Kadry B, Chu LF, Kadry B, Gammas D, Macario A. Analysis of 4999 online physician ratings indicates that most patients give physicians a favorable rating. J Med Internet Res 2011 Nov 16;13(4):e95 [FREE Full text] [CrossRef] [Medline]
  15. Paul MJ, Wallace BC, Dredze M. What affects patient (dis)satisfaction? Analyzing online doctor ratings with a joint topic-sentiment model. In: Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence. 2013 Presented at: AAAI-13: Expanding the Boundaries of Health Informatics Using AI; July 14-18, 2013; Phoenix, Arizona, USA.
  16. Remove online reviews of doctors!. Change.org.   URL: https://www.change.org/p/yelp-remove-online-reviews-of-doctors/ [accessed 2021-09-01]
  17. Zhang KZK, Zhao SJ, Cheung CMK, Lee MKO. Examining the influence of online reviews on consumers' decision-making: A heuristic–systematic model. Decis Support Syst 2014 Nov;67:78-89. [CrossRef]
  18. Daskivich TJ, Houman J, Fuller G, Black JT, Kim HL, Spiegel B. Online physician ratings fail to predict actual performance on measures of quality, value, and peer review. J Am Med Inform Assoc 2018 Apr 01;25(4):401-407 [FREE Full text] [CrossRef] [Medline]
  19. Yaraghi N, Wang W, Gao GG, Agarwal R. How online quality ratings influence patients' choice of medical providers: Controlled experimental survey study. J Med Internet Res 2018 Mar 26;20(3):e99 [FREE Full text] [CrossRef] [Medline]
  20. Cawkwell PB, Lee L, Weitzman M, Sherman SE. Tracking hookah bars in New York: Utilizing Yelp as a powerful public health tool. JMIR Public Health Surveill 2015 Nov 20;1(2):e19 [FREE Full text] [CrossRef] [Medline]
  21. Ranard BL, Werner RM, Antanavicius T, Schwartz HA, Smith RJ, Meisel ZF, et al. Yelp reviews of hospital care can supplement and inform traditional surveys of the patient experience of care. Health Aff (Millwood) 2016 Apr;35(4):697-705 [FREE Full text] [CrossRef] [Medline]
  22. Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Mach Learn Res 2003 Mar 01;3:993-1022 [FREE Full text] [CrossRef]
  23. Riddell A, Hopper T, Luo S, Leinweber K, Grivas A. lda-project/lda: 1.1.0. Zenodo.   URL: https://zenodo.org/record/1412135 [accessed 2021-09-01]
  24. Blei DM. Probabilistic topic models. Commun ACM 2012 Apr 01;55(4):77-84. [CrossRef]
  25. Krippendorff K. Content Analysis: An Introduction to Its Methodology. Thousand Oaks, California: SAGE Publications Inc; 2013.
  26. Broniatowski DA, Klein EY, Reyna VF. Germs are germs, and why not take a risk? Patients' expectations for prescribing antibiotics in an inner-city emergency department. Med Decis Making 2015 Jan;35(1):60-67 [FREE Full text] [CrossRef] [Medline]
  27. Klein EY, Martinez EM, May L, Saheed M, Reyna V, Broniatowski DA. Categorical risk perception drives variability in antibiotic prescribing in the emergency department: A mixed methods observational study. J Gen Intern Med 2017 Oct;32(10):1083-1089 [FREE Full text] [CrossRef] [Medline]
  28. Broniatowski DA, Klein EY, May L, Martinez EM, Ware C, Reyna VF. Patients' and clinicians' perceptions of antibiotic prescribing for upper respiratory infections in the acute care setting. Med Decis Making 2018 Jul;38(5):547-561 [FREE Full text] [CrossRef] [Medline]
  29. Wenghofer EF, Wilson L, Kahan M, Sheehan C, Srivastava A, Rubin A, et al. Survey of Ontario primary care physicians' experiences with opioid prescribing. Can Fam Physician 2011 Mar;57(3):324-332 [FREE Full text] [Medline]
  30. Sundling RA, Logan DB, Tawancy CH, So E, Lee J, Logan K. Opioid prescribing habits of podiatric surgeons following elective foot and ankle surgery. Foot (Edinb) 2020 Dec;45:101710. [CrossRef] [Medline]
  31. Orhurhu MS, Salisu B, Sottosanti E, Abimbola N, Urits I, Jones M, et al. Chronic pain practices: An evaluation of positive and negative online patient reviews. Pain Physician 2019 Sep;22(5):E477-E486 [FREE Full text] [Medline]


Abbreviations

API: application programming interface
LDA: latent Dirichlet allocation


Edited by T Kool; submitted 06.04.21; peer-reviewed by N Yaraghi, DK Yon; comments to author 14.05.21; revised version received 12.08.21; accepted 26.08.21; published 08.10.21

Copyright

©Dian Hu, Cindy Meng-Hsin Liu, Rana Hamdy, Michael Cziner, Melody Fung, Samuel Dobbs, Laura Rogers, Monique Mitchell Turner, David André Broniatowski. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 08.10.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.