Published in Vol 22, No 8 (2020): August

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/18374.
Rejected Online Feedback From a Swiss Physician Rating Website Between 2008 and 2017: Analysis of 2352 Ratings


Authors of this article:

Stuart McLennan1, 2

Original Paper

1Institute of History and Ethics in Medicine, Technical University of Munich, Munich, Germany

2Institute for Biomedical Ethics, University of Basel, Basel, Switzerland

Corresponding Author:

Stuart McLennan, MBHL, PhD

Institute of History and Ethics in Medicine

Technical University of Munich

Ismaninger Straße 22

Munich, 81675

Germany

Phone: 49 89 4140 4041

Email: stuart.mclennan@tum.de


Background: Previous research internationally has only analyzed publicly available feedback on physician rating websites (PRWs). However, it appears that many PRWs are not publishing all the feedback they receive. Analysis of this rejected feedback could provide a better understanding of the types of feedback that are currently not published and whether this is appropriate.

Objective: The aim of this study was to examine (1) the amount of patient feedback rejected by the Swiss PRW Medicosearch, (2) the evaluation tendencies of the rejected patient feedback, and (3) the types of issues raised in the rejected narrative comments.

Methods: The Swiss PRW Medicosearch provided all the feedback that had been rejected between September 16, 2008, and September 22, 2017. The feedback was analyzed and classified according to a theoretical categorization framework of physician-, staff-, and practice-related issues.

Results: Between September 16, 2008, and September 22, 2017, Medicosearch rejected a total of 2352 patient feedback items. The majority of the rejected feedback (1754/2352, 74.6%) had narrative comments in the German language. However, 11.9% (279/2352) of the rejected feedback provided only a quantitative rating with no narrative comment. Overall, 25% (588/2352) of the rejected feedback was positive, 18.7% (440/2352) was neutral, and 56% (1316/2352) was negative. The average rating of the rejected feedback was 2.8 (SD 1.4). In total, 44 subcategories addressing the physician (n=20), staff (n=9), and practice (n=15) were identified. Within these 44 subcategories, 3804 distinct issues were identified; 75% (2854/3804) of the issues were related to the physician, 6.4% (242/3804) were related to the staff, and 18.6% (708/3804) were related to the practice. Frequently mentioned issues identified from the rejected feedback included (1) satisfaction with treatment (533/1903, 28%); (2) the overall assessment of the physician (392/1903, 20.6%); (3) recommending the physician (345/1903, 18.1%); (4) the physician’s communication (261/1903, 13.7%); (5) the physician’s caring attitude (220/1903, 11.6%); and (6) the physician’s friendliness (203/1903, 10.6%).

Conclusions: It is unclear why the majority of the feedback was rejected. This is problematic and raises concerns that online patient feedback is being inappropriately manipulated. If online patient feedback is going to be collected, there need to be clear policies and practices governing how it is handled. It cannot be left to the whims of PRWs, which may have financial incentives to suppress negative feedback, to decide which feedback is or is not published online. Further research is needed to examine how many PRWs use criteria for determining which feedback is published, what those criteria are, and what measures PRWs are taking to address the manipulation of online patient feedback.

J Med Internet Res 2020;22(8):e18374

doi:10.2196/18374


There remains relevant unwarranted variation in health care systems and deficiencies regarding all key aspects of health care [1]. Members of the public, however, have traditionally had few ways of knowing who the “good” health care organizations and professionals are [2]. As part of a wider move toward transparency, public reporting activities have been developed in a number of countries with the aim of providing information about health care organizations or professionals to the public to correct this asymmetry of information in order to inform patient decision-making and drive quality improvement [3-6].

One type of public reporting activity that has been developed in recent decades is physician rating websites (PRWs) [7-10]. PRWs represent a “bottom-up” approach to public reporting, allowing users to post ratings and comments regarding their physician as a source of information for others [11-14]. Although patients have always been able to share their opinions about their physicians with others, the ability to share these opinions via the internet and social media now means that these opinions have the potential to reach a far wider audience. With a growing number of patients utilizing the internet in relation to health care [15], it is expected that PRWs will play an increasingly important role.

A recent systematic search of PRWs internationally analyzed 143 different websites from 12 countries [16] and found that the majority (76.9%) of websites provided options to give feedback both on a predefined quantitative rating scale and as narrative comments. Previous research internationally has often focused on analyzing the ratings and comments publicly available on PRWs. This research has reported that many PRWs have incomplete lists of physicians, a low number of physicians rated, and a low number of ratings per physician that are overwhelmingly positive, which has raised concerns about the representativeness, validity, and usefulness of information on PRWs [14,17]. Furthermore, the medical profession has often expressed concerns that feedback on PRWs will be manipulated for “doctor bashing” or defamation [10].

The first PRWs in Switzerland were established in 2008, at the same time as many international PRWs. However, in comparison with other countries that have established PRWs, there has been limited research conducted on PRWs in Switzerland. This author recently conducted a study involving a random stratified sample of 966 physicians generated from the regions of Zürich and Geneva [18,19]. Selected physicians were searched on a total of four websites (OkDoc, Medicosearch, DocApp, and Google) between November 2017 and July 2018, and it was recorded whether the physician could be found. Moreover, the physician’s rating, the number of ratings and narrative comments, and the text of narrative comments were recorded. As far as the author is aware, this was the first inclusion of Google in a study examining physician ratings internationally.

With regard to the frequency of quantitative ratings and narrative comments on Swiss PRWs, issues similar to those identified in the international literature were found. Many of the selected physicians could not be identified (the proportion of physicians who could be identified ranged from 42.4% on OkDoc to 87.3% on DocApp), few of the identifiable physicians had been rated quantitatively (4.5% on DocApp to 49.8% on Google) or received a narrative comment (4.5% on DocApp to 31.2% on Google) at least once, rated physicians had, on average, a low number of quantitative ratings (1.47 ratings on OkDoc to 3.74 ratings on Google) and narrative comments (1.23 comments on OkDoc to 3.03 comments on Google), and all three websites that allowed quantitative ratings had very positive average ratings on a 5-star rating scale (DocApp, 4.71; Medicosearch, 4.69; and Google, 4.41) [18].

With regard to the contents of narrative comments, it was found that the selected physicians had a total of 849 comments [19]. Narrative comments were analyzed and classified according to a theoretical categorization framework previously developed by Emmert et al [10]. In total, 43 subcategories addressing the physician, staff, and practice were identified. None of the PRWs’ comments covered all 43 subcategories of the categorization framework; comments on Google covered 86% of the subcategories, those on Medicosearch covered 72%, those on DocApp covered 60%, and those on OkDoc covered 56%. In total, 2441 distinct issues were identified within the 43 subcategories of the categorization framework; 83.65% of the issues were related to the physician, 6.63% were related to the staff, and 9.70% were related to the practice. Overall, 95% of the subcategories of the categorization framework and 81.60% of the distinct issues identified were concerning aspects of performance (interpersonal skills of the physician and staff, infrastructure, and organization and management of the practice) considered assessable by patients [19]. Furthermore, this research raised concerns that user feedback is being suppressed by Swiss PRWs [18,19], which risks undermining the overall aim of PRWs of providing a reliable source of unbiased information regarding patients’ experiences and satisfaction with physicians.

As far as this author is aware, previous research internationally has only analyzed publicly available feedback on PRWs. However, it appears that many PRWs are not publishing all the feedback they receive. Analysis of this rejected feedback could provide a better understanding of the types of feedback that are currently not published and whether this is appropriate. This study therefore aimed to examine (1) the amount of patient feedback rejected by the Swiss PRW Medicosearch, (2) the evaluation tendencies of the rejected patient feedback, and (3) the types of issues raised in the rejected narrative comments. Gaining a better understanding of feedback rejected by PRWs may help to identify issues in the way PRWs currently determine which feedback is and is not published publicly.


Sample

Switzerland is a Central European country with a population of about 8.4 million people and 4 official languages (German, French, Italian, and Romansh). The Swiss health care system is highly complex and decentralized, and all Swiss residents are required to purchase basic mandatory health insurance that is offered by competing nonprofit insurers. Mandatory health insurance covers most general practitioner and specialist services, and people not enrolled in managed care plans generally have free choice of professionals [18]. The first PRWs in Switzerland, OkDoc and Medicosearch, were established in 2008. A systematic web-based search conducted in June 2016 identified that the websites DocApp and Google also allow users to view quantitative ratings and/or narrative comments about Swiss physicians in a structured manner without having to open an account or log onto the website [18,19]. It appears that other websites have also subsequently started to allow users to view quantitative ratings and/or narrative comments about Swiss physicians (eg, DeinDoktor and Doctena) [19]. Nevertheless, of the dedicated Swiss PRWs, Medicosearch appears to be one of the best established and most used [18,19]. Medicosearch allows users to search for physicians by location and specialty. Physician profiles provide general information about the physician (specialties, languages spoken, and contact details). In recent years, Medicosearch has shifted its business strategy toward online appointments, where physicians pay a fee and their booking systems are integrated with Medicosearch, allowing patients to book an appointment with a physician directly on Medicosearch. Users can also leave reviews of physicians, but Medicosearch requires both a quantitative rating (5-star rating scale) and a narrative comment in every patient feedback. Although Medicosearch allows negative comments, it informs the physician concerned before publishing them on the website, so that the physician can decide whether to activate the negative feedback function. If the physician refuses, the feedback function is deactivated, removing positive comments as well [19].

As part of a larger project examining Swiss PRWs, the author approached the CEO of Medicosearch, Beat Burger. Discussions confirmed that Medicosearch does not publish all the feedback it receives. The author inquired about the possibility of receiving this rejected feedback for analysis. Medicosearch agreed to provide the rejected feedback to the author in anonymized form. On October 24, 2017, Medicosearch sent the author an Excel file that included all the feedback that Medicosearch had rejected between September 16, 2008, and September 22, 2017. The details of the rated physicians were not included. Medicosearch did not provide any reasons for why the feedback was rejected. The following data were imported into a Statistical Package for the Social Sciences (SPSS version 26 for Windows, IBM Corporation) file: the date the feedback was created, the quantitative rating out of 5, and the narrative comments provided under “title” and “description.” In early 2019, the author inquired with Medicosearch whether it would be possible to receive the rejected feedback from September 23, 2017, to December 31, 2018. Medicosearch told the author on April 11, 2019, that owing to new data-protection rules, it had deleted all rejected feedback and was therefore unable to provide an updated file.
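For illustration only, a minimal Python sketch of this import step is shown below. It assumes a hypothetical export named rejected_feedback.xlsx with the fields described above; the actual analysis was performed in SPSS, and the file and column names here are assumptions, not Medicosearch's actual ones.

    # Minimal illustrative sketch, not the study's actual workflow (SPSS was used).
    # Load a hypothetical Excel export and keep the fields described above.
    # The file name and column names are assumptions, not Medicosearch's real ones.
    import pandas as pd

    df = pd.read_excel("rejected_feedback.xlsx")
    df = df[["date_created", "rating", "title", "description"]]
    print(df.head())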

Data Analysis

Medicosearch uses a 5-star rating scale; a rating of 4 to 5 stars was considered a positive rating, 3 stars was considered a neutral rating, and 1 to 2 stars was considered a negative rating. The content of each narrative comment was analyzed and classified by the author according to a theoretical categorization framework of physician-, staff-, and practice-related issues. The categorization framework from Emmert et al was initially used [10], with modifications where necessary. This included removing categories that were not identified in the comments, adding categories that were identified but were not adequately covered by the previous framework, and separating categories (eg, friendliness and caring attitude) that were discussed in comments as distinct issues. Narrative comments were analyzed in their original language. Descriptive statistics included means and standard deviations for continuous variables and percentages for categorical variables. To analyze whether differences existed between German and French comments, chi-squared tests were used. All analyses were performed with the significance level α set to .05 and two-tailed tests, using SPSS version 26.
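A minimal sketch of the rating classification and descriptive statistics described above is shown below. This is illustrative Python only (the study used SPSS), and the example ratings are hypothetical placeholders rather than study data.

    # Illustrative sketch of the 5-star classification: 4-5 positive, 3 neutral,
    # 1-2 negative. The ratings below are hypothetical placeholders, not study data.
    import pandas as pd

    ratings = pd.Series([5, 1, 2, 3, 4, 1, None], name="rating")

    def classify(rating):
        if pd.isna(rating):
            return "missing"   # quantitative rating absent
        if rating >= 4:
            return "positive"
        if rating == 3:
            return "neutral"
        return "negative"

    evaluation = ratings.apply(classify)

    # Descriptive statistics: counts, percentages, mean, and standard deviation
    print(evaluation.value_counts())
    print((evaluation.value_counts(normalize=True) * 100).round(1))
    print(round(ratings.mean(), 1), round(ratings.std(), 1))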


Characteristics of Ratings

Between September 16, 2008, and September 22, 2017, Medicosearch rejected a total of 2352 patient feedback items (Table 1).

The majority of feedback rejected (1754/2352, 74.6%) had narrative comments in German (Table 2). Other rejected feedback had narrative comments in French (275/2352, 11.7%), Italian (31/2352, 1.3%), English (12/2352, 0.5%), and Spanish (1/2352, 0.04%). However, 11.9% (279/2352) of the rejected feedback only provided a quantitative rating with no narrative comment.

Overall, 25% (588/2352) of the quantitative ratings were positive, 18.7% (440/2352) were neutral, and 56% (1316/2352) were negative (Table 3). Additionally, the average rating of the rejected feedback was 2.8 (SD 1.4).

Table 1. Distribution of rejected feedback according to year (N=2352).

Yeara | n (%)
2008 | 26 (1.1)
2009 | 259 (11.0)
2010 | 232 (9.9)
2011 | 344 (14.6)
2012 | 392 (16.7)
2013 | 321 (13.6)
2014 | 268 (11.4)
2015 | 142 (6.0)
2016 | 236 (10.0)
2017 | 132 (5.6)

aFrom September 16, 2008, to September 22, 2017.

Table 2. Distribution of rejected feedback according to language (N=2352).

Language | n (%)
German | 1754 (74.6)
French | 275 (11.7)
Italian | 31 (1.3)
English | 12 (0.5)
Spanish | 1 (0.04)
Missing (rating only, no narrative comment) | 279 (11.9)
Table 3. Quantitative rating evaluation results.

Measure | German (N=1754), n (%) | French (N=275), n (%) | Italian (N=31), n (%) | English (N=12), n (%) | Spanish (N=1), n (%) | Missing (N=279), n (%) | Total (N=2352), n (%)
Evaluation
  Positive | 399 (22.7) | 81 (29.5) | 4 (12.9) | 4 (33.3) | 1 (100) | 99 (35.5) | 588 (25.0)
  Neutral | 296 (16.9) | 59 (21.5) | 6 (19.4) | 1 (8.3) | 0 (0) | 77 (26.3) | 440 (18.7)
  Negative | 1065 (60.4) | 135 (49.1) | 21 (67.7) | 7 (58.3) | 0 (0) | 88 (30.0) | 1316 (56.0)
  Missing | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 8 (2.9) | 8 (0.3)
Average rating (SD) | 2.7 (1.3) | 3.0 (1.4) | 2.4 (1.3) | 2.9 (1.5) | 5.0 | 3.2 (1.4) | 2.8 (1.4)

Of the 2073 ratings that provided narrative comments, analysis found that a total of 170 comments were not feedback from a patient concerning the physician, staff, or practice (92 comments were not comprehensible, 29 were explicitly labelled as test ratings, 15 were about the PRW, 10 reported that the person had not yet visited the physician, 10 simply reported that the physician’s details were not up to date, 8 were abusive, 4 were second-hand reports, 2 asked for advice about the rater’s own or a family member’s condition, and 2 did not concern a visit to a physician). Consequently, this feedback was excluded from the categorization framework.

The 1903 included narrative comments had a mean length of 158 characters (SD 214), ranging from 1 to 2788 characters. There was a significant difference in the mean character length between positive comments (mean 88, SD 130) and negative comments (mean 193, SD 241) (t(1314)=−11, P<.001). There was no significant difference in the mean character length between German comments (mean 158, SD 205) and French comments (mean 154, SD 206) (t(1862)=0.2, P=.82).
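For illustration, the character-length comparison above corresponds to an independent-samples, two-tailed t test on comment lengths. A minimal Python sketch is shown below, with hypothetical comment strings standing in for the actual data.

    # Illustrative sketch of the comment-length comparison (not the study's data).
    # The comment strings below are hypothetical placeholders.
    from scipy import stats

    positive_comments = ["Sehr zufrieden.", "Kompetenter Arzt, gerne wieder."]
    negative_comments = ["Lange Wartezeit, und der Arzt nahm sich kaum Zeit.",
                         "Unfreundlich am Telefon."]

    pos_lengths = [len(c) for c in positive_comments]
    neg_lengths = [len(c) for c in negative_comments]

    # Two-tailed independent-samples t test on mean character length
    t_stat, p_value = stats.ttest_ind(pos_lengths, neg_lengths)
    print(f"t = {t_stat:.2f}, P = {p_value:.3f}")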

Categorization of Issues

Content analysis of the included 1903 narrative comments identified 44 subcategories addressing the physician (n=20), staff (n=9), and practice (n=15) (Textbox 1).


Physician (n=20)

Satisfaction with treatment; overall assessment; recommendation; communication; caring attitude; friendliness; competence; treatment cost/billing; being taken seriously; time spent with patient; trust; professionalism; cooperation with medical specialists; alternative medicine; telephone availability; privacy; health insurance differentiation; patient involvement; individualized service; child friendliness

Staff (n=9)

Friendliness; overall assessment; service/assistance; communication; professionalism; availability by telephone; time spent with patient; health insurance differentiation; trust

Practice (n=15)

Overall assessment; waiting time within the practice; atmosphere; organization; ability to get appointment; equipment; recommendation; consultation hours; location; waiting room entertainment; parking space; availability by telephone; privacy; barrier-free access; online appointment

Textbox 1. Categorization framework.

In total, 3804 distinct issues were identified within the 44 subcategories of the categorization framework; 75% (2854/3804) of the issues were related to the physician, 6.4% (242/3804) were related to the staff, and 18.6% (708/3804) were related to the practice (Table 4). The most frequent issue mentioned in the rejected comments was satisfaction with treatment (533/1903, 28%); 73.2% (390/533) of these ratings were negative. Other frequently mentioned issues regarding the physician were as follows: 20.6% (392/1903) of comments provided an overall assessment of the physician (53.8% negative); 18.1% (345/1903) provided a recommendation regarding the physician (76.2% negative); 13.7% (261/1903) referred to the physician’s communication (77.8% negative); 11.6% (220/1903) referred to the physician’s caring attitude (75.9% negative); and 10.6% (203/1903) referred to the physician’s friendliness (73.9% negative). In relation to staff issues, the most frequently mentioned issue was the staff’s friendliness (109/1903, 5.7%); 43.1% of these ratings were negative. Concerning practice issues, the most frequently mentioned issues were as follows: 15.5% (295/1903) of the comments provided an overall assessment (38.6% positive), 8.1% (155/1903) referred to the waiting time within the practice (58.7% negative), and 5% (96/1903) referred to the atmosphere of the practice (40.6% positive).

However, there were some significant differences between German and French comments in a number of subcategories (Multimedia Appendix 1). For instance, German comments referred significantly more often to the physician’s friendliness (χ²(1)=5.9, P=.01), being taken seriously by the physician (χ²(1)=8.5, P=.002), staff friendliness (χ²(1)=7.0, P=.005), waiting time in the practice (χ²(1)=6.3, P=.01), and practice atmosphere (χ²(1)=6.6, P=.007).
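A minimal sketch of how such a language comparison can be run for a single subcategory is shown below. The 2×2 counts are hypothetical placeholders, not study data, and the actual tests were performed in SPSS.

    # Illustrative sketch: chi-squared test of whether German and French comments
    # differ in how often they mention a given subcategory (eg, friendliness).
    # The counts below are hypothetical placeholders, not study data.
    from scipy.stats import chi2_contingency

    #        [mentions issue, does not mention issue]
    table = [[180, 1570],   # German comments (hypothetical)
             [15, 260]]     # French comments (hypothetical)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2({dof}) = {chi2:.1f}, P = {p:.3f}")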

Table 4. Categorization of issues (positive, neutral, and negative refer to the quantitative rating evaluation).

Issue | Total (N=1903), n (%) | Positive, n (%) | Neutral, n (%) | Negative, n (%)
Physician
  Satisfaction with treatment | 533 (28.0) | 82 (15.4) | 61 (11.4) | 390 (73.2)
  Overall assessment | 392 (20.6) | 122 (31.0) | 59 (15.1) | 211 (53.8)
  Recommendation | 345 (18.1) | 35 (10.1) | 47 (13.6) | 263 (76.2)
  Communication | 261 (13.7) | 39 (14.9) | 19 (7.3) | 203 (77.8)
  Caring attitude | 220 (11.6) | 40 (18.2) | 13 (5.9) | 167 (75.9)
  Friendliness | 203 (10.6) | 35 (17.2) | 18 (8.9) | 150 (73.9)
  Treatment cost/billing | 173 (9.1) | 8 (4.6) | 30 (17.3) | 135 (78.0)
  Competence | 170 (8.9) | 43 (25.3) | 13 (7.6) | 114 (67.1)
  Being taken seriously | 141 (7.4) | 10 (7.1) | 10 (7.1) | 121 (85.8)
  Time spent with patient | 136 (7.1) | 17 (12.5) | 18 (13.2) | 101 (74.3)
  Trust | 133 (6.9) | 44 (33.1) | 11 (8.3) | 78 (58.6)
  Professionalism | 97 (5.1) | 11 (11.3) | 6 (6.2) | 80 (82.5)
  Cooperation with medical specialists | 13 (0.7) | 4 (30.8) | 0 (0) | 9 (69.2)
  Alternative medicine | 8 (0.4) | 6 (75.0) | 0 (0) | 2 (25.0)
  Telephone availability | 8 (0.4) | 1 (12.5) | 3 (37.5) | 4 (50.0)
  Privacy | 7 (0.4) | 0 (0) | 2 (28.6) | 5 (71.4)
  Health insurance differentiation | 6 (0.3) | 0 (0) | 0 (0) | 6 (100)
  Patient involvement | 5 (0.3) | 0 (0) | 1 (20.0) | 4 (80.0)
  Individualized service | 2 (0.1) | 0 (0) | 0 (0) | 2 (100)
  Child friendliness | 1 (0.04) | 0 (0) | 0 (0) | 1 (100)
Staff
  Friendliness | 109 (5.7) | 39 (35.8) | 23 (21.1) | 47 (43.1)
  Overall assessment | 60 (3.2) | 19 (31.7) | 18 (30.0) | 23 (38.3)
  Service/assistance | 31 (1.6) | 11 (35.5) | 3 (9.7) | 17 (54.8)
  Communication | 22 (1.2) | 4 (18.2) | 7 (31.8) | 11 (50.0)
  Professionalism | 10 (0.5) | 2 (20.0) | 2 (20.0) | 6 (60.0)
  Availability by telephone | 6 (0.3) | 1 (16.7) | 4 (66.7) | 1 (16.7)
  Time spent with patient | 2 (0.1) | 1 (50.0) | 0 (0) | 1 (50.0)
  Health insurance differentiation | 1 (0.04) | 0 (0) | 1 (100) | 0 (0)
  Trust | 1 (0.04) | 0 (0) | 0 (0) | 1 (100)
Practice
  Overall assessment | 295 (15.5) | 114 (38.6) | 90 (30.5) | 91 (30.8)
  Waiting time within practice | 155 (8.1) | 22 (14.2) | 42 (27.1) | 91 (58.7)
  Atmosphere | 96 (5.0) | 39 (40.6) | 29 (30.2) | 28 (29.2)
  Organization | 37 (1.9) | 7 (18.9) | 11 (29.7) | 19 (51.4)
  Ability to get appointment | 36 (1.9) | 4 (11.1) | 11 (30.6) | 21 (58.3)
  Equipment | 31 (1.6) | 8 (25.8) | 10 (32.3) | 13 (41.9)
  Recommendation | 25 (1.3) | 4 (16.0) | 5 (20.0) | 16 (64.0)
  Consultation hours | 8 (0.4) | 3 (37.5) | 3 (37.5) | 2 (25.0)
  Location | 7 (0.4) | 2 (28.6) | 2 (28.6) | 3 (42.9)
  Waiting room entertainment | 5 (0.3) | 4 (80.0) | 1 (20.0) | 0 (0)
  Parking space | 4 (0.2) | 4 (100) | 0 (0) | 0 (0)
  Availability by telephone | 3 (0.2) | 1 (33.3) | 1 (33.3) | 1 (33.3)
  Privacy | 3 (0.2) | 0 (0) | 3 (100) | 0 (0)
  Barrier-free access | 2 (0.1) | 0 (0) | 1 (50.0) | 1 (50.0)
  Online appointment | 1 (0.04) | 0 (0) | 1 (100) | 0 (0)

Principal Findings

As far as this author is aware, this is the first study internationally to examine feedback that has been rejected from a PRW. The key findings of this study are as follows: (1) the Swiss PRW Medicosearch rejected a total of 2352 patient feedback items between September 16, 2008, and September 22, 2017; (2) just over half of all the rejected feedback was negative; and (3) the most frequently mentioned issue in the rejected feedback was satisfaction with treatment. Medicosearch has shown considerable transparency in providing this rejected feedback for analysis. It is, however, unclear why the majority of the feedback was rejected. This is problematic and raises concerns that online patient feedback is being inappropriately manipulated.

Medicosearch did not provide the reasons why it rejected the feedback, and as far as this author is aware, Medicosearch does not use formal criteria for determining which feedback should be published or rejected. Of the 2352 ratings rejected by Medicosearch, 170 comments were excluded from the categorization framework for various reasons, for example, because the feedback was not comprehensible, was explicitly labelled as a test rating, or was about the PRW. These would also appear to be legitimate reasons for Medicosearch to reject the feedback. However, the appropriateness of rejecting the remaining 92.7% of the feedback is less clear, particularly as it appears to be qualitatively the same as the published feedback for a sample of Swiss physicians recently analyzed [19].

Twelve percent of the rejected feedback only provided a quantitative rating with no narrative comment. Medicosearch requires that both a quantitative rating and a narrative comment be provided in every patient feedback, and this is likely the reason for rejecting this feedback. Narrative comments often provide a richer source of information than quantitative ratings [10]; however, making narrative comments mandatory seems inappropriate. Some patients may not be willing or able to describe what happened in a narrative comment, but may still want to share their satisfaction with their physician with others. It is unclear why these ratings should simply be excluded because the patient did not want to also write a narrative comment.

In terms of the evaluation tendencies of online patient feedback, previous Swiss and international research has found that the published online patient feedback on PRWs is overwhelmingly positive [8,10,14,17-32]. However, recent research has also raised concerns that negative feedback is being suppressed by Swiss PRWs [19]. For instance, the PRW OkDoc explicitly states on its website that any negative comments will be deleted, and while Medicosearch allows negative comments, it informs the physician concerned before publishing them online, so that the physician can decide whether to activate the negative feedback function [19]. There was therefore an expectation that the majority of the rejected feedback would be negative. However, this analysis of 2352 rejected feedback items from Medicosearch found that just over half of all rejected feedback was negative, and the average rejected rating was 2.8 out of 5.

The proportion of rejected negative feedback, however, is substantially higher than the proportion of negative feedback that has been published in the international literature [8,10,14,17-32]. Analysis of the published feedback for a sample of Swiss physicians also reported that only 4.3% (10/234) of the feedback published on Medicosearch was negative and that the average rating was 4.68 out of 5. It is unclear why there is such a large discrepancy between published and rejected negative feedback. It has previously been suggested that Switzerland’s restrictive legal framework regarding data protection may have a substantial impact on the types of online patient feedback that are published [18]. However, it may also be that PRWs such as Medicosearch are themselves deciding not to publish much of the negative feedback they receive owing to conflicts of interest. Medicosearch has shifted its business strategy toward online appointments, where physicians pay a fee and their booking systems are integrated with Medicosearch, allowing patients to book an appointment with a physician directly on Medicosearch. Consequently, Medicosearch is likely to be reluctant to upset paying physicians by publishing too much negative feedback, as its business now relies on physicians using its online appointment system.

Users of PRWs can also manipulate online patient feedback, and there is some indication that physicians or practice staff sometimes pose as patients on PRWs to post either positive comments about themselves or negative comments about competitors [33]. Indeed, 25% (588/2352) of the rejected feedback was positive, and it is possible that some of these ratings were rejected because Medicosearch suspected that they were fake reviews. However, without a clear and consistent way to determine which feedback is rejected, there is a danger that feedback will be inappropriately rejected.

With regard to the contents of narrative comments, the most frequently mentioned issues identified from the rejected feedback included (1) satisfaction with treatment; (2) the overall assessment of the physician; (3) recommending the physician; (4) the physician’s communication; (5) the physician’s caring attitude; and (6) the physician’s friendliness. In comparison, the top five mentioned issues identified in the analysis of the published feedback for a sample of Swiss physicians were (1) the overall assessment of the physician and the physician’s competence; (2) the physician’s communication; (3) recommending the physician; (4) the physician’s friendliness; and (5) the physician’s caring attitude [19]. This suggests that online patient feedback raises similar issues, regardless of whether it is published or rejected. Indeed, as in the analysis of the published feedback for a sample of Swiss physicians [19], it is important to recognize that 95% (42/44) of the subcategories of the categorization framework and 81.5% (3101/3804) of the distinct issues identified concerned aspects of performance (interpersonal skills of the physician and staff, infrastructure, and organization and management of the practice) that are considered assessable by patients.

If online patient feedback is going to be collected, there need to be clear policies and practices governing how it is handled. It cannot be left to the whims of PRWs, which may have financial incentives to suppress negative feedback, to decide which feedback is or is not published online. It has previously been recommended that “there is a need for consensus-based criteria that applies to all Swiss PRWs for determining which comments are and are not to be publicly published and which are clearly publicized so users of PRWs are aware of it” [19]. This analysis of 2352 rejected feedback items from Medicosearch further highlights the need for such consensus-based criteria. To support this, further research is needed to examine how many Swiss PRWs use criteria for determining which feedback is published, what those criteria are, and what measures PRWs are using to address the manipulation of online patient feedback. Research examining these issues would likely be helpful in most countries that have PRWs.

Limitations

This study has some limitations that should be taken into account when interpreting the results. First, it is unknown how much patient feedback Medicosearch received in total during the period covered. The author contacted Medicosearch asking for this information but never received a response, and the information is not freely available on the website. It would be helpful to know the proportion of patient feedback that is being rejected. Second, the sample of rejected feedback was taken from only one Swiss PRW. Although Medicosearch is one of the oldest and most used Swiss PRWs, it is unclear how generalizable the results are to other PRWs and other countries. Future research examining whether PRWs are using criteria for determining which feedback is published should include all Swiss PRWs. Third, the specialty and sociodemographic information of the rated physicians are unknown, and there may be important differences between specialties and physicians. Finally, the sociodemographic information of the rating patients is unknown, and the raters may not be representative of Swiss patients in general.

Acknowledgments

This work was funded by the Swiss Academy of Medical Sciences’ Käthe-Zingg-Schwichtenberg-Fonds, which had no role in the project design; in the collection, analysis, or interpretation of data; in the writing of the article; or in the decision to submit the paper for publication.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Categorization of issues by language.

DOCX File, 25 KB

  1. Institute of Medicine. Best care at lower cost: The path to continuously learning health care in America. Washington, DC: The National Academies Press; 2013.
  2. Paterson R. The Good Doctor: What Patients Want. Auckland, New Zealand: Auckland University Press; 2010.
  3. Faber M, Bosch M, Wollersheim H, Leatherman S, Grol R. Public reporting in health care: how do consumers use quality-of-care information? A systematic review. Med Care 2009 Jan;47(1):1-8. [CrossRef] [Medline]
  4. Marshall MN, Shekelle PG, Leatherman S, Brook RH. The public release of performance data: what do we expect to gain? A review of the evidence. JAMA 2000 Apr 12;283(14):1866-1874. [CrossRef] [Medline]
  5. Fung CH, Lim Y, Mattke S, Damberg C, Shekelle PG. Systematic review: the evidence that publishing patient care performance data improves quality of care. Ann Intern Med 2008 Jan 15;148(2):111-123. [CrossRef] [Medline]
  6. Berger ZD, Joy SM, Hutfless S, Bridges JF. Can public reporting impact patient outcomes and disparities? A systematic review. Patient Educ Couns 2013 Dec;93(3):480-487. [CrossRef] [Medline]
  7. Emmert M, Sander U, Esslinger AS, Maryschok M, Schöffski O. Public reporting in Germany: the content of physician rating websites. Methods Inf Med 2012;51(2):112-120. [CrossRef] [Medline]
  8. Emmert M, Meier F. An analysis of online evaluations on a physician rating website: evidence from a German public reporting instrument. J Med Internet Res 2013 Aug 06;15(8):e157 [FREE Full text] [CrossRef] [Medline]
  9. Emmert M, Sander U, Pisch F. Eight questions about physician-rating websites: a systematic review. J Med Internet Res 2013 Feb 01;15(2):e24 [FREE Full text] [CrossRef] [Medline]
  10. Emmert M, Meier F, Heider A, Dürr C, Sander U. What do patients say about their physicians? an analysis of 3000 narrative comments posted on a German physician rating website. Health Policy 2014 Oct;118(1):66-73. [CrossRef] [Medline]
  11. Hennig-Thurau T, Gwinner KP, Walsh G, Gremler DD. Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the Internet? Journal of Interactive Marketing 2004 Jan;18(1):38-52. [CrossRef]
  12. Kamel Boulos MN, Wheeler S. The emerging Web 2.0 social software: an enabling suite of sociable technologies in health and health care education. Health Info Libr J 2007 Mar;24(1):2-23 [FREE Full text] [CrossRef] [Medline]
  13. Terlutter R, Bidmon S, Röttl J. Who uses physician-rating websites? Differences in sociodemographic variables, psychographic variables, and health status of users and nonusers of physician-rating websites. J Med Internet Res 2014 Mar 31;16(3):e97 [FREE Full text] [CrossRef] [Medline]
  14. Lagu T, Hannon NS, Rothberg MB, Lindenauer PK. Patients' evaluations of health care providers in the era of social networking: an analysis of physician-rating websites. J Gen Intern Med 2010 Sep;25(9):942-946 [FREE Full text] [CrossRef] [Medline]
  15. Eysenbach G. The impact of the Internet on cancer outcomes. CA Cancer J Clin 2003;53(6):356-371 [FREE Full text] [CrossRef] [Medline]
  16. Rothenfluh F, Schulz PJ. Content, Quality, and Assessment Tools of Physician-Rating Websites in 12 Countries: Quantitative Analysis. J Med Internet Res 2018 Jun 14;20(6):e212 [FREE Full text] [CrossRef] [Medline]
  17. López A, Detz A, Ratanawongsa N, Sarkar U. What patients say about their doctors online: a qualitative content analysis. J Gen Intern Med 2012 Jun;27(6):685-692 [FREE Full text] [CrossRef] [Medline]
  18. McLennan S. Quantitative Ratings and Narrative Comments on Swiss Physician Rating Websites: Frequency Analysis. J Med Internet Res 2019 Jul 26;21(7):e13816 [FREE Full text] [CrossRef] [Medline]
  19. McLennan S. The Content and Nature of Narrative Comments on Swiss Physician Rating Websites: Analysis of 849 Comments. J Med Internet Res 2019 Sep 30;21(9):e14336 [FREE Full text] [CrossRef] [Medline]
  20. Emmert M, Meier F, Pisch F, Sander U. Physician choice making and characteristics associated with using physician-rating websites: cross-sectional study. J Med Internet Res 2013 Aug 28;15(8):e187 [FREE Full text] [CrossRef] [Medline]
  21. Black E, Thompson L, Saliba H, Dawson K, Black NM. An analysis of healthcare providers' online ratings. Inform Prim Care 2009;17(4):249-253 [FREE Full text] [CrossRef] [Medline]
  22. Kadry B, Chu LF, Kadry B, Gammas D, Macario A. Analysis of 4999 online physician ratings indicates that most patients give physicians a favorable rating. J Med Internet Res 2011 Nov 16;13(4):e95 [FREE Full text] [CrossRef] [Medline]
  23. Gao GG, McCullough JS, Agarwal R, Jha AK. A changing landscape of physician quality reporting: analysis of patients' online ratings of their physicians over a 5-year period. J Med Internet Res 2012 Feb 24;14(1):e38 [FREE Full text] [CrossRef] [Medline]
  24. Ellimoottil C, Hart A, Greco K, Quek ML, Farooq A. Online reviews of 500 urologists. J Urol 2013 Jun;189(6):2269-2273. [CrossRef] [Medline]
  25. Sobin L, Goyal P. Trends of online ratings of otolaryngologists: what do your patients really think of you? JAMA Otolaryngol Head Neck Surg 2014 Jul;140(7):635-638. [CrossRef] [Medline]
  26. Murphy GP, Awad MA, Osterberg EC, Gaither TW, Chumnarnsongkhroh T, Washington SL, et al. Web-Based Physician Ratings for California Physicians on Probation. J Med Internet Res 2017 Aug 22;19(8):e254 [FREE Full text] [CrossRef] [Medline]
  27. Strech D, Reimann S. [German language physician rating sites]. Gesundheitswesen 2012 Aug;74(8-9):e61-e67. [CrossRef] [Medline]
  28. Emmert M, Gerstner B, Sander U, Wambach V. Eine Bestandsaufnahme von Bewertungen auf Arztbewertungsportalen am Beispiel des Nürnberger Gesundheitsnetzes Qualität und Effizienz (QuE). Gesundh ökon Qual manag 2013 Jul 9;19(04):161-167. [CrossRef]
  29. McLennan S, Strech D, Reimann S. Developments in the Frequency of Ratings and Evaluation Tendencies: A Review of German Physician Rating Websites. J Med Internet Res 2017 Aug 25;19(8):e299 [FREE Full text] [CrossRef] [Medline]
  30. Liu JJ, Matelski JJ, Bell CM. Scope, Breadth, and Differences in Online Physician Ratings Related to Geography, Specialty, and Year: Observational Retrospective Study. J Med Internet Res 2018 Mar 07;20(3):e76 [FREE Full text] [CrossRef] [Medline]
  31. Greaves F, Pape UJ, Lee H, Smith DM, Darzi A, Majeed A, et al. Patients' ratings of family physician practices on the internet: usage and associations with conventional measures of quality in the English National Health Service. J Med Internet Res 2012 Oct 17;14(5):e146 [FREE Full text] [CrossRef] [Medline]
  32. Lagu T, Goff SL, Hannon NS, Shatz A, Lindenauer PK. A mixed-methods analysis of patient reviews of hospital care in England: implications for public reporting of health care quality data in the United States. Jt Comm J Qual Patient Saf 2013 Jan;39(1):7-15. [CrossRef] [Medline]
  33. Bodkin H. GPs are posing as patients and posting 'fake reviews' online, health chiefs reveal. The Telegraph. 2018. URL: https://www.telegraph.co.uk/news/2018/07/16/gps-posting-fake-reviews-online-health-chiefs-reveal/ [accessed 2019-06-28]


Abbreviations

PRW: physician rating website


Edited by G Eysenbach; submitted 22.02.20; peer-reviewed by E Brunson, P Schulz, F Kaliyadan; comments to author 15.04.20; revised version received 22.05.20; accepted 11.06.20; published 03.08.20

Copyright

©Stuart McLennan. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.08.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.