Published on 29.05.2020 in Vol 22, No 5 (2020): May

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/17002.
Exploring the Vast Choice of Question Prompt Lists Available to Health Consumers via Google: Environmental Scan

Original Paper

1. Ask, Share, Know: Rapid Evidence for General Practice Decisions Centre for Research Excellence, School of Public Health, The University of Sydney, NSW, Australia

2. Centre for Medical Psychology and Evidence-based Decision-making (CeMPED), The University of Sydney, NSW, Australia

*all authors contributed equally

Corresponding Author:

Marguerite Clare Tracy, MBBS (Hons), MPH, FRACGP

Ask, Share, Know: Rapid Evidence for General Practice Decisions Centre for Research Excellence

School of Public Health

The University of Sydney

Room 323A, Edward Ford Building A27

NSW, 2006

Australia

Phone: 61 293512942

Email: marguerite.tracy@sydney.edu.au


Background: There is increasing interest in shared decision making (SDM) in Australia. Question prompt lists (QPLs) support question asking by patients, a key part of SDM. QPLs have been studied in a variety of settings, and increasingly the internet provides a source of suggested questions for patients. Environmental scans have been shown to be useful in assessing the availability and quality of online SDM tools.

Objective: This study aimed to assess the number and readability of QPLs available to users via Google.com.au.

Methods: Our environmental scan used search terms derived from literature and reputable websites to search for QPLs available via Google.com.au. Following removal of duplicates from the 4000 URLs and 22 reputable sites, inclusion and exclusion criteria were applied to create a list of unique QPLs. A sample of 20 QPLs was further assessed for list length, proxy measures of quality such as a date of review, and evidence of doctor endorsement. Readability of the sample QPL instructions and QPLs themselves was assessed using Flesch Reading Ease and Flesch-Kincaid Grade Level scores.

Results: Our environmental scan identified 173 unique QPLs available to users. Lists ranged in length from 1 question to >200 questions. Of our sample, 50% (10/20) had a listed date of creation or update, and 60% (12/20) had evidence of authorship or source. Flesch-Kincaid Grade Level scores for instructions were higher than for the QPLs (grades 10.3 and 7.7, respectively). There was more than a 1-grade difference between QPLs from reputable sites and those from other sites (grades 4.2 and 5.4, respectively).

Conclusions: People seeking questions to ask their doctor using Google.com.au encounter a vast number of question lists that they can use to prepare for consultations with their doctors. Markers of the quality or usefulness of various types of online QPLs, either surrogate or direct, have not yet been established, which makes it difficult to assess the value of the abundance of lists. Doctor endorsement of question asking has previously been shown to be an important factor in the effectiveness of QPLs, but information regarding this is not readily available online. Whether these diverse QPLs are endorsed by medical practitioners warrants further investigation.

J Med Internet Res 2020;22(5):e17002

doi:10.2196/17002


Introduction

The role of shared decision making (SDM) as a part of patient-centered care in clinical consultations is being increasingly recognized as having positive outcomes for patients [1]. Internationally, there are efforts to make SDM a part of routine health care [2]. The National Safety and Quality Health Service Standards (Second Edition) developed by the Australian Commission on Safety and Quality in Health Care include a statement that “Integral to the process is encouraging patients to be more involved and ask their doctor more questions during consultations” [3]. To facilitate patient question asking as a part of SDM, question prompt list (QPL) tools have been developed; some have been evaluated and published in the peer-reviewed literature, and some are available via the internet [4,5].

Increasingly, people are turning to the internet for health information [6]. Search engines are the predominant tool used by people to search for health information online [7]. In 2018, 78% of Australian adults reported using the internet to find health-related information [8]. Online QPLs are being used, and those using online question lists have been shown to prefer questions that specifically support SDM [9]. Physician endorsement has been found to be key to the successful implementation of QPLs into practice [4]. Similarly, a dominant factor in the effect of patients’ online health information seeking on the doctor-patient relationship is the doctor’s willingness to discuss the information [10].

With an increasing focus on question asking in consultations, many websites include lists of questions for someone to take to their doctor as an additional resource [11-13]. While there has been some research into the implementation of QPLs into practice, there has not been an assessment of the prevalence of such lists available online [5]. Given that online material can be created and hosted by anyone and there is no regulation of the quality of information or available tools, a study of these QPLs is warranted [14]. Assessing the readability of QPL resources, as well as their prevalence, is important in assessing their usefulness.

Adequate literacy and health literacy of patients are important in the use of tools to support question asking and SDM [15]. In their systematic review published in 2012, Sørensen et al [16] identified several competencies of health literacy around having the ability to access, understand, and use information to make decisions about health. Supporting people with lower health literacy by making such tools readable and accessible is one way to increase access to SDM [15]. Several standards exist for the readability of patient information and other SDM tools such as decision aids, to increase the accessibility of materials for target audiences [17].

While standards exist for assessing the quality of patient information resources, such as the adaptation of DISCERN for internet information [18,19], there is ongoing difficulty in assessing the quality of online medical information and resources as well as how users perceive and use that information [20-23]. In addition, there are factors other than the information itself that influence information preferences, such as domain bias [24] and webpage design [21]. For example, the extension “.com” is used for commercial sites, while “.edu,” “.gov,” and others denote non-commercial or government sites. Domain extensions have some bearing on how users view the information provided and their trust in the source [24,25].

Environmental scan processes have previously been used to assess available online decision aids and risk calculators [26,27]. They allow a real-time snapshot of the availability of online resources available to users. The aim of this study was to conduct an environmental scan to describe and assess the number and readability of QPLs readily available to health consumers in the online environment. 


Methods

Overview

We used a previously published methodology to conduct an environmental scan of Google.com.au for question lists relating to patient-doctor clinical interactions [27]. Google.com.au was chosen as it is the most frequently used search engine in Australia (94.11% market share as of December 2018) [28]. To assess the number of question lists available to health care consumers online, 2 stages were needed. A third stage involved the assessment of the readability of a sample of QPLs and their instructions.

Stage 1: Choice of Reputable Websites and Search Term Development

The authors have expertise and backgrounds in nursing and medicine, and we used our knowledge of organizations and websites used by health professionals and consumers to identify a range of reputable organizations’ websites reflecting the clinical areas in which QPLs have been studied [4]. The aim was to ensure that selected sites included both disease-specific lists (eg, cancer websites, parent information about their child’s attention deficit hyperactivity disorder) as well as sites with more generic lists, such as those with consumer health information. We also aimed to ensure a mix of local (Australian) and international organizations with patient-focused information. The final list of 22 URLs for these organizations was decided by consensus between the authors (Table 1). These websites were accessed via the URLs to confirm that they referred to, or included, QPLs.

Using previous systematic reviews of QPLs and citation snowballing, we found 11 terms in the published literature that have been used to describe patient question lists [4,29]. In addition, the reputable organization list websites were accessed to find the language and terms used to describe QPLs on these sites; there were a further 9 terms found. Using these 2 sources, we had a total of 20 search terms for use in Stage 2 of the scan (Table 2).

Table 1. URLs of the 22 selected reputable websites.

Consumer-directed organizations
  Healthdirect Australia Ltd: healthdirect.gov.au
  Consumers Health Forum: chf.org.au

Information about medicine or prescribing
  National Prescribing Service – Choosing Wisely: choosingwisely.org.au/home
  National Prescribing Service: nps.org.au

Government entities
  Department of Health: health.gov.au
  Health Canada: canada.ca/en/health-canada
  NHS^a United Kingdom: nhs.uk
  Therapeutic Goods Administration: tga.gov.au
  NHS Networks (Ask 3 questions): personcentredcare.health.org.uk/resources/ask-3-questions-materials
  Institute for Healthcare Improvement (Ask Me 3): ihi.org/resources/Pages/Tools/Ask-Me-3-Good-Questions-for-Your-Good-Health.aspx

Disease-specific organizations
  Cancer Council Australia: cancer.org.au
  Cancer Australia: canceraustralia.gov.au
  Raising Children Network: raisingchildren.net.au
  ADHD^b Australia: adhdaustralia.org.au
  Breast Cancer Network Australia: bcna.org.au
  Family Planning NSW^c: fpnsw.org.au

Pediatric organizations
  Royal Children’s Hospital (RCH) Melbourne: rch.org.au
  Sydney Children’s Hospital Network: schn.health.nsw.gov.au
  HealthyChildren.org (American Academy of Pediatrics): healthychildren.org

Quality and safety organizations
  Australian Commission for Safety and Quality in Health Care: safetyandquality.gov.au
  Agency for Healthcare Research and Quality: ahrq.gov
  Wiser Healthcare: wiserhealthcare.org.au

^a NHS: National Health Service.

^b ADHD: attention deficit hyperactivity disorder.

^c NSW: New South Wales.

Table 2. Search terms derived from reputable sites and literature.

Search term (source)
  Ask health professional questions (Wiser Healthcare)
  Ask for information (NPS^a MedicineWise)
  Questions to ask your doctor (Choosing Wisely, Cancer Council, Healthdirect)
  Questions to ask (Family Planning NSW^b)
  Asking questions (Choosing Wisely)
  Ask your health professional (NPS MedicineWise)
  Patient ask questions (Scottish Health Council, Ask Me 3 search)
  Patients ask provider (Ask Me 3)
  Asking (these 3) questions during appointment (Ask 3 questions)
  Question prompt list (Literature)
  Question prompt sheet (Literature)
  Patient question prompt list (Literature)
  Patient question prompt sheet (Literature)
  Question sheet (Literature)
  Question list (Literature)
  Patient question aid (Literature)
  Shared decision-making tool (Literature)
  Patient agenda form (Literature)
  Patient agenda list (Literature)
  Patient question asking support tool (Literature)

^a NPS: National Prescribing Service.

^b NSW: New South Wales.

Stage 2: Search Strategy 

A systematic search of Google.com.au using each of the 20 search terms was then conducted by 2 independent researchers (MT and PP), one term at a time, after clearing the browser cache. The first 100 URL results for each term searched were downloaded to Excel spreadsheets and included in the first round. Users of Google show a strong bias toward the order of results presented [30], and few users look for a result beyond the first results page [31], which displays only 10 results under Google’s default settings. We assessed the first 100 results for each term because our aim was to capture the breadth of lists available via our search terms.
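
The searches in this study were run manually in the browser. As a purely illustrative sketch of how the “first 100 results per term” collection step could be approximated programmatically, the code below uses Google’s Custom Search JSON API; this is our illustration rather than the study’s method, results will not match interactive Google.com.au exactly, and API_KEY and ENGINE_ID are placeholders.

```python
# Illustrative only: the study ran its searches manually on Google.com.au
# after clearing the browser cache. This sketch approximates the
# "first 100 results per term" step with Google's Custom Search JSON API
# (results will not match interactive Google exactly).
import requests

API_KEY = "YOUR_API_KEY"      # placeholder credential
ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder custom search engine ID

def first_100_urls(term: str) -> list[str]:
    """Collect up to 100 result URLs for one search term, 10 per request."""
    urls = []
    for start in range(1, 100, 10):  # the API returns at most 10 items per call
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": ENGINE_ID, "q": term, "start": start},
            timeout=30,
        )
        resp.raise_for_status()
        urls.extend(item["link"] for item in resp.json().get("items", []))
    return urls

# One spreadsheet-like set of rows per search term
results = {term: first_100_urls(term) for term in ["questions to ask your doctor"]}
```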

Inclusion Criteria

Websites, accessed via search URLs, included in the evaluation needed to meet the following criteria: provide a list of questions; question lists were described as for use by patients, carers, or parents in medical consultations (general practitioner or specialist medical consultations); lists were freely accessible (without registration or requiring payment); lists were written in English; and lists were visible as part of the website, not requiring downloading to be viewed, such as video files.

Exclusion Criteria

Websites or lists were excluded if: the question list stated it was for doctor’s use; the use of the question list required additional supporting software, such as third-party document viewers; the website required registration or incurred a cost to access the question list; the list was on a sponsored site, such as a paid site that appears before the search results; the list required downloading to be viewed, such as video files; the list was provided in an academic paper, unless the journal was aimed at consumers and the list was visible; or the question list was not focused on general practice and specialist consultations (eg, counselling for entry into clinical trials). URLs blocked by the university security software system were also excluded, as they were deemed a potential threat.

The inclusion and exclusion criteria were then applied to the data by opening each of the URL links, again by MT and PP. Discrepancies regarding inclusion were resolved using a third reviewer (HS). Where there were duplications of lists, such as where the question list was identified on the website as sourced from another site or the webpage linked to a list already included in the review, these sites were then excluded to ascertain the final number of unique lists.
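
Deduplication in the study was performed by reviewers opening each URL and tracing sourced or linked lists. As a purely illustrative sketch, URL-level deduplication of the downloaded results could be approximated in code as follows; the input file name is hypothetical, and identifying the same list republished under different URLs still requires manual review.

```python
# Illustrative sketch only: the study's screening and deduplication were
# performed manually by two reviewers. This shows one way exact and
# near-duplicate URLs could be collapsed automatically.
from urllib.parse import urlsplit

def normalize(url: str) -> str:
    """Reduce a URL to a comparable form: lowercase host without 'www.',
    no scheme, no trailing slash, no query string or fragment."""
    parts = urlsplit(url.strip())
    host = parts.netloc.lower().removeprefix("www.")
    return host + parts.path.rstrip("/")

def unique_lists(urls: list[str]) -> list[str]:
    """Return URLs with duplicates (same host + path) removed,
    preserving the order in which they were first seen."""
    seen, unique = set(), []
    for url in urls:
        key = normalize(url)
        if key not in seen:
            seen.add(key)
            unique.append(url)
    return unique

with open("search_results.txt") as f:  # hypothetical export of the 4000 URLs
    candidates = [line.strip() for line in f if line.strip()]

print(f"{len(unique_lists(candidates))} unique URLs")
```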

Further data about the URLs, websites, and lists were also collected. All URLs were assessed for their domain extension (ie, .edu, .com, .org). A sample of 20 lists was collated: 10 URLs from the reputable organizations and, for comparison, a further 10 lists from other sites, selected by generating random numbers and matching them to URLs in the Excel spreadsheet. The sample lists were assessed for the date of creation, review, or update, if available (the copyright date for the website was not considered an indication that the website material had been reviewed); any evidence of review of the page, such as the name of a writer or reviewer; and the number of questions in the lists.
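
The domain-extension tally and the random comparison sample lend themselves to a short script. The sketch below is illustrative only (the study used random number generation against an Excel spreadsheet); the variable names and example URLs are hypothetical stand-ins, and the rule that a generic label anywhere in the host wins (eg, cancer.org.au counting as .org) is our assumption made to mirror the reported categories.

```python
# Illustrative sketch, not the study's code: tally URL domain extensions
# and draw a random sample of comparison lists, as described above.
import random
from collections import Counter
from urllib.parse import urlsplit

GENERIC = {"org", "com", "gov", "edu", "net"}

def extension(url: str) -> str:
    """Classify by the first generic label found, scanning from the right
    (cancer.org.au -> .org); fall back to the country code (nhs.uk -> .uk)."""
    labels = urlsplit(url).netloc.lower().split(".")
    for label in reversed(labels):
        if label in GENERIC:
            return "." + label
    return "." + labels[-1]

# Hypothetical stand-ins for the 235 included URLs and the reputable subset
included_urls = [
    "https://www.healthdirect.gov.au/question-builder",
    "https://www.cancer.org.au/about-cancer",
    "https://www.nhs.uk",
]
reputable_urls = set(included_urls[:1])

counts = Counter(extension(u) for u in included_urls)
for ext, n in counts.most_common():
    print(f"{ext}: {n}/{len(included_urls)} ({n / len(included_urls):.1%})")

# Random sample of up to 10 lists from the non-reputable sites for review
others = [u for u in included_urls if u not in reputable_urls]
sample = random.sample(others, k=min(10, len(others)))
```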

Stage 3: Readability

To assess the readability of the QPLs and their instructions for use, we used the Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL) scores. The FRE is one measure of the complexity of a piece of text: a score between 0 and 100 calculated from the average sentence length and the average number of syllables per word in the text [32]. The higher the FRE score, the easier the text is to read. The FKGL uses the same data with different weightings, producing a result that equates to the number of years of schooling (in the United States) required to read the text [33]. We applied these formulae to the sample lists. Each of the 20 URLs was accessed again, and both the online instructions for use (or the available preamble to the question list) and the list of question(s) were copied into separate documents. The readability tool was then applied to the copied texts to calculate FKGL and FRE scores (Table 3).
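
For transparency about what these scores measure, the sketch below implements the published FRE and FKGL formulas [32,33]. It is a minimal illustration, not the readability tool used in the study: the regex-based syllable counter is a rough heuristic, so its output may differ slightly from dictionary-based tools.

```python
# Minimal sketch of the two readability formulas used in Stage 3.
# FRE  = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels, with a minimum of 1."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FRE, FKGL) for a piece of text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / sentences   # average sentence length
    asw = syllables / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * asl - 84.6 * asw
    fkgl = 0.39 * asl + 11.8 * asw - 15.59
    return round(fre, 1), round(fkgl, 1)

fre, fkgl = readability("What are my options? What are the benefits and harms?")
print(f"FRE {fre}, FKGL {fkgl}")
```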

Table 3. Description of reading ease scores, reproduced from Flesch [32].

Flesch Readability Score (reading ease), points: Description of style
  90-100: Very easy
  80-89: Easy
  70-79: Fairly easy
  60-69: Standard
  50-59: Fairly difficult
  30-49: Difficult
  0-29: Very confusing

Results

Following removal of duplicates from the 4000 URLs and 22 reputable sites and the application of the inclusion and exclusion criteria, there were a total of 235 lists. A further review of websites revealed 62 instances of list duplication (eg, links to reputable lists and multiple uses of a list within an organization). Using our search method, 173 unique lists were identified. See Figure 1 for the study diagram of the search. Fifteen websites had lists that were used by, referred to, or linked to from other websites; 9 of these were from our reputable website list and accounted for 46 links.

We noted a wide range in the number of questions in resources, from a single question to over 200 questions in a single resource. The most common URL domain extension was .org (115/235, 48.9%), followed by .com (63/235, 26.8%), .gov (24/235, 10.2%), .edu (11/235, 4.7%), and country code extensions such as .uk and .ca (18/235, 7.7%). More detailed analysis of the sample of 20 lists found that the number of questions ranged from 3 to 56 (mean 20.5 questions, mode 3 questions, median 18 questions; Table 4). Half (10/20, 50%) of the sample lists had a date of creation or review, with the range being 2002 to the date of the preliminary review, October 29, 2018. Evidence of authorship of the QPL was available for 80% of the reputable sample sites and 40% of the other sample sites.

Table 4 also shows the readability data scores for the 20 sample lists and their online instructions. The FRE scores were higher for lists from the reputable sites compared to other sites and for question lists compared to instructions. Similarly, the grade level required to read question lists was lower than for the instructions, with reputable sites also requiring lower grade levels for readability of instructions than for other sites.

Content was found to be easier to read on the reputable websites than on the other websites, with scores ranging from “very easy” to “fairly easy” for question lists and from “easy” to “difficult” for the instructions.

Figure 1. Search strategy and results.
Table 4. Sample list data: readability of instructions and question lists, number of questions, source/author information, and date of creation or review.

Organization or website | Instructions Flesch readability | Instructions FKGL^a | Question list Flesch readability | Question list FKGL | No. of questions | Evidence of source or author | Date of creation or review

Reputable sites
Wiser Healthcare | N/A^b | n/a | 85.5 | 3.0 | 9 | Yes^c | No
Cancer Australia | 72.4 | 7.7 | 77.2 | 4.7 | 51 | Yes^c | Yes^d
NPS^e Choosing Wisely | 76.5 | 7.5 | 70.6 | 6.0 | 10 | Yes | Yes^d
Healthdirect | 48.0 | 11.2 | 80.7 | 4.2 | 56 | Yes | Yes^d
NHS^f | 71.8 | 7.7 | 92.3 | 2.1 | 20 | No | Yes^g,h
Ask 3 Questions NHS | 85.5 | 4.8 | 98.1 | 2.1 | 3 | Yes^c | No
IHI^i Ask Me 3 | 64.6 | 9.4 | 100.0 | 0.5 | 3 | Yes | No
Ask Share Know | 78.8 | 5.5 | 75.4 | 5.2 | 3 | No | No^j
Ask 3 Questions Cardiff, UK | 78.2 | 4.8 | 71.1 | 5.4 | 3 | Yes^c | No
Cancer Council (Australia) | 58.7 | 8.7 | 77.4 | 4.6 | 34 | Yes | Yes^h
Average of all reputable sites | 70.5 | 7.7 | 82.8 | 4.2 | 19 | 80% (Yes) | 50% (Yes)

Other sites
Australian Thyroid Foundation | n/a | n/a | 70.1 | 6.3 | 8 | No | No^j
American Heart Association (Heart Failure) | 56.0 | 10.3 | 82.2 | 3.8 | 31 | No | No^j
HSS^k Orthopaedic Hospital | 54.7 | 11.6 | 64.1 | 7.5 | 50 | Yes | Yes^d
Association for Children’s Mental Health | 59.3 | 9.2 | 78.3 | 5.5 | 30 | No | No^j
Beyond Blue | 50.4 | 10.8 | 85.6 | 3.6 | 5 | No | No^j
HealthyWomen.org | 73.0 | 7.7 | 71.7 | 5.8 | 18 | Yes | Yes^d,h
Readers Digest | 56.6 | 10.5 | 84.9 | 3.5 | 12 | Yes | Yes^l
Psych Central | 53.7 | 10.5 | 63.0 | 7.5 | 18 | Yes^c | Yes^d,h
The Foundation for Peripheral Neuropathy | 58.2 | 9.8 | 64.0 | 6.8 | 23 | No | No^j
MS^m Trust UK | 67.2 | 7.8 | 85.6 | 3.5 | 27 | No | Yes^d
Average of other websites | 56.0 | 10.3 | 75.0 | 5.4 | 22 | 40% (Yes) | 50% (Yes)

Overall | 63.25 | 9.0 | 78.9 | 4.6 | 20.5 | 60% (Yes) | 50% (Yes)

^a FKGL: Flesch-Kincaid Grade Level.

^b There were no instructions preceding the QPL.

^c Reference.

^d Creation.

^e NPS: National Prescribing Service.

^f NHS: National Health Service.

^g Original.

^h Update.

^i IHI: Institute for Healthcare Improvement.

^j Site copyright date.

^k HSS: Hospital for Special Surgery.

^l Online publication date.

^m MS: multiple sclerosis.


Discussion

Principal Findings

This environmental scan of the internet for QPLs designed for patients to use to ask questions when seeing a doctor identified 173 unique lists across 235 websites. Of the lists found in our search, 15 had been used by other websites, with the majority (9/15) coming from our original list of reputable websites. The remaining duplicated lists were also from websites we regarded as reputable. In addition, there was an abundance of advice to users, from a wide variety of sources, about questions they could, or should, be asking at medical appointments.

We noted that question lists appeared in a wide variety of types of websites, both from the categories of our reputable websites, such as government health information websites (eg, Healthdirect Australia) and disease-specific websites (eg, Cancer Council Australia and America), and from news websites (eg, globalnews.ca), commercial sites and blogs (eg, yourgpsdoc.com), sites of charities (eg, thebraintumourcharity.org), and educational institution websites (eg, sydney.edu.au). URL domain extensions do not always reflect the type of institution publishing the site and materials, yet they influence the user perception of site information [23]. While the source of medical information has not been shown to correlate with the accuracy of medical information [23,34], we noted the majority of lists came from .org extensions, which are generally associated with non-commercial organizations.

While there are detailed and extensive guidelines for the development of decision aids [17], there are currently no accepted standards that apply to QPLs. Many indicators of the quality of patient health information websites have been used in the past, with few variables showing any correlation with information quality, as measured by the accuracy or currency of the information presented [34]. Display of creation and update dates, the age of the site, authorship, and URL extension have not been found useful in assessing sites. In these respects, our findings for these “quality” indicators were similar to those of other investigations of online patient materials. The sample QPL websites displayed creation or update dates at higher rates than those in previous studies of health information on the internet [23,34]. Display of some form of authorship or attribution of the sample materials was also higher than in similar studies of online health resources [23,34]. This is likely because at least half of the sample comprised sites we regarded as reputable.

In our environmental scan for online QPLs for use when attending a medical appointment, it was unclear in most cases whether doctors endorsed the questions for use in consultations. Prior research has shown that doctor endorsement significantly improves the use of QPLs by patients [4]. There were a few exceptions where lists were placed on the website of a particular practice, where it could be assumed that the doctors in that practice supported patients using the questions included in the lists. In general, however, we were not able to ascertain specific endorsement of the lists by medical practitioners. Further qualitative research to determine doctors’ views on implementing such a wide variety of QPL resources into practice is needed.

Our results show that there is room for improvement in the readability of instructions for using online QPLs. The International Patient Decision Aid Standards recommend a reading grade level of less than 8th grade for decision aids [35]. While all the QPLs in the sample we tested had a grade level that met this recommendation, the accompanying instructional text was often well above this level, especially in our “other” website category.

Strengths and Limitations

By utilizing the strengths of the environmental scan method, we were able to rapidly assess the online environment, where most Australians look for health information, for the presence of QPLs. While literature reviews have been able to show the benefits of QPLs in practice, no assessment has been made of the number and availability of similar tools in the real-world environment. The online environment changes rapidly, and this scan provides a survey at a point in time of what is available to users. Our search tool was Google.com.au; hence, a similar search strategy in other countries might yield different results. We also limited our search to sites in English.

We noted website updates occurring even as we reviewed the websites; if the scan were replicated in the future, the results might differ. Further assessment of the quality of the lists beyond readability would provide additional information. Some proxy measures were used to assess sites, even though the presence of, for example, a date and authorship has been shown to have little correlation with quality. It is not clear which, if any, of the QPLs we discovered have been evaluated for their efficacy in improving patient and doctor outcomes in consultations.

Google users are unlikely to search for question lists using the exact terms we used. However, we were not testing the utility of terms for finding QPLs; rather, the intent of our search and choice of search terms was to find as many as possible of the question lists available to users. Research into how users actually access, assess, and use these resources is warranted.

Conclusions

People seeking information on Google.com.au have a vast number of question lists available to them to use in consultations with their doctor. Surrogate markers of the quality or usefulness of various types of online QPLs have not yet been established, which makes it difficult to assess the value of the abundance of lists. Quality measures for QPLs should follow further assessment of usage and endorsement.

Physician endorsement has been shown to be an important factor in the usefulness of QPLs, but there is little information to suggest whether the QPLs found in this scan would be endorsed by physicians. Whether these diverse QPLs are endorsed by medical practitioners warrants further investigation.

Ensuring that online QPLs and the instructions for their intended use are accessible to patients is important. To improve usage of online QPLs, instructions should be as readable as the lists themselves, or at least accessible to the majority of people in the target groups for which the lists have been created.

Acknowledgments

This study was funded by a National Health and Medical Research Council Centre for Research Excellence grant and a Postgraduate Research Scholarship in the Evaluation of the Healthdirect Question Builder (SC2396).

Conflicts of Interest

None declared.

Multimedia Appendix 1

Source and URL for QPLs identified.

DOCX File , 31 KB

  1. Shay LA, Lafata JE. Where Is the Evidence? A Systematic Review of Shared Decision Making and Patient Outcomes. Med Decis Making 2014 Oct 28;35(1):114-131. [CrossRef]
  2. Härter M, Moumjid N, Cornuz J, Elwyn G, van der Weijden T. Shared decision making in 2017: International accomplishments in policy, research and implementation. Z Evid Fortbild Qual Gesundhwes 2017 Jun;123-124:1-5. [CrossRef]
  3. Australian Commission on Safety and Quality in Health Care. National Safety and Quality Health Service (NSQHS) Standards (second edition). Sydney: ACSQHC; 2017   URL: https://www.safetyandquality.gov.au/ [accessed 2019-11-14]
  4. Sansoni JE, Grootemaat P, Duncan C. Question Prompt Lists in health consultations: A review. Patient Educ Couns 2015 Jun 03;98(12):1454-1464. [CrossRef] [Medline]
  5. Shepherd HL, Barratt A, Jones A, Bateson D, Carey K, Trevena LJ, et al. Can consumers learn to ask three questions to improve shared decision making? A feasibility study of the ASK (AskShareKnow) Patient-Clinician Communication Model(®) intervention in a primary health-care setting. Health Expect 2016 Dec;19(5):1160-1168 [FREE Full text] [CrossRef] [Medline]
  6. Tan SS, Goonawardene N. Internet Health Information Seeking and the Patient-Physician Relationship: A Systematic Review. J Med Internet Res 2017 Jan 19;19(1):e9 [FREE Full text] [CrossRef] [Medline]
  7. Lee K, Hoti K, Hughes JD, Emmerton L. Dr Google and the consumer: a qualitative study exploring the navigational needs and online health information-seeking behaviors of consumers with chronic health conditions. J Med Internet Res 2014 Dec;16(12):e262 [FREE Full text] [CrossRef] [Medline]
  8. Australian Institute of Health and Welfare. Australian Government, 20 June. 2018. Australia's health 2018: in brief   URL: https://www.aihw.gov.au/reports/australias-health/australias-health-2018/contents/table-of-contents [accessed 2020-04-28]
  9. Tracy M, Shepherd H, Ivers R, Mann M, Chiappini L, Trevena L. What patients want to ask their doctors: Data analysis from Question Builder, an online question prompt list tool. Patient Educ Couns 2020 May;103(5):937-943 [FREE Full text] [CrossRef] [Medline]
  10. Tan SS, Goonawardene N. Internet Health Information Seeking and the Patient-Physician Relationship: A Systematic Review. J Med Internet Res 2017 Jan 19;19(1):e9 [FREE Full text] [CrossRef] [Medline]
  11. Bowel Cancer Australia. 2017 Oct 17. What I need to ask (updated 2014)   URL: https://www.bigbowel.com.au/what-i-need-to-ask [accessed 2017-10-17]
  12. Cancer Council Australia. 2016 Mar 09. Questions to ask your doctor (updated March 9 2016)   URL: http://www.cancer.org.au/about-cancer/after-a-diagnosis/questions-to-ask-your-doctor.html [accessed 2017-10-17]
  13. St Vincent's Health Australia. 2014. Questions to ask your doctor and health care team   URL: https:/​/svha.​org.au/​home/​our-care/​being-involved-in-your-care/​questions-to-ask-your-doctor-and-health-care-team [accessed 2017-10-17]
  14. Walsh K, Pryor T, Reynolds K, Walker J, Mobilizing Minds Research Group. Searching for answers: How well do depression websites answer the public's questions about treatment choices? Patient Educ Couns 2019 Jan;102(1):99-105 [FREE Full text] [CrossRef] [Medline]
  15. Muscat DM, Shepherd HL, Morony S, Smith SK, Dhillon HM, Trevena L, et al. Can adults with low literacy understand shared decision making questions? A qualitative investigation. Patient Educ Couns 2016 Nov;99(11):1796-1802. [CrossRef] [Medline]
  16. Sørensen K, Van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, et al. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health 2012 Jan 25;12:80 [FREE Full text] [CrossRef] [Medline]
  17. Elwyn G, O'Connor A, Stacey D, Volk R, Edwards A, Coulter A, et al. Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ 2006 Aug 26;333(7565):417 [FREE Full text] [CrossRef] [Medline]
  18. Charnock D, Shepperd S, Needham G, Gann R. DISCERN: an instrument for judging the quality of written consumer health information on treatment choices. J Epidemiol Community Health 1999 Feb;53(2):105-111 [FREE Full text] [Medline]
  19. Khazaal Y, Chatton A, Cochand S, Coquard O, Fernandez S, Khan R, et al. Brief DISCERN, six questions for the evaluation of evidence-based content of health-related websites. Patient Educ Couns 2009 Oct;77(1):33-37. [CrossRef] [Medline]
  20. Reavley N, Jorm A. The quality of mental disorder information websites: a review. Patient Educ Couns 2011 Nov;85(2):e16-e25 [FREE Full text] [CrossRef] [Medline]
  21. Harris PR, Sillence E, Briggs P. The Effect of Credibility-Related Design Cues on Responses to a Web-Based Message About the Breast Cancer Risks From Alcohol: Randomized Controlled Trial. J Med Internet Res 2009 Aug 25;11(3):e37. [CrossRef]
  22. Sillence E, Briggs P, Harris PR, Fishwick L. How do patients evaluate and make use of online health information? Soc Sci Med 2007 May;64(9):1853-1862. [CrossRef] [Medline]
  23. Bernstam EV, Walji MF, Sagaram S, Sagaram D, Johnson CW, Meric-Bernstam F. Commonly cited website quality criteria are not effective at identifying inaccurate online information about breast cancer. Cancer 2008 Mar 15;112(6):1206-1213 [FREE Full text] [CrossRef] [Medline]
  24. Ieong S, Mishra N, Sadikov E, Zhang L. Domain bias in web search. 2012 Presented at: International Conference on Web Search and Data Mining (WSDM); February 2012; Seattle, WA. [CrossRef]
  25. Aggarwal S, Oostendorp H, Reddy Y, Indurkhya B. Providing Web Credibility Assessment Support. ACM; 2014 Presented at: 2014 European Conference on Cognitive Ergonomics; 2014; Vienna, Austria p. 1-8. [CrossRef]
  26. Bonner C, Fajardo MA, Hui S, Stubbs R, Trevena L. Clinical Validity, Understandability, and Actionability of Online Cardiovascular Disease Risk Calculators: Systematic Review. J Med Internet Res 2018 Feb 01;20(2):e29. [CrossRef]
  27. Fajardo MA, Balthazaar G, Zalums A, Trevena L, Bonner C. Favourable understandability, but poor actionability: An evaluation of online type 2 diabetes risk calculators. Patient Educ Couns 2019 Mar;102(3):467-473. [CrossRef] [Medline]
  28. StatCounter. 2018. Search Engine Market Share in Australia - December 2018   URL: http://gs.statcounter.com/search-engine-market-share/all/australia/2016 [accessed 2019-01-08]
  29. Kinnersley P, Edwards A, Hood K, Cadbury N, Ryan R, Prout H, et al. Interventions before consultations for helping patients address their information needs. Cochrane Database Syst Rev 2007 Jul 18(3):CD004565-CD004568 [FREE Full text] [CrossRef] [Medline]
  30. Kim J, Thomas P, Sankaranarayana R, Gedeon T, Yoon H. Eye-tracking analysis of user behavior and performance in web search on large and small screens. J Assn Inf Sci Tec 2014 Jun 11;66(3):526-544. [CrossRef]
  31. Singer G, Norbisrath U, Lewandowski D. Ordinary search engine users carrying out complex search tasks. Journal of Information Science 2012 Dec 17;39(3):346-358. [CrossRef]
  32. Flesch R. A new readability yardstick. J Appl Psychol 1948 Jun;32(3):221-233. [CrossRef] [Medline]
  33. Kincaid JP, Fishburne RP, Rogers RL, Chissom BS. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Institute for Simulation and Training 1975;56:1-49.
  34. Buhi ER, Daley EM, Oberne A, Smith SA, Schneider T, Fuhrmann HJ. Quality and accuracy of sexual health information web sites visited by young people. J Adolesc Health 2010 Aug;47(2):206-208. [CrossRef] [Medline]
  35. International Patient Decision Aid Standards (IPDAS) Collaboration. IPDAS 2005: Checklist for judging the quality of patient decision aids   URL: http://www.ipdas.ohri.ca/using.html [accessed 2017-10-17]


ADHD: attention deficit hyperactivity disorder.
FKGL: Flesch-Kincaid Grade Level.
FRE: Flesch Reading Ease.
HSS: Hospital for Special Surgery.
IHI: Institute for Healthcare Improvement.
MS: multiple sclerosis.
NHS: National Health Service.
NPS: National Prescribing Service.
NSW: New South Wales.
QPL: question prompt list.
SDM: shared decision making.


Edited by G Eysenbach; submitted 11.11.19; peer-reviewed by Z Michaleff, T Wieringa; comments to author 16.12.19; revised version received 13.02.20; accepted 23.03.20; published 29.05.20

Copyright

©Marguerite Clare Tracy, Heather L Shepherd, Pinika Patel, Lyndal Jane Trevena. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.05.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.