Published on 17.01.2020 in Vol 22, No 1 (2020): January

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/15415.
Tools to Assess the Trustworthiness of Evidence-Based Point-of-Care Information for Health Care Professionals: Systematic Review


Review

1Belgian Centre for Evidence Based Medicine (CEBAM), Leuven, Belgium

2Department of Public Health and Primary Care, Katholieke Universiteit Leuven, Leuven, Belgium

3Artevelde Hogeschool, Ghent University Association, Ghent, Belgium

4Federation of the White and Yellow Cross of Flanders, Brussels, Belgium

5Belgian Health Care Knowledge Centre, Brussels, Belgium

Corresponding Author:

Gerlinde Lenaerts, MSc, PhD

Belgian Centre for Evidence Based Medicine (CEBAM)

Kapucijnenvoer 33

blok J bus 7001

Leuven, 3000

Belgium

Phone: 32 16377273

Email: gerlinde.lenaerts@cebam.be


Background: User-friendly information at the point of care should be well structured, rapidly accessible, and comprehensive. Also, this information should be trustworthy, as it will be used by health care practitioners to practice evidence-based medicine. Therefore, a standard, validated tool to evaluate the trustworthiness of such point-of-care information resources is needed.

Objective: This systematic review aimed to identify tools to assess the trustworthiness of point-of-care resources and to describe and analyze the content of these tools.

Methods: A systematic search was performed using three sources: (1) we searched online for initiatives that worked on the trustworthiness of medical information; (2) we searched Medline (PubMed) until June 2019 for relevant literature; and (3) we scanned reference lists and lists of citing papers via Web of Science for each retrieved paper. We included all studies, reports, websites, or methodologies that reported on tools that assessed the trustworthiness of medical information for professionals. From the selected studies, we extracted information on the general characteristics of the tools. As no standard risk-of-bias assessment instruments are available for these types of studies, we described how each tool was developed, including any assessments of reliability and validity. We analyzed the criteria used in the different tools and divided them into five categories: (1) author-related information; (2) evidence-based methodology; (3) website quality; (4) website design and usability; and (5) website interactivity. The percentage of tools in compliance with these categories and the different criteria was calculated.

Results: A total of 17 tools, all published between 1997 and 2018, were included in this review. The tools were developed for different purposes, ranging from a general quality assessment of medical information to very detailed analyses specifically for point-of-care resources. However, the development process of the tools was poorly described. Overall, seven tools had a scoring system implemented, two were assessed for reliability only, and two other tools were assessed for both validity and reliability. The content analysis showed that all the tools assessed criteria related to an evidence-based methodology: 82% of the tools assessed author-related information, 71% assessed criteria related to website quality, 71% assessed criteria related to website design and usability, and 47% of the tools assessed criteria related to website interactivity. There was considerable variability in the criteria used, as some were very detailed while others were more broadly defined.

Conclusions: The 17 included tools encompass a variety of items important for the assessment of the trustworthiness of point-of-care information. Only two tools were assessed for both reliability and validity, and these lacked essential criteria for assessing the trustworthiness of medical information for use at the point of care. Currently, a standard, validated tool does not exist. The results of this review may contribute to the development of such an instrument, which may enhance the quality of point-of-care information in the long term.

Trial Registration: PROSPERO CRD42019122565; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=122565

J Med Internet Res 2020;22(1):e15415

doi:10.2196/15415

Introduction


Evidence-based medicine is one of the cornerstones of high-quality health care. This conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients [1] should be facilitated to ensure effective and efficient patient care. With a continuously increasing body of scientific evidence, it is not feasible for health care professionals to access and review the best evidence themselves regularly and independently. Furthermore, they have little time to process large quantities of information during their consultation with patients [2]. Therefore, health care professionals need good quality information that is also user-friendly. This type of information is labeled point-of-care information [3,4], and it is well-structured, rapidly accessible, and comprehensive information for use at the specific point in the workflow when health care professionals and patients interact [3].

Health care professionals routinely use clinical guidelines as reliable sources of information to support their clinical decision-making. Guidelines are statements that include recommendations intended to optimize patient care, informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options [5]. This combination of an assessment of the quality of evidence with a weighing of benefits and harms makes guidelines well suited to guide clinical decision-making. Furthermore, a validated instrument is available to assess the quality of guidelines [6]. This instrument, known as AGREE II (Appraisal of Guidelines for Research and Evaluation), was developed to assess the validity and trustworthiness of clinical guidelines and is now recognized as an international standard. The use of such an instrument enhances the quality of guidelines [7,8]; however, for many clinical problems or health care professions, no or only limited guidelines are available. In that case, one depends on other information sources. Thanks to the internet, a vast amount of information is accessible within a few mouse clicks, but identification of the most relevant information and assessment of its quality and transparency are indispensable when it is used in clinical practice. Although different instruments exist for assessing the methodological quality of systematic reviews [9-11] or individual studies [12,13], these instruments are not appropriate for evaluating the trustworthiness of point-of-care information. Banzi et al [3] reviewed online point-of-care information summary providers. They developed a tool to evaluate the breadth, content development, and editorial policy of these summaries against their claims of being “evidence-based.” However, this tool was never tested for validity and reliability.

We aimed to search for a valid tool to assess the trustworthiness of point-of-care information. To this end, we performed a systematic review to identify existing tools and examined their validity and reliability.


Methods

Overview

We performed a systematic review using the standards for systematic reviewing reported by Cochrane [14], and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used for the reporting of our findings [15]. The protocol of this review was registered at PROSPERO (CRD42019122565).

Search Strategy

To identify tools, we used three sources of information. First, we searched the internet for institutes or initiatives that worked on the trustworthiness of health information. Second, we searched Medline (via PubMed) for relevant literature. A search from the inception of the database to June 2019 was conducted to identify the studies of interest. A search string was built using the concepts of trustworthiness and point-of-care information (Textbox 1). Terms within a concept were combined using the Boolean operator ‘OR,’ and the concepts were then combined with the Boolean operator ‘AND’ (a sketch of the resulting query structure follows Textbox 1). Lastly, we scanned the reference lists and lists of all citing papers via Web of Science for each retrieved paper to identify additional tools that were not found in the previous searches.

Concept ‘trustworthiness.’

  • MeSH terms: methods; standards (subheading); health care evaluation mechanisms; evaluation studies as topic; health care quality, access, and evaluation; reproducibility of results
  • Free-text words: methodological quality, quality standards, evidence-based methodology, editorial quality, evaluation, validity, reliability

Concept ‘point-of-care.’

  • MeSH terms: health information systems; point-of-care systems; medical informatics; consumer health informatics
  • Free-text words: (web-based or electronic or online or internet) and health information; e-Health, e-Health information, point-of-care services, point-of-care information
Textbox 1. Concepts used to build the search string.
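
The sketch below, a minimal Python illustration, shows how such a query can be assembled: terms within a concept joined with OR, and the two concepts joined with AND. The term lists are abbreviated, and the [MeSH Terms] and [tiab] field tags are assumptions for illustration, not the exact syntax used in our search.

```python
# Sketch of how the two concepts from Textbox 1 combine into one
# PubMed query string. Term lists are abbreviated; the field tags
# are illustrative assumptions, not the review's exact syntax.

trustworthiness = [
    '"reproducibility of results"[MeSH Terms]',
    '"methodological quality"[tiab]',
    '"quality standards"[tiab]',
    '"editorial quality"[tiab]',
]

point_of_care = [
    '"point-of-care systems"[MeSH Terms]',
    '"health information systems"[MeSH Terms]',
    '"point-of-care information"[tiab]',
]

def or_join(terms):
    """Join the terms of one concept with the Boolean operator OR."""
    return "(" + " OR ".join(terms) + ")"

# Combine the two concepts with AND to form the final search string.
query = or_join(trustworthiness) + " AND " + or_join(point_of_care)
print(query)
```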

Inclusion and Exclusion Criteria

We included all studies, reports, websites, or methodologies that reported on tools, including checklists and criteria, to assess the trustworthiness of medical information for health care professionals. We used the following criteria:

  1. Tools had to evaluate point-of-care information or resources for professionals. Point-of-care information was defined as web-based medical compendia specifically designed to deliver predigested, rapidly accessible, comprehensive, periodically updated, and evidence-based information (and guidance) to clinicians [3]. We excluded tools to assess the quality of information for patients, as well as tools that assessed the quality of systematic reviews or primary studies.
  2. Tools had to evaluate trustworthiness. Trustworthiness represented features that made users trust the information, including methodological quality and editorial transparency. The tools that only assessed user-friendliness were excluded.
  3. Tools had to be published by multiple authors or an organization.
  4. Tools had to be freely available. In the case of websites that contained multiple tools, the tools were considered separately according to their methodology.
  5. Additionally, we excluded tools to assess the quality of mobile applications.

Selection of Articles and Web Pages

Tools were selected by two researchers (GB, GL) independently. The selection of journal articles was made in two steps: (1) all titles and abstracts were compared against the selection criteria; and (2) the full texts of potential eligible articles were retrieved and subsequently compared against the inclusion and exclusion criteria. The two researchers resolved discrepancies in selection by discussion and consensus.

Assessment of Methodological Quality

To date, there are no standards to assess the methodological quality of tools that determine the trustworthiness of point-of-care information. Therefore, we could not perform a standard risk-of-bias assessment on each tool. However, we checked each tool for potential risk of bias in the developmental phase, such as the lack of a validity or reliability assessment. We also extracted details on the development of the tools.

Data Extraction and Analysis

A data overview table was used to extract data from the available tools (Textbox 2). We noted the general characteristics and described the purpose for which a tool was developed and the criteria and scoring systems used. The data extraction was performed by one researcher (GL) and checked by a second researcher (GB). Discrepancies were identified and resolved through discussion. Based on the data overview table, we analyzed the similarities and differences between the characteristics of the tools used to assess the trustworthiness of point-of-care information. To examine possible overlap between tools, we listed all criteria, mapped them into general criteria, and divided those into five main categories: (1) author-related information; (2) evidence-based methodology; (3) website quality; (4) website design and usability; and (5) website interactivity (see Multimedia Appendix 1). Based on these categories and criteria, we described the characteristics of the tools using descriptive statistics; a sketch of this calculation follows Textbox 2.

Characteristics of the tool

  • Name
  • Aim
  • Developer
  • Is there a sum score of final combined judgment?
  • Description (items, elements)
  • Description (scoring method)
  • Remarks

Development of the tool

  • How was the tool developed? (descriptive)
  • For which purpose was the tool developed? (descriptive)
  • Was the tool assessed for validity? (Y/N)
  • Was the tool assessed for reliability? (Y/N)
  • Other relevant details (descriptive)
Textbox 2. Data overview table.
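
As a minimal illustration of the descriptive statistics used here, the Python sketch below computes the prevalence percentage for each of the five main categories from the counts reported in the Results section; the counts are hard-coded rather than derived from the full criteria mapping in Multimedia Appendix 1.

```python
# Minimal sketch of the descriptive analysis: for each main category,
# count how many of the 17 tools address at least one related
# criterion and express this as a percentage of all tools. The counts
# below are the figures reported in the Results section.

N_TOOLS = 17

# category -> number of tools addressing at least one related criterion
category_counts = {
    "author-related information": 14,
    "evidence-based methodology": 17,
    "website quality": 12,
    "website design and usability": 12,
    "website interactivity": 8,
}

for category, n in category_counts.items():
    pct = round(100 * n / N_TOOLS)  # eg, 14/17 -> 82%
    print(f"{category}: n={n} ({pct}%)")
```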

Results

Search Strategy

The flow chart shown in Figure 1 summarizes the results of the systematic search. After identification of relevant websites and titles and screening of abstracts and full texts for eligibility, 16 papers that reported on tools or criteria to assess the trustworthiness of point-of-care information used by health care professionals were included. One study reported on more than one tool [16]. Finally, 17 tools were included in this review.

Figure 1. Search strategy.

General Characteristics of the Tools

Table 1 provides an overview of the general characteristics of the included tools. All tools were developed between 1997 and 2018, and they originated from the United States, Canada, Europe, Switzerland, Singapore, and Iran. The Health on the Net (HON) code [17] and the electronic health (eHealth) Code of Ethics [18] are both codes of conduct and the result of international collaboration. They were developed based on discussion and consensus with international expert panels and underwent peer review. The HONcode aims to control the quality of health information on the internet and provides a quality seal for HONcode-certified websites [17]. The eHealth Code of Ethics provides guiding principles to understand the risk and potential of health information on the internet for professionals, producers, and consumers, and aims to contribute to high-quality information in this way [18].

The Silberg criteria [19], Kapoun criteria [20], Gillois criteria [21], Jiang criteria [22], and CART (Completeness, Accuracy, Relevance, Timeliness) [23] aimed to critically appraise and evaluate the quality, credibility, and appropriateness of health information on the internet. The development process of these tools was poorly described: the authors either relied on existing criteria for the quality assessment of information [20-23] or based the criteria on their own critical thinking process [19]. Likewise, the Sandvik scale [24], QUEST (Quality Evaluation Scoring Tool) [25], and the 11-Point Quality Assessment Scale [26] were developed for the same purpose, but they have a scoring system implemented. The Aslani criteria [16] are based on the HONcode, Silberg criteria, Kapoun criteria, Sandvik scale, and the Health Information Technology Institute (HITI) criteria; the HITI criteria themselves were excluded from this review because they are no longer available.

The AMA (American Medical Association) developed principles [27] as guidelines meant to apply to all AMA websites, but they were also intended to guide the creators of websites that provide medical information for professionals and consumers. The principles were developed and are regularly reviewed by AMA staff members and an external advisory panel of experts.

The Grid ULiège [28,29] and the Trumble tool [30] are the most comprehensive tools and were developed for the analysis and evaluation of medical websites. An Excel-based evaluation form allows the calculation of a final, weighted score based on 38 or 23 items, respectively. The Trumble tool was specifically developed to evaluate evidence-based medical tools at the point of care. The Banzi tool [3] aims to evaluate and score the breadth, content development, and editorial policy of point-of-care summaries against their claims of being evidence-based. Finally, OncoRx-IQ [31] is a tool developed for assessing the quality of information of online drug databases for anticancer drug interactions. The Banzi tool [3], the Trumble tool [30], and the 11-Point Quality Assessment Scale [26] are the only tools that specifically evaluate the evidence-based principles of online health information. The content of these tools was defined by researchers who arbitrarily postulated the criteria that, in their opinion, would best describe the quality of point-of-care information. Only 4/17 (24%) tools were assessed for reliability [21,25,26,31] and only 2/17 (12%) for validity [25,26] (Table 1).

Table 1. Characteristics of the tools.
Name of tool | Language | Date of publication | Country of origin | Number of items | Assessed for reliability or validity | Scoring system
Silberg criteria [19] | English | 1997 | United States | 4 | | 
HONcode^a [32] | English | 1998 | Based in Switzerland, international working group | 8 | | 
Kapoun criteria [20] | English | 1998 | United States | 5 | | 
Sandvik scale [24] | English | 1999 | Norway | 7 | | Yes
Gillois criteria [21] | English | 1999 | France | 9 | Reliability | 
Joubert criteria [33] | English | 1999 | France | 8 | | 
AMA^b principles [27] | English | 2000 | United States | 14 | | 
eHealth^c Code of Ethics [18] | English | 2000 | United States, international working group (WHO^d/PAHO^e) | 17 | | 
Jiang criteria [22] | English | 2000 | United States | 7 | | 
Grid ULiège [28,29] | English | 2003 | Belgium | 38 | | Yes
CART^f [23] | English | 2006 | United Kingdom | 4 | | 
Trumble tool [30] | English | 2006 | United Kingdom | 23 | | Yes
Banzi tool [3] | English | 2010 | Italy | 10 | | Yes
OncoRx-IQ [31] | English | 2010 | Singapore | 19 | Reliability | Yes
11-Point Quality Assessment Scale [26] | English | 2012 | Canada | 11 | Reliability and validity | Yes
Aslani criteria [16] | English | 2014 | Iran | 10 | | 
QUEST^g criteria [25] | English | 2018 | Canada | 6 | Reliability and validity | Yes

^a HON: Health On the Net.
^b AMA: American Medical Association.
^c eHealth: electronic health.
^d WHO: World Health Organization.
^e PAHO: Pan American Health Organization.
^f CART: Completeness, Accuracy, Relevance, Timeliness.
^g QUEST: Quality Evaluation Scoring Tool.

Content Analysis of the Tools

Multimedia Appendix 1 presents an overview of the 17 included tools with their criteria for the assessment of the trustworthiness of point-of-care information. Altogether, the tools cover 156 criteria. These were combined into 36 general criteria, mapped into five main categories: (1) author-related information, with 4 related criteria; (2) evidence-based methodology, with 15 related criteria; (3) website quality, with 8 related criteria; (4) website design and usability, with 7 related criteria; and (5) website interactivity, with 2 related criteria. Some criteria described in the tools were broad and covered more than one general criterion, whereas others were very detailed and were therefore summarized in a single general criterion. For a few tools [3,18,27,28,30,31], we excluded some of the criteria because they were inappropriate for the assessment of trustworthiness or not applicable in the current context.

Multimedia Appendix 2 presents the prevalence of criteria in the 17 included tools. Overall, 14 of the 17 tools (82%) addressed author-related information; only the Joubert criteria, CART, and the 11-Point Quality Assessment Scale did not. All 17 tools (100%) addressed one or more items in the category of evidence-based methodology. The criteria “references to source data” (n=11; 65%) and “content is current and actual” (n=15; 88%) were the most frequently assessed in this category. A total of 12 tools (71%) assessed criteria related to website quality. The most frequently assessed criteria in this category were “transparent ownership” (n=9; 53%) and “financial information” (n=9; 53%). Website design and usability were evaluated by 12 tools (71%). The criterion “ease of use and navigation” (n=11; 65%) was the most frequently used. Website interactivity refers to functions that allow contact or discussion with the authors or site owners; this category was addressed in 8 tools (47%).

Assessment of Reliability and Validity of Tools

The reliability and validity of the tools were scarcely reported. Interrater reliability was calculated with kappa coefficients [25,26] or Kendall coefficients [31], or as a percentage of agreement between two researchers [21]. QUEST was compared with three other criteria-based tools to assess convergent validity: the quality scores generated by each pair of tools were compared by calculating the Kendall tau rank correlation. The authors of the 11-Point Quality Assessment Scale stated that the tool had previously been validated, but no information on the validation process could be found.
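
As an illustration of these statistics, the Python sketch below computes Cohen kappa, percentage agreement, and the Kendall tau rank correlation with standard libraries (scikit-learn and SciPy); the ratings and scores are invented toy data, not values from the included studies.

```python
# Sketch of the reliability and validity statistics mentioned above.
from sklearn.metrics import cohen_kappa_score  # interrater reliability
from scipy.stats import kendalltau             # rank correlation

# Two raters scoring the same 8 items (toy data).
rater_a = [1, 0, 1, 1, 0, 1, 0, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1]

# Cohen's kappa: chance-corrected interrater agreement [25,26].
kappa = cohen_kappa_score(rater_a, rater_b)

# Simple percentage agreement between the two raters [21].
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Kendall tau: rank correlation between quality scores produced by two
# tools on the same set of websites, as in the QUEST convergent
# validity analysis [25] (toy data).
scores_tool_1 = [4, 7, 2, 9, 5, 6]
scores_tool_2 = [3, 8, 1, 9, 4, 7]
tau, p_value = kendalltau(scores_tool_1, scores_tool_2)

print(f"kappa={kappa:.2f}, agreement={agreement:.0%}, tau={tau:.2f}")
```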


Discussion

Primary Findings

This review studied 16 articles that reported on 17 tools for analyzing the trustworthiness of point-of-care information. Our main finding was that the trustworthiness of information is currently assessed and scored in different ways, as illustrated by substantial differences in the number of criteria and the content addressed by the tools. This reveals the need for consistency and completeness in evaluating the quality of health information resources. This review therefore extends the current literature by giving an overview of existing tools, including their criteria and general characteristics.

To assess the trustworthiness of health care information, we need tools that have themselves been assessed for reliability and validity. Only QUEST [25] and the 11-Point Quality Assessment Scale [26] were assessed for both reliability and validity. However, QUEST only encompasses criteria on author-related information and evidence-based methodology and is therefore too concise for the quality assessment of point-of-care information. The 11-Point Quality Assessment Scale was developed as a quality measure for online texts and covers criteria related to evidence-based methodology and usability. However, criteria for the assessment of author-related information and website quality, such as transparent ownership and financial disclosures, are missing, and the validation process was not described. The absence of reliable and validated tools is an essential finding of this review and a shortcoming in the field.

The criteria used in tools to assess the trustworthiness of medical information showed much variation. This variation became apparent when we tried to structure all the original criteria and reformulate them into general criteria. Some criteria overlapped with others while using slightly different terms, which illustrates the lack of uniformity and consistency in tools for the quality assessment of point-of-care information.

Back in 2001, Risk and Dzenowagis [34] highlighted the complexity of health information on the internet and analyzed the major quality initiatives. A set of quality criteria for health information and credible enforcement tools were named as essential elements for successful quality programs. Nowadays, the need for a uniform, validated tool for the quality assessment of point-of-care resources is still present. Currently, the quality of most point-of-care information is low [3,35]. Risk has suggested tool-based evaluation of quality and third-party certification of compliance as critical mechanisms for quality improvement [34]. A valid tool may improve the quality of point-of-care information, as was also reported for guidelines [8].

Content of the Tools

All the tools have criteria to assess the evidence-based methodology used to summarize the information. Some use only two criteria [16,19,27], while other tools have seven or more criteria in this category [3,18,26,28,29]. Perhaps the content of the items is more important than their number. For example, “reference to source data” and “content is current and actual” are frequently used, but these criteria alone do not guarantee that an information source is truly evidence-based. Remarkably, only a few criteria fit the first three steps of evidence-based medicine: asking a good question, finding the best evidence, and appraising the evidence [36]. For example, criteria such as “systematic reviews are preferred over primary studies,” “formal grading of evidence,” and “reporting of bias” are not standard and are not consistently addressed in the different tools.

A closer look at the weighted scoring systems implemented in the Trumble tool [30] and the Grid ULiège [28] reveals that criteria related to the evidence-based methodology are considered the most important, as they receive the highest weight factor. The Trumble tool gives equal weight to criteria related to usability and currency, while the Grid ULiège gives lower weight to criteria related to usability. Banzi et al [3] based their tool on criteria from research on systematic review reporting methods and the policies of peer-reviewed medical journals. The tool was developed to check point-of-care information against its claims of being evidence-based [3,4], which explains its focus on this topic. The eHealth Code of Ethics [18], the Banzi tool [3], and the 11-Point Quality Assessment Scale [26] focus on the evidence-based aspects but pay little or no attention to the category “website design and usability,” which relates to the point-of-care aspect of information. Health information sources that are difficult to navigate will likely be used less, since time constraints are an important barrier for health care professionals [2,37]. Therefore, the ideal tool should strike the right balance between evidence-based methodology and usability-related criteria.
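
A minimal sketch of such a weighted scoring scheme is shown below; the criteria, weights, and scores are illustrative assumptions, not the actual values used by the Trumble tool or the Grid ULiège.

```python
# Sketch of a weighted scoring scheme in the spirit of the Trumble
# tool and the Grid ULiège: each criterion score is multiplied by a
# weight, and evidence-based-methodology criteria carry the highest
# weights. Criteria, weights, and scores are illustrative assumptions.

# criterion -> (weight, score on a 0-2 scale)
ratings = {
    "formal grading of evidence": (3.0, 2),  # evidence-based methodology
    "references to source data":  (3.0, 1),  # evidence-based methodology
    "content is current":         (2.0, 2),  # currency
    "ease of use and navigation": (1.0, 1),  # usability, lowest weight
}

weighted = sum(weight * score for weight, score in ratings.values())
maximum = sum(weight * 2 for weight, _ in ratings.values())

# Final result expressed as a percentage of the maximum weighted score.
print(f"weighted score: {weighted:.1f}/{maximum:.1f} "
      f"({100 * weighted / maximum:.0f}%)")
```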

The Grid ULiège includes detailed criteria, but the descriptions are sometimes unclear and seem to contain overlapping items. Conversely, other tools [17-20,24,25,27,31] addressed multiple content aspects in only one criterion. Some tools were very concise in terms of the number of criteria [19,23] and seemed insufficient for a thorough evaluation of medical information, while others were too extensive and detailed and were therefore difficult to use [28,29]. These findings show that an adequate definition of criteria, together with a rational number of criteria, is indispensable for the usability of a tool.

For a few included tools [3,18,27,28,30,31], some criteria were excluded because they were considered inappropriate for the assessment of trustworthiness or not applicable in the current context (eg, criteria related to electronic commerce and marketing or drug-specific criteria) (see Multimedia Appendix 2). The criterion “breadth and volume” was excluded because it would disadvantage information sources designed for a single pathology or treatment. Moreover, the specificity of an information source does not necessarily affect its quality, and small-volume sources may contain useful information for practitioners.

Practical Implications

As digitization continues in the health care sector and point-of-care information may play an increasingly important role in the daily practice of health care professionals, a valid evaluation tool for this kind of medical information is necessary. The usability of such a tool may depend on the user: tools meant for health care professionals need to be short, whereas tools meant for external organizations that aim to validate information sources may be more comprehensive.

The current situation is problematic: there is no standard, valid tool available to health care professionals for a proper assessment of medical point-of-care information. The use of the AGREE II instrument for the assessment of clinical guidelines was previously associated with enhanced guideline adoption: increased guideline endorsements, an increase in overall intentions to use guidelines, and an increase in the overall quality of guidelines [8]. Therefore, the use of a tool for the assessment of point-of-care information may improve the quality and use of this kind of information. Based on the results of this review, we suggest that such a tool should evaluate author-related information and evidence-based methodology. The items from the categories “website quality,” “website design and usability,” and “website interactivity” can be used to assess whether an information source truly is point-of-care information.

Limitations

When we performed the literature search for this review, we noticed the absence of a common terminology for the assessment of the trustworthiness of point-of-care information. This is a limitation of this review that might have affected the output of the literature search; a broad search was needed to cover all tools used for different applications in medicine. Similarly, Kwag et al [4] noticed that point-of-care information summaries are described with different terms. We agree with their statement that a standard definition in the PubMed MeSH vocabulary would be beneficial.

A standard risk-of-bias assessment of each tool could not be performed, as no standard for this assessment is currently available. Therefore, it was not possible to distinguish methodologically sound tools from methodologically weak ones. However, each tool was checked for potential risk of bias in the developmental phase, such as the lack of a validity and reliability assessment.

Conclusion

In conclusion, this systematic literature review identified 17 different tools for the assessment of the trustworthiness of point-of-care information. These tools encompass a variety of items, but to date, a standard, validated tool is nonexistent. The results of this review may contribute to the development of a standard tool, which may enhance the quality and trustworthiness of point-of-care information in the longer term.

Acknowledgments

The authors would like to acknowledge the Belgian National Institution for Health and Disability Insurance (RIZIV) for funding this project.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Included tools and criteria.

DOCX File, 39 KB

Multimedia Appendix 2

Data summary.

DOCX File, 37 KB

  1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996 Jan 13;312(7023):71-72 [FREE Full text] [CrossRef] [Medline]
  2. Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract 2014 Dec;20(6):793-802. [CrossRef] [Medline]
  3. Banzi R, Liberati A, Moschetti I, Tagliabue L, Moja L. A review of online evidence-based practice point-of-care information summary providers. J Med Internet Res 2010 Jul 07;12(3):e26 [FREE Full text] [CrossRef] [Medline]
  4. Kwag KH, González-Lorenzo M, Banzi R, Bonovas S, Moja L. Providing Doctors With High-Quality Information: An Updated Evaluation of Web-Based Point-of-Care Information Summaries. J Med Internet Res 2016 Jan 19;18(1):e15 [FREE Full text] [CrossRef] [Medline]
  5. Qaseem A, Forland F, Macbeth F, Ollenschläger G, Phillips S, van der Wees P, Board of Trustees of the Guidelines International Network. Guidelines International Network: toward international standards for clinical practice guidelines. Ann Intern Med 2012 Apr 03;156(7):525-531. [CrossRef] [Medline]
  6. AGREE Next Steps Consortium. 2017. The AGREE II Instrument   URL: https://www.agreetrust.org/ [accessed 2019-12-02]
  7. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, AGREE Next Steps Consortium. AGREE II: advancing guideline development, reporting, and evaluation in health care. Prev Med 2010 Nov;51(5):421-424. [CrossRef] [Medline]
  8. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, AGREE Next Steps Consortium. Development of the AGREE II, part 1: performance, usefulness and areas for improvement. CMAJ 2010 Jul 13;182(10):1045-1052 [FREE Full text] [CrossRef] [Medline]
  9. Higgins J, Lasserson T, Chandler J, Tovey D, Churchill R. Cochrane Community.: The Cochrane Collaboration; 2019 Oct. Methodological Expectations of Cochrane Intervention Reviews (MECIR): Standards for the conduct and reporting of new Cochrane Intervention Reviews, reporting of protocols and the planning, conduct and reporting of updates   URL: https://community.cochrane.org/mecir-manual [accessed 2019-12-02]
  10. Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, et al. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol 2009 Oct;62(10):1013-1020. [CrossRef] [Medline]
  11. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017 Sep 21;358:j4008 [FREE Full text] [CrossRef] [Medline]
  12. Charnock D, Shepperd S, Needham G, Gann R. DISCERN: an instrument for judging the quality of written consumer health information on treatment choices. J Epidemiol Community Health 1999 Feb;53(2):105-111 [FREE Full text] [CrossRef] [Medline]
  13. Moher D, Schulz KF, Altman D, CONSORT Group. The CONSORT Statement: revised recommendations for improving the quality of reports of parallel-group randomized trials 2001. Explore (NY) 2005 Jan;1(1):40-45. [CrossRef] [Medline]
  14. Higgins J, Thomas J, Chandler J, Cumpston M, Li T, Page M, et al. Cochrane Training.: The Cochrane Collaboration; 2019. Cochrane Handbook for Systematic Reviews of Interventions   URL: https://training.cochrane.org/handbook/current [accessed 2019-12-02]
  15. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol 2009 Oct;62(10):1006-1012. [CrossRef] [Medline]
  16. Aslani A, Pournik O, Abu-Hanna A, Eslami S. Web-site evaluation tools: a case study in reproductive health information. Stud Health Technol Inform 2014;205:895-899. [Medline]
  17. Boyer C, Selby M, Scherrer JR, Appel RD. The Health On the Net Code of Conduct for medical and health Websites. Comput Biol Med 1998 Sep;28(5):603-610. [CrossRef] [Medline]
  18. e-Health Ethics Initiative. e-Health Code of Ethics (May 24). J Med Internet Res 2000;2(2):E9 [FREE Full text] [CrossRef] [Medline]
  19. Silberg WM, Lundberg GD, Musacchio RA. Assessing, controlling, and assuring the quality of medical information on the Internet: Caveant lector et viewor--Let the reader and viewer beware. JAMA 1997 Apr 16;277(15):1244-1245. [Medline]
  20. Kapoun J. Teaching undergrads WEB evaluation: A guide for library instruction. College and Research Libraries News 1998 Jul;59(7):522-523 [FREE Full text]
  21. Gillois P, Colombet I, Dréau H, Degoulet P, Chatellier G. A critical appraisal of the use of Internet for calculating cardiovascular risk. Proc AMIA Symp 1999:775-779 [FREE Full text] [Medline]
  22. Jiang YL. Quality evaluation of orthodontic information on the World Wide Web. Am J Orthod Dentofacial Orthop 2000 Jul;118(1):4-9. [CrossRef] [Medline]
  23. Pencheon D, Gray M, Melzer D. Oxford Handbook Of Public Health Practice (2nd edition). Oxford: Oxford University Press; 2006.
  24. Sandvik H. Health information and interaction on the internet: a survey of female urinary incontinence. BMJ 1999 Jul 03;319(7201):29-32 [FREE Full text] [CrossRef] [Medline]
  25. Robillard JM, Jun JH, Lai J, Feng TL. The QUEST for quality online health information: validation of a short quantitative tool. BMC Med Inform Decis Mak 2018 Oct 19;18(1):87 [FREE Full text] [CrossRef] [Medline]
  26. Prorok JC, Iserman EC, Wilczynski NL, Haynes RB. The quality, breadth, and timeliness of content updating vary substantially for 10 online medical texts: an analytic survey. J Clin Epidemiol 2012 Dec;65(12):1289-1295. [CrossRef] [Medline]
  27. Winker MA, Flanagin A, Chi-Lum B, White J, Andrews K, Kennett RL, et al. Guidelines for medical and health information sites on the internet: principles governing AMA web sites. American Medical Association. JAMA 2000;283(12):1600-1606. [CrossRef] [Medline]
  28. Delvenne C. Grille d'analyse des sites médicaux sur l'Internet. Liège, Belgium: University of Liège; 1999.   URL: http://www.ebm.uliege.be/grille.htm [accessed 2019-12-02]
  29. Delvenne C, Pasleau F. Organising access to Evidence-Based Medicine resources on the Web. Comput Methods Programs Biomed 2003 May;71(1):1-10. [CrossRef] [Medline]
  30. Trumble JM, Anderson MJ, Caldwell M, Chuang F, Fulton S, Howard A. Texas Health Science Libraries Consortium. 2006 Nov. A systematic evaluation of evidence based medicine tools for point-of-care   URL: http://www.thslc.org/papers.html [accessed 2019-12-02]
  31. Yap KY, Raaj S, Chan A. OncoRx-IQ: a tool for quality assessment of online anticancer drug interactions. Int J Qual Health Care 2010 Apr;22(2):93-106. [CrossRef] [Medline]
  32. Boyer C, Selby M, Appel RD. The Health On the Net Code of Conduct for medical and health web sites. Stud Health Technol Inform 1998;52 Pt 2:1163-1166. [Medline]
  33. Joubert M, Aymard S, Fieschi D, Fieschi M. Quality criteria and access characteristics of Web sites: proposal for the design of a health Internet directory. Proc AMIA Symp 1999:824-828 [FREE Full text] [Medline]
  34. Risk A, Dzenowagis J. Review of internet health information quality initiatives. J Med Internet Res 2001;3(4):E28 [FREE Full text] [CrossRef] [Medline]
  35. Moja L, Banzi R. Navigators for medicine: evolution of online point-of-care evidence-based services. Int J Clin Pract 2011 Jan;65(1):6-11. [CrossRef] [Medline]
  36. Straus SE, Sackett DL. Using research findings in clinical practice. BMJ 1998 Aug 01;317(7154):339-342 [FREE Full text] [CrossRef] [Medline]
  37. Campbell JM, Umapathysivam K, Xue Y, Lockwood C. Evidence-Based Practice Point-of-Care Resources: A Quantitative Evaluation of Quality, Rigor, and Content. Worldviews Evid Based Nurs 2015 Dec;12(6):313-327. [CrossRef] [Medline]


AGREE II: Appraisal of Guidelines for Research and Evaluation
AMA: American Medical Association
CART: Completeness, Accuracy, Relevance, Timeliness
eHealth: electronic health
HON: Health On the Net
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
QUEST: Quality Evaluation Scoring Tool


Edited by N Zary; submitted 09.07.19; peer-reviewed by M Lorenzo, M Alshehri; comments to author 18.09.19; revised version received 01.10.19; accepted 25.10.19; published 17.01.20

Copyright

©Gerlinde Lenaerts, Geertruida E Bekkering, Martine Goossens, Leen De Coninck, Nicolas Delvaux, Sam Cordyn, Jef Adriaenssens, Patrick Vankrunkelsven. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.01.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.