
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/17718.
Exploring the Use of Evidence From the Development and Evaluation of an Electronic Health (eHealth) Trial: Case Study

Authors of this article:

Monika Jurkeviciute1; Henrik Eriksson1

Original Paper

Centre for Healthcare Improvement, Chalmers University of Technology, Gothenburg, Sweden

Corresponding Author:

Monika Jurkeviciute, MSc

Centre for Healthcare Improvement

Chalmers University of Technology

Vera Sandbergs allé 8

Gothenburg,

Sweden

Phone: 46 766061558

Email: monika.jurkeviciute@chalmers.se


Background: Evidence-based practice refers to building clinical decisions on credible research evidence, professional experience, and patient preferences. However, there is a growing concern that evidence in the context of electronic health (eHealth) is not sufficiently used when forming policies and practice of health care. In this context, using evaluation and research evidence in clinical or policy decisions dominates the discourse. However, the use of additional types of evidence, such as professional experience, is underexplored. Moreover, there might be other ways of using evidence than in clinical or policy decisions.

Objective: This study aimed to analyze how different types of evidence (such as evaluation outcomes [including patient preferences], professional experiences, and existing scientific evidence from other research) obtained within the development and evaluation of an eHealth trial are used by diverse stakeholders. An additional aim was to identify barriers to the use of evidence and ways to support its use.

Methods: This study was built on a case of an eHealth trial funded by the European Union. The project included 4 care centers, 2 research and development companies that provided the web-based physical exercise program and an activity monitoring device, and 2 science institutions. The qualitative data collection included 9 semistructured interviews conducted 8 months after the evaluation was concluded. The data analysis concerned (1) activities and decisions that were made based on evidence after the project ended, (2) evidence used for those activities and decisions, (3) in what way the evidence was used, and (4) barriers to the use of evidence.

Results: Evidence generated from eHealth trials can be used by various stakeholders for decisions regarding clinical integration of eHealth solutions, policy making, scientific publishing, research funding applications, eHealth technology, and teaching. Evaluation evidence has less value than professional experiences for local decision making regarding eHealth integration into clinical practice. Professional experiences constitute the evidence that is valuable to the widest variety of activities and decisions in relation to eHealth trials. When using existing scientific evidence related to eHealth trials, it is important to consider contextual relevance, such as location or disease. To support the use of evidence, it is suggested to create possibilities for health care professionals to gain experience, to assess a few rather than a large number of variables, and to design for shorter iterative cycles of evaluation.

Conclusions: Initiatives to support and standardize evidence-based practice in the context of eHealth should consider the complexities in how the evidence is used in order to achieve better uptake of evidence in practice. However, one should be aware that the assumption of fact-based decision making in organizations is misleading. To increase the chances that the evidence produced will be used, this should be addressed through the design of eHealth trials.

J Med Internet Res 2020;22(8):e17718

doi:10.2196/17718




Evidence-based medicine has taken a central role in health care, aiming to increase the quality of clinical practice. In the medical domain, it is conceptualized as building clinical decisions on credible research evidence, professional experience and judgement, and patient preferences [1-3]. This trend has also emerged in the evaluation and implementation of information and communication technologies (ICT) in health care (electronic health [eHealth]) [4,5]. Similar to conventional medicine, decision making in the implementations of eHealth solutions should “rely on explicit evidence derived from rigorous studies on what makes systems clinically acceptable, safe, and effective – not on basic science or experts alone” [6]. Hence, evidence should have utility in these decisions (ie, it should be usable and used). However, there is a growing concern that scientific evidence on whether eHealth works and is safe to use is not sufficiently used when forming health care policies and practice [4,7-11].

Evidence produced by evaluations dominates the discourse related to the evidence-based practice of eHealth implementations [7,12,13]. This emphasis and the strategies to support the use of this type of evidence can be seen through the scholarly discussions and sound methodological base developed in the form of evaluation guidelines, standard measures, and evaluation frameworks [7,13,14]. However, evidence-based practice includes additional types of evidence, such as professional experience and judgement, existing scientific evidence from other research, and patient preferences [1,2,7]. The use of these types of evidence generated through testing and implementing eHealth solutions is underexplored, leading to a lack of supporting strategies.

When the expectation is to use the evaluation evidence in making decisions regarding clinical implementations of eHealth [5], this refers to instrumental use, that is, the direct use of information in decision making and taking action in order to change existing practice [15-19]. When evidence is not used instrumentally, it is often regarded as a waste of resources and effort, contributing to the phenomenon of “pilotism” (remaining in a pilot state and not being taken to integration) [20,21]. However, previous research has identified a number of other ways in which evidence is used [15,17-19,22,23]. Conceptual use refers to a nondirect use of information and perspectives to enhance understanding. Strategic or symbolic use happens when the evidence is brought up to support or confront an existing idea or decision. To the best of our knowledge, the different ways of evidence use (instrumental, conceptual, or symbolic) in the context of eHealth have not been addressed by previous research. Discussions limited to instrumental use potentially provide an overly narrow view of actual evidence use in practice.

Furthermore, the users of evidence considered in the context of eHealth are usually limited to policy makers or health care professionals. However, eHealth is a multidisciplinary field [14], and there might be more beneficiaries of the evidence. Exploring the types of evidence and the ways different actors use them can be worthwhile to support the uptake of evidence in the context of eHealth.

The purpose of this study was to analyze how different types of evidence (such as evaluation outcomes [including patient preferences], professional experiences obtained within the development and evaluation of an eHealth trial, and existing scientific evidence from other research) are used by diverse stakeholders. An additional aim was to identify barriers to the use of evidence and ways to support its use.


Context

This study was built on a case of a multinational and interdisciplinary European Union–funded project (for anonymity reasons, called “Alpha” in this paper). It was a nonpharmacological eHealth trial aimed at improving quality of life and increasing the independence of elderly people with mild cognitive impairment and mild dementia. The Alpha trial introduced a case manager role and an ICT platform that consisted of web-based physical and cognitive exercise programs and an activity monitoring device to be used at home. As such, innovations were introduced on both the service model and technological levels.

The Alpha approach was implemented and tested in 4 countries. The project involved 8 partners: 4 care centers (in the aforementioned countries), 2 research and development companies that provided the eHealth solutions, and 2 science institutions. The trial lasted for 6 months. The evaluation included a number of variables, such as clinical efficacy, quality of life, patient adherence to technology, patient and health care professionals’ satisfaction, process effectiveness, and a cost-benefit analysis. Clinical professionals were asked to collect patient data using a number of standardized and custom-made questionnaires, as well as to register some data in registries in Excel files. The evaluation was performed at the end of the project and was finalized in June 2018. During the evaluation, several project partners were charged with analyzing the different variables. As frequently happens in multinational projects, the evaluation had to overcome several practical circumstances, such as ethical approvals and systems integration issues, that delayed patient recruitment and the start of the trial. This created a situation in which the trial time had to be shortened for some patients, resulting in a smaller dataset than planned.

Data Collection

The evidence considered in this study included evaluation results (including patient preferences) and professional experiences from the Alpha trial as well as existing scientific evidence from other research. Data were collected through 9 semistructured interviews with all the partners involved in the Alpha project (4 care centers, 2 research and development companies that provided the eHealth solutions, and 2 science institutions) that were conducted 8 months after the evaluation was concluded. The stakeholders were delimited to the partners of the Alpha project, since they had deep knowledge and experience from the trial and they were the primary candidates to consider using the evidence developed. However, the funding institution did not require the project partners to use evidence from the Alpha project.

The interviewee selection followed the principles of purposive sampling [24] and involved the key members of the Alpha teams in every country (see Table 1). At least one interview per partner was conducted. The interviewees were in a position either to use the evidence directly in making decisions that change practice (clinical, technological, scientific) or to decide whether the evidence was worth suggesting or presenting to the decision makers in their organizations. The positions of the interviewees and the industry of their work are presented in Table 1. All the interviews lasted one hour, were conducted via Skype, were recorded, and were transcribed.

Table 1. Interview respondents.
Stakeholders | Interviewee occupation

Care centers
Care center 1 | Clinical neuropsychologist
Care center 2 | Quality director
Care center 2 | Senior physician
Care center 3 | Head of eHealth research
Care center 4 | Project manager

Research and development companies
Research and development company 1 | Coordinator of the eHealth group
Research and development company 2 | Project manager and scientific coordinator

Science institutions
Science institution 1 | Director of the research center
Science institution 2 | Scientific coordinator

eHealth: electronic health.

The interviews followed a guide structured around the components of evidence use [18,19,25], such as evidence users, types of impact, evidence already used (and useful) depending on the agenda of the stakeholder, agenda or purpose, quality of research, methodological credibility, relevance and timing for the organization to use the evidence, presentation of the results, and future plans in relation to the evidence.

Data Analysis

For every stakeholder, we analyzed the following: (1) the types of activities and decisions that were made after the Alpha project ended; (2) the types of evidence (evaluation results from the trial [including patient preferences], professional experiences, or existing scientific evidence from other research) that were used for those activities and decisions, based on the definitions of evidence [1]; (3) in what way the evidence was used (instrumental, conceptual, symbolic) [15,17-19,22]; and (4) barriers to the use of evidence. Instrumental use of evidence was assumed if the evidence obtained from the trial had a direct impact on practice decisions. Conceptual use of evidence was assumed when the evidence from the trial was used indirectly in ways that impacted the understanding, attitudes, and knowledge of the stakeholders but did not cause a change in practice. Symbolic use of evidence was concluded when the stakeholder had used evidence in confirming previous decisions. If necessary, the data were validated with professionals from the Alpha partners.
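As an illustration of this coding scheme (not part of the original qualitative analysis), the decision rules can be summarized in a minimal sketch; the class, field, and function names below are hypothetical and chosen only to make the rules explicit.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and fields are assumptions, not artifacts of the Alpha analysis.
@dataclass
class EvidenceUse:
    changed_practice: bool          # evidence had a direct impact on a practice decision or action
    confirmed_prior_decision: bool  # evidence was used to back a decision that had already been taken
    shaped_understanding: bool      # evidence affected knowledge or attitudes without changing practice

def classify_use(use: EvidenceUse) -> str:
    """Apply the coding rules described above: instrumental, symbolic, or conceptual use."""
    if use.changed_practice:
        return "instrumental"
    if use.confirmed_prior_decision:
        return "symbolic"
    if use.shaped_understanding:
        return "conceptual"
    return "no use observed"

# Example: evaluation results brought up to strengthen an adoption decision already taken
# (as in care centers 1 and 2 below) are coded as symbolic.
print(classify_use(EvidenceUse(changed_practice=False,
                               confirmed_prior_decision=True,
                               shaped_understanding=True)))  # -> symbolic
```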

The findings were grouped by the types of activities and decisions made by the stakeholders using evidence from Alpha. For each activity or decision, the use of evidence is discussed.


At 8 months after the Alpha project ended, the partners in the project (stakeholders) had used evidence in the following decisions and activities: (1) integrating or abandoning the Alpha approach in clinical practice, (2) publishing results from the Alpha study, (3) applying for new research funding, (4) supporting regional policymaking, (5) improving technology, and (6) teaching students and health care professionals. Next, the ways that evidence was used by the stakeholders in every decision and activity are explained.

Integrating the Alpha Approach Into Clinical Practice

At the time of this study, 2 of the 4 care centers had decided to integrate the Alpha approach into clinical practice (care centers 1 and 2). In these care centers, the decision was mainly informed by the evidence from professional experiences. The health care professionals decided to adopt the Alpha approach based on their experiences with usability and adherence to the technology, as well as on patient satisfaction with the service. At the time of the decision, the evaluation results were not available yet. However, the professionals relied on their experience and the existing scientific evidence from other research that demonstrated that technologies and care models similar to Alpha can be beneficial for the patients targeted. The existing research was quite explicit on the benefits of physical and cognitive exercise (with and without the help of technology) for patients with cognitive impairments.

Specific clinical data are not a reason to not try to implement this model. Existing research can provide us such information. <…> Data related to adherence are enough to be interpreted as a useful model for these patients.
[Care center 1, clinical neuropsychologist]

The decisions to adopt the Alpha approach in care centers 1 and 2 were facilitated by the fact that resources for implementation were available from regional policies supporting and financing care models like Alpha. Care centers 1 and 2 planned to perform deeper statistical analyses on the clinical outcomes, cost, and savings. If a positive effect was found, the results would be disseminated, which would support their decision to integrate the Alpha approach in practice. Hence, in the case of care centers 1 and 2, professional experiences and existing scientific evidence from other research were used instrumentally (directly in decision making and action), while evaluation evidence was used symbolically (to strengthen the already taken decision).

When the decision was made, we didn’t have any results yet. <...> Our experience and preliminary data showed that this model is quite good. <…> Managers trusted our previous evaluations of similar models and thought that it will be the same. <…> We have to redo the economic evaluation to see how much it actually costs and how much we save.
[Care center 2, quality director]

Care center 3 planned to use the evaluation results to make a decision regarding adopting the Alpha approach in its clinical practice. The organization planned to present the evaluation results to the board and express a need for an eHealth solution like that tested in the Alpha project. Once approval from the board is obtained, the technology can be purchased and integrated in clinical practice. In this case, using the evaluation results in decision making for practice improvement indicates instrumental use.

Once we demonstrate that the results are OK, we are in a position to escalate it to decision makers, and we are able to incorporate it in our organization.
[Care center 3, Head of eHealth research]

Care center 4 decided to abandon the Alpha approach. Professional experiences were the primary influence on this decision. The concept was abandoned when the staff realized, over the course of the trial, how many resources the new model required when applied within the context of care center 4. It was deemed not the right time for the concept to be adopted in the organization. After the project finished, the staff’s experiences with the care process of Alpha were presented to management. In the case of care center 4, professional experiences were used instrumentally (directly in decision making and action).

We didn’t pay so much attention to the results of the evaluation. We looked at what does it mean to work with patients in a situation like that. <…> For management, the descriptive conclusions were more interesting than the analysis.
[Care center 4, project manager]

Publishing Results of the Alpha Project

Publishing the results from the Alpha project in scientific outlets was initiated by almost all the partners of Alpha (except care center 4). The care centers used the clinical outcomes, quality of life, patient and employee satisfaction, and cost data in the scientific publications. The research and development companies used the adherence data and the feedback from patients and health care professionals related to their specific technology. Publishing the evaluation results helped these companies strengthen their image by demonstrating a case of the technology application in a real setting. In addition to the evaluation results, the partners used the existing scientific evidence from other research to build a case for research.

Since scientific publishing did not directly relate to decision making in practice, such use of evaluation evidence and existing evidence from other research was deemed conceptual.

Applying for New Research Funding

Most of the partners (care centers 1, 2, and 3; the science institutions; and the research and development companies) used Alpha evaluation results and professional experiences when applying for further research funding. Evaluation outcomes, experiences, and lessons learned in Alpha provided the basis for the funding cases and for ideas that could be applied in the next project. In such cases, the Alpha evaluation results and professional experiences did not change local practices but increased the knowledge and understanding of the partners, which subsequently helped them develop better research ideas and improve the design of future studies. Hence, such use of evaluation evidence and professional experiences was deemed conceptual.

Supporting Regional Policy

Science institutions 1 and 2 and care centers 1, 2, and 3 presented Alpha results in regional policymaking activities as a concrete local example of an eHealth-supported care model within their regions. For this purpose, science institution 2 used the managerial and economic evidence from the evaluation of Alpha to demonstrate the possible impact of the eHealth solution on the local care facility. In addition, science institutions 1 and 2 used the health care professionals’ feedback and perceptions of working with Alpha in their local contexts to demonstrate local applicability. Since the Alpha case served as an example and its results were not used to make specific decisions, such use of evaluation evidence and professional experiences in policymaking was deemed conceptual.

When you discuss a real case here in <region>, these messages are stronger than to discuss cases in <another country> or to say that literature says these things are useful.
[Science institution 2, scientific coordinator]

Improving eHealth Technology

The evaluation results of Alpha helped research and development company 2 in making decisions to improve its technology. The company focused on the patients’ and health care professionals’ feedback and preferences in relation to its technology and initiated actions to improve it. Since the evaluation evidence was used directly for decisions and action, we classified such use of evidence as instrumental.

Teaching Students and Health Care Professionals

Professional experiences with Alpha were used by science institution 1 in teaching students and health care professionals. Science institution 1 relied on the professionals’ feedback and perceptions of working with Alpha in their local contexts, as these demonstrate local applicability. Since the Alpha case served as an example and decisions were not made based on its results, such use of professional experiences in teaching was deemed conceptual.

We used the experience with the care models as an example in our courses. <…> We use it for practitioners as a subject to discuss and reflect upon.
[Science institution 1, director of the research center]

Table 2 summarizes how the evidence was used 8 months after the project ended.

Table 2. Evidence use by different stakeholders.
Decisions taken and activities | Alpha evaluation results | Professional experience with Alpha | Existing scientific evidence from other research
(The columns indicate the use of each type of evidence in making the decision or performing the activity.)

Care center 1
Adopt Alpha approach | Symbolic | Instrumental | Instrumental
Publish results | Conceptual | No use observed | Conceptual
Support regional policy | No use observed | Conceptual | No use observed

Care center 2
Adopt Alpha approach | Symbolic | Instrumental | Instrumental
Publish results | Conceptual | No use observed | Conceptual
Apply for research funding | Conceptual | Conceptual | No use observed
Support regional policy | No use observed | Conceptual | No use observed

Care center 3
Present Alpha approach to decision makers for full implementation | Instrumental (planned) | Instrumental (planned) | Instrumental (planned)
Publish results | Conceptual | No use observed | No use observed
Apply for research funding | Conceptual | Conceptual | No use observed
Support regional policy | No use observed | Conceptual | No use observed

Care center 4
Abandon Alpha approach | No use observed | Instrumental | No use observed

Science institution 1
Publish results | Conceptual | No use observed | Conceptual
Teach students and health care professionals | No use observed | Conceptual | No use observed
Support regional policy | No use observed | Conceptual | No use observed
Apply for research funding | Conceptual | Conceptual | No use observed

Science institution 2
Support regional policy | Conceptual | Conceptual | No use observed
Apply for research funding | Conceptual | Conceptual | No use observed
Publish results | Conceptual | Conceptual | Conceptual

Research and development company 1
Publish results | Conceptual | No use observed | Conceptual
Apply for research funding | Conceptual | Conceptual | No use observed

Research and development company 2
Improve technology | Instrumental | Instrumental | No use observed
Publish results | Conceptual | No use observed | Conceptual

Barriers to the Use of Evidence From the Alpha Trial

The first barrier to the use of evidence was related to the number of variables included in the Alpha evaluation. In most of the project locations, the scope of evaluation was deemed too extensive (it included several variables related to clinical efficacy, quality of life, patient adherence to technology, patient and health care professionals’ satisfaction, a number of variables to assess process effectiveness, and a cost-benefit analysis). The interviewees indicated that the time needed to collect this amount of data for every patient was too long. The clinicians had to fit the data collection into their routine work during meetings with the patients. Consequently, the clinicians were making choices about which data to collect at a particular time. Such trade-offs between data collection for the project and time spent with a patient affected the completeness of data collected and consequently the quality of evidence produced in the evaluation.

When you want to monitor a lot of variables, it is directly related to the time you need to spend with the patients. <…> The target should be to optimize how we collect the variables and information using not that much time.
[Care center 3, Head of eHealth research]

The second barrier to the use of evidence was related to the Alpha evaluation design, which compared before-after situations. Some of the interviewees with a clinical background questioned whether eHealth integration decisions can be based purely on the hard facts produced by such an evaluation. According to these respondents, novel eHealth-supported care models tested during trials are complex dynamic systems embedded in local contexts that vary, differ in culture, and are affected by social interaction. Since these care models cannot be assumed to be stable, controlled systems, the assumption of stability in the traditional before-after measurement design of an eHealth trial provides less valuable information for eHealth integration decisions to improve practice. Additionally, such an evaluation design comparing before-after situations does not maximize the potential to enhance local learning. The interviewees suggested that people’s experiences with an eHealth solution and process measures, both captured continuously, could lead to iterative adaptation and adjustment between the eHealth solution and the context. Such iterative evaluation would provide higher value for eHealth integration decisions to improve practice.

If we own the evaluation, we would take repeated measurements for improvement efforts and enhanced learning, rather than traditional approaches.
[Care center 2, quality director]

Principal Findings

In this study, we analyzed how different types of evidence, generated through the development and evaluation of an eHealth trial, are used by diverse stakeholders. This work demonstrated that evidence from eHealth trials is used in more ways than decision making regarding clinical integration or policymaking. In addition, different stakeholders can use the evidence for scientific publishing and dissemination, eHealth technology improvement, research funding applications, and teaching.

We found that professional experiences seem to have more influence over decisions regarding eHealth integration into clinical practice than formal evaluations and research. Learning whether and how an eHealth solution could fit within the local context provides key information for local decision making. If the design of an eHealth trial fails to create conditions for professionals to gain experience, it might prevent the learning and evidence gathering that are crucial to increase the success rate of eHealth trials in their post-pilot phase and reduce “pilotism” [5,20]. Moreover, professional experiences from eHealth trials provide evidence that is valuable for the greatest variety of activities, including disseminating knowledge in various formats, policymaking, teaching, and providing feedback for technology improvement. Evaluation evidence might mostly be valuable for scientific publishing. Existing scientific evidence from other research is another type of evidence that can help in making decisions about integrating eHealth solutions into clinical practice. However, contextual relevance (eg, location, disease) matters.

To support the use of evaluation evidence, one could consider assessing a few, rather than a large number of, variables during an evaluation. This can help ensure the quality of the data collected by preventing trade-offs between the time required and the quality of the data. Additionally, shorter iterative cycles of evaluation could create better possibilities for health care professionals to gain more experience and use it as evidence in decision making regarding the integration of eHealth solutions. Professional experiences could enhance the evidence base when relevant professionals are included and accumulate experience during eHealth trials. Such an approach could increase the degree of learning and the chances that the eHealth solution would be integrated into practice and reach sustainability. Existing scientific evidence from projects or initiatives previously conducted in the same location where an eHealth solution is implemented can support decisions regarding eHealth integration better than scientific evidence obtained from other locations. Research evidence produced in other contexts can be problematic to translate directly into such decisions. However, it has value for scientific dissemination.

Limitations

The findings of this study were based on a specific innovation research project funded by the European Union. The interviewee sample was delimited to the partners of the project, since they had deep knowledge of the project and its results and were in a favorable position to use the evidence obtained. However, the perspectives of funding agencies, industry, or governments could be a valuable avenue for further research. Similarly, evidence produced in other settings and study designs could provide a different view of the use of evidence. Furthermore, the study captured the situation 8 months after the Alpha project was finished; the use of evidence might be more extensive in later stages due to the so-called “gestation period” [25].

Comparison With Prior Work

Previous research on evidence-based practice usually described clinical implementations and policymaking [5,7] as evidence use in the context of eHealth. By focusing on a wider ecosystem of stakeholders than the traditional focus on health care providers and policymakers, our work identifies more uses of evidence, such as scientific publishing and dissemination, eHealth product and service improvement, applying for research funding, and teaching. Furthermore, we analyzed the use of additional types of evidence [1] such as professional experiences in addition to the traditional focus on evidence generated by evaluations and research [7,12,13].

Our study shows that the typical discourse on the instrumental use (in making decisions) of evidence (and the lack of it) [5] does not sufficiently reflect the actual use of evidence in the context of eHealth. Evidence also serves stakeholders conceptually (increasing knowledge and understanding) and symbolically (supporting decisions already taken) [15,17-19,22]. This suggests that evidence created through eHealth trials can have utility beyond decision making in clinical implementations of eHealth solutions or policymaking. Therefore, evidence that is not used instrumentally might not be a waste of resources and effort (the problem of so-called “pilotism”), as frequently judged by scholars [5,20]. Viewing the use of evidence through different users and ways of using it can reveal the actual ways in which evidence from a trial is used, help commissioning bodies have realistic expectations of the influence trials can have, and help to design better trials.

Our study indicates that experiences in eHealth trials matter more than facts when it comes to how evidence is used for eHealth integration. The reason for this could be that “hard facts” from an evaluation are difficult to implement straightforwardly in a complex health care reality [26,27]. Although it is attractive to think about organizations as rational organisms and systems [5,28], organizations are rather characterized by decisions and actions that are far from rational. Instead, personal incentives, organizational culture, and other more subjective factors come into play when explaining how decisions are made and how organizations develop [27,29,30]. Hence, we believe that the designers of eHealth trials could benefit from understanding how learning and continuous improvement are created and how knowledge development occurs in contemporary health care organizations. In other words, these fields could explain why opinions and subjective knowledge are more important than “hard facts” from an evaluation.

We identified a number of strategies to support the use of evidence in practice in the context of eHealth trials. First, we suggest focusing on a smaller number of variables during evaluation, to ensure the quality of collected data. Such a strategy is contradictory to the ever-expanding evaluation frameworks that include a growing number of variables (eg, [31,32]) and arguments that evaluations should aim to capture as wide an array of outcomes as possible [13]. Second, alternative designs to eHealth evaluation have been discussed in prior research (eg, [14,33]). Our study showed that such designs leading to shorter cycles of evaluation and enabling learning, iteration, and forming experiences that can support decisions could be more beneficial in improving the practice of different stakeholders [34-36]. Failure to produce timely results for decision making is among the barriers to the use of evidence identified by previous research [26].

Conclusions

We conclude that various stakeholders (such as care centers, research and development companies, and science institutions) benefit from evidence differently. Therefore, the delimited focus of research on decision making in clinical settings or policymaking does not capture the actual beneficiaries and realities of evidence use. In addition, when making decisions regarding improving practice, stakeholders do not necessarily rely on the factual evidence produced by evaluations. We conclude that, in the context of eHealth, professional experiences seem to have more influence over decisions than formal evaluations and research. Hence, we suggest that scientific and practical discussions around evidence-based practice in the context of eHealth should include all types of evidence (evaluation evidence, professional experiences, existing scientific evidence from other research, and patient preferences). Additionally, it could be beneficial to have an in-depth view of how the evidence is used. This could help expand the conventional focus on clinical settings or policymaking and the direct use of evidence produced by research or evaluation when making decisions. Initiatives to support and standardize evidence-based practice in the context of eHealth should take these complexities into consideration to achieve better uptake of evidence in practice.

Acknowledgments

The Alpha project was funded by the European Union’s Horizon 2020 research and innovation program. The funding organization took no part in designing this study, collecting and analyzing the data, or in preparing this manuscript. The authors would like to express gratitude to the Alpha research consortium for providing data for this article. Written consent to publish was obtained from the project partners whose quotes have been used in this study.

Conflicts of Interest

None declared.

  1. Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA 1992 Nov 04;268(17):2420-2425. [CrossRef] [Medline]
  2. Greenhalgh T, Howick J, Maskrey N, Evidence Based Medicine Renaissance Group. Evidence based medicine: a movement in crisis? BMJ 2014 Jun 13;348(jun13 4):g3725-g3725 [FREE Full text] [CrossRef] [Medline]
  3. Djulbegovic B, Guyatt GH. Progress in evidence-based medicine: a quarter century on. The Lancet 2017 Jul;390(10092):415-423. [CrossRef]
  4. Ammenwerth E, Rigby M, editors. Evidence-based health informatics: Promoting safety and efficiency through scientific methods and ethical policy. Amsterdam, Netherlands: IOS Press; May 20, 2016.
  5. Rigby M, Ammenwerth E, Beuscart-Zephir M, Brender J, Hyppönen H, Melia S, et al. Evidence Based Health Informatics: 10 Years of Efforts to Promote the Principle. Yearb Med Inform 2018 Mar 05;22(01):34-46. [CrossRef]
  6. Wyatt J. Evidence-based Health Informatics and the Scientific Development of the Field. In: Ammenwerth E, Rigby M, editors. Evidence-based health informatics: Promoting safety and efficiency through scientific methods and ethical policy. Amsterdam, Netherlands: IOS Press; May 20, 2016:14-24.
  7. Rigby M, Magrabi F, Scott P, Doupi P, Hypponen H, Ammenwerth E. Steps in Moving Evidence-Based Health Informatics from Theory to Practice. Healthc Inform Res 2016 Oct;22(4):255-260 [FREE Full text] [CrossRef] [Medline]
  8. Koppel R. Is Healthcare Information Technology Based on Evidence? Yearb Med Inform 2018 Mar 05;22(01):07-12. [CrossRef]
  9. Cohen G, Schroeder J, Newson R, King L, Rychetnik L, Milat AJ, et al. Does health intervention research have real world policy and practice impacts: testing a new impact assessment tool. Health Res Policy Syst 2015 Jan 01;13(1):3 [FREE Full text] [CrossRef] [Medline]
  10. Alla K, Oprescu F, Hall WD, Whiteford HA, Head BW, Meurk CS. Can automated content analysis be used to assess and improve the use of evidence in mental health policy? A systematic review. Syst Rev 2018 Nov 15;7(1):194 [FREE Full text] [CrossRef] [Medline]
  11. Patrick J. The Validity of Personal Experiences in Evaluating HIT. Appl Clin Inform 2017 Dec 16;01(04):462-465. [CrossRef]
  12. Haux R. Preface. In: Ammenwerth E, Rigby M, editors. Evidence-based health informatics: Promoting safety and efficiency through scientific methods and ethical policy. Amsterdam, Netherlands: IOS Press; May 20, 2016:i-xv.
  13. Ammenwerth E. Evidence-based Health Informatics: How Do We Know What We Know? Methods Inf Med 2018 Jan 22;54(04):298-307. [CrossRef]
  14. Ossebaard HC, Van Gemert-Pijnen L. eHealth and quality in health care: implementation time. Int J Qual Health Care 2016 Jun;28(3):415-419. [CrossRef] [Medline]
  15. Alkin MC, Taut SM. Unbundling evaluation use. Studies in Educational Evaluation 2002 Mar;29(1):1-12. [CrossRef]
  16. Leviton LC, Hughes EF. Research On the Utilization of Evaluations. Eval Rev 2016 Jul 26;5(4):525-548. [CrossRef]
  17. Rich R. Uses of social science information by federal bureaucrats: Knowledge for action versus knowledge for understanding. In: Weiss C. Using social research in public policy making. Lexington MA: Lexington Books; 1977:199-211.
  18. Nutley S, Walter I, Davies H. Using evidence: How research can inform public services. Bristol, England: The Policy Press; 2007.
  19. Alkin MC, King JA. Definitions of Evaluation Use and Misuse, Evaluation Influence, and Factors Affecting Use. Am J Eval 2017 Aug 04;38(3):434-450. [CrossRef]
  20. Andreassen HK, Kjekshus LE, Tjora A. Survival of the project: a case study of ICT innovation in health care. Soc Sci Med 2015 May;132:62-69. [CrossRef] [Medline]
  21. Urueña A, Hidalgo A, Arenas AE. Identifying capabilities in innovation projects: Evidences from eHealth. Journal of Business Research 2016 Nov;69(11):4843-4848. [CrossRef]
  22. Weiss CH. The Interface between Evaluation and Public Policy. Evaluation 2016 Jul 24;5(4):468-486. [CrossRef]
  23. Estabrooks C, Squires J, Strandberg E, Nilsson-Kajermo K, Scott S, Profetto-McGrath J, et al. Towards better measures of research utilization: a collaborative study in Canada and Sweden. J Adv Nurs 2011 Aug;67(8):1705-1718. [CrossRef] [Medline]
  24. Bryman A, Bell E. Business research methods. Oxford, England: Oxford University Press; 2011.
  25. Feinstein ON. Use of Evaluations and the Evaluation of their Use. Evaluation 2016 Jul 24;8(4):433-439. [CrossRef]
  26. Hammersley M. The myth of research-based policy and practice. London, England: Sage; 2015.
  27. Freeman AC, Sweeney K. Why general practitioners do not implement evidence: qualitative study. BMJ 2001 Nov 10;323(7321):1100-1102 [FREE Full text] [CrossRef] [Medline]
  28. Glasgow RE. eHealth evaluation and dissemination research. Am J Prev Med 2007 May;32(5 Suppl):S119-S126. [CrossRef] [Medline]
  29. Argyris C, Schön DA. Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley; 1978.
  30. Crossan M. Chris Argyris and Donald Schön's Organizational Learning: There is no silver bullet. AMP 2003 May;17(2):38-39. [CrossRef]
  31. Eslami Andargoli A, Scheepers H, Rajendran D, Sohal A. Health information systems evaluation frameworks: A systematic review. Int J Med Inform 2017 Jan;97:195-209. [CrossRef] [Medline]
  32. Kidholm K, Ekeland AG, Jensen LK, Rasmussen J, Pedersen CD, Bowes A, et al. A model for assessment of telemedicine applications: mast. Int J Technol Assess Health Care 2012 Jan;28(1):44-51. [CrossRef] [Medline]
  33. Baker TB, Gustafson DH, Shah D. How can research keep up with eHealth? Ten strategies for increasing the timeliness and usefulness of eHealth research. J Med Internet Res 2014 Feb 19;16(2):e36 [FREE Full text] [CrossRef] [Medline]
  34. Provost LP. Analytical studies: a framework for quality improvement design and analysis. BMJ Qual Saf 2011 Apr 30;20 Suppl 1(Suppl 1):i92-i96 [FREE Full text] [CrossRef] [Medline]
  35. Moen R, Norman C. Evolution of the PDCA cycle. 2009 Presented at: 7th ANQ Congress; September 17, 2009; Tokyo, Japan p. 1-11. [CrossRef]
  36. Deming W. The new economics for industry, government, education. Cambridge, MA: MIT press; 2018.


Abbreviations

eHealth: electronic health
ICT: information and communication technologies


Edited by G Eysenbach; submitted 07.01.20; peer-reviewed by A Persson, H Durrani; comments to author 17.03.20; revised version received 21.05.20; accepted 13.06.20; published 28.08.20

Copyright

©Monika Jurkeviciute, Henrik Eriksson. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 28.08.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.