Published on 14.04.2021 in Vol 23, No 4 (2021): April

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/23961.
Association of Electronic Health Record Vendors With Hospital Financial and Quality Performance: Retrospective Data Analysis


Original Paper

School of Health Administration, College of Health Professions, Texas State University, San Marcos, TX, United States

*all authors contributed equally

Corresponding Author:

Clemens Scott Kruse, MBA, MHA, MSIT, PhD

School of Health Administration

College of Health Professions

Texas State University

601 University Dr

San Marcos, TX, 78666

United States

Phone: 1 2103554742

Email: scottkruse@txstate.edu


Background: Electronic health records (EHRs) are a central feature of care delivery in acute care hospitals; however, the financial and quality outcomes associated with system performance remain unclear.

Objective: In this study, we aimed to evaluate the association between the top 3 EHR vendors and measures of hospital financial and quality performance.

Methods: This study evaluated 2667 hospitals with Cerner, Epic, or Meditech as their primary EHR and considered their performance with regard to net income, Hospital Value–Based Purchasing Total Performance Score (TPS), and the unweighted subdomains of efficiency and cost reduction; clinical care; patient- and caregiver-centered experience; and patient safety. We hypothesized that there would be a difference among the 3 vendors for each measure.

Results: None of the EHR systems were associated with a statistically significant financial relationship in our study. Epic was positively associated with TPS outcomes (R2=23.6%; β=.0159, SE 0.0079; P=.04) and higher patient perceptions of quality (R2=29.3%; β=.0292, SE 0.0099; P=.003) but was negatively associated with patient safety quality scores (R2=24.3%; β=−.0221, SE 0.0102; P=.03). Cerner and Epic were positively associated with improved efficiency (R2=31.9%; Cerner: β=.0330, SE 0.0135, P=.01; Epic: β=.0465, SE 0.0133, P<.001). Finally, all 3 vendors were associated with positive performance in the clinical care domain (Epic: β=.0388, SE 0.0122, P=.002; Cerner: β=.0283, SE 0.0124, P=.02; Meditech: β=.0273, SE 0.0123, P=.03) but with low explanatory power (R2=4.2%).

Conclusions: The results of this study provide evidence of a difference in clinical outcome performance among the top 3 EHR vendors and may serve as supportive evidence for health care leaders to target future capital investments to improve health care delivery.

J Med Internet Res 2021;23(4):e23961

doi:10.2196/23961


Background

In the first part of the 20th century, health care predominantly revolved around a single health care provider diagnosing and treating patients within the confines of their office or in the patient’s home, and patient medical histories were recorded on paper. However, as the US health care industry has progressed and modernized, the proliferation of information technology has accelerated. In 2013, US health information technology investment totaled US $2.8 billion; by 2017, it had reached a staggering US $7.1 billion [1]. Yet some reports indicate that hospitals have struggled to remain profitable during this same time frame [2]. Many health care executives are questioning the return on investment in hospital technology and wondering whether their capital outlay will result in improved financial outcomes [3].

Numerous studies have noted the high expense of care delivery yet low health care information technology proliferation in the United States. Citing the advancement of similar technology in other developed countries, researchers have indicated that through increased investment in health information technology, the United States can lower overall health care spending and simultaneously improve quality of care and patient outcomes [4,5]. As a result, the federal government has been heavily involved in the modernization of health information technology. As early as 2004, President George W Bush called for computerized health records in his State of the Union address and offered a strategy to provide Americans access to electronic health records (EHRs); unfortunately, he did not secure sufficient funding to change provider behavior [6]. In 2009, President Barack Obama signed the Health Information Technology for Economic and Clinical Health (HITECH) Act as part of the American Recovery and Reinvestment Act (ARRA), which originally set aside US $27 billion for an incentive program that encouraged hospitals and providers to adopt EHR systems [7,8]. This legislation prompted the adoption of EHRs in 2014 and established time frames for mandated EHR adoption. Furthermore, with the passage of the Medicare Access and CHIP Reauthorization Act, Medicare introduced the Medicare EHR Incentive Program and the Merit-Based Incentive Payment System to advance the meaningful use of EHRs. As a result, by 2015, 96% of hospitals and 87% of physician practices had implemented EHRs [9]. By mid-2016, the total federal government investment in EHRs had risen to US $35 billion and continued to grow [10].

One could argue that the proliferation of health care technology spending did not occur organically. The hospital and health care industry shifted to the use of EHR systems primarily because of the financial incentives incorporated within the ARRA and HITECH legislation [11]. Under HITECH, eligible professionals who demonstrated the meaningful use of an EHR qualified for payments of US $18,000 in the first year; US $12,000 in the second year; US $8000 in the third year; US $4000 in the fourth year; and US $2000 in the fifth year [12]. An eligible professional was generally considered to be a physician. After 2015, physicians who failed to meaningfully use EHRs were subject to reductions in Medicare and Medicaid reimbursement. Meaningful use of an EHR includes 3 components: (1) the EHR must be certified and include e-prescribing capabilities; (2) the technology must provide for the electronic exchange of personal health information with other EHR systems (interoperability); and (3) the system must produce reports utilizing various clinical and quality metrics.

Beginning in 2011, incentive payments were also available for eligible hospitals that showed meaningful use of EHRs and that submitted quality metrics based on criteria identified by the US Department of Health and Human Services. Incentive amounts were phased out in 2015 for hospitals that had not implemented a meaningful EHR. In addition, incentive payments were not available for hospitals that were not meaningful EHR users [13]. Despite the robust startup incentives offered by the federal government and continuing support provided via increased Medicare reimbursement, questions remain as to whether the investment in health care information technology is sustainable. The cost of adopting a comprehensive EHR system can surpass several billion dollars for a large health care system [14]. Sustaining these systems can cost several hundred million dollars annually, and numerous health care systems report significant implementation and sustainment cost overages [15,16]. Furthermore, despite the prevalent adoption of EHR systems since the passage of the HITECH Act, sharing of health care data and interoperability of information technology remain elusive. There remains little financial incentive to share and use data to reduce costs or improve the quality of care [17].

Thus, we seek to assess the association that investment in information technology and the dominant EHR platforms each have with the financial and quality outcomes of hospitals in the United States. We hypothesize that there is a difference in outcomes among the 3 vendors for each measure. Although extensive research has focused on the perceived and actual benefits of information technology in health care, this is an area of research that has not been fully evaluated. The body of literature predating the passage of the HITECH and Affordable Care Acts is fairly robust; however, with the passage of these legislative acts and the rate of change in technology, many of these studies are now outdated, particularly with respect to the direct impact of specific EHR vendors [18-20].

Literature Review

Two recent studies examined the relationships between health information technology capital expenditure and both financial and quality outcomes and aligned very closely with our work. Wang et al [21] examined the impact of investment in health information technology on hospital financial performance and productivity. In a later study, Wang and Gibbs [22] offered a framework to compare the performance of EHR systems. We intend to build on these studies in a few key areas [21,22].

First, in the 2018 and 2019 studies, the key financial outcome variable examined was return on assets (ROA). Net revenue per staffed bed was also considered in the 2018 study. Broadly speaking, financial performance can be assessed in 4 main areas: (1) profitability or return on investment, (2) liquidity, (3) leverage, and (4) operating efficiency. Within each category, there are several variables to consider. Although ROA and revenue per bed are important, we believe that greater operational clarity can be achieved with a specific focus on the income statement and net income.

Second, quality was evaluated in the 2019 study via the Hospital Value–Based Purchasing (HVBP) Total Performance Score (TPS). These quality measures are all important to the field; however, we view the level of utilization of beds as a suboptimal measure of quality that is not well supported in the literature. The number of times a bed turns over speaks more to the volume of services demanded but offers little insight with respect to the quality of those services. We intend to focus more on the dimensions of HVBP and its subdomains. The contributory factors that are evaluated to produce the TPS include patient perceptions of care and measures of patient safety, process of clinical care delivery, and efficiency and cost reduction.

Third, both the 2018 and 2019 studies included important market characteristic variables as controls. These included hospital size, market concentration index (MCI), payer mix (Medicare and Medicaid; ie, the percentage of revenue coming from each of those programs), uncompensated care cost, ownership (governmental, proprietary, and nonprofit), teaching status, geographic classification, and year fixed effects. We suggest that additional variables may be insightful, as several other factors have been shown to influence both financial and quality outcomes [23-25]. Additional control variables worth considering are the level of outpatient services rendered, urban versus rural location, average length of stay, case mix index, wage index, sole community provider status, system membership, and geographic region [26]. These variables can further clarify the strength of the association between the independent variable of interest and our targeted dependent variables, while also serving to diminish any possible omitted variable bias.

Fourth, the 2019 study considers various EHR systems but does not clearly identify which vendors perform better or worse based on the available data. To the authors’ credit, they did not want the research to be construed as the promotion of a particular vendor. However, in our view, the health care industry is in an era of evidence-based medicine and management. Thus, we believe that it is appropriate to identify the system in question. This more transparent approach, coupled with more specific outcome data that clearly identifies practitioner actionable evidence, should provide greater practical insight and facilitate improved organizational decision making.

In the following sections, we integrate and broaden the investigation of previous research efforts and evaluate the impact of 3 of the largest EHR providers’ performance on measures of finance and quality. Quite simply, we would like to determine which EHR system performs the best and seek to provide health care leaders with an additional evidentiary basis for making EHR adoption decisions. Given the variation in EHR system cost, options, ease of use, training requirements, and on-site and follow-up support, we recognize that this can be a highly complex decision. Although we hypothesize that there is a difference in performance among the 3 EHR vendors in terms of financial and quality performance, we are uncertain a priori where each system will perform the best on the evaluation measures we have selected.


Data and Sample

The data for this study were extracted from two primary sources: the Definitive Health Care database and the American Hospital Association (AHA) Annual Survey database for 2018. The Definitive Health Care database provided the dependent and independent variables of interest, in addition to most of the control variables for this study. The Definitive Health Care database compiles US hospital data sources including Medicare Cost Reports, commercial claims data, Medicare Standard Analytics Files, Centers for Medicare and Medicaid Services (CMS) Hospital Compare, and many other data elements [27]. The cost report contains provider information such as facility characteristics, utilization data, cost and charges by cost center (in total and for Medicare), Medicare settlement data, and financial statement data. The AHA Annual Survey database provided the remaining data for the geographic region control variables [28]. All variables were linked with the 2 contributing data sources based on the Medicare provider number. Data on a total of 2667 short-term acute care hospitals were accumulated for analysis.
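To make the linkage step concrete, the following is a minimal R sketch of merging the two data sources on the Medicare provider number; the data frame and column names are hypothetical stand-ins, not the actual extract names.

```r
# Minimal sketch (hypothetical object and column names): link the Definitive
# Healthcare extract to the AHA Annual Survey by Medicare provider number.
analytic_set <- merge(definitive_hc, aha_survey,
                      by = "medicare_provider_number", all.x = TRUE)

nrow(analytic_set)  # expected to equal the 2667 short-term acute care hospitals
```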

Measures—Dependent Variables

Table 1 shows the full complement of the study variables. Our study included 2 types of dependent variables drawn from the Definitive Health Care data set. The first set of data comes from the hospital income statement: net income scaled in millions of dollars. The second set of dependent variables is drawn from the hospitals’ 2018 value-based purchasing scores and includes (1) the TPS, (2) patient experience score, (3) clinical process score, (4) efficiency score, and (5) the safety score. Each hospital’s TPS is a weighted measure of performance based on each of the other areas listed, while each subordinate measure is an aggregation of several commonly tracked clinical and administrative criteria, as shown in Figure 1.
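Because the TPS aggregates the four equally weighted fiscal year 2018 domains listed in Table 1, a minimal illustration of that weighting is sketched below in R. The domain values are hypothetical, and CMS's actual scoring additionally involves achievement and improvement points, which this sketch does not reproduce.

```r
# Illustration only (not the CMS algorithm): an equally weighted composite of
# the four FY2018 HVBP domains, each weighted 25%, using hypothetical scores.
domain_scores <- c(clinical_care = 59, patient_experience = 33,
                   safety = 53, efficiency = 19)

tps_illustration <- sum(0.25 * domain_scores)
tps_illustration  # 41 for these hypothetical domain scores
```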

Under the CMS HVBP Program, Medicare makes incentive payments to hospitals based on how well they perform on each measure compared with other hospitals’ performance during a baseline period and how much they improve their performance on each measure compared with the baseline reporting period [29]. All value-based purchasing variables are unweighted, are based on a scale of 0 to 100 with higher scores being better, and have been assessed by CMS for validity and reliability [30]. The number of hospitals participating in value-based purchasing limited the size of the study sample. A missingness map using Amelia, a program for missing data developed by Honaker et al [31], revealed that approximately 1% of values were missing across the 2667 observations and 32 variables (k=32). The maximum proportion missing from any column was 12.5% and from any row was 10.1%; therefore, these values were conservatively imputed with the median using R Statistical Software [32].
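A minimal R sketch of the missing-data workflow described above follows; the data frame name is hypothetical, and the imputation mirrors the conservative median approach reported in the text.

```r
# Minimal sketch of the missing-data checks described above, assuming a data
# frame `hvbp_data` holding the 2667 observations and 32 study variables
# (hypothetical object name).
library(Amelia)

missmap(hvbp_data)               # visual missingness map
mean(is.na(hvbp_data))           # overall proportion missing (~1% reported)
max(colMeans(is.na(hvbp_data)))  # worst column (12.5% reported)
max(rowMeans(is.na(hvbp_data)))  # worst row (10.1% reported)

# Conservative median imputation for numeric columns with missing values
hvbp_imputed <- as.data.frame(lapply(hvbp_data, function(x) {
  if (is.numeric(x)) x[is.na(x)] <- median(x, na.rm = TRUE)
  x
}))
```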

Table 1. Variables and operational definitions.
Variable | Original source | Definition
Cerner | Definitive Health Care | Hospital using the Cerner EHRa as its primary EHR platform
Epic | Definitive Health Care | Hospital using the Epic EHR as its primary EHR platform
Meditech | Definitive Health Care | Hospital using the Meditech EHR as its primary EHR platform
TPSb | CMSc | The TPS is derived from 4 equally weighted domains in financial year 2018: clinical care; patient experience of care; safety; and efficiency and cost reduction
Patient experience score | CMS | Composite of 9 measures extracted from the Hospital Consumer Assessment of Healthcare Providers and Systems survey
Clinical care score | CMS | Composite of 3 mortality measures: acute myocardial infarction, heart failure, and pneumonia
Efficiency and cost reduction score | CMS | Medicare Spending Per Beneficiary
Safety score | CMS | Composite of 7 safety-related rates: catheter-associated urinary tract infections, central line-associated bloodstream infection, Clostridium difficile infection, methicillin-resistant Staphylococcus aureus, patient safety for selected indicators composite, elective delivery before 39 completed weeks gestation, and surgical site infections
For-profit status | Definitive Health Care | Hospitals operated by investor-owned organizations
Number of beds | Definitive Health Care | Number of staffed beds
Rural status | Definitive Health Care | Hospital located in a nonmetropolitan county or a hospital within a metropolitan county that is far away from the urban center, as defined by the Health Resources and Services Administration
Government status | Definitive Health Care | Hospitals operated by local, county, or state government
Teaching status | Definitive Health Care | Hospitals affiliated with universities, colleges, medical schools, or nursing schools
Outpatient service mix | Definitive Health Care | Percent of care delivered in an outpatient setting
Average length of stay | Definitive Health Care | The average number of days that patients spend in the hospital, measured by dividing the total number of days stayed by all inpatients during a year by the number of admissions or discharges
Case mix | Definitive Health Care | The case mix index is the average relative diagnosis related group weight of a hospital’s inpatient discharges, calculated by summing the Medicare Severity-Diagnosis Related Group weight for each discharge and dividing the total by the number of discharges
Government payer mix | Definitive Health Care | The proportion of hospital reimbursement from governmental sources (Medicare, Medicaid, TRICARE, etc)
Wage index | Definitive Health Care | A labor market area’s wage index value is the ratio of the area’s average hourly wage to the national average hourly wage
Sole community hospital | Definitive Health Care | A sole community hospital classified by specific criteria from CMS (distance from other like hospitals, rural location, travel time, number of beds, etc)
System member | Definitive Health Care | An entity that owns or has owned 2 or more hospitals. In addition, health systems may also maintain ownership of other postacute or ambulatory sites of care
Market concentration | Definitive Health Care | The Herfindahl-Hirschman Index measure of market concentration was used to determine market competitiveness. It is calculated by squaring the market share of each firm competing in a market and then summing the resulting numbers (see the sketch after this table)
Average age of facility | Definitive Health Care | Average age of facility is calculated using the accumulated depreciation (total depreciation) and the depreciation expense (depreciation over a single period)
Occupancy rate | Definitive Health Care | Measure of utilization calculated as (inpatient days of care / bed days available) × 100
Region | American Hospital Association | Regions of the United States as defined by the American Hospital Association

aEHR: electronic health record.

bTPS: total performance score.

cCMS: Centers for Medicare and Medicaid Services.
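As referenced in the market concentration row of Table 1, the Herfindahl-Hirschman Index can be computed directly from market shares; a minimal R sketch with hypothetical shares follows.

```r
# Minimal sketch: Herfindahl-Hirschman Index for one hypothetical market,
# using each hospital's share of the market (shares sum to 1).
market_shares <- c(0.40, 0.30, 0.20, 0.10)

hhi <- sum(market_shares^2)
hhi  # 0.30 for this hypothetical market; higher values mean more concentration
```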

Figure 1. Financial year 2018 Hospital Value–Based Purchasing program measures.

Measures—Independent and Control Variables

Our independent variables of interest included the top EHR systems used by hospitals in the United States (ie, Cerner, Epic, Meditech, or other), as reported in the Definitive Health Care database. These variables were included in our analysis as a dichotomous variable for each EHR system of interest (“1” if the system was used, “0” if not). Consistent with previous research, we also included several organizational-level control variables to account for other explanatory factors that could influence financial and quality outcomes. These variables included for-profit ownership status, number of beds, rural or urban geographic location, government ownership, teaching status, outpatient service mix, average length of stay, case mix, government payer percentage, wage index, sole community provider designation, system membership, MCI, occupancy rate, and geographic location by the AHA region. Collinearity diagnostics were performed for all analyses; the variance inflation factor did not exceed 10 in any of our analyses. Table 1 shows the data sources and operational definitions of the variables used in this study.
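The following minimal R sketch illustrates the dichotomous vendor coding (with “other” as the referent) and the collinearity check described above; the data frame, column names, and abbreviated model formula are hypothetical stand-ins for the full set of study variables.

```r
# Minimal sketch (hypothetical names): indicator variables for each EHR vendor,
# with "other" as the omitted referent, followed by a collinearity check.
library(car)  # provides vif()

hvbp_data$cerner   <- as.integer(hvbp_data$primary_ehr == "Cerner")
hvbp_data$epic     <- as.integer(hvbp_data$primary_ehr == "Epic")
hvbp_data$meditech <- as.integer(hvbp_data$primary_ehr == "Meditech")

fit_check <- lm(tps_scaled ~ cerner + epic + meditech + for_profit + beds +
                  rural + teaching + case_mix + wage_index,
                data = hvbp_data)

vif(fit_check)  # variance inflation factors; values below 10 are reassuring
```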


Overview

Descriptive statistics and pairwise correlations were calculated. The distributions of most of the dependent variables (income, TPS, experience score, clinical score, and safety) were relatively normal; however, the HVBP efficiency score was skewed to the right. Box-Cox analysis of the variable suggested a negative square root transformation, but for interpretability, it was not transformed. All dependent variables were min-max scaled between 0 and 1 for easier interpretation as percentile scores (eg, income percentile). All statistical analyses were performed using R Statistical Software [32]. In all analyses, a two-tailed P value <.05 was considered statistically significant.
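A minimal R sketch of the outcome scaling and regression setup described above is shown below; it assumes ordinary least squares and uses hypothetical object and variable names, with a deliberately abbreviated covariate list.

```r
# Minimal sketch (hypothetical names): min-max scale each dependent variable
# to the 0-1 range, then fit one linear model per outcome.
min_max <- function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}

hvbp_data$income_scaled <- min_max(hvbp_data$net_income)
hvbp_data$tps_scaled    <- min_max(hvbp_data$total_performance_score)

fit_tps <- lm(tps_scaled ~ cerner + epic + meditech + for_profit + beds +
                rural + teaching + case_mix + wage_index + region,
              data = hvbp_data)

summary(fit_tps)  # two-tailed P < .05 treated as statistically significant
```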

Table 2 provides detailed descriptive statistics for each variable. Participating hospitals had a mean net income of US $15.01 million (SD US $109.04 million); TPS mean of 37.26 (SD 11.16); patient experience mean of 33.36 (SD 18.07); clinical process score mean of 59.45 (SD 19.24); efficiency score mean of 19.39 (SD 24.75); and safety score mean of 53.01 (SD 17.74). In all cases, higher scores on the HVBP variables were better. EHR vendors are represented in the following proportions in our sample: Epic: 39.85% (1063/2667; SD 0.48); Cerner: 23.39% (624/2667; SD 0.42); Meditech: 19.98% (533/2667; SD 0.39); and “other”: 16.78% (447/2667; SD 0.39).

Among the numerous organizational characteristics included in our analysis as control variables, we observe that 18.00% (480/2667) of the hospitals in the study population are for-profit facilities (SD 0.38), 22.98% (613/2667) are in rural locations (SD 0.42), 45.97% (1226/2667) are teaching facilities (SD 0.49), 72.97% (1946/2667) are affiliated with a health care system (SD 0.45), and the hospitals are widely distributed across each of the AHA geographic regions.

Table 2. Descriptive statistics.
Variable | Values, mean (SD)
Net income (in millions; US $) | 15.01 (109.04)
Total performance score | 37.26 (11.17)
Patient experience score (unweighted) | 33.36 (18.07)
Clinical process score (unweighted) | 59.45 (19.24)
Efficiency score (unweighted) | 19.39 (24.75)
Safety score (unweighted) | 53.01 (17.74)
EHRa-Cerner | 0.23 (0.42)
EHR-Epic | 0.40 (0.48)
EHR-Meditech | 0.20 (0.40)
EHR-other | 0.17 (0.39)
For-profit | 0.18 (0.40)
Beds | 214.75 (185.47)
Rural | 0.23 (0.42)
Government | 0.13 (0.34)
Teaching | 0.47 (0.50)
Outpatient service mix | 0.53 (0.15)
Average length of stay | 4.30 (0.92)
Case mix index | 1.61 (0.28)
Government payer percent | 0.71 (0.11)
Wage index | 1.00 (0.20)
Sole community provider | 0.08 (0.28)
System member | 0.73 (0.45)
Market concentration index | 0.34 (0.33)
Occupancy rate | 0.57 (0.17)
Average age of facility | 12.95 (9.23)
Region 1b (Connecticut, Maine, New Hampshire, Rhode Island, and Vermont) | 0.04 (0.20)
Region 2 (New Jersey, New York, and Pennsylvania) | 0.12 (0.32)
Region 3 (Delaware, Kentucky, Maryland, North Carolina, Virginia, West Virginia, and Washington, DC) | 0.08 (0.28)
Region 4 (Alabama, Florida, Georgia, Mississippi, South Carolina, Tennessee, and Puerto Rico) | 0.17 (0.37)
Region 5 (Illinois, Michigan, Indiana, Ohio, and Wisconsin) | 0.17 (0.37)
Region 6 (Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, and South Dakota) | 0.08 (0.27)
Region 7 (Arkansas, Louisiana, and Texas) | 0.13 (0.35)
Region 8 (Arizona, Colorado, Idaho, Montana, New Mexico, Utah, and Wyoming) | 0.07 (0.26)
Region 9 (Alaska, California, Hawaii, Nevada, Oregon, and Washington) | 0.13 (0.34)

aEHR: electronic health record.

bThe representative geographical region is American Hospital Association Region 1 (Connecticut, Maine, New Hampshire, Rhode Island, and Vermont).

Net Income

Table 3 reflects the results of our regression analyses of hospitals’ utilization of the top 3 EHR vendors and the associated hospital financial performance as measured by net income. On the basis of our analysis of net income regressed on EHR vendors (R2=10.6%), we see no significant results for any of the vendors when compared with facilities that fall into the “other” category. Thus, we can say that, on average, and when controlling for numerous organizational factors, none of the EHRs is associated with favorable or unfavorable financial outcomes as measured by net income.

Table 3 also shows additional significant variables in our analysis that are associated with hospital net income, including the number of hospital beds (β=.0001, SE 0.0000; P<.001), hospital case mix (β=.0067, SE 0.0032; P=.04), hospital wage index (β=−.0200, SE 0.0059; P<.001), and several geographic variables. These findings indicate that with a point increase in hospital case mix, net income increases by 0.67%, and with each point increase in the hospital wage index, net income falls by 2%.

Table 3. Analysis results for net income and total performance score.
Variable | Net income (adjusted R2=10.64%): β (SE), P value | Total performance score (adjusted R2=23.61%): β (SE), P value
Intercept | .5591 (0.0139), <.001 | .4027 (0.0487), <.001
Cerner | .0007 (0.0023), —a | −.0021 (0.0080), —
Epic | −.0024 (0.0023), — | .0159 (0.0079), .04
Meditech | .0018 (0.0023), — | .0117 (0.0079), —
For-profit | .0011 (0.0021), — | −.0176 (0.6203), .02
Beds | .0001 (0.0000), <.001 | −.0045 (0.0019), <.001
Rural | −.0012 (0.0024), — | .0556 (0.0084), <.001
Government | −.0017 (0.0022), — | −.0125 (0.0076), —
Teaching | .0051 (0.0016), — | .0129 (0.0055), .02
Outpatient service mix | −.0092 (0.0073), — | .2137 (0.0254), <.001
Average length of stay | −.0015 (0.0009), — | −.0215 (0.0033), <.001
Case mix | .0067 (0.0032), .04 | .0254 (0.0111), .02
Government payer percent | −.0017 (0.0022), — | −.0125 (0.0076), —
Wage index | −.0200 (0.0059), <.001 | .0523 (0.0204), .01
Sole community provider | .0044 (0.0027), — | .0345 (0.0095), <.001
System member | .0002 (0.0018), — | .0061 (0.0061), —
Market concentration | .0014 (0.0028), — | −.0380 (0.0098), <.001
Average age of facility | .0000 (0.0001), — | .0000 (0.0003), —
Occupancy rate | .0042 (0.0053), — | .0042 (0.0183), —
Region 2b (New Jersey, New York, and Pennsylvania) | −.0071 (0.0039), — | −.0166 (0.0135), —
Region 3 (Delaware, Kentucky, Maryland, North Carolina, Virginia, West Virginia, and Washington, DC) | −.0077 (0.0043), — | .0070 (0.0151), —
Region 4 (Alabama, Florida, Georgia, Mississippi, South Carolina, Tennessee, and Puerto Rico) | −.0129 (0.0042), <.01 | −.0211 (0.0146), —
Region 5 (Illinois, Michigan, Indiana, Ohio, and Wisconsin) | −.0060 (0.0039), — | .0044 (0.0135), —
Region 6 (Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, and South Dakota) | −.0086 (0.0044), .05 | .0200 (0.0152), —
Region 7 (Arkansas, Louisiana, and Texas) | −.0132 (0.0043), <.01 | −.0187 (0.0150), —
Region 8 (Arizona, Colorado, Idaho, Montana, New Mexico, Utah, and Wyoming) | −.0057 (0.0045), — | −.0138 (0.0155), —
Region 9 (Alaska, California, Hawaii, Nevada, Oregon, and Washington) | −.0218 (0.0040), <.001 | .0167 (0.0140), —

aNot significant.

bReferent geographical region is Region 1 (Connecticut, Maine, New Hampshire, Rhode Island, and Vermont); referent electronic health record is “other.”

Total Performance Score

Table 3 also shows the results of our regression analyses of the association of EHRs with HVBP quality measures. On the basis of our analysis of TPS regressed on EHR vendors (R2=23.6%), only Epic showed statistically significant results (β=.0159, SE 0.0079; P=.04). These results indicate that the Epic EHR is associated with a 1.6% higher performance score when compared with facilities that fall into the “other” category. Neither Meditech nor Cerner was associated with significant results.

Table 3 further indicates several control variables that are significantly associated with TPS performance score. These variables include for-profit ownership (β=−.0176, SE 0.6203; P=.02), number of hospital beds (β=−.0045, SE 0.0019; P<.001), rural designation (β=.0556, SE 0.0084; P<.001), teaching designation (β=.0129, SE 0.0055; P=.02), outpatient service mix (β=.2137, SE 0.0254; P<.001), average length of stay (β=−.0215, SE 0.0033; P<.001), case mix (β=.0254, SE 0.0111; P=.02), wage index (β=.0523, SE 0.0204; P=.01), sole community provider designation (β=.0345, SE 0.0095; P<.001), and the MCI (β=−.0380, SE 0.0098; P<.001).

Among other findings, these results imply, on average, for-profit facilities perform 1.7% lower on the TPS measure. In addition, with each day increase in average length of stay, we observed a 2.2% decrease in TPS, and with each point increase in market concentration, we see an associated 3.8% decrease in TPS performance. However, with each point increase in the case mix index, wage index, and outpatient service mix, we observed a 2.5%, 5.2%, and 21.4% increase in TPS outcomes, respectively. Teaching hospitals are also associated with a 1.2% higher level of performance than nonteaching facilities.
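For readers tracing these percentages back to Table 3, a brief note on interpretation: because each dependent variable is min-max scaled to the 0-1 range, a coefficient multiplied by 100 is read as percentage points of the score range. A minimal sketch follows.

```r
# Example: the Epic coefficient for TPS reported in Table 3.
beta_epic_tps <- 0.0159
beta_epic_tps * 100  # about 1.6 percentage points higher than the "other" group
```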

Efficiency Score

In Table 4, we also show the evaluation of efficiency performance scores regressed on EHR vendors (R2=31.9%). Cerner (β=.0330, SE 0.0135; P=.01) and Epic (β=.0465, SE 0.0133; P<.001) were positively associated with efficiency quality scores, approximately 3.3% (Cerner) and 4.7% (Epic) higher than those of hospitals in the “other” category. Meditech was not associated with any significant results.

Table 4 also indicates several variables that are significantly associated with hospital efficiency. These variables include the number of hospital beds (β=−.0001, SE 0.0000; P=.002), rural designation (β=.0723, SE 0.0142; P<.001), outpatient service mix (β=.4831, SE 0.0429; P<.001), average length of stay (β=−.0233, SE 0.0055; P<.001), case mix (β=−.0698, SE 0.0188; P<.001), government payer percent (β=−.2486, SE 0.0375; P<.001), wage index (β=.1145, SE 0.0344; P<.001), sole community provider designation (β=.0888, SE 0.0160; P<.001), occupancy rate (β=.0656, SE 0.0310; P=.03), and several regional variables. This implies that, on average, there is a statistically significant 2.3% decrease in efficiency score with each day increase in average length of stay, a 6.9% decrease with each point increase in the case mix index, and a 24.9% decrease with each point increase in government payer percentage.

Table 4. Analysis results for efficiency score and patient experience score.
Variable | Efficiency score (adjusted R2=31.98%): β (SE), P value | Patient experience score (adjusted R2=29.3%): β (SE), P value
Intercept | .0012 (0.0822), —a | .3441 (0.0612), <.001
Cerner | .0330 (0.0135), .01 | −.0002 (0.0100), —
Epic | .0465 (0.0133), <.001 | .0292 (0.0099), .003
Meditech | .0161 (0.0133), — | .0157 (0.0099), —
For-profit | −.0119 (0.0125), — | −.0542 (0.0093), <.001
Beds | −.0001 (0.0000), .002 | −.0001 (0.0000), <.001
Rural | .0723 (0.0142), <.001 | .0517 (0.0105), <.001
Government | −.0065 (0.0065), — | .0044 (0.0095), —
Teaching | −.0050 (0.0093), — | .0270 (0.0069), <.001
Outpatient service mix | .4831 (0.0429), <.001 | .3091 (0.0320), <.001
Average length of stay | −.0233 (0.0055), <.001 | −.0251 (0.0041), <.001
Case mix | −.0698 (0.0188), <.001 | .0100 (0.0140), <.001
Government payer percent | −.2486 (0.0375), <.001 | .0044 (0.0095), <.001
Wage index | .1145 (0.0344), <.001 | −.0744 (0.0256), —
Sole community provider | .0888 (0.0160), <.001 | .0081 (0.0119), —
System member | .0138 (0.0103), — | −.0124 (0.0077), —
Market concentration | .0158 (0.0165), — | −.0586 (0.0123), <.001
Average age of facility | .0004 (0.0005), — | .0004 (0.0003), —
Occupancy rate | .0656 (0.0310), .03 | −.0618 (0.0230), .007
Region 2b (New Jersey, New York, and Pennsylvania) | .1062 (0.0227), <.001 | −.0649 (0.0169), <.001
Region 3 (Delaware, Kentucky, Maryland, North Carolina, Virginia, West Virginia, and Washington, DC) | .1627 (0.0254), <.001 | −.0522 (0.0189), .005
Region 4 (Alabama, Florida, Georgia, Mississippi, South Carolina, Tennessee, and Puerto Rico) | .0949 (0.0247), <.001 | −.0255 (0.0184), —
Region 5 (Illinois, Michigan, Indiana, Ohio, and Wisconsin) | .0708 (0.0228), .002 | −.0254 (0.0170), —
Region 6 (Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, and South Dakota) | .2219 (0.0257), <.001 | −.0472 (0.0191), .01
Region 7 (Arkansas, Louisiana, and Texas) | .0481 (0.0254), — | −.0066 (0.0189), —
Region 8 (Arizona, Colorado, Idaho, Montana, New Mexico, Utah, and Wyoming) | .1702 (0.0262), <.001 | −.0977 (0.0195), <.001
Region 9 (Alaska, California, Hawaii, Nevada, Oregon, and Washington) | .2693 (0.0237), <.001 | −.0740 (0.0176), <.001

aNot significant.

bReferent geographical region is Region 1 (Connecticut, Maine, New Hampshire, Rhode Island, and Vermont); referent electronic health record is “other.”

Patient Experience Score

Table 4 provides the final analysis results of our evaluation of hospitals’ patient experience performance scores regressed on EHR vendors. Epic was positively associated with patient perceptions of quality, with scores 2.9% higher than those of hospitals in the “other” category (R2=29.3%; β=.0292, SE 0.0099; P=.003).

Table 4 provides additional insight pertaining to the significant association between the control variables included in our study and patient experience scores. These variables include for-profit ownership (β=−.0542, SE 0.0093; P<.001), number of beds (β=−.0001, SE 0.0000; P<.001), rural status (β=.0517, SE 0.0105; P<.001), teaching (β=.0270, SE 0.0069; P<.001), outpatient service mix (β=.3091, SE 0.0320; P<.001), average length of stay (β=−.0251, SE 0.0041; P<.001), case mix (β=.0100, SE 0.0140; P<.001), government payer percent (β=.0044, SE 0.0095; P<.001), market concentration (β=−.0586, SE 0.0123; P<.001), occupancy rate (β=−.0618, SE 0.0230; P=.007), and several geographic regions.

These results imply that, on average, the for-profit hospitals in our study scored 5.4% lower on the HVBP patient experience scores. In addition, for each additional day in the hospital, the patient experience scores decreased by 2.5%. Each point increase in market concentration and occupancy rate also reduces patient experience by 5.9% and 6.2%, respectively. Conversely, rural and teaching hospitals are associated with higher patient experience, with associated increased scores of 5.2% and 2.7%, respectively.

Patient Safety Score

Table 5 provides insight into our research on patient safety performance scores regressed on EHR vendors (R2=24.3%). Epic (β=−.0221, SE 0.0102; P=.03) was negatively associated with patient safety quality scores, which were 2.2% lower than those of hospitals in the “other” category. Meditech and Cerner scores were not associated with significant results.

Table 5 also provides details regarding the significant associations between the control variables included in our study and patient safety scores. These variables include the number of hospital beds (β=−.0002, SE 0.0000; P<.001), rural status (β=.0420, SE 0.0109; P<.001), teaching (β=.0168, SE 0.0071; P=.02), outpatient service mix (β=.0965, SE 0.0330; P=.003), average length of stay (β=−.0143, SE 0.0042; P<.001), case mix (β=−.0812, SE 0.0144; P<.001), government payer percent (β=.0746, SE 0.0288; P=.009), and occupancy rate (β=−.0919, SE 0.0238; P<.001). These results indicate that patient safety is negatively impacted by 1.4% for every day increase in average length of stay and also declines by 8.1% for every point increase in the case mix index. Furthermore, with each percent increase in the hospital occupancy rate, patient safety scores declined by 9.2%. Conversely, patient safety scores were positively associated with rural and teaching hospitals by 4.2% and 1.7%, respectively. In addition, with each percentage increase in government payments, patient safety scores improved by 7.5%.

Table 5. Analysis results for patient safety score and clinical process score.
Variable | Patient safety score (adjusted R2=24.35%): β (SE), P value | Clinical process score (adjusted R2=4.19%): β (SE), P value
Intercept | .8251 (0.0632), <.001 | .5974 (0.0758), <.001
Cerner | −.0182 (0.0104), —a | .0284 (0.0124), .02
Epic | −.0221 (0.0102), .03 | .0389 (0.0122), .002
Meditech | −.0004 (0.0103), — | .0274 (0.0123), .03
For-profit | .0022 (0.0096), — | .0512 (0.0116), <.001
Beds | −.0002 (0.0000), <.001 | −.0001 (0.0000), .02
Rural | .0420 (0.0109), <.001 | .0028 (0.0131), —
Government | −.0083 (0.0098), — | −.0264 (0.0118), .03
Teaching | .0168 (0.0071), .02 | .0167 (0.0086), .05
Outpatient service mix | .0965 (0.0330), .003 | .0182 (0.0396), —
Average length of stay | −.0143 (0.0042), <.001 | −.0094 (0.0051), —
Case mix | −.0812 (0.0144), <.001 | .0223 (0.0173), —
Government payer percent | .0746 (0.0288), .009 | −.0839 (0.0346), .02
Wage index | −.0314 (0.0264), — | .0013 (0.0317), —
Sole community provider | .0237 (0.0123), — | .0146 (0.0148), —
System member | .0082 (0.0079), — | .0329 (0.0095), <.001
Market concentration | −.0174 (0.0127), — | .0323 (0.0152), .03
Average age of facility | −.0003 (0.0004), — | .0009 (0.0004), .04
Occupancy rate | −.0919 (0.0238), <.001 | −.0057 (0.0286), —
Region 2b (New Jersey, New York, and Pennsylvania) | .0051 (0.0174), — | −.0014 (0.0210), —
Region 3 (Delaware, Kentucky, Maryland, North Carolina, Virginia, West Virginia, and Washington, DC) | .0150 (0.0196), — | .0212 (0.0235), —
Region 4 (Alabama, Florida, Georgia, Mississippi, South Carolina, Tennessee, and Puerto Rico) | −.0056 (0.0190), — | .0156 (0.0228), —
Region 5 (Illinois, Michigan, Indiana, Ohio, and Wisconsin) | .0200 (0.0175), — | .0275 (0.0210), —
Region 6 (Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, and South Dakota) | −.0013 (0.0197), — | −.0171 (0.0237), —
Region 7 (Arkansas, Louisiana, and Texas) | .0055 (0.0195), — | −.0126 (0.0234), —
Region 8 (Arizona, Colorado, Idaho, Montana, New Mexico, Utah, and Wyoming) | −.0023 (0.0201), — | −.0457 (0.0242), —
Region 9 (Alaska, California, Hawaii, Nevada, Oregon, and Washington) | .0142 (0.0182), — | .0181 (0.0219), —

aNot significant.

bReferent geographical region is Region 1 (Connecticut, Maine, New Hampshire, Rhode Island, and Vermont); referent electronic health record is “other.”

Clinical Care Performance Score

Finally, Table 5 shows the results of our evaluation of clinical care performance scores regressed on EHR vendors. All 3 vendors were positively associated with performance, with Epic (β=.0388, SE 0.0122; P=.002), Cerner (β=.0283, SE 0.0124; P=.02), and Meditech (β=.0273, SE 0.0123; P=.03) reflecting clinical care performance scores between 2.7% (Meditech) and 3.8% (Epic) higher than those of hospitals in the “other” category. However, on this dependent variable, we recognize that the explanatory power of the regressors is very low (R2=4.2%).

Table 5 also provides insight into the association between the control variables in our study and clinical process outcomes. The statistically significant variables in our analysis included for-profit status (β=.0512, SE 0.0116; P<.001), number of hospital beds (β=−.0001, SE 0.0000; P=.02), government operated (β=−.0264, SE 0.0118; P=.03), teaching (β=.0167, SE 0.0086; P=.05), government payer percentage (β=−.0839, SE 0.0346; P=.02), system membership (β=.0329, SE 0.0095; P<.001), market concentration (β=.0323, SE 0.0152; P=.03), and the average age of the facility (β=.0009, SE 0.0004; P=.04). These results indicate that government-operated hospitals are associated with 2.6% lower clinical process scores, and for each point increase in government payer percentage, there is also an 8.3% lower score. However, for-profit, teaching, and system-owned hospitals appear to perform better on this measure by 5.1%, 1.6%, and 3.3%, respectively. Hospitals in concentrated markets also appear to perform better than those in less concentrated markets. With each point increase in market concentration, we observed an increase of 3.2%.


Principal Findings

In general, our findings were insightful regarding the performance of individual EHR vendors. We did not expect to see a clearly superior choice of EHR vendor with respect to performance as defined by our financial and quality-focused dependent variables. Indeed, we did not find a single EHR that outperforms all other competitors across all of our study measures.

Our findings pertaining to financial outcomes were somewhat interesting in that no single EHR demonstrated a significant and positive association with net income. In most instances, the capital allocation process is predicated on reasonable assurance that there will be some tangible return on investment over a reasonable amount of time. As EHR adoption is a major capital investment and requires a major organizational change and extensive training, some have argued that it may take the organization a few years to see its effect on financial performance. For instance, Collum et al [20] found a statistically significant improvement in the total margin 2 years after EHR adoption in hospitals. However, the authors attributed the observed effect more to HITECH Act incentive payments than operational improvements, primarily because the authors found no significant association with operating margin. Thus, these previous authors’ observations, coupled with our own, continue to indicate that EHR return on investment remains an unsettled matter.

In our evaluation of TPS as a dependent variable, we note that Epic was the singular EHR with a positive and significant association with improved scoring. Given that the TPS is a composite of the other variables in our analysis, it prompted us to examine each of the subdomains’ performance scores more closely. Although we did not see a positive association for Epic in improved financial performance, this EHR recorded positive and significant associations in clinical care, patient experience, and efficiency scoring. Thus, when considered together, this combination of scores across these 3 quality subdomains appears to provide a performance advantage to Epic and a positive association with TPS scoring.

However, Epic also demonstrated a statistically significant and negative association with patient safety, which was not observed in our other vendors’ performance. Although we did not expect to observe a significant and negative association between any EHR vendors and patient safety scoring, upon further research on this topic, we note that our findings appear to be consistent with several previous researchers’ results. Bowman [33] captures these areas of concern very well in her synthesis of 64 studies and papers highlighting numerous areas where EHRs can impose undue burdens on health care providers and introduce the possibility of errors. These include the potential for recording erroneous data entry leading to patient safety hazards, system design flaws, improper system use, inappropriate document capture, erroneous application of copy and paste functions within the medical record, rigid application of prepopulated templates, and errors related to clinical decision support systems such as alert fatigue [33]. In recent years, others have pointed to potential problems with EHRs as a vector for increased risk to patient safety with respect to incorrect use [34,35], malfunctions [36,37], interoperability or system interaction [38], and health information technology blackouts or downtime [36,39]. On the basis of our findings, we can reasonably assume that many of these issues persist.

Our research group discussed the possible reasons for these results. In many ways, our results are supported by independent research. The top vendors that we studied are often highlighted in KLAS Research reports as best in class, most user friendly, and best at holding buyers’ attention [40]. User friendliness is one variable that could contribute to the software’s success on efficiency measures. It is possible that all 3 vendors are equally capable of achieving similar efficiency scores, but that users who are more familiar with the system’s functions and more willing to explore beyond basic user training render Epic more effective in the areas that make up the TPS. One could reasonably assume that the number of years since the hospital adopted the EHR system, and also the stage of adoption, might have had an effect. Hospitals that have adopted the EHR over several years may be more efficient than hospitals that only recently adopted or changed their EHR system. This could have an impact on levels of customer service, capability to integrate modules together, onboarding processes relating to the EHR, and the organization’s capacity to facilitate initial and ongoing training.

Finally, another factor contributing to the success of one vendor over the others could be ownership. Epic and Meditech have proudly and defiantly maintained their private status. This factor could make a vendor more agile in its software development life cycle, enabling it to customize to order or correct flaws rapidly.

Practice Implications

Those involved in purchasing decisions surrounding EHRs should carefully consider areas of focus in the facility, capital expenditure cycles, strategic direction, and the willingness of the organization to change EHR vendors. The significant decision to switch EHR vendors is a complex process, and many factors need to be aligned to set up the organization for success. Our research may provide an evidentiary basis for vendor selection. For example, an organization that struggles with improving clinical care performance might consider Epic or Meditech, with an understanding that there are numerous other factors that might influence outcomes. A similar choice might be considered for facilities desiring to improve patient experience. However, if patient experience or efficiency—in terms of Medicare Spending Per Beneficiary—is an area of weakness, Epic might be a preferred choice. Ultimately, an organization using a vendor other than these 3 might look at the features that these vendors offer that their vendor does not. Is there something their current vendor could offer to increase measures of efficiency? Regrettably, our research does not extend to the module level of the EHR, so we cannot make any recommendations in that regard.

Limitations and Recommendations for Future Research

Our study had several limitations. First, our analysis relies on a single year of data (2018) pertaining to performance within only short-term acute care HVBP-participating facilities. Future studies should consider examining the growth or decline of EHR influence on these outcomes over time. Furthermore, as a single-year study, our analysis also does not account for any changes in EHR systems during the delay between the baseline and performance reporting periods from which HVBP scores were determined, nor do we include the length of time the EHR has been in place within the hospitals studied. A more in-depth paired analysis could be considered to match the EHR system with the exact time frame of performance. Additional financial and quality outcome variables might also be considered, which could broaden the study’s unit of observation beyond the HVBP constraint. Finally, as more granular data become available pertaining to the specific modules in use at the hospital level, future studies might examine how specific module use is associated with specific clinical outcomes.

Beyond the extensive implementation of EHRs, health care providers, hospitals, and health care facilities invest heavily in other forms of information technology to provide and enhance care delivery. As the industry progresses toward value-based care, organizations are increasingly investing in imaging, telehealth, precision medicine, artificial intelligence, cloud-based computing or data storage, consumer-facing technologies, and disease management technologies. Furthermore, in the days since the start of the COVID-19 pandemic, telemedicine has been showcased as an indispensable capability of the EHR. Another aspect that should be evaluated in the future is the telemedicine capabilities of these vendors. Future research should carefully examine care delivered through this modality and compare the outcomes across the top vendors.

Conclusions

The return on investment and outcomes associated with EHRs have been a topic of intense focus and debate over the past 2 decades. Up to this point, a research gap has persisted pertaining to the study and transparent disclosure of comparative studies of major EHR vendors. In our analysis of the big 3 vendors—Epic, Cerner, and Meditech—we endeavored to fill that gap. Yet, we can see that clearly answering which system performs the best is complex. The implementation of any of these products can take years, and success is not guaranteed. However, our findings may provide some clarity to health care leaders seeking to develop an evidence base to support future capital investment in EHR systems. If an organization is already considering a switch to a new EHR vendor, can devote sufficient funding to such an undertaking, and has leadership willing to lead such a large organizational change, then our study may provide some points of clarity pertaining to the big 3 vendors that might be apt for consideration.

Conflicts of Interest

None declared.

  1. Morse S. US health IT investment skyrockets to $7.1 billion. Healthcare IT news. 2018.   URL: https://www.healthcareitnews.com/news/us-health-it-investment-skyrockets-71-billion [accessed 2020-04-30]
  2. Muchmore S. Hospital profitability down as operators lack flexibility to cut costs, Kaufman Hall says. Healthcare Dive.   URL: https:/​/www.​healthcaredive.com/​news/​hospital-profitability-down-as-operators-lack-flexibility-to-cut-costs-kau/​559705/​ [accessed 2020-04-30]
  3. Kacik A. Health systems weigh return on investment as they ramp up tech. Modern Healthcare.   URL: https://www.modernhealthcare.com/operations/health-systems-weigh-return-investment-they-ramp-up-tech [accessed 2020-04-30]
  4. Anderson GF, Frogner BK, Johns RA, Reinhardt UE. Health care spending and use of information technology in OECD countries. Health Aff (Millwood) 2006 May;25(3):819-831. [CrossRef] [Medline]
  5. Alvarez K, Goldfarb NI. US health system performance: a national scorecard. Am J Med Qual. 2007.   URL: https:/​/go.​gale.com/​ps/​anonymous?id=GALE%7CA157943700&sid=googleScholar&v=2.​1&it=r&linkaccess=abs&issn=10628606&p=AONE&sw=w [accessed 2021-03-16]
  6. Bush GW. Executive order 13335: incentives for the use of health information technology and establishing the position of the national health information technology coordinator. Federal Register. Washington D.C: Government Publishing Office; 2004.   URL: http://edocket.access.gpo.gov/cfr_2005/janqtr/pdf/3CFR13335 [accessed 2020-04-30]
  7. Schilling B. The federal government has put billions into promoting electronic health record use: how is it going? The Commonwealth Fund. The Commonwealth Fund.: The Commonwealth Fund; 2015.   URL: https:/​/www.​commonwealthfund.org/​publications/​newsletter-article/​federal-government-has-put-billions-promoting-electronic-health [accessed 2020-04-30]
  8. Lukaszewski M. A history of health information technology and the future of interoperability. Bulletin of the American College of Surgeons.: American College of Surgeons; 2017.   URL: https:/​/bulletin.​facs.org/​2017/​11/​a-history-of-health-information-technology-and-the-future-of-interoperability/​# [accessed 2020-04-30]
  9. Henry J, Pylypchuk Y, Searcy T, Patel V. Adoption of electronic health record systems among non-federal acute care hospitals: 2008-2015. HealthIT.gov.: Department of Health and Human Services   URL: https:/​/dashboard.​healthit.gov/​evaluations/​data-briefs/​non-federal-acute-care-hospital-ehr-adoption-2008-2015.​php [accessed 2020-04-30]
  10. Arndt R. No end in sight: EHRs hit hospitals' bottom lines with uncertain benefits. Modern Healthcare. 2018.   URL: https:/​/www.​modernhealthcare.com/​article/​20181013/​NEWS/​181019945/​no-end-in-sight-ehrs-hit-hospitals-bottom-lines-with-uncertain-benefits [accessed 2020-04-30]
  11. Adler-Milstein J, Jha AK. HITECH Act drove large gains in hospital electronic health record adoption. Health Aff (Millwood) 2017 Aug 01;36(8):1416-1422. [CrossRef] [Medline]
  12. HITECH Act of 2009, 42 USC sec 139w-4(0)(2) sec 13301, subtitle B: incentives for the use of Health Information Technology (2009). Federal Register. Washington, DC: Government Publishing Office; 2009.   URL: https://www.hhs.gov/sites/default/files/ocr/privacy/hipaa/understanding/coveredentities/hitechact.pdf [accessed 2020-04-29]
  13. Hudock R, Wagner P. Analysis of the HITECH Act's incentives to facilitate adoption of Health Information Technology. Epstein Becker Green. 2009.   URL: https://www.ebglaw.com/content/uploads/2014/06/28043_ClientAlertHITECH.pdf [accessed 2020-04-30]
  14. Unpacking hospitals' EHR implementation costs: what's behind the million-dollar price tags? Beckers Health IT. 2016.   URL: https:/​/www.​beckershospitalreview.com/​healthcare-information-technology/​unpacking-hospitals-ehr-implementation-costs-what-s-behind-the-million-dollar-price-tags.​html [accessed 2020-04-30]
  15. Cohen JK. 10 EHR implementations with the biggest price tags in 2017. Becker's Health IT. 2017.   URL: https:/​/www.​beckershospitalreview.com/​ehrs/​10-ehr-implementations-with-the-biggest-price-tags-in-2017.​html [accessed 2020-04-30]
  16. Resneck J. Report of the board of trustees impact of high capital costs of hospital EHRs on the medical staff. 2018.   URL: https://www.ama-assn.org/system/files/2019-04/a19-bot32.pdf [accessed 2020-04-30]
  17. Marchibroda J. Health Policy Brief. Interoperability. Health Affairs. 2014.   URL: https://www.healthaffairs.org/do/10.1377/hpb20140811.761828/full/healthpolicybrief_122.pdf [accessed 2020-04-30]
  18. Schmitt KF, Wofford DA. Financial analysis projects clear returns from electronic medical records. Healthc Financ Manage 2002 Jan;56(1):52-57. [Medline]
  19. Menachemi N, Burkhardt J, Shewchuk R, Burke D, Brooks RG. Hospital information technology and positive financial performance: a different approach to finding an ROI. J Healthc Manag 2006;51(1):40-58. [CrossRef]
  20. Collum TH, Menachemi N, Sen B. Does electronic health record use improve hospital financial performance? Evidence from panel data. Health Care Manage Rev 2016;41(3):267-274. [CrossRef]
  21. Wang T, Wang Y, McLeod A. Do health information technology investments impact hospital financial performance and productivity? Int J Account Inform Syst 2018 Mar;28:1-13. [CrossRef]
  22. Wang T, Gibbs D. A framework for performance comparison among major electronic health record systems. Perspect Health Inf Manag 2019;16(Fall):1h [FREE Full text] [Medline]
  23. Beauvais B, Richter JP, Kim FS, Palmer EL, Spear BL, Turner RC. A reason to renovate: the association between hospital age of plant and value-based purchasing performance. Health Care Manage Rev 2018 Oct 31;46(1):66-74. [CrossRef]
  24. Beauvais B, Richter JP, Kim FS. Doing well by doing good: evaluating the influence of patient safety performance on hospital financial outcomes. Health Care Manage Rev 2019;44(1):2-9. [CrossRef] [Medline]
  25. Beauvais B, Richter JP, Kim FS, Sickels G, Hook T, Kiley S, et al. Does patient safety pay? Evaluating the association between surgical care improvement project performance and hospital profitability. J Healthc Manag 2019;64(3):142-154. [CrossRef] [Medline]
  26. Kruse CS, DeShazo J, Kim F, Fulton L. Factors associated with adoption of health information technology: a conceptual model based on a systematic review. JMIR Med Inform 2014 May 23;2(1):e9 [FREE Full text] [CrossRef] [Medline]
  27. Definitive Healthcare (2020).   URL: https://www.defhc.com/home [accessed 2020-04-30]
  28. American Hospital Association. 2020.   URL: https://www.ahadata.com/ [accessed 2020-04-30]
  29. Hospital value based purchasing. Centers for Medicare & Medicaid Services. 2017.   URL: https:/​/www.​cms.gov/​Outreach-and-Education/​Medicare-Learning-Network-MLN/​MLNProducts/​downloads/​Hospital_VBPurchasing_Fact_Sheet_ICN907664.​pdf [accessed 2020-04-30]
  30. Medicare program; hospital inpatient value-based purchasing program. Final rule. Fed Regist 2011 May 06;76(88):26490-26547 [FREE Full text] [Medline]
  31. Honaker J, King G, Blackwell M. Amelia II: A program for missing data. J Stat Softw. 2011.   URL: https://www.jstatsoft.org/article/view/v045i07 [accessed 2021-03-18]
  32. R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. 2017.   URL: https://www.R-project.org/ [accessed 2020-04-30]
  33. Bowman S. Impact of electronic health record systems on information integrity: quality and safety implications. Perspect Health Inf Manag 2013;10:1c [FREE Full text] [Medline]
  34. Tsou AY, Lehmann CU, Michel J, Solomon R, Possanza L, Gandhi T. Safe practices for copy and paste in the EHR. Appl Clin Inform 2017 Dec 20;26(01):12-34. [CrossRef]
  35. Taieb-Maimon M, Plaisant C, Hettinger AZ, Shneiderman B. Increasing recognition of wrong-patient errors through improved interface design of a computerized provider order entry system. Int J Hum Comput Interact 2017 Sep 05;34(5):383-398. [CrossRef]
  36. Pradhan H, Stokes J. Does your electronic health record system introduce patient safety risks? Washington Patient Safety Coalition. 2015.   URL: http://old.wapatientsafety.org/wp-content/uploads/2015/05/EHRFinal-White-PaperApril-2015.pdf [accessed 2020-04-30]
  37. Wright A, Hickman TTT, McEvoy D, Aaron S, Ai A, Andersen JM, et al. Analysis of clinical decision support system malfunctions: a case series and survey. J Am Med Inform Assoc 2016 Nov;23(6):1068-1076 [FREE Full text] [CrossRef] [Medline]
  38. Adams KT, Howe JL, Fong A, Puthumana JS, Kellogg KM, Gaunt M, et al. An analysis of patient safety incident reports associated with electronic health record interoperability. Appl Clin Inform 2017 Dec 21;08(02):593-602. [CrossRef]
  39. Sax U, Lipprandt M, Röhrig R. The rising frequency of IT blackouts indicates the increasing relevance of IT emergency concepts to ensure patient safety. Yearb Med Inform 2018 Mar 06;25(01):130-137. [CrossRef]
  40. US Hospital EMR Market Share 2020. KLAS Research. 2020.   URL: https://klasresearch.com/report/us-hospital-emr-market-share-2020/1616 [accessed 2020-07-08]


AHA: American Hospital Association
ARRA: American Recovery and Reinvestment Act
CMS: Centers for Medicare and Medicaid Services
EHR: electronic health record
HITECH: Health Information Technology for Economic and Clinical Health
HVBP: Hospital Value–Based Purchasing
MCI: market concentration index
ROA: return on assets
TPS: total performance score


Edited by R Kukafka; submitted 29.08.20; peer-reviewed by A Vagelatos, KM Kuo; comments to author 18.09.20; revised version received 30.09.20; accepted 02.02.21; published 14.04.21

Copyright

©Bradley Beauvais, Clemens Scott Kruse, Lawrence Fulton, Ramalingam Shanmugam, Zo Ramamonjiarivelo, Matthew Brooks. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 14.04.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.