Published in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/55121.

Evaluation and Comparison of Academic Impact and Disruptive Innovation Level of Medical Journals: Bibliometric Analysis and Disruptive Evaluation

Authors of this article:

Yuyan Jiang1; Xue-li Liu1,2; Zixuan Zhang1; Xinru Yang1

Original Paper

1Henan Research Center for Science Journals, Xinxiang Medical University, Xinxiang, China

2Faculty of Humanities & Social Sciences, Xinxiang Medical University, Xinxiang, China

Corresponding Author:

Xue-li Liu, BM

Faculty of Humanities & Social Sciences

Xinxiang Medical University

Library and Information Building, 2nd Fl.

No. 601, Jinsui Avenue, Hongqi District

Xinxiang, 450003

China

Phone: 86 1 383 736 0965

Email: liueditor03@163.com


Background: As important platforms for researchers to present their academic findings, medical journals have an evaluation orientation that is closely tied to the value orientation of the research they publish. However, no study has yet examined the differences between the academic impact and the level of disruptive innovation of medical journals.

Objective: This study aims to examine the relationships and differences among the academic impact, disruptive innovation levels, and peer review results of medical journals and published research papers. We also analyzed the similarities and differences in the impact evaluations, disruptive innovation evaluations, and peer reviews of different types of medical research papers, as well as the underlying reasons for them.

Methods: General and internal medicine journals indexed in the Science Citation Index Expanded (SCIE) in 2018 were chosen as the study object. We explored the differences in the academic impact and level of disruptive innovation of these journals based on the OpenCitations Index of PubMed open PMID-to-PMID citations (POCI) and H1 Connect databases, respectively, and compared the results with those of peer review.

Results: First, the correlation coefficients of the Journal Disruption Index (JDI) with the Journal Cumulative Citation for 5 years (JCC5), Journal Impact Factor (JIF), and Journal Citation Indicator (JCI) were 0.677, 0.585, and 0.621, respectively. The correlation coefficient of the absolute disruption index (Dz) with the Cumulative Citation for 5 years (CC5) was 0.635. However, the average difference between the disruptive innovation and academic impact rankings of journals reached 20 places (about 17.5%), and the average difference between the disruptive innovation and impact rankings of research papers reached about 2700 places (about 17.7%). These differences reflect the essential difference between the two evaluation systems. Second, the top 7 journals selected based on JDI, JCC5, JIF, and JCI were the same, and all of them were H-journals; 8 (8/15, 53%), 96 (96/150, 64%), and 880 (880/1500, 58.67%) of the top 0.1%, top 1%, and top 10% papers selected based on Dz and CC5, respectively, were the same. Third, research papers with the "changes clinical practice" tag showed only moderate innovation (4.96) and impact (241.67) levels but had high levels of peer-reviewed recognition (6.00) and attention (2.83).

Conclusions: The results show that research evaluation based on innovation indicators is detached from the traditional impact evaluation system. The 3 evaluation systems (impact evaluation, disruptive innovation evaluation, and peer review) are highly consistent only for authoritative journals and top papers. Neither a single impact indicator nor a single innovation indicator can directly reflect the impact of medical research on clinical practice. How to establish an integrated, comprehensive, scientific, and reasonable journal evaluation system to improve the existing evaluation system of medical journals still needs further research.

J Med Internet Res 2024;26:e55121

doi:10.2196/55121

Introduction

Scientific and technical journals play a crucial role in showcasing research findings, and the value orientation of their published results is closely intertwined with their evaluation orientation. Since Garfield [1] proposed in 1972 that citation analysis can be used as an evaluation tool for journals, journal evaluation based on academic impact has become mainstream. However, relying too heavily on impact indicators may harm academic research and disciplinary progress.

On the one hand, scholars have long pointed out that ranking journals by their impact factors is not exact and may lead to misleading conclusions [2,3]. Meanwhile, many academic journals and publishers have engaged in strategic self-citation, leading to an overinflated journal impact factor (JIF) [4], and some editorial behaviors intended to boost the JIF clearly violate academic norms [5]. Some scholars overcite each other's work to enhance their academic impact [6]. External contingencies can also distort citation indicators [7]. Scientists themselves hold mixed attitudes toward impact factors [8].

On the other hand, despite all the benefits that increased academic impact brings to journals, there is a nonnegligible problem in journal evaluation: citation indicators essentially characterize the impact of journals rather than their disruptive innovation [9]. Relevant studies have confirmed that the level of disruptive innovation in scientific research is declining [10] and that progress in various disciplines is slowing [11]. This is often overlooked against the background of impact-only evaluation. Therefore, despite the urgent need for disruptive innovation in science [12], impact-based journal rankings have made it more difficult for novel results to be accepted [13], replacing the "taste of science" with the "taste of publication" [14].

The evaluation of academic journals concerns not only the journals themselves [15] but also the wide use of the evaluation results in academic review, promotion, and tenure decisions [16]. Meanwhile, the quality and results of medical research directly affect human health and well-being. Therefore, the general and internal medicine journals indexed in Science Citation Index Expanded (SCIE) in 2018 were chosen as the study object. The OpenCitations Index of PubMed open PMID-to-PMID citations (POCI) and H1 Connect databases were selected as the sources of citation relationship data and peer review data. We investigated the connections and contrasts between the academic impact, disruptive innovation level, and peer review results of medical journals and published research papers. We also analyzed the similarities and differences, as well as the fundamental causes, of the varying evaluation results in terms of impact evaluation, disruptive innovation, and peer review for various types of medical research papers. We aimed to provide a reference for the correct understanding of the innovation level of the results published by journals; the scientific and reasonable evaluation of medical journals; and the construction of a scientific, objective, and fair academic evaluation system.


Methods

Research Object

Because review articles rarely contain disruptive innovation, this paper focuses only on the general and internal medicine journals indexed in SCIE in 2018 and the research papers they published. In addition, considering computational efficiency, accuracy, and the difficulty of data acquisition, research papers whose citation relationships were not included in POCI were excluded, as were papers with too few references (fewer than 10). Finally, 114 journals were retained at the journal level, and 15,206 research papers were retained at the paper level.

Data Resource

The data acquired in this study included journal information, literature information, citation relationship data, and peer review data. The data were obtained through the Journal Citation Reports (JCR), Web of Science (WoS), POCI, and H1 Connect databases.

Of these databases, POCI is a Resource Description Framework (RDF) data set containing details of all the citations from publications bearing PMIDs to other PMID-identified publications, harvested from the National Institutes of Health Open Citations Collection. POCI covers more than 29 million bibliographic resources and more than 717 million citation links (as of the last update in January 2023). Citations in POCI are treated not as simple links but as data entities. This means that descriptive attributes can be assigned to each citation, such as the date of its creation, its time span, and its type [17].
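As a concrete illustration (not the authors' code), such a dump can be indexed into citation lookups along the following lines; the file name is hypothetical, and the citing/cited columns follow the OpenCitations CSV convention:

```python
import csv
from collections import defaultdict

# Hypothetical file name; POCI dumps are distributed as CSV archives.
POCI_FILE = "poci_citations.csv"

citations_of = defaultdict(set)   # cited PMID  -> set of citing PMIDs
references_of = defaultdict(set)  # citing PMID -> set of cited PMIDs

with open(POCI_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # OpenCitations CSV dumps carry "citing" and "cited" identifier columns.
        citing, cited = row["citing"], row["cited"]
        citations_of[cited].add(citing)
        references_of[citing].add(cited)
```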

H1 Connect (formerly Faculty Opinions), the most authoritative peer review database in the biomedical field, incorporates the combined efforts of more than 8000 authoritative international experts and is a knowledge discovery tool used to evaluate published research. Its reviewers are authoritative experts in the life sciences and medical fields who provide commentary, opinions, and validation of key papers in their own fields. The quality and rigor of the reviewers mean that researchers can trust the quality of the papers they recommend, and H1 Connect brings these recommendations together to direct high-quality research to a wider audience. H1 Connect's experts typically evaluate important research literature in their field within 2 months of publication, with over 90% of recommendations made within 6 months of publication.

Data Acquisition and Processing

The data acquired for this study consisted of 3 parts. The acquisition and processing of these data proceeded in 2 stages: (1) data acquisition and (2) data processing.

Data Acquisition

The steps taken to acquire the data were as follows: (1) log in to JCR; (2) on the "Browse categories" page, select Medicine, General & Internal; (3) apply the filters JCR Year=2018 and Citation Indexes=SCIE; (4) export the results in XLS format; (5) using the acquired journal titles, search the WoS core database with the publication year set to 2018 and the literature type set to Article; and (6) export the full records of the journal literature in XLS format. Finally, we downloaded the related H1 Connect literature data and POCI data according to the list of journals.

Data Processing

To process the data, we undertook the following steps: (1) use Navicat to import the full records of the research papers into a local SQLite database; (2) process the downloaded POCI data; (3) extract the PMIDs of all the focus papers from the full records; (4) in the local database built from the POCI data, retrieve the references and citations of the focus papers as well as the citations of the references of the focus papers; and (5) establish the relevant data tables for the subsequent calculations. A minimal sketch of steps (1), (3), and (4) is shown below.
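The following sketch assumes a SQLite table poci(citing, cited) keyed by PMID and built from the POCI dump; the database, table, and column names are our assumptions, not the authors' actual code:

```python
import sqlite3

con = sqlite3.connect("poci_local.db")  # hypothetical local database
cur = con.cursor()
# Index both directions of the citation relation for fast lookups.
cur.execute("CREATE INDEX IF NOT EXISTS idx_citing ON poci(citing)")
cur.execute("CREATE INDEX IF NOT EXISTS idx_cited ON poci(cited)")

def references_of(pmid: str) -> set:
    cur.execute("SELECT cited FROM poci WHERE citing = ?", (pmid,))
    return {row[0] for row in cur.fetchall()}

def citations_of(pmid: str) -> set:
    cur.execute("SELECT citing FROM poci WHERE cited = ?", (pmid,))
    return {row[0] for row in cur.fetchall()}

def citation_context(pmid: str):
    """Step (4) above: a focus paper's references, its citations, and the
    citations received by its references."""
    refs = references_of(pmid)
    cites = citations_of(pmid)
    ref_cites = set().union(*map(citations_of, refs)) if refs else set()
    return refs, cites, ref_cites
```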

Evaluation Indicators

Innovation Indicators

Some researchers observed early on that some technological innovations complement and improve current technologies without replacing them, while others outright eliminate technologies that were used in the past. However, for a long time, scholars did not analyze and explain the essence of this phenomenon. It was not until 1986 that Tushman and Anderson [18] summarized it as follows: there are 2 types of major technological shifts, which either disrupt or enhance the capabilities of existing firms in an industry. Christensen [19], a professor at Harvard Business School in the United States, argued instead that disruptive innovations are new technologies that replace existing mainstream technologies in unexpected ways. Building on these views, Funk and Owen-Smith [20] offered a deeper perspective: they argued that the dichotomy between disruptive and augmentative technologies lacks nuance and that the impact of new technologies on the status quo is a matter of degree rather than kind.

In this regard, Govindarajan and Kopalle [21] also pointed out that disruptive innovation lacked reliable and valid measurement standards. Therefore, Funk and Owen-Smith [20] created the consolidation-disruption (CD) index, which aims to quantify the degree of technological change brought about by new patents. The index drew the attention of Wu et al [22], who adapted its basic principle to measure disruption by calculating the citation substitution of a focus paper in its citation network and who were the first to apply disruptive innovation evaluation to bibliometrics.

As papers are an important carrier of academic results, it is important to evaluate their innovation quantitatively, rationally, and efficiently [23]. The disruption (D) index received widespread attention after it was proposed by Wu et al [22]. A subset of scholars then explored disruptive papers in specific subject areas based on the D index, including scientometrics [24], craniofacial surgery [25], pediatric surgery [26], synthetic biology [27], energy security [28], colorectal surgery [29], otolaryngology [30], military trauma [31], breast cancer [32], radiology [33], ophthalmology [34], plastic surgery [35], urology [36], and general surgery [37]. Park et al [10] also analyzed the annual dynamics of the disruption level of papers and patents across subject areas.

Another group of scholars conducted in-depth research on the index itself. Bornmann et al [38] explored the convergent validity of the index and of variants that may enhance the effectiveness of the measure, and the validity of the D index was tested on literature in the field of physics [39]. Ruan et al [40] provided an in-depth reflection on the limitations of the D index as a measure of scientific and technological progress. Liu et al [41,42] empirically investigated the stabilization time window of the D index in different subject areas; they also addressed the mathematical inconsistency of the traditional D index by proposing an absolute disruption index (Dz; as in Equation 2) [43].

This series of studies has allowed the evaluation of the disruption of research papers based on the D index to gradually mature. On this basis, Jiang and Liu [44] proposed the Journal Disruption Index (JDI) to evaluate the disruptive innovation level of journals (as in Equation 3) and validated the evaluation effect of this indicator on Chinese SCIE journals [45].
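The displayed equations are not reproduced in this version of the text. From the definitions that follow, Equation 1 (the original D index of Wu et al [22]) and Equation 3 (the JDI as a journal-level mean of Dz) can be restated as follows; the exact corrected form of Dz (Equation 2) is given by Liu et al [43]:

D = \frac{N_F - N_B}{N_F + N_B + N_R} \quad (1)

JDI = \frac{1}{n} \sum_{i=1}^{n} Dz_i \quad (3)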

In Equations 1, 2, and 3, NF refers to the literature that only cites the focus paper (FP), NB refers to the literature that cites both the focus paper and at least one reference (R) of the focus paper, and NR refers to the literature that only cites at least one reference (R) of the focus paper but not the focus paper. n is the number of “Article” type pieces of literature contained in the journal, and Dzi is the Dz of the ith article in the journal.

In this study, Dz and JDI were chosen to evaluate the disruption of the selected studies at the literature and journal levels, respectively. Bornmann and Tekles [46] argued that a citation window of at least 3 years is necessary regardless of the discipline measured. Moreover, in the discipline-specific determination of the stabilization time window of the D index by Liu et al [42], the stabilization window for clinical medicine was 4 years after publication. Therefore, in calculating Dz in this paper, the citation time window of the focus papers was set to 2018 to 2022 to ensure the validity of the results. In addition, because the raw JDI values were very small, the JDI was multiplied by 1000 in all subsequent presentations.
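To make the construction concrete, the following sketch computes the original D index (Equation 1) using the citation_context helper sketched in the Data Processing section; it is an illustration based on the classic index, not the authors' Dz implementation, whose exact correction is given in [43]:

```python
def d_index(pmid: str) -> float:
    """Original D index of Wu et al [22] (Equation 1); Dz applies Liu et
    al's correction [43] on top of these same counts."""
    refs, cites, ref_cites = citation_context(pmid)
    nf = len(cites - ref_cites)   # cite the focus paper only
    nb = len(cites & ref_cites)   # cite the focus paper and >=1 of its references
    nr = len(ref_cites - cites)   # cite >=1 reference but not the focus paper
    total = nf + nb + nr
    return (nf - nb) / total if total else 0.0

def journal_disruption(pmids: list) -> float:
    """Journal-level mean in the spirit of Equation 3, scaled by 1000 as in
    the text (computed here over D rather than Dz)."""
    return 1000 * sum(d_index(p) for p in pmids) / len(pmids)
```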

Peer Review Indicators

The peer review indicators selected for this study were the peer review score (PScore), weighted peer review stars (PStar_w), and weighted peer review evaluation times (PTime_w). Here, "weighted" means that, when an evaluation was completed by more than 1 reviewer, the rating and the number of evaluations were weighted by the number of evaluators.

The advantage of peer review indicators is that they compensate for the lag and one-sidedness of relying solely on citations among the literature to assess its quality. They also correct the shortcomings of judging the quality of the literature by the traditional JIF. Compared with a single impact indicator, they are more scientific.
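The exact weighting formula is not spelled out in this text; as an illustration only, one plausible reading of the description above can be sketched as follows (the function name and data layout are our assumptions, not the authors' formula):

```python
def weighted_peer_indicators(evaluations):
    """evaluations: one (stars, n_evaluators) pair per H1 Connect evaluation
    of a paper. Sketch of one plausible reading of the weighting: each
    evaluation's rating and count are multiplied by its number of evaluators."""
    pstar_w = sum(stars * n for stars, n in evaluations)  # ratings weighted by evaluators
    ptime_w = sum(n for _, n in evaluations)              # evaluation count weighted likewise
    return pstar_w, ptime_w
```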

Impact Indicators

The impact indicators selected for this study were the Cumulative Citation for 5 years (CC5), JIF, Journal Citation Indicator (JCI), and Journal Cumulative Citation for 5 years (JCC5). Among these, the JIF is the total number of citations to scholarly articles published in a journal in the past 2 years divided by the total number of citable articles. The JCI is the average Category Normalized Citation Impact (CNCI) of the citable literature published in a specific journal in the previous 3 years. The JCC5 is obtained by dividing the sum of the citation frequencies recorded in POCI for the corresponding research papers (focus papers) of the selected journals over the years 2018 to 2022 by the number of research papers published by the journal (as in Equation 4).
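Restated in LaTeX from this description (the displayed equation is not reproduced in this version):

JCC5 = \frac{1}{n} \sum_{i=1}^{n} \sum_{t=2018}^{2022} a_iC_t \quad (4)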

In Equation 4, aiCt represents the number of citations of the ith research paper of the journal in year t, and n is the number of research papers published by the journal.


Results

Evaluation and Correlation Analyses of Journals Under Different Evaluation Perspectives

Analysis of Differences in Academic Influence and Level of Disruptive Innovation of Journals

The academic impact and level of disruptive innovation of the 114 selected journals, the results of the correlation analyses of the indicators, and the differences in the rankings are shown in Table 1, Table 2, and Figure 1, respectively. From these, we can see that (1) journals at the top of the impact rankings are usually also at the top of the disruptive innovation ranking, (2) journals at the bottom of the impact rankings are usually at the bottom of the innovation ranking, and (3) the ranking results under different impact indicators differ little from one another, but the impact and disruptive innovation rankings differ considerably. Although there are moderate correlations between the JDI of journals and the 3 impact indicators (JCC5, JIF, and JCI), the average difference between the disruptive innovation ranking and the academic impact ranking of the included journals reached 20 places.
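For illustration, correlations and mean rank differences of this kind can be computed with a sketch like the following, assuming a pandas DataFrame with one journal per row and the Table 1 columns (the correlation method is our assumption; the paper reports only the r values):

```python
import pandas as pd
from scipy.stats import spearmanr

def compare_indicators(df: pd.DataFrame) -> None:
    """df: one journal per row with columns JDI, JCC5, JIF, JCI (Table 1 values)."""
    for col in ("JCC5", "JIF", "JCI"):
        r, p = spearmanr(df["JDI"], df[col])  # correlation method assumed here
        gap = (df["JDI"].rank(ascending=False)
               - df[col].rank(ascending=False)).abs().mean()
        print(f"JDI vs {col}: r={r:.3f}, P={p:.3g}, mean rank difference={gap:.1f}")
```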

Table 1. Comparison of academic influence and disruptive innovation level of journals.
Journal | JCC5a | JDIb | JIFc | JCId
New England Journal of Medicine | 337.90 | 1167.64 | 70.67 | 24.83
Lancet | 259.17 | 883.11 | 59.10 | 18.05
Journal of the American Medical Association (JAMA) | 110.67 | 467.92 | 51.27 | 13.08
Nature Reviews Disease Primers | 245.25 | 466.91 | 32.27 | 9.48
Annals of Internal Medicine | 144.84 | 293.23 | 19.32 | 6.37
British Medical Journal (BMJ) | 85.27 | 278.50 | 27.60 | 5.57
JAMA Internal Medicine | 54.80 | 235.91 | 20.77 | 6.03
International Journal of Evidence-Based Healthcare | 24.94 | 165.18 | 1.16 | 0.95
Medical Journal of Australia | 24.67 | 162.99 | 5.44 | 1.18
PLoS Medicine | 49.55 | 158.76 | 11.05 | 3.91
Canadian Medical Association Journal | 23.64 | 138.67 | 6.94 | 2.06
Journal of Travel Medicine | 12.63 | 136.54 | 4.16 | 0.98
Journal of the Royal Society of Medicine | 6.00 | 121.29 | 3.54 | 0.83
Amyloid | 24.28 | 105.39 | 4.92 | 1.13
Medical Clinics of North America | 20.90 | 104.36 | 2.72 | 1.22
Clinical Medicine | 17.59 | 101.48 | 2.05 | 0.52
American Journal of Preventive Medicine | 19.70 | 95.50 | 4.44 | 1.74
BMC Medicine | 28.06 | 82.00 | 8.29 | 2.74
Annals of Family Medicine | 17.24 | 78.87 | 4.19 | 2.31
Mayo Clinic Proceedings | 20.34 | 71.01 | 7.09 | 2.47
American Journal of Chinese Medicine | 14.53 | 65.77 | 3.51 | 1.42
British Journal of General Practice | 15.76 | 64.69 | 4.43 | 1.49
Archives of Medical Science | 11.89 | 62.18 | 2.38 | 0.88
Postgraduate Medical Journal | 7.54 | 61.38 | 1.98 | 0.61
Journal of General Internal Medicine | 18.38 | 61.18 | 4.61 | 1.69
Palliative Medicine | 17.67 | 60.71 | 4.96 | 1.56
American Journal of Medicine | 16.70 | 55.79 | 4.76 | 1.56
Deutsches Ärzteblatt International | 20.13 | 55.74 | 4.47 | 1.59
Journal of the Chinese Medical Association | 8.10 | 53.26 | 1.89 | 0.62
Journal of the National Medical Association | 6.91 | 48.41 | 0.83 | 0.24
African Health Sciences | 5.25 | 44.92 | 0.78 | 0.44
Journal of Urban Health | 11.17 | 42.83 | 2.15 | 0.83
Canadian Family Physician | 11.27 | 42.82 | 2.19 | 0.77
Family Practice | 8.13 | 41.90 | 1.99 | 0.88
Journal of Pain and Symptom Management | 13.62 | 39.72 | 3.38 | 1.22
Patient Preference and Adherence | 8.88 | 38.48 | 2.10 | 0.69
Preventive Medicine | 13.94 | 37.36 | 3.45 | 1.35
Saudi Medical Journal | 5.90 | 34.58 | 1.06 | 0.35
Military Medicine | 6.43 | 34.01 | 0.85 | 0.28
Journal of Korean Medical Science | 7.79 | 33.23 | 1.72 | 0.62
Indian Journal of Medical Research | 5.59 | 32.76 | 1.25 | 0.39
Sexual Medicine | 7.66 | 31.84 | 1.44 | 0.53
Journal of the Royal Army Medical Corps | 5.90 | 31.27 | 0.99 | 0.28
Journal of Evaluation in Clinical Practice | 10.02 | 30.32 | 1.54 | 0.64
European Journal of General Practice | 6.68 | 29.86 | 1.62 | 0.76
Journal of Women's Health | 10.20 | 29.34 | 2.01 | 1.18
International Journal of Medical Sciences | 10.61 | 29.30 | 2.33 | 0.81
European Journal of Internal Medicine | 11.88 | 28.40 | 3.66 | 1.01
BMJ Open | 11.00 | 28.20 | 2.38 | 0.83
Journal of the Formosan Medical Association | 10.55 | 27.39 | 2.84 | 0.77
Pakistan Journal of Medical Sciences | 3.64 | 26.87 | 0.83 | 0.27
Medicine | 6.58 | 26.36 | 1.87 | 0.60
Chinese Medical Journal, Peking | 9.08 | 25.93 | 2.10 | 0.85
Scandinavian Journal of Primary Health | 6.49 | 25.93 | 1.56 | 0.49
International Journal of Clinical Practice | 9.61 | 25.87 | 2.61 | 0.73
Singapore Medical Journal | 5.55 | 25.68 | 1.14 | 0.37
BMC Family Practice | 10.09 | 25.57 | 2.43 | 1.08
Journal of Internal Medicine | 28.23 | 24.96 | 6.05 | 2.09
Journal of the American Academy of Physician Associates | 3.91 | 24.76 | 0.46 | 0.14
Journal of Postgraduate Medicine | 4.96 | 24.49 | 1.32 | 0.42
Journal of Hospital Medicine | 9.25 | 24.46 | 2.28 | 0.79
Internal Medicine Journal | 6.76 | 24.36 | 1.77 | 0.61
Pain Medicine | 12.50 | 24.22 | 2.76 | 0.98
Frontiers in Medicine (Lausanne) | 11.41 | 24.19 | 3.11 | 0.89
Irish Journal of Medical Science | 5.26 | 23.52 | 1.03 | 0.37
Acta Clinica Croatica | 4.28 | 23.25 | 0.40 | 0.14
Clinics | 6.12 | 22.49 | 1.13 | 0.41
Internal and Emergency Medicine | 10.29 | 21.32 | 2.34 | 0.80
Medicina (Lithuania) | 5.25 | 21.31 | 1.47 | 0.39
Medical Principles and Practice | 5.85 | 20.72 | 1.10 | 0.50
Open Medicine (Warsaw, Poland) | 5.53 | 20.12 | 1.22 | 0.25
Korean Journal of Internal Medicine | 8.34 | 19.75 | 2.71 | 0.70
Colombia Médica | 4.33 | 17.23 | 0.98 | 0.24
Journal of Clinical Medicine | 11.34 | 17.19 | 5.69 | 1.30
Journal of Nippon Medical School | 2.39 | 17.11 | 0.62 | 0.19
Postgraduate Medicine | 7.78 | 16.98 | 2.24 | 0.60
British Journal of Hospital Medicine | 3.45 | 16.87 | 0.53 | 0.12
Yonsei Medical Journal | 7.58 | 16.86 | 1.76 | 0.60
São Paulo Medical Journal | 4.00 | 16.71 | 1.09 | 0.30
Upsala Journal of Medical Sciences | 11.03 | 16.17 | 2.75 | 0.65
Current Medical Research and Opinion | 8.00 | 15.73 | 2.35 | 0.83
American Journal of the Medical Sciences | 7.71 | 15.52 | 1.96 | 0.63
Revista Clínica Española | 5.34 | 15.19 | 1.04 | 0.40
QJM: An International Journal of Medicine | 6.80 | 13.92 | 2.65 | 0.77
Revista da Associacao Medica Brasileira | 3.78 | 13.73 | 0.80 | 0.23
Libyan Journal of Medicine | 6.89 | 13.49 | 1.41 | 0.49
Translational Research | 18.57 | 13.21 | 4.92 | 1.72
Atención Primaria | 3.29 | 12.97 | 1.35 | 0.43
La Presse Médicale | 4.16 | 12.95 | 1.17 | 0.29
Balkan Medical Journal | 5.33 | 12.73 | 1.20 | 0.37
Internal Medicine | 4.65 | 12.73 | 0.96 | 0.32
Journal of Investigative Medicine | 8.00 | 12.54 | 1.99 | 0.71
Southern Medical Journal | 4.12 | 12.42 | 0.87 | 0.33
Scottish Medical Journal | 2.00 | 12.18 | 0.68 | 0.19
World Journal of Clinical Cases | 4.74 | 12.01 | 1.15 | 0.45
Acta Clinica Belgica | 4.98 | 11.76 | 0.96 | 0.27
Diagnostics | 9.34 | 11.71 | 2.49 | 0.71
Annals of Saudi Medicine | 5.22 | 11.08 | 0.81 | 0.29
Annals of Medicine | 11.85 | 11.06 | 3.05 | 1.00
Medicina Clinica (Barcelona) | 4.25 | 11.01 | 1.28 | 0.31
Journal of Research in Medical Sciences | 5.63 | 10.95 | 1.47 | 0.40
European Journal of Clinical Investigation | 12.57 | 10.76 | 2.78 | 0.92
Journal of Nepal Medical Association | 1.27 | 10.23 | 0.21 | 0.07
Wiener Klinische Wochenschrift | 6.66 | 10.10 | 1.17 | 0.39
Journal of Medical Economics | 9.02 | 9.45 | 1.89 | 0.84
Tohoku Journal of Experimental Medicine | 7.07 | 8.93 | 1.58 | 0.54
Journal of Cachexia, Sarcopenia and Muscle | 41.24 | 6.90 | 10.75 | 3.08
National Medical Journal of India | 1.68 | 5.92 | 0.64 | 0.18
Croatian Medical Journal | 3.13 | 5.68 | 1.62 | 0.51
Disease-a-Month | 8.44 | 5.13 | 0.94 | 0.34
Revue de Médecine Interne | 2.97 | 5.02 | 0.81 | 0.28
Medizinische Klinik, Intensivmedizin und Notfallmedizin | 3.34 | 3.52 | 0.85 | 0.31
Internist | 1.58 | 2.67 | 0.43 | 0.18
Hippokratia | 1.52 | 2.53 | 0.52 | 0.14

aJCC5: Journal Cumulative Citation for 5 years.

bJDI: Journal Disruption Index.

cJIF: Journal Impact Factor.

dJCI: Journal Citation Indicator.

Table 2. Correlation analysis of journal evaluation indicators.
Journal evaluation indicators | JDIa | JCC5b | JIFc | JCId
JDI, r | 1 | 0.677 | 0.585 | 0.621
JDI, P valuee | — | <.001 | <.001 | <.001
JCC5, r | 0.677 | 1 | 0.909 | 0.935
JCC5, P value | <.001 | — | <.001 | <.001
JIF, r | 0.585 | 0.909 | 1 | 0.955
JIF, P value | <.001 | <.001 | — | <.001
JCI, r | 0.621 | 0.935 | 0.955 | 1
JCI, P value | <.001 | <.001 | <.001 | —

aJDI: Journal Disruption Index.

bJCC5: Journal Cumulative Citation for 5 years.

cJIF: Journal Impact Factor.

dJCI: Journal Citation Indicator.

eNot applicable.

Figure 1. Comparison of ranking differences based on different evaluation indicators. JCC5: Journal Cumulative Citation for 5 years; JCI: Journal Citation Indicator; JDI: Journal Disruption Index; JIF: Journal Impact Factor.
Analysis of the Differences Between the Journals’ Impact and Disruption and the Results of Peer Review

To better analyze the differences between the academic impact and level of disruptive innovation of journals in the field of general and internal medicine and their peer review results, papers indexed in H1 Connect were referred to as "H-papers," and the source journals of H-papers were referred to as "H-journals." The evaluation indicators and rankings of H-journals and the percentage of H-papers are shown in Table 3 and Table 4. We can see that (1) the top 7 journals in terms of both academic impact and disruptive innovation are all H-journals, (2) the average impact ranking of H-journals is higher than their average innovation ranking, and (3) some journals with low impact and innovation also became H-journals.

Table 3. Evaluation indicators of H-journals (sources of papers indexed in the H1 Connect database).
Journal | Count, n | JPScorea | JPTime_wb | JFPStar_wc
New England Journal of Medicine | 112 | 2192.2 | 234 | 528
Lancet | 43 | 574.1 | 65 | 142
Journal of the American Medical Association | 27 | 313.7 | 36 | 70
Annals of Internal Medicine | 11 | 118.8 | 14 | 25
Nature Reviews Disease Primers | 4 | 50.3 | 6 | 11
British Medical Journal | 11 | 115.8 | 18 | 30
Sexual Medicine | 3 | 14.1 | 7 | 7
PLoS Medicine | 11 | 103.5 | 15 | 25
JAMA Internal Medicine | 6 | 48.2 | 6 | 10
International Journal of Clinical Practice | 3 | 23.2 | 3 | 5
Journal of Cachexia, Sarcopenia and Muscle | 2 | 14 | 2 | 3
American Journal of Medicine | 3 | 18.6 | 4 | 5
BMC Medicine | 4 | 24.6 | 6 | 7
Journal of Investigative Medicine | 1 | 4.7 | 1 | 4
Medical Clinics of North America | 1 | 4.9 | 1 | 1
Canadian Medical Association Journal | 1 | 9.4 | 1 | 2
Medicina Clinica | 1 | 4.6 | 1 | 1
Journal of Women's Health | 2 | 9.5 | 2 | 2
Wiener Klinische Wochenschrift | 1 | 4.6 | 1 | 1
Mayo Clinic Proceedings | 1 | 4.6 | 1 | 1
Frontiers in Medicine | 1 | 4.7 | 1 | 1
Current Medical Research and Opinion | 1 | 4.7 | 1 | 1
Pain Medicine | 1 | 4.7 | 1 | 1
Preventive Medicine | 1 | 9.4 | 1 | 2
BMJ Open | 5 | 28.3 | 5 | 6
Journal of Clinical Medicine | 1 | 4.6 | 1 | 1
Medicine | 5 | 28 | 5 | 6

aJPScore: journal peer review score.

bJPTime_w: weighted journal peer review evaluation time.

cJFPStar_w: weighted journal peer review star.

Table 4. Ranking and percentage of H-papers (papers indexed in the H1 Connect database) in H-journals (sources of H-papers).
Journal | JDIa ranking | JIFb ranking | JCIc ranking | H-papers, n (%)
New England Journal of Medicine | 1 | 1 | 1 | 112 (40.58)
Lancet | 2 | 2 | 2 | 43 (22.51)
Journal of the American Medical Association | 3 | 3 | 3 | 27 (18.24)
Annals of Internal Medicine | 5 | 7 | 5 | 11 (11.46)
Nature Reviews Disease Primers | 4 | 4 | 4 | 4 (9.30)
British Medical Journal | 6 | 5 | 7 | 11 (7.97)
Sexual Medicine | 42 | 73 | 69 | 3 (7.14)
PLoS Medicine | 10 | 8 | 8 | 11 (6.11)
JAMA Internal Medicine | 7 | 6 | 6 | 6 (5.50)
International Journal of Clinical Practice | 55 | 40 | 53 | 3 (4.62)
Journal of Cachexia, Sarcopenia and Muscle | 107 | 9 | 9 | 2 (2.56)
American Journal of Medicine | 27 | 19 | 19 | 3 (2.22)
BMC Medicine | 18 | 10 | 10 | 4 (1.99)
Journal of Investigative Medicine | 92 | 56 | 54 | 1 (1.52)
Medical Clinics of North America | 15 | 37 | 25 | 1 (1.47)
Canadian Medical Association Journal | 11 | 12 | 14 | 1 (1.28)
Medicina Clinica | 100 | 77 | 92 | 1 (1.20)
Journal of Women's Health | 46 | 55 | 27 | 2 (1.19)
Wiener Klinische Wochenschrift | 104 | 81 | 82 | 1 (1.10)
Mayo Clinic Proceedings | 20 | 11 | 11 | 1 (0.94)
Frontiers in Medicine | 64 | 31 | 37 | 1 (0.53)
Current Medical Research and Opinion | 81 | 45 | 42 | 1 (0.50)
Pain Medicine | 63 | 35 | 33 | 1 (0.48)
Preventive Medicine | 37 | 29 | 23 | 1 (0.39)
BMJ Open | 49 | 44 | 42 | 5 (0.24)
Journal of Clinical Medicine | 74 | 14 | 24 | 1 (0.23)
Medicine | 52 | 62 | 65 | 5 (0.16)

aJDI: Journal Disruption Index.

bJIF: Journal Impact Factor.

cJCI: Journal Citation Indicator.

Evaluation and Correlation Analyses of Papers Under Different Evaluation Perspectives

Analysis of Differences in Academic Impact and Level of Disruptive Innovation of Papers

Ideally, an article is accepted by a specific journal because its overall quality is similar to that of other papers previously published in that journal [47]. However, journal-level indicators are, at best, only moderately suggestive of the quality of an article [48], which has made indicators that measure specific articles more popular [49]. Therefore, in this study, research papers in the field of general and internal medicine were also evaluated in terms of their academic impact and level of disruptive innovation, and the results are shown in Figure 2. First, research papers that rank high in the impact ranking usually also rank high in the disruptive innovation ranking: 8 (8/15, 53%), 96 (96/150, 64%), and 880 (880/1500, 58.67%) of the top 0.1%, top 1%, and top 10% papers selected based on Dz and CC5, respectively, were the same (see the sketch below). Second, the level of disruptive innovation of research papers with the same level of impact varied greatly, and the impact level of research papers with the same level of disruptive innovation also varied greatly. Third, despite the high correlation (r=0.635, P<.001) between the Dz and CC5 of the selected research papers, the average difference between their innovation and impact rankings reached about 2700 places. Fourth, the analysis showed no correlation between the innovation of the selected research papers and their number of references (r=0.006, P=.43), which indicates that the Dz index is basically unaffected by differences in the number of references in the actual evaluation process.
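For illustration, the top-paper overlap figures (8/15, 96/150, 880/1500) can be computed with a sketch like the following, assuming a pandas DataFrame with one paper per row and Dz and CC5 columns (the names are our assumptions):

```python
import pandas as pd

def topk_overlap(df: pd.DataFrame, k: int, a: str = "Dz", b: str = "CC5"):
    """Count papers appearing in the top k under both indicators;
    k=15, 150, and 1500 correspond to the top 0.1%, 1%, and 10% of
    the 15,206 papers analyzed here."""
    top_a = set(df.nlargest(k, a).index)
    top_b = set(df.nlargest(k, b).index)
    same = len(top_a & top_b)
    return same, same / k  # count and share of identical papers
```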

Figure 2. Comparison of the ranking of the absolute disruption index (Dz) and Cumulative Citation for 5 years (CC5) of selected research papers.
Analysis of Differences Between Papers’ Impact and Disruption and the Results of Peer Review

To better analyze the differences between the academic impact, level of disruptive innovation, and peer review results of research papers in the field of general and internal medicine, the academic impact and disruptive innovation levels of H-papers were compared with their peer review results. The specific results are shown in Table 5 and Figure 3. First, there were 8 (8/15, 53%), 65 (65/150, 43.3%), and 187 (187/1500, 12.47%) H-papers among the top 0.1%, top 1%, and top 10% papers, respectively, selected based on Dz, and 5 (5/15, 33%), 74 (74/150, 49.3%), and 220 (220/1500, 14.67%) H-papers among the top 0.1%, top 1%, and top 10% papers, respectively, selected based on CC5. Second, there were significant positive correlations between the peer review indicators, disruptive innovation indicators, and academic impact indicators of H-papers, reflecting the consistency between quantitative evaluation and peer review at the overall level. Third, the average impact ranking of H-papers was 865 (top 5.68%), and their average disruptive innovation ranking was 1726 (top 11.35%); that is, the average impact ranking of H-papers was higher than their average disruptive innovation ranking. Fourth, some papers with a low academic impact and disruptive innovation level also became H-papers. Fifth, compared with CC5, Dz showed slightly higher correlations with PTime_w and PStar_w.

Table 5. Validation of the correlation between 3 kinds of indicators of H-papers (papers indexed in the H1 Connect database).
Indicators | CC5a | Dzb | PTime_wc | PStar_wd | PScoree
CC5, r | 1 | 0.749 | 0.292 | 0.385 | 0.613
CC5, P valuef | — | <.001 | <.001 | <.001 | <.001
Dz, r | 0.749 | 1 | 0.322 | 0.393 | 0.544
Dz, P value | <.001 | — | <.001 | <.001 | <.001
PTime_w, r | 0.292 | 0.322 | 1 | 0.751 | 0.534
PTime_w, P value | <.001 | <.001 | — | <.001 | <.001
PStar_w, r | 0.385 | 0.393 | 0.751 | 1 | 0.886
PStar_w, P value | <.001 | <.001 | <.001 | — | <.001
PScore, r | 0.613 | 0.544 | 0.534 | 0.886 | 1
PScore, P value | <.001 | <.001 | <.001 | <.001 | —

aCC5: Cumulative Citation for 5 years.

bDz: absolute disruption index.

cPTime_w: weighted peer review evaluation times.

dPStar_w: weighted peer review stars.

ePScore: peer review score.

fNot applicable.

Figure 3. Comparison of ranking differences between H-papers (papers indexed in the H1 Connect database) in general and internal medicine based on different evaluation indicators. CC5: Cumulative Citation for 5 years; Dz: absolute disruption index; PStar: peer review stars.

In addition to rating and commenting on the included research papers, H1 Connect reviewers add tags to articles according to their internal characteristics. The relevant definitions are shown in Table 6 (refer to Du et al [50]). We categorized and counted the 257 H-papers (if a paper had more than 1 tag, it was counted separately under each tag), and the results are shown in Table 7. From these, we can see that (1) research papers with tags in the "Novel Drug Target," "Technical Advance," and "New Finding" categories had a high academic impact and a high disruptive innovation level; (2) research papers with the "Changes Clinical Practice" tag showed only a moderate academic impact and disruptive innovation level but had high levels of peer-reviewed recognition and attention; and (3) experts showed higher recognition of and concern for research papers with the "Negative/Null Result," "Controversial," "Refutation," and "Interesting Hypothesis" tags, but the academic impact and disruptive innovation level of these papers were lower than those of the others.

Table 6. Categorization and definition of peer review tags.
Tag name | Definition
Confirmation | The article confirms previously published data or hypotheses.
Controversial | The article challenges established dogma.
Changes clinical practice | The article recommends that clinicians make complete, concrete, and immediate changes in their practice.
Good for teaching | The article is a key article of significance in the field.
Interesting hypothesis | The article presents a meaningful model or hypothesis.
Negative/null result | The article has invalid or negative results.
New finding | The article proposes new data, a new model, or a new hypothesis.
Novel drug target | The article proposes a new drug target.
Refutation | The article refutes previously published data or hypotheses.
Technical advance | The article introduces a new practical/theoretical technique, or a new use of an existing technique.
Table 7. Details of research papers with different peer-reviewed tags.
Type | Count, n | CC5a | Dzb | PScorec | PStar_wd | PTime_we
Novel drug target | 7 | 511.57 | 24.50 | 16.84 | 4.43 | 1.57
Technical advance | 16 | 577.44 | 19.69 | 19.98 | 5.25 | 2.38
New finding | 185 | 336.48 | 9.80 | 15.31 | 3.60 | 1.74
Confirmation | 60 | 315.47 | 8.49 | 19.72 | 4.63 | 2.02
Good for teaching | 82 | 248.77 | 7.47 | 18.12 | 4.63 | 2.22
Changes clinical practice | 6 | 241.67 | 4.96 | 22.50 | 6.00 | 2.83
Negative/null result | 27 | 281.81 | 3.84 | 17.81 | 4.52 | 2.30
Controversial | 33 | 227.06 | 3.83 | 25.41 | 6.48 | 2.73
Refutation | 4 | 167.75 | 2.94 | 22.53 | 6.00 | 3.25
Interesting hypothesis | 37 | 207.86 | 1.77 | 20.68 | 5.38 | 2.24

aCC5: Cumulative Citation for 5 years.

bDz: absolute disruption index.

cPScore: peer review score.

dPStar_w: weighted peer review stars.

ePTime_w: weighted peer review evaluation times.


Discussion

Principal Findings

Evaluation of Disruptive Innovation Is Detached From the Traditional Evaluation System

From these research results, large differences were seen between the innovation and impact rankings of the journals and research papers included in the study, which is consistent with the findings of Guo and Zhou [51]. This phenomenon reflects the essential difference between the 2 evaluation systems. It also shows that the evaluation of the disruptive innovation of research papers and journals based on Dz and JDI is not consistent with the traditional impact evaluation system.

The essence of disruptive evaluation is to measure the innovation from the substitution level of knowledge structure. This evaluation method brings new ideas to the field of scientific research evaluation, helps relevant institutions and scholars remove the constraints of the traditional evaluation system, and helps to establish a value orientation of encouraging innovation for scientific research and scientific and technological journals, so as to promote the benign development of the academic ecology.

The 3 Evaluation Systems Only Have High Consensus on the Top Object

Given the consistency of scientific evaluation, some uniformity is expected among the level of disruptive innovation, the level of academic impact, and the peer review results of journals and research papers.

However, from the research results presented in this paper, we can see that (1) the top 7 journals in terms of both academic impact and disruptive innovation were all H-journals, (2) more than one-half of the top papers selected based on Dz and CC5 were the same, (3) H-papers were, on average, ranked near the top in terms of impact and innovation, and (4) the results of the different evaluation systems had high consensus only on the authoritative journals and top papers in the field.

These findings are also consistent with those of Gorman and Huber [52] and Chi et al [53]. A fundamental reason for this phenomenon is that the purposes of the 3 evaluation systems are inherently different. Therefore, in the actual evaluation process, the 3 kinds of indicators are not interchangeable, and combining the 3 evaluation systems may be a feasible way to establish a comprehensive, scientific, and reasonable journal evaluation system.

A Single Indicator Alone Cannot Accurately Reflect the Impact of Medical Research on Clinical Practice

From the aforementioned findings, we can see that research papers of the "Novel Drug Target," "Technical Advance," and "New Finding" types have a high academic impact and high levels of innovation, which is in line with their classification definitions. This is also consistent with the findings of Du et al [50], Thelwall et al [54], and Jiang and Liu [55]. Second, research papers of the "Changes Clinical Practice" type showed only moderate levels of innovation and impact but had high levels of peer-reviewed recognition and attention. This reflects the difficulty of evaluating the impact of a particular academic paper on clinical practice, whether the evaluation system is based on academic impact or on the level of disruptive innovation. Third, peer review experts showed higher recognition of and concern for research papers of the "Negative/Null Result," "Controversial," "Refutation," and "Interesting Hypothesis" types, but the impact and disruptive innovation levels of these papers were lower. This partly reflects the current academic community's excessive focus on positive results; deliberate avoidance of negative results; and overall lack of support for debatable, refutational, and other such types of research.

Several scholars have recently suggested that the evaluation system of medical journals should be redesigned for contemporary clinical impact and utility [56]. In this regard, Thelwall and Maflahi [57] advocated that the references of a guideline are an important basis for studying clinical value. However, Traylor and Herrmann-Lingen [58] found only a weak correlation between the number of citations to individual journals in guidelines and their respective JIFs, suggesting that the JIF is not a suitable tool for assessing the clinical relevance of medical research. The results of this study similarly showed that research papers of the "Changes Clinical Practice" type had only moderate levels of disruptive innovation and academic impact but received higher recognition and attention from peer review experts. Therefore, combining quantitative evaluation with peer review may be a feasible way to measure the impact of medical research on clinical practice.

Limitations

This study has the following limitations. First, papers in the medical field show a preference for citing review articles [59], which has a certain impact on the evaluation of the disruptive innovation of research papers. Second, the scoring mechanism H1 Connect provides to its reviewers has a low degree of differentiation and cannot yet fully distinguish differences in quality between papers. In addition, H1 Connect has too few evaluators. Brezis and Birukou [60] showed that, if the number of reviewers is increased to about 10, the correlation between the review results and the quality of the paper improves significantly. However, it is difficult to find that many high-quality experts willing to accept open peer review in the high-pressure environment of "publish or perish" [61]. Third, because the citation data sources used in this study are all based on PubMed data, this study also suffers from missing references and citations that are not labeled with a PMID, which affects the accuracy of the evaluation results to a certain extent. In future studies, we will obtain more accurate measurement results by jointly using multiple sources of citation data.

Conclusions

In this study, the general and internal medicine journals indexed in SCIE in 2018 were chosen as the study object. The POCI and H1 Connect databases were selected as sources of citation relationship data and peer review data. We investigated the connections and contrasts between the academic impact, level of disruptive innovation, and results of peer review for medical journals and published research papers. We also analyzed the similarities and differences as well as the fundamental causes of the varying evaluation results in terms of impact evaluation, disruptive innovation, and peer review for various types of medical research papers.

The results of this study show that the evaluation of scientific research based on the innovation index is detached from the traditional impact evaluation system, the 3 evaluation systems only have high consistency for authoritative journals and top papers, and neither the single impact index nor the innovation index can directly reflect the impact of medical research on clinical practice.

In addition, with the increasing importance of replicative science, the accuracy of statistical reports, the evidential value of reported data, and the replicability of given experimental results [62] should also be included in the examination of journal quality. How to establish an integrated, comprehensive, scientific, and reasonable journal evaluation system needs to be further investigated.

Acknowledgments

XL was supported by the National Social Science Foundation of China (23BTQ085) and the Major Project of Basic Research on Philosophy and Social Sciences in Henan Universities (2024-JCZD-23).

Data Availability

All citation data can be obtained from [63].

Authors' Contributions

XL conceptualized the study, reviewed and edited the manuscript, and supervised the study. YJ was responsible for the methodology, provided the software and other resources, performed the formal analysis, and wrote the original manuscript draft. YJ and XL acquired funding. YJ, XY, and ZZ curated the data. XY and ZZ also contributed to the study investigation.

Conflicts of Interest

None declared.

  1. Garfield E. Citation analysis as a tool in journal evaluation. Science. Nov 03, 1972;178(4060):471-479. [CrossRef] [Medline]
  2. Hansson S. Impact factor as a misleading tool in evaluation of medical journals. Lancet. Sep 30, 1995;346(8979):906. [CrossRef] [Medline]
  3. Metze K, Borges da Silva FA. Ranking of journals by journal impact factors is not exact and may provoke misleading conclusions. J Clin Pathol. May 11, 2022;75(10):649-650. [CrossRef] [Medline]
  4. Siler K, Larivière V. Who games metrics and rankings? Institutional niches and journal impact factor inflation. Research Policy. Dec 2022;51(10):104608. [CrossRef]
  5. Martin BR. Editors’ JIF-boosting stratagems – Which are appropriate and which not? Research Policy. Feb 2016;45(1):1-7. [CrossRef]
  6. Zaidi SJA, Taqi M. Citation cartels in medical and dental journals. J Coll Physicians Surg Pak. Jun 01, 2023;33(6):700-701. [CrossRef] [Medline]
  7. Fassin Y. Research on Covid-19: a disruptive phenomenon for bibliometrics. Scientometrics. May 07, 2021;126(6):5305-5319. [FREE Full text] [CrossRef] [Medline]
  8. Buela-Casal G, Zych I. What do the scientists think about the impact factor? Scientometrics. Feb 22, 2012;92(2):281-292. [CrossRef]
  9. Thelwall M, Kousha K, Stuart E, Makita M, Abdoli M, Wilson P, et al. In which fields are citations indicators of research quality? J Assoc Inf Sci Technol. May 04, 2023;74(8):941-953. [CrossRef]
  10. Park M, Leahey E, Funk R. Papers and patents are becoming less disruptive over time. Nature. Jan 2023;613(7942):138-144. [CrossRef] [Medline]
  11. Chu JSG, Evans JA. Slowed canonical progress in large fields of science. Proc Natl Acad Sci U S A. Oct 12, 2021;118(41):1. [FREE Full text] [CrossRef] [Medline]
  12. Schattner U, Shatner M. Science desperately needs disruptive innovation. Qeios. Jun 07, 2023:1. [CrossRef]
  13. Liang Z, Mao J, Li G. Bias against scientific novelty: A prepublication perspective. J Assoc Inf Sci Technol. Nov 19, 2022;74(1):99-114. [CrossRef]
  14. Osterloh M, Frey BS. Ranking games. Eval Rev. Aug 04, 2014;39(1):102-129. [CrossRef]
  15. Kulczycki E, Huang Y, Zuccala AA, Engels TCE, Ferrara A, Guns R, et al. Uses of the Journal Impact Factor in national journal rankings in China and Europe. J Assoc Inf Sci Technol. Aug 13, 2022;73(12):1741-1754. [CrossRef]
  16. McKiernan E, Schimanski L, Muñoz Nieves C, Matthias L, Niles M, Alperin J. Use of the Journal Impact Factor in academic review, promotion, and tenure evaluations. Elife. Jul 31, 2019;8:8. [FREE Full text] [CrossRef] [Medline]
  17. Heibi I, Peroni S, Shotton D. Software review: COCI, the OpenCitations Index of Crossref open DOI-to-DOI citations. Scientometrics. Sep 14, 2019;121(2):1213-1228. [CrossRef]
  18. Tushman ML, Anderson P. Technological discontinuities and organizational environments. Administrative Science Quarterly. Sep 1986;31(3):439. [CrossRef]
  19. Christensen CM. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail (Management of Innovation and Change). Boston, MA. Harvard Business Review Press; 1997.
  20. Funk RJ, Owen-Smith J. A dynamic network measure of technological change. Management Science. Mar 2017;63(3):791-817. [CrossRef]
  21. Govindarajan V, Kopalle PK. Disruptiveness of innovations: measurement and an assessment of reliability and validity. Strategic Management Journal. Dec 22, 2005;27(2):189-199. [CrossRef]
  22. Wu L, Wang D, Evans JA. Large teams develop and small teams disrupt science and technology. Nature. Feb 13, 2019;566(7744):378-382. [CrossRef] [Medline]
  23. Kostoff RN, Boylan R, Simons GR. Disruptive technology roadmaps. Technological Forecasting and Social Change. Jan 2004;71(1-2):141-159. [CrossRef]
  24. Bornmann L, Devarakonda S, Tekles A, Chacko G. Disruptive papers published in Scientometrics: meaningful results by using an improved variant of the disruption index originally proposed by Wu, Wang, and Evans (2019). Scientometrics. Mar 14, 2020;123(2):1149-1155. [CrossRef]
  25. Horen S, Hansdorfer M, Kronshtal R, Dorafshar A, Becerra A. The most disruptive publications in craniofacial surgery (1954-2014). J Craniofac Surg. Oct 01, 2021;32(7):2426-2430. [CrossRef] [Medline]
  26. Sullivan GA, Skertich NJ, Gulack BC, Becerra AZ, Shah AN. Shifting paradigms: The top 100 most disruptive papers in core pediatric surgery journals. J Pediatr Surg. Aug 2021;56(8):1263-1274. [CrossRef] [Medline]
  27. Meyer C, Nakamura Y, Rasor BJ, Karim AS, Jewett MC, Tan C. Analysis of the innovation trend in cell-free synthetic biology. Life (Basel). Jun 11, 2021;11(6):551. [FREE Full text] [CrossRef] [Medline]
  28. Jiang Y, Liu X. A bibliometric analysis and disruptive innovation evaluation for the field of energy security. Sustainability. Jan 05, 2023;15(2):969. [CrossRef]
  29. Becerra A, Grimes C, Grunvald M, Underhill J, Bhama A, Govekar H, et al. A new bibliometric index: the top 100 most disruptive and developmental publications in colorectal surgery journals. Dis Colon Rectum. Mar 01, 2022;65(3):429-443. [CrossRef] [Medline]
  30. Sheth AH, Hengartner A, Abdou H, Salehi PP, Becerra AZ, Lerner MZ. Disruption index in otolaryngology: uncovering a bibliometric history of a rapidly evolving field. Laryngoscope. Feb 19, 2024;134(2):629-636. [CrossRef] [Medline]
  31. Dilday J, Gallagher S, Bram R, Williams E, Grigorian A, Matsushima K, et al. Citation versus disruption in the military: Analysis of the top disruptive military trauma research publications. J Trauma Acute Care Surg. May 15, 2023;95(2S):S157-S169. [CrossRef]
  32. Grunvald M, Williams M, Rao R, O’Donoghue C, Becerra A. 100 disruptive publications in breast cancer research. Asian Pac J Cancer Prev. Aug 01, 2021;22(8):2385-2389. [CrossRef]
  33. Abu-Omar A, Kennedy P, Yakub M, Robbins J, Yassin A, Verma N, et al. Extra credit for disruption: trend of disruption in radiology academic journals. Clin Radiol. Dec 2022;77(12):893-901. [CrossRef] [Medline]
  34. Patel PA, Javed Ali M. Characterizing innovation in science through the disruption index. Semin Ophthalmol. Aug 18, 2022;37(6):790-791. [CrossRef] [Medline]
  35. Neubauer DC, Blum JD, Labou SG, Heskett KM, Calvo RY, Reid CM, et al. Using the disruptive score to identify publications that changed plastic surgery practice. Ann Plast Surg. 2022;88(4):S385-S390. [CrossRef]
  36. Khusid JA, Gupta M, Sadiq AS, Atallah WM, Becerra AZ. Changing the status quo: the 100 most-disruptive papers in urology? Urology. Jul 2021;153:56-68. [CrossRef] [Medline]
  37. Williams MD, Grunvald MW, Skertich NJ, Hayden DM, O'Donoghue C, Torquati A, et al. Disruption in general surgery: Randomized controlled trials and changing paradigms. Surgery. Dec 2021;170(6):1862-1866. [CrossRef] [Medline]
  38. Bornmann L, Devarakonda S, Tekles A, Chacko G. Are disruption index indicators convergently valid? The comparison of several indicator variants with assessments by peers. Quantitative Science Studies. Aug 2020;1(3):1242-1259. [CrossRef]
  39. Bu Y, Waltman L, Huang Y. A multidimensional framework for characterizing the citation impact of scientific publications. Quantitative Science Studies. 2021;2(1):155-183. [CrossRef]
  40. Ruan X, Lyu D, Gong K, Cheng Y, Li J. Rethinking the disruption index as a measure of scientific and technological advances. Technological Forecasting and Social Change. Nov 2021;172:121071. [CrossRef]
  41. Liu X, Shen Z, Liao Y, Yang L. The research about the improved disruption index and its influencing factors. Library and Information Service. 2020;64(24):84-91. [CrossRef]
  42. Liu X, Shen Z, Liao Y, Zhu M, Yang L. Research on the stable time window of disruption index. Library and Information Service. 2021;65(18):49-57. [CrossRef]
  43. Liu X, Liao Y, Zhu M. A preliminary study on the application of disruption index to scientific research evaluation. Information Studies: Theory & Application. 2021;44(12):34-40. [CrossRef]
  44. Jiang Y, Liu X. A construction and empirical research of the journal disruption index based on open citation data. Scientometrics. May 18, 2023;128(7):3935-3958. [FREE Full text] [CrossRef] [Medline]
  45. Jiang Y, Wang L, Liu X. Innovative evaluation of Chinese SCIE-indexed scientific journals: An empirical study based on the disruption index. Chinese Journal of Scientific and Technical Periodicals. 2023;34(8):1060-1068. [CrossRef]
  46. Bornmann L, Tekles A. Disruption index depends on length of citation window. EPI. Mar 25, 2019;28(2):1. [CrossRef]
  47. Migheli M, Ramello GB. The unbearable lightness of scientometric indices. Manage Decis Econ. Nov 04, 2021;42(8):1933-1944. [CrossRef]
  48. Thelwall M, Kousha K, Makita M, Abdoli M, Stuart E, Wilson P, et al. In which fields do higher impact journals publish higher quality articles? Scientometrics. May 18, 2023;128(7):3915-3933. [CrossRef]
  49. Iyengar KP, Vaishya R. Article-level metrics: A new approach to quantify reach and impact of published research. J Orthop. Jun 2023;40:83-86. [CrossRef] [Medline]
  50. Du J, Tang X, Wu Y. The effects of research level and article type on the differences between citation metrics and F1000 recommendations. J Assoc Inf Sci Technol. Jun 2015;67(12):3008-3021. [CrossRef]
  51. Guo L, Zhou Q. Research on characteristics of disruptive papers and their correlation with traditional bibliometric indicators. Journal of Intelligence. 2021;40(1):183-207. [CrossRef]
  52. Gorman DM, Huber C. Ranking of addiction journals in eight widely used impact metrics. J Behav Addict. Jul 13, 2022;11(2):348-360. [FREE Full text] [CrossRef] [Medline]
  53. Chi P, Ding J, Leng F. Characteristics of the technology impact of breakthrough papers in biology and medicine. Journal of the China Society for Scientific and Technical Information. 2022;41(7):675. [CrossRef]
  54. Thelwall M, Kousha K, Abdoli M. Is medical research informing professional practice more highly cited? Evidence from AHFS DI Essentials in drugs.com. Scientometrics. Feb 21, 2017;112(1):509-527. [CrossRef]
  55. Jiang Y, Liu X. The relationship between absolute disruption index, peer review index and CNCI: a study based on virology papers. Library and Information Service. 2023;67(3):96-105. [CrossRef]
  56. Klein-Fedyshin M, Ketchum A. PubMed's core clinical journals filter: redesigned for contemporary clinical impact and utility. J Med Libr Assoc. Jul 10, 2023;111(3):665-676. [FREE Full text] [CrossRef] [Medline]
  57. Thelwall M, Maflahi N. Guideline references and academic citations as evidence of the clinical value of health research. J Assoc Inf Sci Technol. Mar 17, 2015;67(4):960-966. [CrossRef]
  58. Traylor C, Herrmann-Lingen C. Does the journal impact factor reflect the impact of German medical guideline contributions? Scientometrics. Feb 03, 2023;128(3):1951-1962. [CrossRef]
  59. Marks MS, Marsh MC, Schroer TA, Stevens TH. An alarming trend within the biological/biomedical research literature toward the citation of review articles rather than the primary research papers. Traffic. Jan 19, 2013;14(1):1-1. [FREE Full text] [CrossRef] [Medline]
  60. Brezis ES, Birukou A. Arbitrariness in the peer review process. Scientometrics. Feb 03, 2020;123(1):393-411. [CrossRef]
  61. Jiang Y, Liu X. Open peer review: Mode, technology, problems, and countermeasures. Chinese Journal of Scientific and Technical Periodicals. 2022;33(9):1196-1205. [CrossRef]
  62. Dougherty MR, Horne Z. Citation counts and journal impact factors do not capture some indicators of research quality in the behavioural and brain sciences. R Soc Open Sci. Aug 17, 2022;9(8):220334. [FREE Full text] [CrossRef] [Medline]
  63. COCI CSV dataset of all the citation data. Figshare. Feb 07, 2023. URL: https:/​/figshare.​com/​articles/​dataset/​Crossref_Open_Citation_Index_CSV_dataset_of_all_the_citation_data/​6741422?file=37483791 [accessed 2024-05-24]


CC5: Cumulative Citation for 5 years
CD: consolidation-disruption
CNCI: Category Normalized Citation Impact
D: disruption
Dz: absolute disruption index
JCC5: Journal Cumulative Citation for 5 years
JCI: Journal Citation Indicator
JCR: Journal Citation Reports
JDI: Journal Disruption Index
JIF: Journal Impact Factor
POCI: the OpenCitations Index of PubMed open PMID-to-PMID citations
PScore: peer review score
PStar_w: weighted peer review stars
PTime_w: weighted peer review evaluation times
RDF: Resource Description Framework
SCIE: Science Citation Index Expanded
WoS: Web of Science


Edited by S Ma; submitted 04.12.23; peer-reviewed by O Beiki, C Oo; comments to author 28.02.24; revised version received 16.04.24; accepted 29.04.24; published 31.05.24.

Copyright

©Yuyan Jiang, Xue-li Liu, Zixuan Zhang, Xinru Yang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 31.05.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.