Published in Vol 23, No 5 (2021): May

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/28859.
Evaluating Scholars’ Impact and Influence: Cross-sectional Study of the Correlation Between a Novel Social Media–Based Score and an Author-Level Citation Metric

Original Paper

1Department of Emergency Medicine, Mayo Clinic Rochester, Rochester, MN, United States

2Mayo Clinic Libraries, Mayo Clinic Florida, Jacksonville, FL, United States

3Department of Biostatistics and Informatics, Mayo Clinic Rochester, Rochester, MN, United States

4Symplur, Los Angeles, CA, United States

Corresponding Author:

Daniel Cabrera, MD

Department of Emergency Medicine

Mayo Clinic Rochester

200 First Street SW

Rochester, MN, 55905

United States

Phone: 1 507 255 4399

Email: cabrera.daniel@mayo.edu


Abstract

Background: The development of an author-level complementary metric could play a role in the process of academic promotion through objective evaluation of scholars’ influence and impact.

Objective: The objective of this study was to evaluate the correlation between the Healthcare Social Graph (HSG) score, a novel social media influence and impact metric, and the h-index, a traditional author-level metric.

Methods: This was a cross-sectional study of health care stakeholders with a social media presence randomly sampled from the Symplur database in May 2020. We performed stratified random sampling to obtain a representative sample with all strata of HSG scores. We manually queried the h-index in two reference-based databases (Scopus and Google Scholar). Continuous features (HSG score and h-index) from the included profiles were summarized as the median and IQR. We calculated the Spearman correlation coefficients (ρ) to evaluate the correlation between the HSG scores and h-indexes obtained from Google Scholar and Scopus.

Results: A total of 286 (31.2%) of the 917 stakeholders had a Google Scholar h-index available. The median HSG score for these profiles was 61.1 (IQR 48.2), and the median h-index was 14.5 (IQR 26.0). For the 286 subjects with the HSG score and Google Scholar h-index available, the Spearman correlation coefficient ρ was 0.1979 (P<.001), indicating a weak positive correlation between these two metrics. A total of 715 (78%) of 917 stakeholders had a Scopus h-index available. The median HSG score for these profiles was 57.6 (IQR 46.4), and the median h-index was 7 (IQR 16). For the 715 subjects with the HSG score and Scopus h-index available, ρ was 0.2173 (P<.001), also indicating a weak positive correlation.

Conclusions: We found a weak positive correlation between a novel author-level complementary metric and the h-index. More than a chiasm between traditional citation metrics and novel social media–based metrics, our findings point toward a bridge between the two domains.

J Med Internet Res 2021;23(5):e28859

doi:10.2196/28859


Introduction

Since the development of social media platforms and new communication channels, the use of traditional bibliographic metrics (ie, citation counts, h-indexes) as the predominant measures of academic performance has been questioned [1]. Traditional benchmarks such as citation counts fail to capture authors’ impact outside academic circles [2]. The ways in which research output is indexed, searched, located, read, and mentioned have changed significantly, and traditional metrics do not describe the influence and impact that scholarly work may have outside core academic domains [3,4].

In the health care world, social media platforms (eg, Twitter, Facebook) are consistently used by patients, policy makers, clinicians, and researchers as efficient ways of sharing information, staying up to date with scientific knowledge, and collaborating with peers and patients [5]. The widespread use of social media by health care stakeholders has led to the development of alternative impact metrics, also known as “altmetrics” [6]. The altmetrics approach offers new ways to analyze and inform scholarship [7]. It complements rather than replaces traditional indicators of a scholar’s performance [8]. Altmetrics have even been adopted aggressively by traditional publishing companies [9]. The study of these alternative metrics is an emerging field; unlike traditional parameters such as the impact factor or h-index, an altmetric does not rely solely on citation counts but is a composite measure. It considers features such as the number of knowledge databases that refer to a work and the number of times the work has been viewed and downloaded; it also factors in the number of mentions in social media and traditional news outlets.

Academic merit and achievement should be appraised using frameworks such as the comprehensive researcher achievement model (CRAM) [8], encompassing a combination of four aspects: quantity of researcher outputs (productivity), value of outputs (quality), outcomes of research outputs (impact), and relations between publications or authors and the wider world (influence). Current traditional benchmarks focus mostly on productivity and quality, while alternative metrics focus on influence and impact. In 2011, Eysenbach proposed the Twimpact Factor, an article-level social media impact metric consisting of the absolute cumulative number of tweetations 7 days after publication of an article, and the Twindex, the relative percentile of the Twimpact Factor of a given article compared with other articles in the same journal [10]. For articles published in the Journal of Medical Internet Research, Eysenbach found relatively strong article-level correlations between these metrics (number of tweets, adjusted by time and journal factors) and future citations and highlighted the importance of using social media–based impact measures to complement traditional citation metrics [10]. While social media metrics at the article or journal level already exist and have been correlated with traditional citation metrics [10], novel tools could also be used to evaluate features such as influence and impact at the author level. There is a clear need to improve the ways in which the different outputs of scholarly work are evaluated, as called for by the Declaration on Research Assessment (DORA) movement [11]. The development of an author-level complementary metric could play a role in the academic promotion process through objective evaluations of scholars’ influence and impact.

Recently, multiple organizations have created tools that attempt to measure influence and impact in the digital domain, primarily by using network analysis of social media activity and digital publications [12]. Among these innovations, Symplur’s Healthcare Social Graph (HSG) score has recently emerged [13]. In this context, we aimed to evaluate the correlation between the HSG score, a social media influence and impact metric, and the h-index, a traditional author-level metric.


Methods

Study Design, Study Setting, and Participants

This report was written following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines [14]. This study was deemed exempt by the Institutional Review Board.

This was a cross-sectional observational study of health care stakeholders with a social media presence randomly sampled from the HSG database in May 2020. Health care stakeholders included the following three taxonomic categories: “doctor” (ie, those identified as possibly licensed, MDs, DOs, PhDs), “health care professionals” (ie, those identified as other health care professionals such as nurses, dietitians, respiratory therapists, and pharmacists), and “researchers/academicians” (ie, people working in the field of health-related research or academia). Over 1 million Twitter profiles were labeled according to the health care stakeholder category as part of the database metadata. Only profiles whose owners identified themselves as such in their public Twitter profile received a label, which was assigned by Symplur partly through manual verification and partly through a machine learning process [15]. We did not exclude health care stakeholders based on their discipline. Of the 6 million Twitter accounts with an HSG score in May 2020, we considered individuals identified as health care stakeholders and performed stratified random sampling to obtain a representative sample across all strata of HSG scores. A random sample of 100 profiles from each HSG score decile (0-9, 10-19, etc) was obtained, yielding an initial list of 1000 subjects with their respective HSG scores. This stratification method was chosen owing to the skewness of the HSG scores in the Symplur database, where simple random sampling would have led to a study population restricted to lower values of the HSG score.
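For illustration, the following is a minimal Python sketch of this decile-stratified sampling scheme. The profile data structure, function name, and fixed seed are hypothetical; the actual sampling was executed by Symplur against the full HSG database.

```python
# Hypothetical sketch of decile-stratified sampling (not Symplur's code).
import random
from collections import defaultdict

def stratified_sample(profiles, per_stratum=100, seed=2020):
    """profiles: list of (handle, hsg_score) pairs; returns up to 100 per decile."""
    random.seed(seed)
    strata = defaultdict(list)
    for handle, score in profiles:
        decile = min(int(score // 10), 9)  # deciles 0-9; a score of 100 joins the top stratum
        strata[decile].append((handle, score))
    sample = []
    for decile in sorted(strata):
        members = strata[decile]
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample
```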

Data Source, Variables, and Measurement

Symplur is a health care social media analytics company that created the HSG database, which holds public digital content (ie, conversations, interactions) originating from Twitter and obtained via the official Twitter application programming interface (API), supplemented with other public content from social media platforms including LinkedIn, YouTube, Instagram, Reddit, and Facebook. The HSG score was developed by Symplur to identify and rank influencers in any health care topic and is conceptually similar to eigenvector centrality [16]. This score ranks Twitter accounts by their global conversational impact in health care over the last 52 weeks. Any Twitter account that has engaged (ie, tweeted at least once) with one of the 40,000 health care terms being tracked is evaluated. The score is determined not by the absolute number of tweets or mentions an account has received over that time period, but by the impact of the posted messages. The score comprises three components: a social network analysis algorithm, health care stakeholder weighting, and a conversation quality algorithm. The network analysis algorithm is inspired by the hyperlink-induced topic search (HITS) algorithm [17] and considers each Twitter account’s conversation graph by recursively analyzing the health care influence of each individual conversation partner, the influence of the conversation partner’s own conversation partners, and so on [18]. In this respect, it is similar to modern impact factor algorithms for academic journals and Google’s PageRank [19]. The score is designed specifically for health care and considers the health care stakeholder groups to which the account holders belong. In other words, it matters what role a person has in health care. If, for example, an account is interacting with or being mentioned by another account that is not related to health care, then those conversations and mentions are given less weight by the algorithm. If, on the other hand, the conversations and mentions involve a health care stakeholder, then they are given more weight. Based on the analysis of these conversations, a quality score is combined with conversation volume to provide a weighted measure for the impact scores. Finally, the 52 weekly rankings and quality scores are combined into a single number for each social media profile and then normalized on a scale of 0 (very low influence) to 100 (very high influence).
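The exact HSG algorithm is proprietary to Symplur, but the sketch below illustrates the general shape of such recursive, stakeholder-weighted network scoring: a power iteration over a weighted conversation graph, normalized to a 0-100 scale. All variable names, weights, and the normalization step are assumptions for illustration only, not the actual implementation.

```python
# Conceptual sketch of recursive, stakeholder-weighted influence scoring.
# This is illustrative only; Symplur's actual algorithm and weights differ.
import numpy as np

def influence_scores(adjacency, stakeholder_weight, iterations=50):
    """adjacency[i, j]: conversations in which account j engages account i.
    stakeholder_weight[j]: assumed higher (eg, 1.0) for health care stakeholders
    than for unrelated accounts (eg, 0.2)."""
    A = np.asarray(adjacency, dtype=float) * np.asarray(stakeholder_weight)
    scores = np.ones(A.shape[0])
    for _ in range(iterations):
        scores = A @ scores                       # influence flows in from weighted partners
        scores /= np.linalg.norm(scores) + 1e-12  # keep the iteration numerically stable
    # rescale to a 0 (very low influence) to 100 (very high influence) scale
    return 100 * (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)
```

Because this iteration converges to the dominant eigenvector of the weighted conversation graph, the toy version makes concrete why the score is described above as conceptually similar to eigenvector centrality.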

For each of the 1000 Twitter profiles initially included in our stratified random sample, we manually queried their h-indexes in two reference-based databases (Scopus and Google Scholar) by searching their names in each service’s search engine. Before extracting the data, we used a standardized verification process to confirm whether the identified profiles corresponded to the Twitter user. Any profile found in Google Scholar or Scopus was verified using at least three of the following identifiers: name (first, middle, last), title, location (country/city), field/specialty, affiliation, and qualitative analysis of Twitter conversations or a free-form Google search using the associated name and any other identifier available. Once a profile was found in either platform and the verification process was confirmed with at least three identifiers, the h-index was extracted. This verification process was created to decrease the probability of extracting data from an incorrect profile (eg, a similar name but not the author of interest). In Google Scholar and Scopus, the h-index is calculated as the “number n of a researcher's papers that have all received at least n citations” [20]. Although multiple studies [20-24] have highlighted the advantages, disadvantages, and variations of the h-index, no other traditional author-level metric has had the same level of acceptance or resilience over the past 15 years. Individuals can calculate the h-index of any researcher as long as they have access to a resource providing the citation counts of that researcher's publications or research objects. The three most prominent resources that provide citation counts for researchers are Web of Science (Clarivate Analytics, previously Thomson Reuters), Scopus (Elsevier), and Google Scholar (Alphabet). To provide more comprehensive reporting for this study, Scopus and Google Scholar were chosen to provide the traditional/benchmark h-index data for each individual. This decision was based on the 2018 study by Martin-Martin et al [25], which found that Scopus and Google Scholar provided the greatest inclusion of citations in Health & Medical Sciences. Additional factors were considered when choosing between Web of Science and Scopus. Scopus was seen as providing all authors better access to their own author profiles, which would allow authors to clarify their publications and correct inaccuracies. Additionally, a 2016 study by Walker et al [26] showed a higher interrater reliability in Scopus than in Web of Science for the h-index calculation.
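As a concrete illustration of the quoted definition (the largest n such that n papers each have at least n citations), a generic computation looks like the following; this is not the databases’ internal code.

```python
def h_index(citations):
    """Return the largest n such that n papers have at least n citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):  # rank = number of papers considered so far
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: papers cited [10, 8, 5, 4, 3] times give an h-index of 4
assert h_index([10, 8, 5, 4, 3]) == 4
```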

The h-index for the first 100 profiles was extracted by three independent investigators (LOJS, GM, and TB). In this initial set of 100 profiles, there was 98% overall agreement for the h-index extracted from Google Scholar and 96% overall agreement for the h-index extracted from Scopus. Disagreements were discussed and resolved through consensus with the senior author (DC). Once our standardized verification process and data extraction methods exhibited adequate reliability, the remaining 900 profiles were extracted independently: 600 were reviewed by the first author (LOJS) and 150 by each of the two other investigators (GM and TB). Investigators extracting the h-index for these profiles were blinded to the HSG scores of all subjects.

Data Analysis

From the initial list of 1000 subjects, we excluded those with incomplete names or non-individual user profiles. The remaining profiles were included in the main data analysis if an h-index was available from either Google Scholar or Scopus. All analyses were conducted using the BlueSky Statistics (Version 7.0.746.34007) graphical user interface (GUI) for R. Continuous features (HSG score and h-index) from included profiles were summarized as the median and IQR. Correlation analyses were performed between the HSG scores and the h-indexes obtained from Google Scholar (overall h-index and 2015 h-index) and Scopus (overall h-index). Given the highly skewed nature of metrics such as the h-index [27], we calculated the Spearman correlation coefficient (ρ). This coefficient is similar to the Pearson correlation but is based on ranks rather than original values. Like the Pearson correlation, values range from –1 to +1, with larger absolute values indicating a stronger relationship. A correlation t test was conducted to evaluate the statistical significance of the correlation coefficients. P values <.05 were considered statistically significant. For sensitivity analysis, we considered the h-index as 0 for those subjects for whom a Scopus h-index was not found.
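The analyses were run in the BlueSky GUI for R; a rough Python equivalent of the descriptive and correlation steps, using hypothetical data, would look like this:

```python
# Hypothetical data standing in for profiles with both metrics available.
import numpy as np
from scipy import stats

hsg = np.array([61.0, 23.5, 88.2, 45.1, 70.3, 12.9])  # HSG scores (illustrative)
h_idx = np.array([14, 2, 30, 8, 11, 0])               # h-indexes (illustrative)

# Median and IQR summaries, as reported in the Results
print("HSG median (IQR):", np.median(hsg), stats.iqr(hsg))

# Rank-based (Spearman) correlation with its t-test P value
rho, p = stats.spearmanr(hsg, h_idx)
print(f"Spearman rho = {rho:.4f}, P = {p:.3f}")
```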

Simple linear regression was initially implemented to understand the linear relationship between the HSG score and the h-index (Google Scholar and Scopus). To better understand the true relationship between the HSG score and the overall h-index provided by Google Scholar and Scopus, negative binomial hurdle regression was performed. The h-index was used as the response variable, and the HSG score was the predictor of interest. A negative binomial model was chosen owing to the skewed nature of the h-index data and the overdispersion present in the data distribution. To account for the high number of zeros not captured by a negative binomial distribution, a hurdle model with a binomial logistic link function was also implemented. Model selection was performed using the Vuong test and the Akaike information criterion (AIC).
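A hedged sketch of this two-part analysis in Python is shown below. For brevity, it fits the logistic “hurdle” and then an ordinary negative binomial GLM to the positive h-indexes; the study’s actual model used a zero-truncated count component, and the Vuong test and AIC comparison are omitted. All data here are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hsg = rng.uniform(0, 100, 500)               # simulated HSG scores
h_idx = rng.negative_binomial(1, 0.15, 500)  # simulated, skewed h-indexes with many zeros

X = sm.add_constant(hsg)

# Part 1 (the hurdle): does the HSG score predict having any h-index at all?
hurdle_fit = sm.Logit((h_idx > 0).astype(int), X).fit(disp=False)

# Part 2: count model for authors who cleared the hurdle (h-index > 0).
# A zero-truncated negative binomial would be the faithful choice; a plain
# negative binomial GLM is used here to keep the sketch short.
pos = h_idx > 0
count_fit = sm.GLM(h_idx[pos], X[pos], family=sm.families.NegativeBinomial()).fit()

# With a log link, exp(5 * beta) gives the multiplicative change in the
# expected h-index per 5-point increase in the HSG score.
print("hurdle log-odds per HSG point:", hurdle_fit.params[1])
print("multiplier per 5 HSG points:", np.exp(5 * count_fit.params[1]))
```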


Results

Twitter Profiles

Our stratified random sample generated an initial list of 1000 Twitter profiles from the Symplur database. Of these, 83 were excluded for the following reasons: 5 were repeated profiles, 62 had incomplete names on Twitter (ie, no first and last names, making it impossible to search for a corresponding Google Scholar or Scopus profile), and 16 were not individual user profiles. Among the 917 individual Twitter profiles with complete names for which h-indexes were searchable, 429 (46.8%) were from the United States, 173 (18.9%) from the United Kingdom, 54 (5.9%) from Canada, 49 (5.3%) from Spain, 41 (4.5%) from Australia, 17 (1.9%) from India, 13 (1.4%) from the Netherlands, 13 (1.4%) from France, 12 (1.3%) from Ireland, 9 (1.0%) from Brazil, and the remaining 11.6% from 36 other countries from all continents (only 5 profiles were from unknown countries).

A total of 286 (31.2%) of the 917 stakeholders had a Google Scholar h-index available. The median HSG score for these profiles was 61.1 (IQR 48.2), and the median h-index was 14.5 (IQR 26). A total of 715 (78%) of the 917 stakeholders had a Scopus h-index available. The median HSG score for these profiles was 57.6 (IQR 46.4), and the median h-index was 7 (IQR 16).

Google Scholar h-Index

For the 286 subjects with both the HSG score and the overall Google Scholar h-index available, the Spearman correlation coefficient ρ was 0.1979 (P<.001), indicating a weak positive correlation between these two metrics (Figure 1). When we analyzed the correlation for the 2015 h-index from Google Scholar, the results were similar (ρ=0.203) (Figure 2). Likewise, when we analyzed the Google Scholar i10 index, the results did not change significantly (see Multimedia Appendix 1 and Multimedia Appendix 2).

Figure 1. Correlation between HSG scores and Google Scholar overall h-indexes. Spearman correlation coefficient ρ=0.1979 (N=286). The red line is the regression line; the shaded area is the 95% CI.
Figure 2. Correlation between HSG scores and Google Scholar 2015 h-indexes. Spearman correlation coefficient ρ=0.203 (N=286). The red line is the regression line; the shaded area is the 95% CI.

Linear regression using the Google Scholar overall h-index as the response found a significant association between the HSG score and the h-index. Assuming a linear relationship, for every 10-point increase in the HSG score, there was an associated increase of 1.314 in the h-index (95% CI 0.280-2.347; P=.01). The R2 value was 0.0214, and the linear regression equation is expressed as E[Google Scholar overall h-index] = 17.037 + 0.1314*[HSG score]. From the negative binomial hurdle model, there was no significant effect of the HSG score on whether an author’s Google Scholar h-index is 0 or positive (log-odds=0.043; P=.31). However, for authors with a positive h-index, a 5-point increase in the HSG score was associated with a 2.7% increase in the Google Scholar h-index (exp[coef]=1.027; 95% CI 1.006-1.048; P=.01). Additionally, the Vuong test found that the hurdle model was a better fit than the negative binomial model (z statistic=3.092; P<.001).
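To unpack the reported effect size: with a log link, the count-part coefficient acts multiplicatively, so the 2.7% figure corresponds to the per-point coefficient below (back-calculated here from the reported exp[coef], for illustration only).

```latex
\exp(5\hat{\beta}) = 1.027
\;\Longrightarrow\;
\hat{\beta} = \frac{\ln 1.027}{5} \approx 0.0053,
\qquad
E[h \mid \mathrm{HSG} = s + 5] \approx 1.027 \times E[h \mid \mathrm{HSG} = s]
```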

Scopus h-Index

For the 715 subjects with the HSG score and Scopus h-index available, the Spearman correlation coefficient ρ was 0.2173 (P<.001), also indicating a weak positive correlation (Figure 3). In the sensitivity analysis, in which subjects without a Scopus h-index available were computed as having an h-index of 0, therefore including all 917 initially eligible profiles, the Spearman correlation coefficient ρ was 0.317 (P<.001) (see Multimedia Appendix 3).

Figure 3. Correlation between HSG scores and Scopus h-indexes. Spearman correlation coefficient ρ=0.2173 (N=715). The red line is the regression line; the shaded area is the 95% CI.

Univariate linear regression fitting of the Scopus h-index found a significant association with the HSG score. Assuming a linear relationship, for every 10-point increase in the HSG score, we expect a 1.049-point increase in the h-index (95% CI 0.567-1.530; P<.001). The R2 value was 0.0249, and the linear regression equation is expressed as E[Scopus h-index] = 8.0821 + 0.1048*[HSG score]. From the negative binomial hurdle model, we found no significant effect of the HSG score on whether an author’s h-index is 0 or positive (log-odds=0.0072; P=.27). However, for authors with a positive h-index, a 5-point increase in the HSG score was associated with a 4% increase in the h-index (exp[coef]=1.040; 95% CI 1.021-1.061; P<.001). The Vuong test found that the hurdle model was a better fit than the negative binomial model (z statistic=4.606; P<.001).


Discussion

Principal Results

The advent of digital scholarship is rapidly changing the way scholarship is created and appraised in academia. We are currently seeing a swift transition from a paradigm in which the impact of an academician was circumscribed to deliverables critiqued by a restrictive circle of peers to a novel model in which the importance of scholarly work is measured by the influence and impact it generates in academic circles and among the general public. This leads to the critical need to adopt new appraisal concepts and tools [1,10,28].

The HSG score represents a novel author-level tool within the domain of altmetrics. This metric aims to measure and illustrate the influence a particular stakeholder has in health care social media as a function of user-generated content and interactions. This method is common for analyzing the weight or importance of specific users within larger networks [29]. In general, the more connections users have and the more information they create or are involved with, the greater their importance in a network [30].

Citation-focused metrics such as the h-index assess the importance of academicians based on the number of times their work has been cited by other scholars, with a significant built-in bias that values certain outputs (eg, prestigious journals) more than others. This distorts the organic reach and impact of articles (eg, bad articles in good journals are valued more than good articles in bad journals), conflating production and publishability with influence and impact [31].

In this study, we aimed to determine whether there is a relationship between the HSG score, a marker of influence in a network, and the h-index, a metric of productivity. The assumption driving this comparison was that a high degree of productivity (a greater h-index) would be associated with higher impact among stakeholders in the field and subsequent influence on digital networks.

Approximately three-quarters of the health care stakeholders with HSG scores had a concomitant h-index profile; this simple observation illustrates that there is a significant overlap between academic endeavors and the participation of these users in social media. In other words, academicians are part of general public forums such as social media; they value such forums and participate in them.

When analyzing the relation between the HSG score and the h-index, we found a correlation, albeit a weak one, between the two metrics. This positive relation seems to indicate that the higher the HSG score, the higher the h-index (and vice versa). We believe this association describes a relation between scholarly productivity and influence in a health care network; it is possibly explained by academicians using digital domains to disseminate their scholarly work and bring attention to it, thereby increasing interactions with other stakeholders and organically increasing their connections and weight in the network. Nevertheless, it is important to mention that the low R2 value of approximately 2% implies that although we have a statistically significant correlation and are capturing similar trends, there is a sizable amount of variability that is not shared between these two metrics. This emphasizes the simple fact that these metrics measure different components of scholarly work and should be evaluated in an independent and complementary way.

The HSG score and h-index are metrics of interest for scholars and academic establishments. Per their definitions, these tools are aimed at different aspects of the CRAM framework [8]: the HSG score likely appraises impact and influence, while the h-index appraises productivity and quality. Remarkably, from our analysis, we can describe an association bridging these four aspects; the influence and impact of a user in a health care–specific digital network are correlated with their academic productivity and quality. More than a chiasm between traditional citation metrics and novel social media–based metrics, our findings point toward a positive relation between the two domains.

Limitations

There are several limitations that need to be acknowledged. First, the accuracy of the metrics that were obtained (and subsequently used to compute correlations) depends on the validity of the data provided by each reference-based database. Some of these platforms (eg, Google Scholar) can easily be manipulated [32]. Scopus automatically calculates the h-indexes of authors without a profile in its database, which explains why there were higher numbers of profiles and h-indexes available in Scopus than in Google Scholar, where individuals need to create active profiles. In Scopus, authors may have more than one profile; for this reason, we used a tool available in the platform to combine profiles from the same author to obtain the most accurate h-index for that author. Second, Scopus and Google Scholar data are dynamic because new citations are constantly being added to their databases. As we were unable to automatically retrieve the h-indexes from these databases on the same day, manual data extraction occurred over a four-month period. Therefore, authors may have had their h-indexes extracted with a time difference as long as 100 days, and this, although unlikely, could have influenced the accuracy of our analysis. We assumed that the h-index would be time invariant during the period of data extraction, while in fact it is not. Nevertheless, the h-index should theoretically be less dynamic than citations alone, and it is unlikely to change by a large magnitude even over a 100-day period [33]. Third, we did not consider the ages of the authors, which might have an impact on the correlation measures given that more experienced authors may exhibit distinctive behavior compared to emerging authors. Fourth, Google Scholar seems to overestimate author-level metrics compared to Scopus owing to the inclusion of gray literature citations, among other reasons. However, we extracted the h-indexes from both databases, and the results did not change when using one h-index over the other. In fact, the h-indexes from Google Scholar and Scopus were strongly correlated with each other (see Multimedia Appendix 4).

Conclusions

It appears that novel author-level altmetrics based on network analysis of social media and digital publications have a positive association with traditional bibliometric benchmarks. This seems to indicate that the two not only can coexist but can also supplement and augment each other’s domains. Academicians interested in a comprehensive appraisal of their academic work and preparing for advancement need to be deliberate about investing time and attention in both spheres of appraisal (traditional metrics and altmetrics), as they are relevant and significant and, most importantly, appear to move in the same direction and amplify each other.

Acknowledgments

This research was supported in part through the Center for Clinical and Translational Science (CCaTS) Small Grant Program, part of Mayo Clinic CCaTS grant number UL1TR000135 from the National Center for Advancing Translational Sciences (NCATS), a component of the National Institutes of Health (NIH). Its contents are solely the responsibility of the authors and do not necessarily represent the official view of NIH.

Authors' Contributions

LOJS and DC conceived and designed the study. LOJS, GM, and TB conducted the acquisition of the h-index data. AU executed the stratified random sampling from the Symplur database and provided the HSG scores. AM analyzed the data and provided statistical expertise. LOJS, GM, TB, and DC interpreted the data. LOJS and DC drafted the manuscript and all authors contributed substantially to its revision through critical reviewing of the manuscript for important intellectual content. DC (senior author) was responsible for overseeing the project.

Conflicts of Interest

AU is a Partner of Symplur. AU provided the initial sample of subjects included in the study with their corresponding HSG scores. Data extraction of h-index and data analysis were performed independently by the other investigators. AU did not influence the study design, did not participate in the data extraction of the h-index, and did not participate in the analysis of the results or writing of the conclusions. No other investigators have any conflicts of interest to disclose related to this study.

Multimedia Appendix 1

Correlation between HSG scores and Google Scholar overall i10 indexes. Spearman correlation coefficient ρ=0.2037 (N=286).

PNG File , 107 KB

Multimedia Appendix 2

Correlation between HSG scores and Google Scholar 2015 i10 indexes. Spearman correlation coefficient ρ=0.2087 (N=286).

PNG File , 111 KB

Multimedia Appendix 3

Correlation between HSG scores and Scopus h-indexes after computing a Scopus h-index of 0 for subjects without Scopus h-indexes. Spearman correlation coefficient ρ=0.317 (N=917).

PNG File , 189 KB

Multimedia Appendix 4

Strong correlation between Scopus and Google Scholar h-indexes.

PNG File , 116 KB

  1. Cabrera D, Roy D, Chisolm MS. Social media scholarship and alternative metrics for academic promotion and tenure. J Am Coll Radiol 2018 Jan;15(1):135-141. [CrossRef]
  2. Haustein S. In: Haustein S, editor. Multidimensional Journal Evaluation: Analyzing Scientific Periodicals beyond the Impact Factor. Germany: De Gruyter Saur; 2012.
  3. Priem J, Piwowar H, Hemminger B. Altmetrics in the wild: Using social media to explore scholarly impact. arXiv Preprint posted online on March 20, 2012. [FREE Full text]
  4. Piwowar H. Altmetrics: Value all research products. Nature 2013 Jan 10;493(7431):159. [CrossRef] [Medline]
  5. von Muhlen M, Ohno-Machado L. Reviewing social media use by clinicians. J Am Med Inform Assoc 2012 Sep 01;19(5):777-781 [FREE Full text] [CrossRef] [Medline]
  6. The state of altmetrics: a tenth anniversary celebration. Altmetric.   URL: https://altmetric.figshare.com/articles/online_resource/The_State_of_Altmetrics_A_tenth_anniversary_celebration/13010000 [accessed 2020-11-16]
  7. Priem J, Costello KL. How and why scholars cite on Twitter. Proc. Am. Soc. Info. Sci. Tech 2011 Feb 03;47(1):1-4. [CrossRef]
  8. Braithwaite J, Herkes J, Churruca K, Long JC, Pomare C, Boyling C, et al. Comprehensive Researcher Achievement Model (CRAM): A framework for measuring researcher achievement, impact and influence derived from a systematic literature review of metrics and models. BMJ Open 2019 Mar 30;9(3):e025320. [CrossRef]
  9. About PlumX Metrics. Plum Analytics.   URL: https://plumanalytics.com/learn/about-metrics/ [accessed 2020-11-16]
  10. Eysenbach G. Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact. J Med Internet Res 2011 Dec 19;13(4):e123 [FREE Full text] [CrossRef] [Medline]
  11. San Francisco Declaration on Research Assessment. DORA.   URL: https://sfdora.org/read/ [accessed 2021-04-06]
  12. Nivash J, Dhinesh BL. An empirical analysis of big scholarly data to find the increase in citations. In: Advances in Intelligent Systems and Computing. Switzerland: Springer; 2019:41-51.
  13. Healthcare Social Graph®. Symplur.   URL: https://www.symplur.com/technology/healthcare-social-graph/ [accessed 2020-11-16]
  14. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Int J Surg 2014 Dec;12(12):1500-1524 [FREE Full text] [CrossRef] [Medline]
  15. Healthcare Stakeholder Segmentation. Symplur.   URL: https://help.symplur.com/en/articles/103684-healthcare-stakeholder-segmentation [accessed 2020-02-27]
  16. Maharani W, Adiwijaya, Gozali A. Degree centrality and eigenvector centrality in twitter. 2014 Presented at: 2014 8th International Conference on Telecommunication Systems Services and Applications (TSSA); October 23-24, 2014; Bali, Indonesia p. 1-5. [CrossRef]
  17. Kleinberg JM. Authoritative sources in a hyperlinked environment. J ACM 1999 Sep;46(5):604-632. [CrossRef]
  18. Healthcare Social Graph Score. Symplur.   URL: https://help.symplur.com/en/articles/103681-healthcare-social-graph-score [accessed 2020-02-27]
  19. PageRank. Wikipedia.   URL: https://en.wikipedia.org/wiki/PageRank [accessed 2020-02-27]
  20. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A 2005 Nov 15;102(46):16569-16572 [FREE Full text] [CrossRef] [Medline]
  21. Bornmann L, Daniel HD. What do we know about the h index? J Am Soc Inf Sci Technol 2007 Jul;58(9):1381-1385. [CrossRef]
  22. Costas R, Bordons M. The h-index: Advantages, limitations and its relation with other bibliometric indicators at the micro level. J Informetr 2007 Jul;1(3):193-203. [CrossRef]
  23. Teixeira da Silva JA, Dobránszki J. Multiple versions of the h-index: Cautionary use for formal academic purposes. Scientometrics 2018 Feb 20;115(2):1107-1113. [CrossRef]
  24. Costas R, Franssen T. Reflections around 'the cautionary use' of the h-index: Response to Teixeira da Silva and Dobránszki. Scientometrics 2018 Mar 9;115(2):1125-1130 [FREE Full text] [CrossRef] [Medline]
  25. Martín-Martín A, Orduna-Malea E, Thelwall M, Delgado López-Cózar E. Google Scholar, Web of Science, and Scopus: A systematic comparison of citations in 252 subject categories. J Informetr 2018 Nov;12(4):1160-1177. [CrossRef]
  26. Walker B, Alavifard S, Roberts S, Lanes A, Ramsay T, Boet S. Inter-rater reliability of h-index scores calculated by Web of Science and Scopus for clinical epidemiology scientists. Health Info Libr J 2016 Jun 11;33(2):140-149 [FREE Full text] [CrossRef] [Medline]
  27. de Solla Price DJ. Networks of scientific papers. Science 1965 Jul 30;149(3683):510-515. [CrossRef] [Medline]
  28. Husain A, Repanshek Z, Singh M, Ankel F, Beck-Esmay J, Cabrera D, et al. Consensus guidelines for digital scholarship in academic promotion. West J Emerg Med 2020 Jul 08;21(4):883-891 [FREE Full text] [CrossRef] [Medline]
  29. Riddell J, Brown A, Kovic I, Jauregui J. Who are the most influential emergency physicians on Twitter? West J Emerg Med 2017 Feb 01;18(2):281-287 [FREE Full text] [CrossRef] [Medline]
  30. Barabási AL. Scale-free networks: A decade and beyond. Science 2009 Jul 24;325(5939):412-413. [CrossRef] [Medline]
  31. Waltman L, van Eck NJ. The inconsistency of the h-index. J. Am. Soc. Inf. Sci 2011 Oct 31;63(2):406-415. [CrossRef]
  32. Delgado López-Cózar E, Robinson-García N, Torres-Salinas D. The Google scholar experiment: How to index false papers and manipulate bibliometric indicators. J Assn Inf Sci Tec 2013 Nov 11;65(3):446-454. [CrossRef]
  33. Schreiber M. How relevant is the predictive power of the h-index? A case study of the time-dependent Hirsch index. J Informetr 2013 Apr;7(2):325-329. [CrossRef]


AIC: Akaike information criterion
API: application programming interface
CRAM: comprehensive researcher achievement model
DORA: Declaration on Research Assessment
GUI: graphical user interface
HITS: hyperlink-induced topic search
HSG: Healthcare Social Graph
IQR: interquartile range
STROBE: Strengthening the Reporting of Observational Studies in Epidemiology


Edited by R Kukafka; submitted 16.03.21; peer-reviewed by T Chan, M Janodia; comments to author 05.04.21; revised version received 07.04.21; accepted 06.05.21; published 31.05.21

Copyright

©Lucas Oliveira J e Silva, Graciela Maldonado, Tara Brigham, Aidan F Mullan, Audun Utengen, Daniel Cabrera. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 31.05.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.