Published on 14.07.2021 in Vol 23, No 7 (2021): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/28615.
Emergency Physician Twitter Use in the COVID-19 Pandemic as a Potential Predictor of Impending Surge: Retrospective Observational Study


Original Paper

1Division of Disaster Medicine, Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, MA, United States

2Department of Emergency Medicine, Harvard Medical School, Boston, MA, United States

3Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States

4Department of Emergency Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, United States

5Department of Emergency Medicine, Mount Sinai Morningside-West, New York, NY, United States

Corresponding Author:

Colton Margus, MD

Division of Disaster Medicine

Department of Emergency Medicine

Beth Israel Deaconess Medical Center

Rosenberg Bldg. 2nd Floor

One Deaconess Road

Boston, MA, 02215

United States

Phone: 1 6177543462

Email: cmargus@bidmc.harvard.edu


Related Article: Comment in https://www.jmir.org/2022/3/e34870

Background: The early conversations on social media by emergency physicians offer a window into the ongoing response to the COVID-19 pandemic.

Objective: This retrospective observational study of emergency physician Twitter use details how the health care crisis has influenced emergency physician discourse online and how this discourse may serve as a harbinger of ensuing surge.

Methods: Followers of the three main emergency physician professional organizations were identified using Twitter’s application programming interface. They and their followers were included in the study if they identified explicitly as US-based emergency physicians. Statuses, or tweets, were obtained between January 4, 2020, when the new disease was first reported, and December 14, 2020, when vaccination first began. Original tweets underwent sentiment analysis using the previously validated Valence Aware Dictionary and Sentiment Reasoner (VADER) tool as well as topic modeling using latent Dirichlet allocation unsupervised machine learning. Sentiment and topic trends were then correlated with daily change in new COVID-19 cases and inpatient bed utilization.

Results: A total of 3463 emergency physicians produced 334,747 unique English-language tweets during the study period. Out of 3463 participants, 910 (26.3%) stated that they were in training, and 466 of 902 (51.7%) participants who provided their gender identified as men. Overall tweet volume went from a pre-March 2020 mean of 481.9 (SD 72.7) daily tweets to a mean of 1065.5 (SD 257.3) daily tweets thereafter. Parameter and topic number tuning led to 20 tweet topics, with a topic coherence of 0.49. Except for a week in June and 4 days in November, discourse was dominated by the health care system (45,570/334,747, 13.6%). Discussion of pandemic response, epidemiology, and clinical care were jointly found to moderately correlate with COVID-19 hospital bed utilization (Pearson r=0.41), as was the occurrence of “covid,” “coronavirus,” or “pandemic” in tweet texts (r=0.47). Momentum in COVID-19 tweets, as demonstrated by a sustained crossing of 7- and 28-day moving averages, was found to have occurred on an average of 45.0 (SD 12.7) days before peak COVID-19 hospital bed utilization across the country and in the four most contributory states.

Conclusions: COVID-19 Twitter discussion among emergency physicians correlates with and may precede the rising of hospital burden. This study, therefore, begins to depict the extent to which the ongoing pandemic has affected the field of emergency medicine discourse online and suggests a potential avenue for understanding predictors of surge.

J Med Internet Res 2021;23(7):e28615

doi:10.2196/28615


Introduction

The contagiousness, fatality rate, and long-term sequelae thus far attributed to COVID-19, the disease caused by SARS-CoV-2, have led to significant strains on the health care system. Since the World Health Organization (WHO) first reported “a cluster of pneumonia cases” in Wuhan, China, on January 4, 2020 [1], the social media platform Twitter has become a source of both official health information and unofficial medical discourse regarding the ongoing pandemic. Boasting 180 million daily active users [2], the service not only allows account holders to share links, media, and brief strings of text but has also evolved into a public forum for unvetted information that can augment, if not supersede, more traditional dissemination methods.

On December 11, 2020, the US Food and Drug Administration (FDA) used Twitter to announce its authorization for immediate emergency use of the COVID-19 vaccine developed by Pfizer and BioNTech [3]. In its Twitter message, or tweet, about the decision, the FDA (@US_FDA) reiterated its aim to “assure the public and medical community that it has conducted a thorough evaluation of the available safety, effectiveness, and manufacturing quality information” [4]. Directly addressing Twitter’s medical community in this way was intentional: throughout the COVID-19 pandemic, many physicians turned to social media rather than traditional medical information channels to discuss the merits and demerits of possible treatments, prior to the availability of formal clinical guidance. Myriad treatment modalities and prevention strategies have been proposed at all levels, and Twitter has served as a means of disseminating everything from guidelines and data to anecdotes and opinions [5].

Utilizing social media to aid in the mapping of an ongoing crisis is not new, and Twitter use has previously been studied during, among other outbreaks, the H1N1 and Zika virus epidemics [6,7]. Yet even with unparalleled international effort, formal forecasting models of COVID-19 have largely failed [8], and many geopolitical comparisons in popular media now appear, in hindsight, to have been premature [9-12]. As the front and, for many Americans, only door into the US health care system, emergency departments continue to be looked to for public health surveillance and treatment strategies, as a kind of finger on the epidemiological pulse of their communities [13,14].

Emergency physicians, in particular, have long been at the forefront of physician engagement with social media, relying on a budding network of fellow clinicians collaborating on what has become known as free open-access medical education [15]. The COVID-19 pandemic only further accentuates the unique role of the emergency physician community online, as frontline providers who not only take on substantial risk but who may also be able to provide substantial insight. Facing changed admission criteria, expanded alternate care sites, and recycled equipment, emergency physicians have been forced into the unenviable position of making difficult triaging and resource allocation decisions. This study, therefore, seeks to characterize the sentiment and topic trends in emergency physician discourse on Twitter throughout the prevaccination pandemic, as a potential harbinger of the surge needs that followed.


Methods

Sampling and Data Collection

This work was approved as exempt human subjects research through the Beth Israel Deaconess Medical Center Institutional Review Board in Boston, Massachusetts. In order to access Twitter’s application programming interface (API), a developer account was applied for and obtained. Python 3.8.5 (Python Software Foundation) and the Tweepy library [16] then made it possible to acquire all unique followers of the three major physician professional societies in emergency medicine: the American College of Emergency Physicians (ACEP; @ACEPNow), the Society for Academic Emergency Medicine (SAEM; @SAEMonline), and the American Academy of Emergency Medicine (AAEM; @aaeminfo) [17-21]. Because sex and gender are not directly recorded by Twitter but have previously been shown to influence social media engagement and even clinical diagnosis and management [22-24], gendered nouns and pronouns stated in user bios were considered in their place. Those users with privacy settings that would render tweets protected from analysis were removed. Each user bio was then initially screened by textual search for any of 157 text strings that the research team identified as connoting a public acknowledgement of one’s role as an emergency physician, such as “emergency medicine physician,” “emergency D.O.,” or “ER doc.”
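As an illustration of this acquisition step only, the sketch below uses the Tweepy cursor interface (as it existed in the 3.x releases current at the time of the study) to pull unprotected followers of the three organizational handles and flag bios containing physician-identifying strings. The credential placeholders and the abbreviated keyword list are hypothetical; the full 157-string list and the manual physician review described above are not reproduced here.

```python
import tweepy

# Hypothetical credentials; a Twitter developer account is required.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

ORG_HANDLES = ["ACEPNow", "SAEMonline", "aaeminfo"]
# Abbreviated stand-in for the 157 strings used to screen user bios.
EM_STRINGS = ["emergency medicine physician", "emergency physician", "er doc", "emergency d.o."]

followers = {}
for handle in ORG_HANDLES:
    for user in tweepy.Cursor(api.followers, screen_name=handle, count=200).items():
        if user.protected:
            continue  # drop accounts whose tweets are protected from analysis
        followers[user.id] = user  # keying by ID de-duplicates users across handles

# First-pass bio screen; manual review by two physicians follows in the actual protocol.
candidates = [u for u in followers.values()
              if any(s in (u.description or "").lower() for s in EM_STRINGS)]
print(f"{len(followers)} unique unprotected followers; {len(candidates)} bio matches")
```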

Exclusion criteria included aspiring emergency physicians and students, organizations, physicians from other specialties, as well as users belonging to other professions, living outside the United States, or without a clear location at the state level. However, emergency physicians still in training, whether described as interns or residents, were not excluded. Exclusion for any of these reasons was determined by two practicing emergency physicians each reviewing and sorting all users manually and independently, with any discrepancies decided by consensus.

A chain-referral sampling method was then employed in order to expand the study group to include those US-based emergency physicians on Twitter not following one of the major professional organization accounts [25]. Followers of already-included participants were then aggregated to create a composite list of potential additional participants. After applying the same exclusions, this new group of users was then appended to the original as a more comprehensive sampling of US emergency physicians on Twitter.

All available tweets up to Twitter’s own limit of 3200 per user were acquired for each study participant. Tweets were removed if reposted as a retweet from a different post, if non-English, or if falling outside the study period from and including January 4, 2020, based on the date of the initial WHO announcement, to and including December 14, 2020, based on the date of the first FDA-approved vaccination in the United States [26].
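A sketch of this timeline collection and filtering, under the same Tweepy 3.x assumption, might look like the following; the helper name and date handling are illustrative rather than a reproduction of the study code.

```python
import datetime
import tweepy

START = datetime.datetime(2020, 1, 4)                 # WHO cluster announcement
END = datetime.datetime(2020, 12, 14, 23, 59, 59)     # first US vaccination

def collect_study_tweets(api, user_ids):
    """Pull each user's timeline (Twitter caps this at roughly 3200 statuses)
    and keep original English-language tweets within the study window."""
    kept = []
    for uid in user_ids:
        cursor = tweepy.Cursor(api.user_timeline, user_id=uid,
                               count=200, tweet_mode="extended")
        for status in cursor.items():
            if hasattr(status, "retweeted_status"):
                continue                     # exclude retweets
            if status.lang != "en":
                continue                     # exclude non-English or undetermined language
            if not (START <= status.created_at <= END):
                continue                     # exclude tweets outside the study period
            kept.append({"user_id": uid,
                         "created_at": status.created_at,
                         "text": status.full_text})
    return kept
```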

Sentiment and Topic Generation

Several different methods have previously been employed to conduct sentiment analysis of tweets specific to the health care field, with 46% of such tweets demonstrating sentiment of some kind [27]. Here, the open-source Valence Aware Dictionary and Sentiment Reasoner (VADER) analysis tool was used to determine both the direction and extent of tweet sentiment polarity, based on a lexicon of sentiment-related words. VADER has been shown to outperform human raters and, in handling emoji and slang, is particularly suited for social media text [28,29]. Sentiment polarity ratings were summed and standardized as a compound score between –1 and 1. By convention, tweets with a compound score between –0.05 and 0.05 were classified as neutral [29,30].
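A minimal sketch of this scoring convention, using the open-source vaderSentiment package (the example tweet text is invented):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def compound_sentiment(text):
    """Return VADER's compound score (-1 to 1) and its polarity class."""
    score = analyzer.polarity_scores(text)["compound"]
    if score >= 0.05:
        label = "positive"
    elif score <= -0.05:
        label = "negative"
    else:
        label = "neutral"    # scores between -0.05 and 0.05 are treated as neutral
    return score, label

# Invented example tweet
print(compound_sentiment("Proud of our amazing ED team tonight, even on the hardest shifts."))
```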

Using the gensim Python package, all tweets were tokenized and preprocessed, including removal of punctuation, special characters, mentions of other users, stop words of little topic value, and links to external websites. Hashtags, which users sometimes use to denote a contextual theme [31,32], were converted to text. Because frequently co-occurring words can exist with unique meaning, two- or three-word phrases were also considered as independent tokens, as in “healthcare_workers.” These preprocessed tweets then underwent unsupervised topic modeling in order to discern meaningful content themes. Latent Dirichlet allocation (LDA) is a common method for topic modeling that has previously been utilized to analyze health care–related tweets [33,34]. Topic coherence, as proposed by Röder et al due to its higher correlation with human topic ranking [35], was then maximized by iteratively modeling over a range of topic numbers as well as α parameters. The resulting topic model was then used to assign a dominant topic to each tweet included in the sample.
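The gensim workflow described above might be sketched as follows; the preprocessing shown is simplified relative to the full pipeline, and the grid of topic numbers and α values is illustrative rather than the exact grid searched in the study.

```python
from gensim import corpora
from gensim.models import CoherenceModel, LdaModel
from gensim.models.phrases import Phraser, Phrases
from gensim.parsing.preprocessing import STOPWORDS
from gensim.utils import simple_preprocess

def preprocess(texts):
    """Tokenize tweets, drop stop words, and join frequent bigrams (e.g., healthcare_workers)."""
    tokenized = [[tok for tok in simple_preprocess(doc, deacc=True) if tok not in STOPWORDS]
                 for doc in texts]
    bigrams = Phraser(Phrases(tokenized, min_count=20, threshold=10))
    return [bigrams[doc] for doc in tokenized]

def fit_best_lda(docs, topic_range=(10, 15, 20, 25), alphas=("symmetric", "asymmetric")):
    """Fit LDA models over a small grid and keep the one with the highest c_v coherence."""
    dictionary = corpora.Dictionary(docs)
    dictionary.filter_extremes(no_below=20, no_above=0.5)
    corpus = [dictionary.doc2bow(doc) for doc in docs]
    best_model, best_cv = None, -1.0
    for num_topics in topic_range:
        for alpha in alphas:
            lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics,
                           alpha=alpha, passes=5, random_state=42)
            cv = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                                coherence="c_v").get_coherence()
            if cv > best_cv:
                best_model, best_cv = lda, cv
    return best_model, best_cv   # the study settled on 20 topics with coherence 0.49

def dominant_topic(lda, dictionary, tokens):
    """Assign the single most probable topic to one preprocessed tweet."""
    topics = lda.get_document_topics(dictionary.doc2bow(tokens))
    return max(topics, key=lambda pair: pair[1])[0]
```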

In order to contextualize these sentiment and topic trends within the ongoing pandemic, daily COVID-19 case counts and COVID-19 inpatient bed utilization (CIBU) rates were acquired from the US Centers for Disease Control and Prevention and the US Department of Health and Human Services [36,37]. These data were converted to 7-day simple moving averages to account for lower weekend reporting and other daily fluctuations [38,39]. Tweet volume, sentiment, and dominant topic trends were then correlated with new COVID-19 cases and CIBU through Prism, version 9.0.2 (GraphPad Software).
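The study performed this smoothing and correlation in Prism; a rough Python equivalent, assuming a daily data frame with hypothetical column names, is sketched below.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# df is assumed to have a daily DatetimeIndex and hypothetical columns
# "topic_tweets" (daily tweet count for a topic of interest) and "cibu"
# (percent COVID-19 inpatient bed utilization).
def smoothed_change_correlation(df, x="topic_tweets", y="cibu", window=7):
    """Correlate daily changes in 7-day simple moving averages of two series."""
    sma = df[[x, y]].rolling(window).mean()   # 7-day simple moving averages
    change = sma.diff().dropna()              # daily change in the smoothed series
    pearson_r, pearson_p = pearsonr(change[x], change[y])
    spearman_r, spearman_p = spearmanr(change[x], change[y])
    return {"pearson": (pearson_r, pearson_p), "spearman": (spearman_r, spearman_p)}
```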

Further comparison between public health and Twitter data was made possible by plotting the 7- and 28-day simple moving averages and observing their intersection as a potential indicator for momentum using Excel, version 16.47.1 (Microsoft). Similarly, a moving average convergence/divergence oscillator (MACD) was generated by subtracting the 28-day exponential moving average from the 7-day exponential moving average. This MACD was then monitored for both (1) turning positive and (2) crossing above its own 7-day exponential moving average. These cross signals based on simple moving averages and on the MACD are both loosely derived from lagging indicators of historical price patterns that are commonly used in finance to guide investment decisions and have previously been applied directly to SARS-CoV-2 infection data [40,41].
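A sketch of these momentum indicators in pandas follows, assuming a daily series of COVID-19-related tweet counts; whether a given cross is "sustained" was judged retrospectively in the study and is not encoded here.

```python
import pandas as pd

def momentum_signals(tweet_counts: pd.Series) -> pd.DataFrame:
    """Flag SMA-cross and MACD momentum signals for a daily COVID-19 tweet-count series."""
    out = pd.DataFrame({"count": tweet_counts})
    out["sma7"] = tweet_counts.rolling(7).mean()
    out["sma28"] = tweet_counts.rolling(28).mean()
    out["sma_cross"] = out["sma7"] > out["sma28"]       # 7-day SMA above 28-day SMA

    ema7 = tweet_counts.ewm(span=7, adjust=False).mean()
    ema28 = tweet_counts.ewm(span=28, adjust=False).mean()
    out["macd"] = ema7 - ema28                          # MACD line
    out["signal"] = out["macd"].ewm(span=7, adjust=False).mean()   # 7-day EMA of the MACD
    # Momentum flag: MACD positive and above its own signal line.
    out["macd_momentum"] = (out["macd"] > 0) & (out["macd"] > out["signal"])
    return out
```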

Results
The three key US emergency physician organizations had a collective 42,918 followers of their primary Twitter accounts as of December 11, 2020. When those following more than one professional organization were only counted once, there were 27,022 unique followers, with 10,905 (40.4%) belonging to at least two of the three groups (Figure 1). As an approximation for cohesion, the overlap coefficient of the three handles was 0.43, calculated as the ratio of the intersection over the maximum possible intersection ([A∩B∩C]/min[|A|,|B|,|C|]) [42]. After exclusions, 2073 US-based emergency physicians were identified, with high interrater reliability (κ=0.96).
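As a minimal sketch of that calculation, generalized to three sets (the follower ID sets below are toy placeholders):

```python
def overlap_coefficient(*follower_sets):
    """Overlap (Szymkiewicz-Simpson) coefficient generalized to several sets:
    |A ∩ B ∩ C| / min(|A|, |B|, |C|)."""
    intersection = set.intersection(*follower_sets)
    return len(intersection) / min(len(s) for s in follower_sets)

# Toy placeholder follower-ID sets for the three organizational handles.
acep, saem, aaem = {1, 2, 3, 4}, {2, 3, 4, 5}, {2, 3, 6}
print(overlap_coefficient(acep, saem, aaem))   # 2 / 3 ≈ 0.67 for these toy sets
```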

Figure 1. Overview of the methodology applied for study participant selection. Unprotected unique followers of the Twitter handles for three key US emergency physician professional organizations were sampled; they were included if referencing being an emergency medicine physician and excluded if not found to be an individual emergency physician located in a US state or territory. A referral sample of the original sample's followers underwent the same inclusion and exclusion criteria to contribute additional US-based emergency physicians to the study group. AAEM: American Academy of Emergency Medicine; ACEP: American College of Emergency Physicians; SAEM: Society for Academic Emergency Medicine.

There were 1,510,802 followers of the initial cohort acquired on December 12 and 13, 2020, with 734,644 found to be internally unique as well as distinct from the original user list assessed. Applying the same inclusion and exclusion criteria resulted in 3110 emergency physicians, 1433 of whom could clearly be identified as located in specific US states, territories, or districts through their public Twitter location and description (κ=0.94). Combining the two groups, there were 3463 US-based emergency physicians included in the study.

Study participants had been using Twitter for an average of 6.6 (SD 3.5) years, with an average of 183.8 (SD 491.0) total tweets (Table 1). Only 910 out of 3463 (26.3%) participants explicitly described themselves as a resident or intern currently in training. The most common US states represented were New York (433/3463, 12.5%) and California (395/3463, 11.4%), and the most contributory US region was the Northeast (1057/3463, 30.5%). Self-identified gender was infrequent (902/3463, 26.0%), with 466 of those 902 participants (51.7%) identifying as a man.

Table 1. Descriptive statistics of included US-based emergency physicians on Twitter (N=3463).

Gender, n (%)
  Identified: 902 (26.0)
    Men (n=902): 466 (51.7)
    Women (n=902): 436 (48.3)
  Unidentified: 2561 (74.0)

Usage
  Verified account, n (%): 27 (0.8)
  Duration (years), mean (SD): 6.6 (3.5)
  Tweets, mean (SD): 183.8 (491.0)
  Followers, mean (SD): 664.6 (5326.3)
  Since 2007-2009, n (%): 519 (15.0)
  Since 2010-2014, n (%): 1471 (42.5)
  Since 2015-2019, n (%): 1235 (35.7)
  Since 2020, n (%): 238 (6.9)

Organizations followed, n (%)
  American Academy of Emergency Medicine (AAEM) only: 144 (4.2)
  American College of Emergency Physicians (ACEP) only: 351 (10.1)
  Society for Academic Emergency Medicine (SAEM) only: 275 (7.9)
  AAEM and ACEP: 114 (3.3)
  AAEM and SAEM: 148 (4.3)
  ACEP and SAEM: 343 (9.9)
  All three organizations: 655 (18.9)
  None: 1433 (41.4)

Training
  Identified as in training, n (%): 910 (26.3)

US region, n (%)
  Midwest: 789 (22.8)
  Northeast: 1057 (30.5)
  South: 884 (25.5)
  West: 724 (20.9)
  Territory: 9 (0.3)

Top five US states, n (%)
  New York: 433 (12.5)
  California: 395 (11.4)
  Pennsylvania: 249 (7.2)
  Texas: 235 (6.8)
  Illinois: 212 (6.1)

Tweets collected for the study group totaled 1,941,894 as of December 24, 2020, with 630,915 (32.5%) of those obtained falling between January 4 and December 14, inclusive (Figure 2). Because of a cap on the number of tweets able to be pulled through the official Twitter API, 44 out of 3463 (1.3%) users appeared to have exceeded the limit, such that not all tweets would have been captured. Despite truncation, these avid users still contributed 140,938 of all 1,941,894 (7.3%) collected tweets. Overall, 256,636 (40.7%) retweets and 39,532 (6.3%) non-English tweets were removed, leaving 334,747 (53.1%) unique English-language tweets for analysis. Daily volume went from a pre-March mean of 481.9 (SD 72.7) tweets to a mean of 1065.5 (SD 257.3) tweets thereafter.

Figure 2. Overview of the methodology applied for tweets selected for analysis. Tweets collected for study participants were included if they fell within the January 4 through December 14, 2020, study period and excluded if they were found to be retweets, non-English tweets, or tweets of indeterminate language.

After preprocessing, 1,958,230 semantic units, or tokens, were found within the corpus of tweets, with a total vocabulary of 12,401. LDA modeling over a range of topic numbers and parameters settled on a total of 20 content topics for this study. Two physicians then jointly labeled these 20 topics through manual review and discussion of key terms and representative tweets. For example, the topic with the top five terms of “resident,” “student,” “residency,” “learn,” and “year” was labeled as medical training. In this way, the most prevalent topics were found to relate to the health care system (45,570/334,747, 13.6%), collaboration (20,112/334,747, 6.0%), and politics (18,186/334,747, 5.4%) (Table 2). Notably, the health care system was the dominant topic throughout the study period, with two exceptions: it was supplanted from June 2 to 9 by race relations and from November 6 to 10 by politics.

Daily change in 7-day moving averages for specific tweet topics and sentiment polarity demonstrated small Pearson correlation coefficients for the topics of pandemic response (r=0.26, 95% CI 0.15-0.36), epidemiology (r=0.25, 95% CI 0.14-0.35), and clinical care (r=0.23, 95% CI 0.13-0.33) with reported COVID-19 cases (all P<.001) (Table 2). There was greater correlation for these topics with hospital bed utilization (all P<.001). While the three topics considered jointly were even more correlated (r=0.41, 95% CI 0.31-0.50), they still fell short of the correlation seen with the 9.4% of included tweets containing “covid,” “coronavirus,” “corona virus,” “cov-2,” “cov2,” or “pandemic” within the tweet text (r=0.47, 95% CI 0.38-0.56) (all P<.001). A proportional stacked area chart reveals an early overall increase in Twitter use as daily case counts rose, particularly among COVID-19–related topics (Figure 3). Aggregated sentiment scores reached a nadir on June 6, when race relations was the dominant topic in the sample, and again on October 6, the day after then–US President Donald Trump was discharged from his COVID-19 hospital admission [43].

Table 2. Topic descriptive statistics.
For each topic, the following are listed: total tweets, n (% of N=334,747); compound sentiment score, mean (SD); Pearson correlation with daily reported COVID-19 cases, r (95% CI); Pearson and Spearman correlations with CIBU^a, r (95% CI); key terms; and an example tweet.

Health care system: 45,570 (13.6%); sentiment 0.16 (SD 0.36); cases r=0.03 (–0.07 to 0.14); CIBU Pearson r=0.12 (0.00 to 0.23); CIBU Spearman r=0.07 (–0.05 to 0.18).
Key terms: care, health, physician, medicine, practice, system, medical, important, community, change, issue, lead, work, support, research, address, focus, policy, create, improve.
Example tweet: “I’m not the one to ask about nursing. Nursing has always defined itself. The problem is the definition used to define ‘advanced nursing’ is the same definition used to define medicine. That is not the same definition that was used years ago, it changed. Common sense dictates one”

Collaboration: 20,112 (6.0%); sentiment 0.58 (SD 0.34); cases r=0.01 (–0.10 to 0.12); CIBU Pearson r=0.09 (–0.03 to 0.20); CIBU Spearman r=0.11 (–0.01 to 0.23).
Key terms: work, great, amazing, team, love, proud, congrat, congratulation, colleague, job, awesome, friend, support, part, good, hard, share, incredible, today, honor.
Example tweet: “Honored to receive this award from @TXChildrensPEM^b section. Thank you all for being such a great group of mentors, colleagues, and friends! Also, winning the Fellow’s Award means so much. Happy for such a great group of fellows and mentees!”

Pandemic care: 13,240 (4.0%); sentiment 0.14 (SD 0.50); cases r=0.23 (0.13 to 0.33); CIBU Pearson r=0.26 (0.15 to 0.37); CIBU Spearman r=0.18 (0.06 to 0.29).
Key terms: patient, care, hospital, covid, doctor, nurse, emergency, doc, physician, call, sick, staff, visit, admit, treat, ed^c, icu^d, medical, work, room.
Example tweet: “Physician-owned hospitals can increase the number of licensed beds, operating rooms, and procedure rooms by converting observation beds to inpatient beds, among other means, to accommodate patient surge”

Research: 16,415 (4.9%); sentiment 0.02 (SD 0.47); cases r=0.07 (–0.04 to 0.17); CIBU Pearson r=0.06 (–0.06 to 0.17); CIBU Spearman r=0.07 (–0.05 to 0.19).
Key terms: patient, study, treatment, high, give, low, risk, drug, pain, show, dose, trial, present, treat, disease, early, diagnosis, benefit, med, effect.
Example tweet: “Take-homes from 2020 ACEP^e Opioids Clinical Policy: 1. Treat opioid withdrawal with buprenorphine. 2. Preferentially prescribe non-opioids for acute pain. 3. Avoid prescribing opioids for chronic pain. 4. Do not prescribe sedatives to patients taking opioids”

Race relations: 15,128 (4.5%); sentiment –0.17 (SD 0.51); cases r=0.00 (–0.10 to 0.12); CIBU Pearson r=0.03 (–0.09 to 0.14); CIBU Spearman r=–0.02 (–0.14 to 0.09).
Key terms: people, black, man, kill, woman, call, speak, matter, police, stand, white, stop, racism, word, racist, history, protest, happen, die, wrong.
Example tweet: “Black lives matter means Black queer lives matter, Black trans lives matter, Black non-binary lives matter, Black femme lives matter, Black incarcerated lives matter, and Black disabled lives matter...”

Pandemic response: 14,143 (4.2%); sentiment 0.05 (SD 0.50); cases r=0.26 (0.15 to 0.36); CIBU Pearson r=0.38 (0.28 to 0.47); CIBU Spearman r=0.27 (0.16 to 0.38).
Key terms: covid, pandemic, coronavirus, vaccine, response, protect, health, virus, fight, ppe^f, crisis, die, continue, country, worker, spread, leadership, expert, action, state.
Example tweet: “#COVID. COVID COVID COVID COVID COVID COVID COVID COVID COVID 183,000+ Americans dead, and counting... Care for your neighbors. #WearAMask”

Reading: 17,897 (5.3%); sentiment 0.27 (SD 0.41); cases r=0.06 (–0.05 to 0.17); CIBU Pearson r=0.20 (0.09 to 0.31); CIBU Spearman r=0.13 (0.01 to 0.25).
Key terms: read, great, check, write, thread, article, post, book, list, follow, find, share, good, send, add, paper, twitter, email, tweet, link.
Example tweet: “Please read the first paragraph of the new image again. It literally is saying what I originally replied with. Google searches do no good if you won’t read the text of what you find, not just the header.”

Schedule: 15,577 (4.7%); sentiment 0.15 (SD 0.41); cases r=0.04 (–0.07 to 0.15); CIBU Pearson r=0.10 (–0.02 to 0.21); CIBU Spearman r=0.07 (–0.05 to 0.19).
Key terms: day, time, week, today, hour, start, work, shift, year, long, wait, month, back, night, spend, end, run, sleep, minute, morning.
Example tweet: “The length of shifts of studies in this paper started at 13 hours. Time off during day hours not post-night is obviously not the same as working 13 hours and having a few hours off before bed.”

Public safety: 11,594 (3.5%); sentiment 0.19 (SD 0.45); cases r=0.05 (–0.06 to 0.16); CIBU Pearson r=0.26 (0.15 to 0.36); CIBU Spearman r=0.12 (0.00 to 0.24).
Key terms: people, school, safe, open, home, work, close, place, stay, follow, mask, risk, live, order, family, plan, community, back, person, kid.
Example tweet: “Every single store we went into on Michigan Ave required a mask. Our hotel requires a mask anywhere inside. Even Millenium Park requires a mask to enter and walk around outside. And on the streets plenty of people are masked outside. I think compliance is excellent”

Politics: 18,186 (5.4%); sentiment –0.01 (SD 0.49); cases r=0.00 (–0.11 to 0.11); CIBU Pearson r=–0.01 (–0.13 to 0.10); CIBU Spearman r=–0.05 (–0.17 to 0.07).
Key terms: vote, trump, election, lie, country, state, people, lose, president, win, debate, biden, stop, count, call, support, political, campaign, american, fact.
Example tweet: “Trump’s personal lawyer: Guilty. Trump’s campaign manager: Guilty. Trump’s deputy campaign manager: Guilty. Trump’s National Security Advisor: Guilty. Trump’s political advisor: Guilty.”

Entertainment: 18,100 (5.4%); sentiment 0.17 (SD 0.47); cases r=0.03 (–0.07 to 0.14); CIBU Pearson r=0.05 (–0.07 to 0.16); CIBU Spearman r=0.09 (–0.03 to 0.20).
Key terms: watch, good, play, love, guy, game, thing, time, bad, video, pretty, show, give, favorite, big, real, season, fan, idea, listen.
Example tweet: “I only watched pro sports and news for decades, never watching any of the popular TV shows; now I’ve actually started watching Downton Abbey instead. I guess Breaking Bad or GOT is next. I haven’t seen a single episode of either. Any other suggestions?”

Epidemiology: 14,208 (4.2%); sentiment 0.01 (SD 0.48); cases r=0.25 (0.14 to 0.35); CIBU Pearson r=0.36 (0.25 to 0.45); CIBU Spearman r=0.17 (0.05 to 0.28).
Key terms: covid, test, case, death, number, testing, people, high, positive, report, rate, day, virus, infection, risk, coronavirus, symptom, spread, increase, rise.
Example tweet: “Q: what if I traveled to high risk area/ contact w known #COVID19 case) & HAVE symptoms? A: Isolate yourself. U meet testing criteria but do not HAVE to get tested. If u test negative for everything, please isolate yourself until symptoms resolve as for any contagious illness.”

Scientific inquiry: 13,235 (4.0%); sentiment 0.13 (SD 0.46); cases r=–0.04 (–0.15 to 0.07); CIBU Pearson r=0.12 (0.00 to 0.23); CIBU Spearman r=0.12 (0.01 to 0.24).
Key terms: question, agree, datum, point, answer, science, study, base, evidence, fact, show, understand, opinion, true, wrong, important, good, correct, information, clear.
Example tweet: “Many, including @realDonaldTrump, have abandoned science, logic and common sense Don’t take medical advice from charlatans Listen to real experts Hydroxychloroquine data shows no benefit + significant potential harms”

Protective equipment: 15,156 (4.5%); sentiment 0.14 (SD 0.42); cases r=0.07 (–0.04 to 0.18); CIBU Pearson r=0.15 (0.03 to 0.26); CIBU Spearman r=0.14 (0.03 to 0.26).
Key terms: mask, wear, put, hand, face, line, eye, time, head, find, leave, back, hold, room, pull, run, clean, cover, hair, remove.
Example tweet: “A woman on the subway just pulled her mask down to blow her nose. Feeling like somehow people still don’t get it...”

Business of medicine: 12,217 (3.6%); sentiment 0.12 (SD 0.48); cases r=0.07 (–0.04 to 0.18); CIBU Pearson r=0.06 (–0.06 to 0.17); CIBU Spearman r=0.12 (0.00 to 0.24).
Key terms: pay, money, system, physician, cost, make, free, health, state, give, problem, care, medical, job, insurance, company, hospital, healthcare, cut, plan.
Example tweet: “Benchmarking to INW^g rates or lower, based on a antiquated federal fee scheduling system, is a non-starter for most physician owned and operated practices. Incentivize competition in the marketplace. Offer better reimbursement rates than CMGs^h or large groups. Break monopolies.”

Family: 14,296 (4.3%); sentiment 0.20 (SD 0.46); cases r=0.12 (0.01 to 0.23); CIBU Pearson r=0.12 (0.00 to 0.23); CIBU Spearman r=0.15 (0.03 to 0.26).
Key terms: year, kid, child, friend, family, good, call, give, time, talk, feel, parent, young, today, make, back, mom, remember, wife, baby.
Example tweet: “Same with my wife and her parents back in the day.younger sister got everything she wanted. We married young and never asked for anything. Only her mother came to our wedding (teen marriage never lasts) 44 years ago...no wedding gifts.”

Lifestyle: 17,610 (5.3%); sentiment 0.18 (SD 0.42); cases r=–0.02 (–0.13 to 0.09); CIBU Pearson r=0.07 (–0.05 to 0.18); CIBU Spearman r=0.07 (–0.05 to 0.19).
Key terms: make, eat, food, car, good, run, water, dog, drive, walk, buy, love, drink, bring, hot, coffee, nice, thing, cool, enjoy.
Example tweet: “Stuffed peppers: Cut 4 bell peppers in half lengthwise. In a skillet saute 2 cups spinach, 1/3 white onion and garlic. Add 1lb ground chicken. Season to taste. Add 1 cup cauliflower rice. Stuff the ‘rice’ into the peppers. Top peppers w/ cheese & bake for 20mins on 375 degrees.”

Medical training: 15,994 (4.8%); sentiment 0.36 (SD 0.42); cases r=0.08 (–0.03 to 0.19); CIBU Pearson r=0.13 (0.01 to 0.24); CIBU Spearman r=–0.03 (–0.15 to 0.09).
Key terms: resident, student, residency, learn, year, program, medical, medtwitter, great, join, today, virtual, mede, attend, interview, teach, school, talk, match, conference.
Example tweet: “Thankful for my residency family today! Had a great week of shifts and an awesome virtual conference last week! My faculty and co-residents have been so amazing these last few months!”

Emotional reaction: 12,401 (3.7%); sentiment 0.14 (SD 0.49); cases r=–0.03 (–0.14 to 0.08); CIBU Pearson r=0.10 (–0.01 to 0.22); CIBU Spearman r=0.11 (–0.01 to 0.23).
Key terms: make, thing, good, feel, time, bad, people, hard, lot, happen, agree, change, easy, hear, decision, part, point, find, real, sense.
Example tweet: “Are you nervous? Lots of people feel nervous when they come here That’s normal What are you nervous about? Are you nervous that something may hurt? A lot of people worry about that Nothing is going to hurt right now If that changes I’ll tell you & we’ll get thru it”

Inspirational: 13,668 (4.1%); sentiment 0.30 (SD 0.50); cases r=0.05 (–0.06 to 0.16); CIBU Pearson r=0.14 (0.03 to 0.25); CIBU Spearman r=0.17 (0.05 to 0.28).
Key terms: life, love, feel, hope, true, live, world, word, human, story, time, share, save, real, experience, moment, heart, family, find, change.
Example tweet: “Thought of the day: I can share my earthly riches like peace, joy, time, talents, giftings, physical helps, hope, wisdom, emotional strength, encouragement, etc.”

^a CIBU: COVID-19 inpatient bed utilization.

^b TXChildrensPEM: Texas Children’s Hospital Pediatric Emergency Medicine.

^c ed: emergency department.

^d icu: intensive care unit.

^e ACEP: American College of Emergency Physicians.

^f ppe: personal protective equipment.

^g INW: in-network.

^h CMG: contract management group.

Figure 3. Stacked area plot of 7-day moving average daily counts of latent Dirichlet allocation–derived topics, both those pertaining to COVID-19 (red area) and those not (blue area) (left axis), plotted against the 7-day moving average of daily compound sentiment scores nationally (right axis).

Over the full study period, three peaks emerged in both COVID-19–related discussion and CIBU, with the rise in tweets appearing to precede the corresponding rise in CIBU. This may be better appreciated with attention directed to where the 7-day moving average crosses above the 28-day moving average as a signal of topic momentum (Figure 4). After the first recorded domestic COVID-19 case on January 22, 2020, February 25 was the first such cross and preceded a period of sustained increase in CIBU from February 28 to an April 9 peak. The next occurrence was on June 22, occurring alongside the second period of sustained increase in CIBU from June 15 to a peak on July 21. A brief cross on July 31 was short-lived, but the subsequent cross on September 13 was maintained and corresponded to a rise in COVID-19 hospital burden that started September 24 and continued through the remainder of the study period, with several episodic crosses seen thereafter. When the MACD is also considered, February 22 (Figure 4, point A) marks a cross above both the zero centerline as well as its 7-day moving average, 46 days before the April 9 peak in CIBU. The next such cross occurred on June 24 (Figure 4, point B), 27 days before the second peak, while the third surge in CIBU appeared to coincide with a first crossing on September 30 (Figure 4, point C).

Figure 4. Time series plot of percent US COVID-19 inpatient bed utilization (CIBU; right axis) and its 7-day simple moving average (CIBU 7-SMA; right axis) against the 7-SMA and 28-day simple moving average (28-SMA) of COVID-19–related emergency physician tweets (left axis). Also plotted are the tweet exponential moving average convergence/divergence oscillator (MACD; left axis) and its own 7-day exponential moving average signal line (MACD 7-EMA; left axis). Labels A through C demonstrate sustained crossover points for tweet volume, where both the 7-SMA overcomes the 28-SMA and the MACD 7-EMA turns positive and overcomes the MACD as indicators of momentum.

Because the breadth and diversity of the United States may obfuscate local trends, the four most contributory states of California, New York, Pennsylvania, and Texas were similarly plotted (Figures 5-8). All four experienced a spring signal and subsequent surge, although New York has been recognized among them as an early epicenter [44]. Only Texas appears to have had a sustained cross of the 7-day moving average above the 28-day moving average from June 18 to July 15. This notably preceded the only significant summer peak among these states, reaching a maximum CIBU of 20.5% on July 20; in comparison, California reached a second peak of 14.3% on July 25 while neither New York nor Pennsylvania exceeded 10% again before November. The mean time from the preceding cross of moving averages in COVID-19–related emergency physician tweets to peak CIBU across the four states and the nation was 45.0 (SD 12.7) days.

Figure 5. California time series plots of the 7-day simple moving average (7-SMA) in percent COVID-19 inpatient bed utilization (CIBU 7-SMA; right axis) against the 7-SMA and the 28-day simple moving average (28-SMA) of COVID-19–related emergency physician tweet count (left axis). Also plotted are the tweet exponential moving average convergence/divergence oscillator (MACD; left axis) and its own 7-day exponential moving average signal line (MACD 7-EMA; left axis).
Figure 6. New York time series plots of the 7-day simple moving average (7-SMA) in percent COVID-19 inpatient bed utilization (CIBU 7-SMA; right axis) against the 7-SMA and the 28-day simple moving average (28-SMA) of COVID-19–related emergency physician tweet count (left axis). Also plotted are the tweet exponential moving average convergence/divergence oscillator (MACD; left axis) and its own 7-day exponential moving average signal line (MACD 7-EMA; left axis).
Figure 7. Pennsylvania time series plots of the 7-day simple moving average (7-SMA) in percent COVID-19 inpatient bed utilization (CIBU 7-SMA; right axis) against the 7-SMA and the 28-day simple moving average (28-SMA) of COVID-19–related emergency physician tweet count (left axis). Also plotted are the tweet exponential moving average convergence/divergence oscillator (MACD; left axis) and its own 7-day exponential moving average signal line (MACD 7-EMA; left axis).
Figure 8. Texas time series plots of the 7-day simple moving average (7-SMA) in percent COVID-19 inpatient bed utilization (CIBU 7-SMA; right axis) against the 7-SMA and the 28-day simple moving average (28-SMA) of COVID-19–related emergency physician tweet count (left axis). Also plotted are the tweet exponential moving average convergence/divergence oscillator (MACD; left axis) and its own 7-day exponential moving average signal line (MACD 7-EMA; left axis).

Discussion

Principal Findings

Emergency physician engagement on Twitter has grown considerably since the start of the COVID-19 pandemic in both the topics raised and the sentiments conveyed. Furthermore, the analysis described here demonstrates conversations with increasing focus on pandemic response, clinical care, and epidemiology. That these topics correlate better with CIBU than with simple case counts supports the idea that they may serve as a kind of barometer of health care system strain. Momentum in these conversations, as shown by crossings of tweet count moving averages, occurred before key rises in CIBU; with further research, this signal may contribute to the larger effort of predicting surge from multiple data streams.

The COVID-19 pandemic has required emergency physicians to adapt continually to an extraordinary inadequacy of resources. While strains on the ventilator supply received much attention [45], early limitations in testing and bed availability also created significant clinical challenges [46], to say nothing of the mental health effects, which are likely to be far-reaching [47]. In the case of personal protective equipment (PPE) shortages, many frontline providers resorted to individual means to acquire makeshift supplies, and some even turned to social media, as in the case of the #GetMePPE Twitter hashtag, in order to spur necessary action [48-50]. Despite that potential good, such public distress and debate from the medical frontline has, at times, spurred controversy and even real-world professional repercussions [51].

With overwhelming caseloads and without clinical consensus, frontline physicians have been forced to decide between various treatment modalities based on unclear and, at times, contradictory information, with significant moral distress [52]. The effort to maintain appropriate patient care despite these unknowns, when faced with a need for resource rationing [53], is a de facto implementation of crisis standards of care. While contingency planning is situational and incorporates some aspects of triage practiced routinely in overcrowded emergency departments across the country, the formal triggering of crisis standards, and implicit divergence from conventional standards, enacts systematic change in protocols and care plans during a sustained period of large-scale strain [54-56].

Taken together, this retrospective look at emergency physician Twitter use suggests a new way of considering the pandemic surge, as emergency physician utilization of Twitter reached unprecedented highs. There are likely several reasons for this. The online community has been shown to provide psychological benefit, a benefit potentially amplified by the isolation faced in providing crisis care and by a perceived collapse in trust in the existing infrastructure and policy guidance [57,58]. That collaboration was the only topic among the 20 to have a compound sentiment range that did not cross zero may relate to this yearning for support. Still, positive mean sentiment scores among the vast majority of topics were unexpected, given recent work on general public perceptions of the pandemic [59]. A self-perceived personal and professional connection to the dominant issue of the day may also have played a part. While the root cause is undoubtedly complex and multifaceted, the increase in emergency physician engagement with social media is likely here to stay.

Whether emergency physicians online can truly act as an early indicator for policy makers remains to be seen, but the community is undoubtedly a subset of the broader pandemic response and is worth looking at more closely. Moving averages have been used to indicate movement in financial markets but are not true predictors of future trends. Given the potential for false signaling and the challenge in determining what constitutes a sustained or meaningful cross prospectively, derived crosses of the kind shown here will likely need corroboration from a variety of other metrics as well as comparison to other samples and controls. Even so, the idea that an indicator of a physician behavioral trend online may also signal momentum in real-world hospital bed utilization has clear implications for the future. This is particularly relevant when it is considered that the study group itself was sourced directly from followers of major professional organizations for whom early recognition of surge would empower a more coordinated and efficacious policy response.

To make such a tool operationally relevant, collaboration between government and private sector partners will likely be necessary to build adequate data pathways allowing for public health surveillance in real time. While emergent topic generation from a retrospective corpus of tweets is not a feasible option for rapid and predictive modeling, this work suggests that even a simple collection of tweets containing disease-specific text strings can nonetheless yield important, potentially meaningful information to inform resource allocation and other policy decisions. Ultimately, all disasters are media events, affecting both how and what information is conveyed. There is, therefore, no great leap of faith in acknowledging that social media, too, may have an important part to play. Future research must delve further into how such tools can one day be used in the early recognition of, and response to, health care strain of such magnitude.

Limitations

In holding to strict inclusion criteria, this study overlooked Twitter users who were not explicit in their self-identification as practicing emergency physicians. Emergency physicians were made the narrow focus of this work based on their key roles as clinical decision makers overseeing department throughput, but inclusion of nurses, technicians, and nonphysician midlevel providers may add breadth. Additionally, follower referral from within the sample may have introduced bias that could have been avoided by subsampling with some individuals selected for study participation and others only for referral [25]. Even so, a comprehensive list of emergency physicians on Twitter compiled in 2016 concluded that there were only 2234 such users around the globe [60]. Social media use has undoubtedly risen since, particularly with the influx of a growing number of emergency medicine residents [61], but the sample provided here does appear to be appropriately sized for this purpose.

Even so, this sample size was insufficient in both participant number and geographic spread to allow for more granular geographic analysis by city or county, although public health surveillance often occurs at this level [62,63]. Both demographic characteristics and spatiotemporal effects at the level of the individual participant have previously been shown to bias tweet sentiment and content, but these were not controlled for in this study [64,65].

Reliance on Twitter may itself limit generalizability, given its comparatively higher representation of young, urban, and minority users when compared with the general US population [66]. Only one social network platform was analyzed, and, insofar as it serves as a public forum, what medical professionals say online does not necessarily correlate with what they think or feel [67]. The study group, however, was not aware of its participation in this research, thereby avoiding that influence on behavior [68]. Excluding reposted tweets may have overlooked certain sentiments and reactions. Additionally, LDA topic modeling depends not only on the size of the overall corpus but on the length of the individual documents themselves. Although methods such as aggregation into larger documents have been proposed in order to overcome tweet brevity [69], doing so would not have allowed for the temporal and user-specific analysis intended. Finally, care must be taken in interpreting relationships between variables, such as physician tweets and CIBU, when both variables have undergone averaging or smoothing, which can sometimes suggest correlation where none exists.

Conclusions

This work reveals both the opportunity and the pressing need to explore social media use by the emergency physician community as a means of anticipating surge needs. As gatekeepers to the hospital, emergency physicians are uniquely positioned to act as early indicators of hospital surge, and methods that can track and analyze these indicators, such as the Twitter-based approach described here, could be vital to future pandemic planning and response.

Conflicts of Interest

None declared.

  1. World Health Organization (WHO). Twitter. 2020 Jan 04.   URL: https://twitter.com/WHO/status/1213523866703814656 [accessed 2020-08-04]
  2. Twitter Q3 2020 shareholder letter. US Securities and Exchange Commission. San Francisco, CA: Twitter, Inc; 2020 Oct 29.   URL: https://www.sec.gov/Archives/edgar/data/1418091/000141809120000199/twtrq320ex991.htm [accessed 2020-12-10]
  3. Hinton DM. Pfizer-BioNTech COVID-19 vaccine EUA letter of authorization. US Food and Drug Administration. Silver Spring, MD: US Food and Drug Administration; 2021 May 10.   URL: https://www.fda.gov/media/144412/download [accessed 2020-12-11]
  4. US Food and Drug Administration (US FDA). Twitter. 2020 Dec 11.   URL: https://twitter.com/US_FDA/status/1337588046674489346 [accessed 2020-12-11]
  5. Liu M, Caputi TL, Dredze M, Kesselheim AS, Ayers JW. Internet searches for unproven COVID-19 therapies in the United States. JAMA Intern Med 2020 Aug 01;180(8):1116-1118 [FREE Full text] [CrossRef] [Medline]
  6. Chew C, Eysenbach G. Pandemics in the age of Twitter: Content analysis of Tweets during the 2009 H1N1 outbreak. PLoS One 2010 Nov 29;5(11):e14118 [FREE Full text] [CrossRef] [Medline]
  7. Fu K, Liang H, Saroha N, Tse ZTH, Ip P, Fung IC. How people react to Zika virus outbreaks on Twitter? A computational content analysis. Am J Infect Control 2016 Dec 01;44(12):1700-1702. [CrossRef] [Medline]
  8. Ioannidis JP, Cripps S, Tanner MA. Forecasting for COVID-19 has failed. Int J Forecast 2020 Aug 25:1-16 [FREE Full text] [CrossRef] [Medline]
  9. Yang J. California and New York were both hit by Covid-19 early, but the results are very different. CNN. 2020 Apr 16.   URL: https://www.cnn.com/2020/04/14/opinions/california-new-york-covid-19-coronavirus-yang/index.html [accessed 2021-01-17]
  10. Secon H. England's coronavirus death rate is nearly 3 times higher than Ireland's. Proactive social-distancing measures made the difference. Business Insider. 2020 Apr 18.   URL: https://www.businessinsider.com/england-ireland-coronavirus-response-compared-2020-4 [accessed 2021-01-17]
  11. Wells P. California surpasses New York as centre of US Covid crisis. Financial Times. 2021 Jan 16.   URL: https://www.ft.com/content/8013b83a-84ab-4a8e-b39a-2f28c4d710f4 [accessed 2021-01-17]
  12. Carswell S. How did Ireland jump from low Covid base to world’s highest infection rate? The Irish Times. 2021 Jan 16.   URL: https:/​/www.​irishtimes.com/​news/​health/​how-did-ireland-jump-from-low-covid-base-to-world-s-highest-infection-rate-1.​4459429 [accessed 2021-01-17]
  13. Stanley K, Lora M, Merjavy S, Chang J, Arora S, Menchine M, et al. HIV prevention and treatment: The evolving role of the emergency department. Ann Emerg Med 2017 Oct;70(4):562-572.e3. [CrossRef] [Medline]
  14. Cochrane DG. Perspective of an emergency physician group as a data provider for syndromic surveillance. MMWR Suppl 2004 Sep 24;53:209-214 [FREE Full text] [Medline]
  15. Greene J. Social media and physician learning: Is it all Twitter? Ann Emerg Med 2013 Nov;62(5):11A-13A. [CrossRef] [Medline]
  16. Roesslein J. Tweepy: Twitter for Python!. GitHub. 2020.   URL: https://github.com/tweepy/tweepy [accessed 2020-12-11]
  17. Joint statement on mask excuses in the ED. American College of Emergency Physicians. 2020 Jul 28.   URL: https:/​/www.​acep.org/​corona/​COVID-19-alert/​covid-19-articles/​joint-statement-on-mask-excuses-in-the-ed/​ [accessed 2020-12-11]
  18. EM organizations. West Virginia University Libraries. 2020 Nov 05.   URL: https://libguides.wvu.edu/EmergencyMedicine/ProfOrgs [accessed 2020-12-12]
  19. Moore J. Professional Organizations-Emergency Medicine. Minneapolis, MN: University of Minnesota   URL: https://tinyurl.com/pujrymmt [accessed 2020-12-12]
  20. Emergency medicine specialty description. American Medical Association.   URL: https://www.ama-assn.org/specialty/emergency-medicine-specialty-description [accessed 2020-12-12]
  21. Counselman FL, Beeson MS, Marco CA, Adsit SK, Harvey AL, Keehbauch JN, 2016 EM Model Review Task Force. Evolution of the Model of the Clinical Practice of Emergency Medicine: 1979 to present. Acad Emerg Med 2017 Feb;24(2):257-264 [FREE Full text] [CrossRef] [Medline]
  22. Shillcutt SK, Silver JK. Social media and advancement of women physicians. N Engl J Med 2018 Jun 14;378(24):2342-2345. [CrossRef]
  23. Safdar B, Ona Ayala KE, Ali SS, Seifer BJ, Hong M, Greenberg MR, et al. Inclusion of sex and gender in emergency medicine research-A 2018 update. Acad Emerg Med 2019 Mar;26(3):293-302 [FREE Full text] [CrossRef] [Medline]
  24. Zhu JM, Pelullo AP, Hassan S, Siderowf L, Merchant RM, Werner RM. Gender differences in Twitter use and influence among health policy and health services researchers. JAMA Intern Med 2019 Dec 01;179(12):1726-1729 [FREE Full text] [CrossRef] [Medline]
  25. Avrachenkov K, Neglia G, Tuholukova A. Subsampling for chain-referral methods. In: Proceedings of the International Conference on Analytical and Stochastic Modeling Techniques and Applications.: Springer International Publishing; 2016 Presented at: International Conference on Analytical and Stochastic Modeling Techniques and Applications; August 24-26, 2016; Cardiff, UK p. 17-31. [CrossRef]
  26. Otterman S. ‘I trust science,’ says nurse who is first to get vaccine in US. The New York Times. 2020 Dec 14.   URL: https://www.nytimes.com/2020/12/14/nyregion/us-covid-vaccine-first-sandra-lindsay.html [accessed 2020-12-18]
  27. Gohil S, Vuik S, Darzi A. Sentiment analysis of health care tweets: Review of the methods used. JMIR Public Health Surveill 2018 Apr 23;4(2):e43 [FREE Full text] [CrossRef] [Medline]
  28. Hutto CJ, Gilbert E. VADER: A parsimonious rule-based model for sentiment analysis of social media text. In: Proceedings of the 8th International AAAI Conference on Weblogs and Social Media. 2014 Presented at: 8th International AAAI Conference on Weblogs and Social Media; June 1-4, 2014; Ann Arbor, MI   URL: https://www.scinapse.io/papers/2099813784
  29. Raghupathi V, Ren J, Raghupathi W. Studying public perception about vaccination: A sentiment analysis of tweets. Int J Environ Res Public Health 2020 May 15;17(10):3464 [FREE Full text] [CrossRef] [Medline]
  30. Chandrasekaran R, Mehta V, Valkunde T, Moustakas E. Topics, trends, and sentiments of tweets about the COVID-19 pandemic: Temporal infoveillance study. J Med Internet Res 2020 Oct 23;22(10):e22624 [FREE Full text] [CrossRef] [Medline]
  31. How to use hashtags. Twitter.   URL: https://help.twitter.com/en/using-twitter/how-to-use-hashtags [accessed 2021-01-14]
  32. Parker A. Twitter’s secret handshake. The New York Times. 2011 Jun 10.   URL: https://www.nytimes.com/2011/06/12/fashion/hashtags-a-new-way-for-tweets-cultural-studies.html [accessed 2021-01-14]
  33. Paul MJ, Dredze M. Discovering health topics in social media using topic models. PLoS One 2014;9(8):e103408 [FREE Full text] [CrossRef] [Medline]
  34. Xue J, Chen J, Chen C, Zheng C, Li S, Zhu T. Public discourse and sentiment during the COVID 19 pandemic: Using latent Dirichlet allocation for topic modeling on Twitter. PLoS One 2020;15(9):e0239441 [FREE Full text] [CrossRef] [Medline]
  35. Röder M, Both A, Hinneburg A. Exploring the space of topic coherence measures. In: Proceedings of the 8th ACM International Conference on Web Search and Data Mining.: Association for Computing Machinery; 2015 Presented at: 8th ACM International Conference on Web Search and Data Mining; February 2-6, 2015; Shanghai, China p. 399-408. [CrossRef]
  36. CDC Case Task Force. United States COVID-19 cases and deaths by state over time. Centers for Disease Control and Prevention. 2020 Dec 22.   URL: https://data.cdc.gov/Case-Surveillance/United-States-COVID-19-Cases-and-Deaths-by-State-o/9mfq-cb36 [accessed 2020-12-23]
  37. COVID-19 reported patient impact and hospital capacity by state timeseries. HealthData.gov. 2021.   URL: https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/g62h-syeh [accessed 2021-01-18]
  38. Evaluation estimates. United States Census Bureau.   URL: https:/​/www.​census.gov/​programs-surveys/​popest/​technical-documentation/​research/​evaluation-estimates.​html [accessed 2020-12-22]
  39. The World Factbook: Puerto Rico. Central Intelligence Agency.   URL: https://www.cia.gov/the-world-factbook/countries/puerto-rico/ [accessed 2020-12-23]
  40. Chong TT, Ng W. Technical analysis and the London stock exchange: Testing the MACD and RSI rules using the FT30. Appl Econ Lett 2008 Nov 21;15(14):1111-1114. [CrossRef]
  41. Paroli M, Sirinian MI. Predicting SARS-CoV-2 infection trend using technical analysis indicators. Disaster Med Public Health Prep 2021 Feb;15(1):e10-e14 [FREE Full text] [CrossRef] [Medline]
  42. Mishori R, Singh LO, Levy B, Newport C. Mapping physician Twitter networks: Describing how they work as a first step in understanding connectivity, information flow, and message diffusion. J Med Internet Res 2014 Apr 14;16(4):e107 [FREE Full text] [CrossRef] [Medline]
  43. Sprunt B. “Don’t be afraid of it”: Trump dismisses virus threat as he returns to White House. NPR. 2020 Oct 05.   URL: https:/​/www.​npr.org/​sections/​latest-updates-trump-covid-19-results/​2020/​10/​05/​920412187/​trump-says-he-will-leave-walter-reed-medical-center-monday-night [accessed 2021-02-20]
  44. Thompson C, Baumgartner J, Pichardo C, Toro B, Li L, Arciuolo R, et al. COVID-19 outbreak - New York City, February 29-June 1, 2020. MMWR Morb Mortal Wkly Rep 2020 Nov 20;69(46):1725-1729 [FREE Full text] [CrossRef] [Medline]
  45. Ranney ML, Griffeth V, Jha AK. Critical supply shortages — The need for ventilators and personal protective equipment during the Covid-19 pandemic. N Engl J Med 2020 Apr 30;382(18):e41. [CrossRef]
  46. Margus C, Sondheim SE, Peck NM, Storch B, Ngai KM, Ho H, et al. Discharge in pandemic: Suspected Covid-19 patients returning to the emergency department within 72 hours for admission. Am J Emerg Med 2020 Aug 18:1-7 [FREE Full text] [CrossRef] [Medline]
  47. Muller AE, Hafstad EV, Himmels JPW, Smedslund G, Flottorp S, Stensland S, et al. The mental health impact of the COVID-19 pandemic on healthcare workers, and interventions to help them: A rapid systematic review. Psychiatry Res 2020 Nov;293:113441 [FREE Full text] [CrossRef] [Medline]
  48. Artenstein AW. In pursuit of PPE. N Engl J Med 2020 Apr 30;382(18):e46. [CrossRef]
  49. He S, Ojo A, Beckman A, Gondi S, Gondi S, Betz M, et al. The story of #GetMePPE and GetUsPPE.org to mobilize health care response to COVID-19: Rapidly deploying digital tools for better health care. J Med Internet Res 2020 Jul 20;22(7):e20469 [FREE Full text] [CrossRef] [Medline]
  50. Lee E, Loh W, Ang I, Tan Y. Plastic bags as personal protective equipment during the COVID-19 pandemic: Between the devil and the deep blue sea. J Emerg Med 2020 May;58(5):821-823 [FREE Full text] [CrossRef] [Medline]
  51. Doubek J. ER doctor says he walks into a “war zone” every day. NPR. 2020 Dec 17.   URL: https:/​/www.​npr.org/​sections/​coronavirus-live-updates/​2020/​12/​17/​946910354/​er-doctor-says-he-walks-into-a-war-zone-every-day [accessed 2020-12-30]
  52. Hertelendy A, Ciottone G, Mitchell C, Gutberg J, Burkle F. Crisis standards of care in a pandemic: Navigating the ethical, clinical, psychological and policy-making maelstrom. Int J Qual Health Care 2021 Mar 03;33(1):mzaa094 [FREE Full text] [CrossRef] [Medline]
  53. Wynia MK. Crisis triage-Attention to values in addition to efficiency. JAMA Netw Open 2020 Dec 01;3(12):e2029326 [FREE Full text] [CrossRef] [Medline]
  54. Leider JP, DeBruin D, Reynolds N, Koch A, Seaberg J. Ethical guidance for disaster response, specifically around crisis standards of care: A systematic review. Am J Public Health 2017 Sep;107(9):e1-e9. [CrossRef]
  55. Hick JL, Barbera JA, Kelen GD. Refining surge capacity: Conventional, contingency, and crisis capacity. Disaster Med Public Health Prep 2013 Apr 08;3(S1):S59-S67. [CrossRef]
  56. Margus C, Sarin RR, Molloy M, Ciottone GR. Crisis standards of care implementation at the state level in the United States. Prehosp Disaster Med 2020 Sep 10;35(6):599-603. [CrossRef]
  57. Mira JJ, Carrillo I, Guilabert M, Mula A, Martin-Delgado J, Pérez-Jover MV, SARS-CoV-2 Second Victim Study Group. Acute stress of the healthcare workforce during the COVID-19 pandemic evolution: A cross-sectional study in Spain. BMJ Open 2020 Nov 06;10(11):e042555 [FREE Full text] [CrossRef] [Medline]
  58. Bachem R, Tsur N, Levin Y, Abu-Raiya H, Maercker A. Negative affect, fatalism, and perceived institutional betrayal in times of the coronavirus pandemic: A cross-cultural investigation of control beliefs. Front Psychiatry 2020;11:589914 [FREE Full text] [CrossRef] [Medline]
  59. Boon-Itt S, Skunkan Y. Public perception of the COVID-19 pandemic on Twitter: Sentiment analysis and topic modeling study. JMIR Public Health Surveill 2020 Nov 11;6(4):e21978 [FREE Full text] [CrossRef] [Medline]
  60. Riddell J, Brown A, Kovic I, Jauregui J. Who are the most influential emergency physicians on Twitter? West J Emerg Med 2017 Feb;18(2):281-287 [FREE Full text] [CrossRef] [Medline]
  61. Haas MRC, Hopson LR, Zink BJ. Too big too fast? Potential implications of the rapid increase in emergency medicine residency positions. AEM Educ Train 2020 Feb;4(Suppl 1):S13-S21 [FREE Full text] [CrossRef] [Medline]
  62. Enanoria WT, Crawley AW, Hunter JC, Balido J, Aragon TJ. The epidemiology and surveillance workforce among local health departments in California: Mutual aid and surge capacity for routine and emergency infectious disease situations. Public Health Rep 2014 Nov 01;129(6_suppl4):114-122. [CrossRef]
  63. Lynch CJ, Gore R. Short-range forecasting of COVID-19 during early onset at county, health district, and state geographic levels using seven methods: Comparative forecasting study. J Med Internet Res 2021 Mar 23;23(3):e24925 [FREE Full text] [CrossRef] [Medline]
  64. Gore RJ, Diallo S, Padilla J. You are what you tweet: Connecting the geographic variation in America's obesity rate to Twitter content. PLoS One 2015;10(9):e0133505 [FREE Full text] [CrossRef] [Medline]
  65. Padilla JJ, Kavak H, Lynch CJ, Gore RJ, Diallo SY. Temporal and spatiotemporal investigation of tourist attraction visit sentiment on Twitter. PLoS One 2018;13(6):e0198857 [FREE Full text] [CrossRef] [Medline]
  66. Smith A, Brenner J. Twitter use 2012. Pew Research Center. Washington, DC: Pew Research Center; 2012 May 31.   URL: https://www.pewresearch.org/internet/2012/05/31/twitter-use-2012/ [accessed 2021-04-09]
  67. DeCamp M, Koenig TW, Chisolm MS. Social media and physicians' online identity crisis. JAMA 2013 Aug 14;310(6):581-582 [FREE Full text] [CrossRef] [Medline]
  68. McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: New concepts are needed to study research participation effects. J Clin Epidemiol 2014 Mar;67(3):267-277 [FREE Full text] [CrossRef] [Medline]
  69. Hong L, Davison B. Empirical study of topic modeling in Twitter. In: Proceedings of the 1st Workshop on Social Media Analytics.: Association for Computing Machinery; 2010 Presented at: 1st Workshop on Social Media Analytics; July 25, 2010; Washington, DC p. 80-88. [CrossRef]


AAEM: American Academy of Emergency Medicine
ACEP: American College of Emergency Physicians
API: application programming interface
CIBU: COVID-19 inpatient bed utilization
FDA: Food and Drug Administration
LDA: latent Dirichlet allocation
MACD: moving average convergence/divergence oscillator
PPE: personal protective equipment
SAEM: Society for Academic Emergency Medicine
VADER: Valence Aware Dictionary and Sentiment Reasoner
WHO: World Health Organization


Edited by C Basch; submitted 08.03.21; peer-reviewed by S Vilendrer, K Goniewicz, P Chai, R Gore; comments to author 25.03.21; revised version received 14.04.21; accepted 23.04.21; published 14.07.21

Copyright

©Colton Margus, Natasha Brown, Attila J Hertelendy, Michelle R Safferman, Alexander Hart, Gregory R Ciottone. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 14.07.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.