Acceptability of App-Based Contact Tracing for COVID-19: Cross-Country Survey Study

Background: The COVID-19 pandemic is the greatest public health crisis of the last 100 years. Countries have responded with various levels of lockdown to save lives and stop health systems from being overwhelmed. At the same time, lockdowns entail large socioeconomic costs. One exit strategy under consideration is a mobile phone app that traces the close contacts of those infected with COVID-19. Recent research has demonstrated the theoretical effectiveness of this solution in different disease settings. However, concerns have been raised about such apps because of their potential privacy implications, which could limit the acceptability of app-based contact tracing in the general population. As the effectiveness of this approach increases strongly with app uptake, it is crucial to understand public support for this intervention. Objective: The objective of this study is to investigate the user acceptability of a contact-tracing app in five countries hit by the pandemic. Methods: We conducted a large-scale, multicountry study (N=5995) to measure public support for the digital contact tracing of COVID-19 infections. We ran anonymous online surveys in France, Germany, Italy, the United Kingdom, and the United States. We measured intentions to use a contact-tracing app across different installation regimes (voluntary installation vs automatic installation by mobile phone providers) and studied how these intentions vary across individuals and countries. Results: We found strong support for the app under both regimes, in all countries, across all subgroups of the population, and irrespective of regional-level COVID-19 mortality rates. We investigated the main factors that may hinder or facilitate uptake and found that concerns about cybersecurity and privacy, together with a lack of trust in the government, are the main barriers to adoption.
Conclusions: Epidemiological evidence shows that app-based contact tracing can suppress the spread of COVID-19 if a high enough proportion of the population uses the app, and that it can still reduce the number of infections if uptake is moderate. Our findings show that the willingness to install the app is very high. The available evidence suggests that app-based contact tracing may be a viable approach to control the diffusion of COVID-19.


A.1 Contents
The figure below gives an overview of the structure of the survey that was administered to respondents; in the following, we describe the various parts in more detail. After expressing their informed consent to take part in the study, respondents were given a description of the contact-tracing app. We explained that the app would be developed by a national health organization and that, once installed, it would register other users in close proximity via Bluetooth (and, in the case of the US and the UK, potentially location data). Users who were found to have been in close proximity to a confirmed case of COVID-19 for at least 15 minutes would be alerted by the app and asked to quarantine at home for 14 days or until they could be tested for the virus. All other users would see an "all clear" message. We further explained that an early quarantine would prevent individual users from passing on the virus to their loved ones in the early (presymptomatic) stages of their potential infection, and might slow down or even stop the epidemic. We also stressed that the identity of all users would remain private throughout the entire process. In order to progress to the main part of the survey, respondents had to correctly answer three comprehension questions about the functioning of the app.

[Figure: Overview of the survey structure. Part 2: willingness to install (opt-in); reasons for and against installing the app; willingness to comply with a self-isolation request; willingness to install at different stages of the epidemic. Part 3: willingness to keep the automatically installed app; agreement with the automatic installation policy (opt-out); data policy preferences. Part 4: socio-demographics, among which age, gender, region, encounters with non-household members, health status, phone use, and ability to work from home. Part 5: political support, including party preference, trust in government, and whether opinion of the government would improve under automatic/voluntary installation policies.]

After this introductory part, the main questionnaire began, which was designed to address the following three objectives: 1. Assess general support for adopting the app (and complying with its requests); 2. Identify the main drivers of adoption intentions (reasons for and against); 3. Evaluate support for different implementation policies (opt-in vs. opt-out installation regimes).
First, respondents were asked to assess their likelihood of installing (or not) the app on their phone; responses were collected on a 5-point scale, from Definitely install to Definitely won't install. 1 Respondents were then asked about their main reasons for and against installing the app; in both cases, respondents could select multiple reasons from a menu of options (the order of these options was randomized at the individual level). We subsequently asked respondents how likely they would be to comply with the request to self-isolate for 14 days if they had been in close contact with an infected person. Responses were collected on a 5-point scale from Definitely comply to Definitely won't comply. Those who did not select Definitely comply were then asked whether their chances of compliance would increase, decrease, or remain the same if health services committed to testing them for the virus within two days of the start of their self-isolation (a negative test allowing them to stop self-isolating). Respondents who did not say they would Definitely install the app in response to the initial installation question were asked further questions about their willingness to install the app in three additional scenarios: (i) in case the epidemic had spread to someone in their community, (ii) in case it had spread to someone they knew personally, or (iii) in case an "all clear" message from the app would be associated with a relaxation of lockdown restrictions.
We next assessed whether respondents would be open to an "opt-out" policy: the government would require mobile phone providers to automatically install the app on all phones, but users would be able to immediately uninstall it. Respondents were asked about their willingness to keep (vs. uninstall) the app in this case (on a 5-point scale from Definitely keep to Definitely uninstall). As a follow-up question, they were asked to rate the extent to which they agreed with the following statement: "The government should ask mobile phone providers to automatically install the app on all phones" (on a 5-point scale from Fully agree to Fully disagree). Respondents who did not Fully agree with the statement were then asked whether their opinion would change if someone in their community, or someone they knew personally, had been infected with the virus. Finally, they were asked for their preference regarding the data policy that should be adopted once the epidemic is over: automatically delete all data, de-identify the data and make it available for research purposes, or some other option of their choice.
The third block of the survey collected basic demographic information (age, gender, region of residence) as well as information about potential risk factors for contracting the virus (frequency of social interactions and health risks), smartphone use, and the ability to work from home and receive (some fraction of) pay.
The final block consisted of questions about political orientation and attitudes towards the government. We first asked respondents to state their political affiliation. We then asked them whether, in general, they "trusted the government to do what is right". Finally, we asked them whether their opinion of the government would improve in case of (i) an opt-in installation regime, and (ii) an opt-out regime.
We kept the survey design as similar as possible across all five countries, with a few exceptions. First, the UK survey was administered before the government issued a nationwide lockdown, whereas respondents in Germany, Italy, and France were surveyed after such a lockdown had been implemented. In the US, most states, but not all, were under a "stay at home" order by the time of the survey. To reflect this difference in the environment respondents faced, we slightly adjusted the phrasing of some of our questions between areas in lockdown and those that were not. 2 Second, unlike in the other countries, we asked UK and US respondents about an app that might use GPS data in addition to Bluetooth. Third, in the US survey, which was run last, we included a question explicitly asking respondents which installation regime (opt-in or opt-out) they preferred. We did this in order to check the consistency between their actual preferences for installation regimes and their installation intentions under each regime elicited in Parts 2 and 3. We also added demographic questions about ethnicity, area of residence, health insurance, and media use, as well as two versions of a question about willingness to install the app if a private company like Facebook endorsed it. A detailed overview of the entire survey flow, explaining the survey logic and highlighting differences between countries, can be found in Section D in the Multimedia Appendix. The full texts of the different surveys can be found here: UK, France, Germany, Italy and US.

A.2 Technical Details
The survey was administered between the 20th and 27th of March 2020 in the four European countries (France, Germany, Italy, UK), and between the 7th and 10th of April 2020 in the US. No personal data was collected at any point during the survey, and we obtained informed consent as well as checked for bots before the survey began. Respondents who accepted the survey invitation were directed to our online questionnaire, programmed in Qualtrics. The survey was pretested before fielding by generating test observations (around 4000 iterations for each country) to check data quality and consistency. Moreover, we always had a soft launch before the full launch: after collecting the first 100 responses, we paused recruitment and checked the data before launching the survey widely. No adjustments had to be made in any of the countries. Our survey was not password protected but distributed through an open link, so in principle respondents could take part multiple times. However, the panel provider Lucid assigns a unique ID to each respondent, which we tracked within the survey and which allowed us to identify and exclude multiple entries from the same respondent. Only a very small number of respondents completed the survey more than once; see Section B.2 for more details. Throughout the survey, respondents were allowed to navigate back and forth between screens, except when adaptive questioning was used. Furthermore, we enforced responses to all questions except those about reasons for and against installation (as we did not want to force people in favor of/opposed to installation to give reasons against/for, respectively). Consequently, every participant who submitted the survey also replied to every question.
We did give people the option to choose "Don't know" in response to all the installation questions (as well as to a question about sick pay in the demographics part); however, since few people chose it, we merged this option with the midpoint of the scale during data analysis. The entire survey stretched over 17-23 screens (21-28 screens in the US), depending on the adaptive questioning, with 1-5 items per screen. The average completion time was 11.0 minutes in France, 9.7 minutes in Germany, 10.3 minutes in Italy, 8.7 minutes in the UK, and 12.6 minutes in the US.
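The merging of "Don't know" answers with the scale midpoint can be sketched as follows; the scale labels and the sample responses are illustrative assumptions, not the survey's exact wording or data:

```python
import pandas as pd

# Hypothetical 5-point install-intention scale; labels are assumptions,
# not the survey's exact wording.
scale = {
    "Definitely install": 1,
    "Probably install": 2,
    "Not sure": 3,          # scale midpoint
    "Probably won't install": 4,
    "Definitely won't install": 5,
}

responses = pd.Series([
    "Definitely install", "Don't know", "Probably won't install",
    "Don't know", "Definitely won't install",
])

# "Don't know" is not in the scale mapping, so map() yields NaN for it;
# we then merge it with the midpoint (3), as done in the data analysis.
coded = responses.map(scale).fillna(3).astype(int)
print(coded.tolist())  # [1, 3, 4, 3, 5]
```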

B.1 Recruitment
In all five countries, we recruited respondents through Lucid, an online panel provider that works with a variety of sample suppliers to reach a broad range of volunteers. Volunteers were recruited using a multitude of methods, ranging from double opt-in panels (the vast majority) to publishing networks, social media, and other types of online communities. In some cases, participants were furthermore recruited offline, e.g., via TV and radio ads or mail campaigns. Participation was voluntary and, in the majority of cases, incentivized, with most suppliers providing loyalty reward points or gift cards and some providing cash payments. In each country, we set recruitment quotas so as to achieve a sample of respondents representative of the adult population of the respective country with regard to gender, age, and region of residence (quota-based sampling). Furthermore, we did not invite individuals who did not own a mobile phone. 3 Figure 1 shows how many individuals opened the survey by clicking on the link, as well as the subsequent attrition at different stages that left us with our final sample of 5995 complete responses. Across the five countries, 10375 individuals started the survey, of whom 10308 consented to participate (a participation rate of 99%). Before participants could begin the main part of the survey, we briefly described the app and asked three comprehension questions to ensure participants were paying adequate attention and not just (randomly) clicking through the survey. Only if all three comprehension questions were answered correctly were respondents allowed to continue with the survey. Of those who consented to participate, 3983 failed to answer all three comprehension questions correctly, leaving us with a sample of 6166 respondents who started the main questionnaire.
105 of these respondents either exited the survey before submitting or took the survey more than once and were therefore excluded from the sample, 4 which left us with 6061 complete and unique responses (a completion rate of 59%). Finally, we dropped observations from 44 individuals who indicated in the comments section that they did not own a smartphone, and from 22 individuals who either did not identify as male or female or who preferred not to disclose their gender. 5 This gave us a final sample of 5995 participants for whom we have responses to all questions.
Table notes: Health Problems refers to diabetes, asthma, high blood pressure, heart or breathing problems. In the demographics, we only gave respondents a "Don't know" option for the question about sick pay. "Trust" indicates whether respondents agreed that, in general, they trust their government to do what is right. The number of cases and deaths per million people in each country refers to the number on the final day of surveying.

B.3 Sample robustness
Lucid recruits survey participants online through a variety of sampling partners. From these partners, Lucid receives demographic information about the pool of potential participants and uses this information to target respondents in accordance with the quotas set for the survey. In our case, we targeted the gender and age composition, as well as the regions of residence, of the respondents to be representative of each country's overall population. After sending the survey out to an initial sample meeting these quotas, Lucid subsequently focuses on particular demographic groups in the re-sampling to meet the quotas in the final sample of participants (i.e., if initial take-up among people over the age of 60 was low, Lucid would disproportionately target them, or even close data collection on younger cohorts entirely after a while, to ensure balance across age groups). However, while we tried to ensure that our sample was representative of each country's population in terms of gender, age, and region of residence, the sample composition could still differ from the country averages in terms of other characteristics. Furthermore, respondents who are recruited online may be more tech savvy and more willing to use a phone application than the average individual. This could potentially bias our estimates.
To address these concerns and assess the external validity of our sample, we investigated its representativeness in two ways, using the German sample as an example. First, we re-created our key figures using sampling weights to harmonize the characteristics of our sample with those of the German population at large. Our results remained generally consistent; see, for example, Figure 2, which depicts the re-weighted response likelihoods for the question "How likely would you be to install the app on your phone". However, if unobserved factors like tech savviness are not strongly correlated with demographics, then using survey weights alone is not enough to remove the bias from the results. As a second step, we therefore also tested whether our results are robust to alternative recruitment methods. One week after the initial survey, we repeated our online survey twice: once again with Lucid, and also with Forsa. Although both surveys were conducted online, Forsa, unlike Lucid, recruits its online panel members from a probability-based, randomly selected telephone sample. 6 This offline recruitment process ensures that technical literacy and willingness to share data should play less of a role in selection into the Forsa sample. We find almost exactly the same results in the replication survey, alleviating concerns about our original sample; Figure 2 also depicts the replication results for the installation question. More details about the replication can be found in the German country report.
Figure 2 note: Lucid 1 refers to our initial, unweighted German sample collected with Lucid. Lucid 1 weighted refers to the same sample, reweighted with population weights. Lucid 2 refers to our follow-up German sample, collected with Lucid a week after the initial one. Finally, Forsa refers to the sample we collected a week after the initial one with Forsa. Light/dark red bars correspond to Probably/Definitely won't install.
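As a rough sketch, cell-based post-stratification weighting of this kind can be implemented as below; the age groups, population shares, and data are purely illustrative assumptions, not the actual German figures:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy sample: age group and a binary install-intention outcome.
df = pd.DataFrame({
    "age_group": rng.choice(["18-39", "40-59", "60+"], size=1000,
                            p=[0.5, 0.3, 0.2]),
    "install": rng.integers(0, 2, size=1000),
})

# Hypothetical population shares (assumption, not official statistics).
pop_share = pd.Series({"18-39": 0.32, "40-59": 0.35, "60+": 0.33})

# Post-stratification weight per cell: population share / sample share.
sample_share = df["age_group"].value_counts(normalize=True)
df["weight"] = df["age_group"].map(pop_share / sample_share)

unweighted = df["install"].mean()
weighted = np.average(df["install"], weights=df["weight"])
print(round(unweighted, 3), round(weighted, 3))
```

Since the population shares sum to one, the weights average to one by construction; the weighted mean then down-weights over-represented cells and up-weights under-represented ones.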

B.4 Selective Attrition
Using data only from individuals who pass all three comprehension questions may introduce selection bias into our sample. For example, more tech-savvy respondents may simultaneously be more likely to pass the comprehension checks and more inclined to install any app. To address this concern, we consider a selection model in which the likelihood of a respondent consenting to the survey and correctly answering all comprehension questions is a function of the demographic information we obtained from Lucid for everyone who started the survey (age, gender, region, employment status, and, in the case of Italy, France and Germany, education). We estimate the following probit model:

Pr(complete_i = 1) = Φ(βx_i),

where Φ(·) is the standard normal cumulative distribution function and x_i collects the demographic covariates of individual i. We also estimate an ordered probit model, taking the stage of a respondent's dropout (consent, comprehension question 1, 2, or 3) as our ordinal outcome variable. The results of these empirical specifications are given below. Respondents with less than high school education are significantly less likely to be included in our final sample. The same is true for male respondents, younger respondents, and the self-employed. Given our selection model, we re-weight our analyses with inverse probability weights reflecting the likelihood that each individual was sampled; that is, weights are constructed as the inverse of the predicted probability of completing the survey, 1/Φ(βx_i). The results from re-weighting the responses to the main questions on voluntary and automatic installation are given in Figure 4. Importantly, the results are broadly consistent with those presented in the main text.

Figure 3: Selection model
Note: The reference categories for the covariates are as follows: Female (Gender), 18-30 (Age), less than high school education (Education), Employed full-time (Employment). "Don't know" and empty answers are excluded. We also control for region fixed effects. Plotted points are the estimated model coefficients; lines represent 95% confidence intervals calculated with heteroskedasticity-robust standard errors. The graph is therefore primarily helpful for assessing which demographic factors are statistically significant in predicting whether a respondent completes the survey.

Figure 4: Likelihood of having the app in an opt-in and opt-out regime, weighted by inverse probability of staying in sample
Note: In addition to the covariates shown in Figure 3, we also include region fixed effects and country-specific education and employment coefficients. Light/dark red bars correspond to Probably/Definitely won't install (uninstall) in Panel A (B).
Finally, we also compared the demographic information we received from Lucid to the self-reported demographic information on gender and age. This allows us to check that respondents gave accurate information in our survey. 335 respondents (5.5%) gave at least one inconsistent answer. Removing these individuals has no effect on the results.
C Additional results

C.1 Robustness to modelling assumptions
In the main text, we make two modelling assumptions: we use a linear probability model, and we dichotomize the outcome measure by grouping everyone who said they would probably or definitely install the app into the "install" category (assigned the value 1), while everyone else falls into the "non-install" category (value 0). However, our results are robust to different modelling assumptions. Figure 5 displays the results for the opt-in installation question when we model the entire choice space as an ordered logit, where high values indicate a lower probability of installing the app, so that negative coefficients here have a similar interpretation to positive coefficients in the linear probability model in the main text. The results look very similar to the ones obtained with the main specification (see Figure ??).

Figure 5: Ordered logit specification
Note: This figure replicates the analysis underlying Figure ?? using an Ordered Logit as opposed to a Linear Probability Model specification. Lines represent 95% confidence intervals calculated with heteroskedasticity-robust standard errors. All coefficients are the result of a single regression.
Furthermore, our results also hold when we sort only the people who say they would definitely (rather than definitely or probably) install the app into the install category, both with a linear probability and with a probit specification. For brevity, we do not show these results here.
Just as we looked at the covariates determining the choice to install the app voluntarily or not (opt-in) in Figure ??, we can also look at the impact of different factors on the decision to keep or immediately uninstall an automatically installed app (opt-out). Figure 6 displays the results when we once again dichotomize the outcome, so that the dependent variable takes the value 1 if a respondent indicated that they would probably or definitely keep the app and 0 otherwise. Participants from the UK are relatively more likely to keep the app in this scenario, while French respondents are comparatively less likely to do so. As with the opt-in installation outcomes, the results are robust to using an ordered logit specification, as well as to using a probit or linear probability specification in which the "keep" category consists only of people saying they would definitely keep the app. For brevity, these robustness checks are omitted here.

Figure 6 note: This figure replicates the analysis underlying Figure ?? using the answers to the opt-out installation question as the dependent variable. The dependent variable is an indicator variable taking the value 1 if a respondent chose definitely keep or probably keep when asked whether they would keep the app or not, and 0 otherwise. We use a linear probability model. Lines represent 95% confidence intervals calculated with heteroskedasticity-robust standard errors. All coefficients are the result of a single regression and thus display marginal effects. A coefficient of 0.1 implies that a respondent who chose this option is 10 percentage points more likely to state they would definitely or probably keep the app relative to the base category.

C.2 Robustness across demographics
In the main text, we note that the results presented in Figure ?? hold across a variety of demographic groups; this is illustrated in the following. Figure 7 shows that support for the app does not vary significantly when splitting outcomes by gender, the existence of comorbidities, or the availability of sick pay. Figure 8 shows that there are only very small differences with regard to age. The only dimension along which we see significant differences in support for the app is trust in government: Figure 9 shows that someone's propensity to install a contact-tracing app decreases the less they generally trust the government to "do what is right". The results for the opt-out scenario show the exact same pattern as the opt-in results; for brevity, we do not show them here.

C.3 Take-up and disease severity at the geographical level

We collected data on the number of infections and deaths due to COVID-19 at the geographic-area-by-day level, where the geographic area is at the same level of disaggregation as that asked for in the respective survey (e.g., state for the US). The data are publicly available on GitHub, and the sources are Santé Publique France (for France), Protezione Civile (for Italy), the Robert Koch Institute (for Germany), PHE (for the UK), and the New York Times (for the US). We adapted the aggregation level of the French data from département to region.
As can be seen in Figure ?? in the main text, we generally find only a very weak relationship between the severity of the outbreak in someone's area of residence (measured by deaths per capita as well as the absolute number of deaths) and the probability that they will want to download or keep the app. The exceptions to this rule are the very badly affected areas of New York and Northern Italy: there, respondents are about 10 percentage points more likely to state they would definitely install the app than the average participant. The generally weak relationship could be due to a number of factors. First, death rates do not vary much across geographic areas once we remove New York and Northern Italy. Second, much of the rhetoric surrounding the severity of the disease is at the national, not the sub-national, level (again excluding New York and Northern Italy). Lastly, it is likely that the geography of infections does not exactly follow the regional boundaries at our disposal.
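The matching of survey responses to region-by-day outbreak data can be sketched with a simple merge; all region names, dates, and counts below are illustrative, not the actual data:

```python
import pandas as pd

# Toy region-by-day death counts (illustrative numbers, not real data).
covid = pd.DataFrame({
    "region": ["Lombardia", "Lombardia", "Lazio"],
    "date": pd.to_datetime(["2020-03-20", "2020-03-27", "2020-03-27"]),
    "deaths": [2500, 4800, 120],
})

# Toy survey responses with region of residence and survey date.
survey = pd.DataFrame({
    "respondent": [1, 2],
    "region": ["Lombardia", "Lazio"],
    "date": pd.to_datetime(["2020-03-27", "2020-03-27"]),
})

# Attach local disease severity to each respondent by region and day,
# so take-up can be related to the severity of the local outbreak.
merged = survey.merge(covid, on=["region", "date"], how="left")
print(merged[["respondent", "deaths"]])
```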
C.4 Reasons for/against installation and take-up decisions

Figure 10 shows that, in general, far more reasons are given in favor of installing the app than against it; this is true even among those who state that they probably or definitely won't install the app. The number of reasons for installing the app decreases sharply as respondents become less likely to want to install it. This is not the case for reasons against, where most respondents select only one reason regardless of their intention to install. These findings suggest the need to better explain the various ways in which the app might benefit users and their surroundings. The reasons chosen most often by respondents across all countries, in favor of and against installing the app, are shown in Figures 11 and 12. While all reasons in favor of installation appear to come to people's minds somewhat evenly, there is a strong clustering of reasons against the app around surveillance and security concerns, as well as possible impacts on mental health. These reasons highlight the biggest concerns that would need to be addressed in the design and implementation of the app to make sure take-up would be sufficiently high. In addition to the common reasons displayed in Figures 11 and 12, we also gave a few answer options that differed by country. In the UK, respondents were given "Don't want the NHS to have access to my location data" as an additional reason against, which was replaced by "I don't want to activate Bluetooth" in the other surveys. Germans in particular were quite hesitant to activate Bluetooth, making it the third most popular reason against installation there. In the French, German, Italian, and US surveys, respondents were given "Would allow me to return to normal life faster" as an additional reason in favor; however, it was not chosen by many people.
Finally, in the US survey, the additional reason against, "No one else will use the app", was the second most popular reason, given by 40% of respondents. Figure 13 shows the impact each reason had on the probability with which a respondent stated they would probably or definitely install the app, controlling for all the covariates displayed in Figure ??. Unsurprisingly, the first seven reasons, which are in favor of the app, increase the probability of intending to install it. Strikingly, out of the six reasons listed against installation (the latter half of the graph), only one had a large impact on installation decisions: "I would not benefit". This may indicate that, while many people are concerned about government surveillance and cybersecurity, these concerns do not prevent them from installing the app, or only do so if they lead participants to conclude that the app would not be useful to them. We found similar results with an ordered logit model, with a linear probability model dichotomizing on definitely install only, with a probit model, and when dropping the controls. Finally, the results are also qualitatively similar when we look at opt-out rather than opt-in decisions.

Figure 13: Impact of reasons on installation probability
Note: The dependent variable is an indicator variable taking the value 1 if a respondent chose definitely install or probably install when asked whether they would install the app or not, and 0 otherwise. We use a Linear Probability Model. Lines represent 95% confidence intervals calculated with heteroskedasticity-robust standard errors. All coefficients are the result of a single regression and thus display marginal effects. A coefficient of 0.1 implies a respondent who chose this option is 10 percentage points more likely to state they would definitely or probably install the app.

C.5 Determinants of the reasons chosen
Figures 14 and 15 display the main determinants of the reasons given for or against installing the app. Each column represents a separate linear probability model where the dependent variable takes the value 1 for individuals who chose that specific reason and 0 otherwise. The main takeaway from the graphs is that the reasons are largely not explained by the covariates. Only people who trust the government less, as well as younger people, are significantly more likely to name concerns about post-epidemic surveillance as (one of) their main reasons against installation. Furthermore, younger people are less likely to cite peace of mind as a reason for installation. All results hold when using a probit model as well.

C.6 Opt-in versus Opt-out preferences
Understanding whether an opt-in or an opt-out regime will yield a larger number of app users is crucial for the successful implementation of app-based contact tracing. Standard results from psychology suggest that many more people will keep the app in an opt-out regime than would install it voluntarily in an opt-in regime, as the former approach reduces (mental) transaction costs and implicitly sets having the app as the societal standard. However, Figure ?? shows the opposite: fewer people indicate they would keep the app under automatic installation than would voluntarily download it. One reason for this may be that individuals, and in particular individuals who are concerned about potential government surveillance, perceive automatic installation as an overreach by the government and thus choose to uninstall the app on principle.
To further understand respondents' preferences over an opt-in vs. opt-out regime, we directly asked US respondents which regime they would prefer. 60% would prefer voluntary to automatic installation. This fraction is constant across gender, region, political affiliation, lockdown status and other characteristics.
For the other four countries, where we did not ask for this preference directly, we can infer the preference between the voluntary and automatic installation regimes indirectly. We use participants' intentions to install or keep the app in an opt-in vs. opt-out regime to create an 'Intention measure', and we use participants' opinion of the national government in response to it introducing either regime to create an 'Opinion measure'.
The Opinion measure considers respondents' answers to the following two questions:
• How much do you agree with the following statement? "My opinion of the government would improve if they introduced the app, and allowed me to decide whether to install it or not"
• How much do you agree with the following statement? "My opinion of the government would improve if they asked mobile phone providers to automatically install the app, and allowed me to decide whether to keep it or not"
The difference between respondents' answers to these two questions yields information about which regime they would prefer the government to introduce. We therefore create a variable that gives the difference between these two answers, such that when participants' opinion of the government would improve more (or worsen less) under the opt-in regime, the difference is negative. In this case, we infer that they prefer an opt-in regime.
The Intention measure considers respondents' answers to the questions "Would you install this app?" and "Would you keep this app if it was automatically installed?". The difference between respondents' answers to these questions reflects their differing intentions to have the app on their phone in either regime, and is thus again indicative of their preferences over the opt-in/opt-out regimes. As for the Opinion measure, when the Intention measure is negative we infer that a respondent believes they would be more likely to have the app installed on their phone in an opt-in regime, which we take to mean that they prefer the opt-in regime.

For both difference measures, we re-code the new variable into three categories: d < 0 (prefer opt-in), d = 0 (indifferent), and d > 0 (prefer opt-out). The differences in preferences across countries are displayed in Figure 16. The trends are broadly in keeping with the other results.

To investigate the factors that are associated with opt-in/opt-out preferences, we use a simple ordered logit model. The results from this analysis are presented in Figure 17. Respondents who express less trust in the government, or who worry about government surveillance or their phone being hacked, are more likely to prefer the opt-in regime over the opt-out regime.

Figure 17: Determinants of opt-in vs. opt-out preference
Note: A positive coefficient indicates a preference for opt-out. Ordered logit model. The points show estimated model coefficients, while the lines show 95% confidence intervals, calculated using heteroskedasticity-robust standard errors.
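The construction and re-coding of the two difference measures can be sketched as below. The 5-point answer coding and the example scores are assumptions for illustration, not the survey's actual coding.

```python
def difference_measure(opt_in_score: int, opt_out_score: int) -> int:
    """d = opt-out answer minus opt-in answer, so d < 0 when the
    respondent rates the opt-in regime more favourably."""
    return opt_out_score - opt_in_score


def preference_category(d: int) -> str:
    """Re-code the difference into the three categories used in Figure 16."""
    if d < 0:
        return "prefer opt-in"
    if d > 0:
        return "prefer opt-out"
    return "indifferent"


# Hypothetical example: a respondent who strongly agrees (5) that their
# opinion of the government would improve under the opt-in regime, but is
# neutral (3) under the opt-out regime
d = difference_measure(opt_in_score=5, opt_out_score=3)  # d = -2
category = preference_category(d)  # "prefer opt-in"
```

The same two functions apply unchanged to the Intention measure, with the install/keep answers substituted for the opinion scores.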

C.6.1 Validity of difference measures
In the US survey, respondents were asked directly whether they would prefer an opt-in or opt-out regime. The cross tabulation below demonstrates a strong relationship between respondents' stated choice (columns) and their inferred preference (rows), using the Opinion measure. Only 10.4% of respondents' inferred preferences are inconsistent with their stated preferences, namely those who selected d < 0 and Automatic, or d > 0 and Voluntary. Of the respondents for whom the Opinion measure is negative (d < 0), 80.4% stated that they prefer the Voluntary regime. Likewise, of the respondents for whom the Opinion measure is positive (d > 0), 64.7% stated that they prefer the Automatic regime.
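The consistency check described above can be sketched as follows. The cell counts in this cross tabulation are made up for illustration; they are not the survey's counts.

```python
# Rows: inferred preference from the Opinion measure; columns: stated choice.
# All counts are hypothetical.
crosstab = {
    ("d<0", "Voluntary"): 400, ("d<0", "Automatic"): 60,
    ("d=0", "Voluntary"): 120, ("d=0", "Automatic"): 80,
    ("d>0", "Voluntary"): 50,  ("d>0", "Automatic"): 90,
}

total = sum(crosstab.values())

# Inconsistent cells: inferred opt-in but stated Automatic, or vice versa
inconsistent = crosstab[("d<0", "Automatic")] + crosstab[("d>0", "Voluntary")]
share_inconsistent = inconsistent / total

# Conditional consistency: share of d < 0 respondents stating Voluntary
d_neg_total = crosstab[("d<0", "Voluntary")] + crosstab[("d<0", "Automatic")]
share_voluntary_given_d_neg = crosstab[("d<0", "Voluntary")] / d_neg_total
```

Indifferent respondents (d = 0) are counted as consistent with either stated choice, matching the definition of inconsistency used in the text.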

C.7 Data Usage
Respondents were asked for their preferences over what should happen to the data generated by the app. Respondents could state that they wanted the data deleted immediately, or that the (de-identified) data should be made available to researchers.
In the final sample, 59.9% of respondents answered that the de-identified data should be made available to researchers. Figure 18 shows that respondents who were less likely to say they would download the app were also less likely to want their data made available to researchers. Surprisingly, Figure 18 shows no relationship between trust in the government and data-sharing preferences. However, once we control for other covariates, this relationship does become statistically significant, as can be seen in Figure 19. The relationship is nevertheless non-monotonic and hence difficult to interpret. Figure 19 also shows that older people, people who use their phones more regularly, and residents of the UK are more likely to consent to having their data shared.

Figure 18: Percentage of respondents wanting the data to be made available. Left panel: by intention to install. Right panel: by trust in government.

Figure 19: Determinants of wanting the data to be made available
Note: Computed using a Linear Probability Model. The dependent variable is a dummy variable taking the value 1 if the respondent prefers the data be de-identified and made available, and 0 otherwise. Lines represent 95% confidence intervals calculated with heteroskedasticity-robust standard errors. All coefficients within a column are the result of a single regression and thus display marginal effects. A coefficient of 0.1 implies a respondent who chose this option is 10 percentage points more likely to state they prefer the data be made available. The right-hand column contains answers only from respondents who said they would definitely or probably install the app. The reference categories for the covariates are as follows: 18-30 (Age), Female (Gender), less than once a week (Socialise), receive sick pay (Sick Pay), none (Work from home), UK (Country), and completely (Trust).

C.8 Compliance with the app self-isolation request
We asked respondents how likely they would be to comply with the request to self-isolate for 14 days if they had been in close contact with a person confirmed to be infected. Responses were collected on a 5-point scale ranging from "Definitely comply" to "Definitely won't comply". As shown in Figure 20, the vast majority of respondents in all countries said they would comply with the self-isolation request. Support is again highest in Italy, where 96% of respondents declared they would definitely or probably comply, and lowest in Germany, at a still very high 89%. We further asked respondents who did not say they would definitely comply whether their chances of compliance would increase, decrease, or remain the same if the health services in their country committed to testing them quickly. In all countries, a commitment to quick testing would further increase compliance rates. This implies that the vast majority of people in all five countries are not only prepared to have the app installed on their phones, but also to use it as intended, even if this means sacrificing (more of) their personal freedom for a limited amount of time.

Figure 20
C.9 Effect of comprehension

In the German replication survey, conducted via Forsa (see Section B.3), we also collected answers from respondents who failed to answer the comprehension questions correctly. For this survey, we can thus compare the willingness to install between respondents who did (N=1048) and did not (N=149) answer all comprehension questions correctly. We find that 69.0% of the respondents who answered the comprehension questions correctly would definitely or probably install the app, compared to only 56.4% of those who answered wrongly. This might be because respondents who failed the comprehension questions answered randomly, or because these respondents are actually less willing to install the app.
The latter interpretation is consistent with previous literature [1] showing a higher willingness to share data and information among more technically informed users. A lack of understanding leading to a lower willingness to install would also be in line with the results of surveys conducted in Germany shortly after ours. Some of these surveys did not explain the app in as much detail as our survey, and they often find a lower willingness to install. For example, infratest Dimap asked respondents on 30 and 31 March 2020 about their willingness to install a contact-tracing app and found that only 47% of respondents were willing to install it. The comparison between our survey and these shorter surveys suggests that a detailed explanation of how the app works and how it could mitigate or stop the epidemic increases the willingness to install.
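Whether the gap between the two groups (69.0% of N=1048 vs. 56.4% of N=149) is statistically meaningful can be checked with a standard two-proportion z-test. The sketch below reconstructs approximate counts from the reported percentages; the test itself is not reported in the text and is added here only as an illustration.

```python
from math import sqrt


def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z-statistic for H0: equal group proportions, using the
    pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se


# Approximate counts reconstructed from the reported shares:
# 69.0% of 1048 correct answerers, 56.4% of 149 incorrect answerers
z = two_proportion_z(x1=723, n1=1048, x2=84, n2=149)
# |z| > 1.96 would indicate a difference significant at the 5% level
```

With these reconstructed counts the statistic comfortably exceeds the conventional 1.96 threshold, consistent with the gap being more than sampling noise.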

D Survey questionnaire
In the following, the UK version of the survey is presented with comments (in italics) explaining all the ways in which the Italian, French, German and US surveys differed from it. Overall, we tried to keep the surveys as similar as possible and only changed the phrasing where necessary (for example, due to different political circumstances, as some countries were already in lockdown at the time of the survey while others were not). In the US, we added a few more questions, which appear teal-colored in the overview below. Finally, horizontal lines indicate page breaks in the survey. The original survey texts for each country can be found here: UK, France, Germany, Italy and US.

D.1 Consent Form
In this study, we will ask you about an app that could help reduce the spread of the COVID-19 epidemic. You may ask any questions before deciding to take part by contacting the researchers (details below). The survey is about 10 minutes long. No background knowledge is required.
Do I have to take part? Please note that your participation is voluntary. If you do decide to take part, you may withdraw at any point during the survey for any reason before submitting your answers by closing the browser.
How will my data be used? Your answers will be completely anonymous. Your data will be stored in a password-protected file and may be used in academic publications. Research data will be stored for a minimum of three years after publication or public release.
Who will have access to my data? Lucid is the data controller with respect to your personal data and, as such, will determine how your personal data is used. Please see their privacy notice here: https://luc.id/privacy-policy/. Lucid will share only fully anonymised data with the University of Oxford, for the purposes of research. Responsible members of the University of Oxford and funders may be given access to data for monitoring and/or audit of the study to ensure we are complying with guidelines, or as otherwise required by law. This project has been reviewed by, and received ethics clearance through, the University of Oxford Central University Research Ethics Committee (reference number ECONCIA20-21-06).
Who do I contact if I have a concern about the study or I wish to complain? If you have a concern about any aspect of this study, please contact Johannes Abeler at johannes.abeler@economics.ox.ac.uk and we will do our best to answer your query. We will acknowledge your concern within 10 working days and give you an indication of how it will be dealt with. If you remain unhappy or wish to make a formal complaint, please contact the Chair of the Research Ethics Committee at the University of Oxford, who will seek to resolve the matter as soon as possible: Economics Departmental Research Ethics Committee at ethics@economics.ox.ac.uk. Please note that you may only participate in this survey if you are 18 years of age or over.
If you have read the information above and agree to participate with the understanding that the data (including any personal data) you submit will be processed accordingly and that you need to be 18 years of age or over to participate, please confirm below.
• I confirm
• I do not confirm

In the US version, we asked the following filtering question on a separate screen:

Is your area of residence currently under a stay-at-home order where you are no longer allowed to leave your home for non-essential reasons?

• Yes
• No
Depending on the answer, participants continued with slightly different survey versions: one making reference to current restrictions and one referring to possible future restrictions. Below we point out the few instances where this leads to minor differences in formulation.

D.2 App Description and Comprehension Questions
The first screen was the same across all surveys, the only difference being that in the US survey we said "If enough people used the app, it could automatically alert you.".
The current coronavirus epidemic ("COVID-19") is all over the news.
People can get infected if they are in close contact with someone who has the virus. People do not notice when they get infected. They only notice when they start having a fever or a cough, perhaps a week later.
Imagine there was an app that you could install on your mobile phone. This app would automatically alert you if you had been in close contact for at least 15 minutes with someone who was infected with the coronavirus. Such an app does not exist yet in the UK. But we, researchers from the University of Oxford, are interested in understanding what you would think about such an app.
The next pages explain how such an app could work and will ask comprehension questions. You can only continue the survey if you answer all questions correctly.
In the German, Italian and French versions, we said the app would only use Bluetooth and not location data. In every other country, we therefore listed "Activate Bluetooth" as the first (and correct) answer option in the comprehension question. Furthermore, we adapted the health services responsible for the app depending on the country, referring to the Centers for Disease Control and Prevention (CDC) in the US, the Robert Koch-Institut (RKI) in Germany, and more broadly to the "health services" in Italy and France.
The app would be developed by the NHS. You would need to install the app by simply clicking a link. Once installed, the app would register which other users are close to you. The app would do this by using Bluetooth and your location.
The app would NOT access your contacts, photos, or other data held on your phone. Only the NHS would have access to the data collected.

Participants were only allowed to continue to the next screen if they selected option (1). If they chose one of the latter two options, the survey was terminated.
In the US, Germany, Italy and France, we said the app would request people found to have been in contact with a confirmed case of COVID-19 to go into "quarantine at home" rather than "self-isolate" -terminology we stuck to throughout the respective surveys. We also explained the difference between the restrictions imposed on people in the existing lockdowns and the restrictions associated with being alerted by the app. In the UK, we did not consider this to be necessary as, at the time of the survey, the government had not issued a lockdown order yet. Finally, in the US survey, the "lockdown version" referenced a current stay-at-home-order while the "no lockdown version" mentioned a hypothetical one.
If the NHS diagnoses the coronavirus in somebody you have been in close contact with, the app would notify you automatically. The app would give you targeted advice on what to do. It would ask you to self-isolate at home for 14 days or until you have been tested for the virus.
This would be useful since people can infect others even before they have a fever or a cough. Self-isolating would thus protect your family, friends and colleagues from being infected by you. At the same time, only people who were in contact with an infected person would need to self-isolate.
If you had not been in close contact with a confirmed case, then the app would show you an "all clear" message.
Comprehension check: What would the app do if you were found to have been in contact with someone diagnosed with coronavirus?
• Ask me to self-isolate
• Give me an "all clear" message
• Tell me the name of the person who was diagnosed

Participants were only allowed to continue to the next screen if they selected option (1). If they chose one of the latter two options, the survey was terminated.
In the US, Italy, France and Germany, we stressed that, if enough people used it, the app could also shorten the duration of existing lockdowns (or school closures in the case of the "no-lockdown" US version).
If you are diagnosed with coronavirus, the app would notify all people you have been in close contact with, without identifying you to them, and advise them to self-isolate. This would increase the chance of finding all the people you might have infected and help make sure they can keep their loved ones safe as well. If enough people use the app, it will slow down the epidemic and might even stop it entirely.
Comprehension check: What would the app do if you were diagnosed with the coronavirus?
• Give my name and address to all people I have been in close contact with
• Advise all people I have been in close contact with to self-isolate
• Shut down my phone

Participants were only allowed to continue to the next screen if they selected option (2). If they chose the first or the last option, the survey was terminated.

D.3 Installation Questions

D.3.1 Voluntary Installation - General
The general installation questions were the same in all countries. However, in the US, half of the participants were randomly assigned to see all the answer options in the survey in ascending rather than descending order, i.e., the answer options were ordered from "Definitely won't install" to "Definitely install" (with "Don't know" remaining the last option). Whether a participant saw the ascending or the descending order remained the same throughout the survey.
For the following questions, please imagine that an app like the one described before exists.
How likely would you be to install, or not install, the app on your phone?
• Definitely install
• Probably install
• May or may not install
• Probably won't install
• Definitely won't install
• Don't know

The ordering of the different reasons (both for and against installation) was randomized and people could choose multiple answers. In the non-UK versions, we gave one more reason for installing the app: "It would allow me to return more quickly to a normal life". This is because in every country but the UK, the government had already issued a (partial) lockdown order.
What would be your main reasons for installing the app (you may click up to five)?
• It would help me stay healthy
• It would let me know my risk of being infected
• Seeing the "all clear" message would give me peace of mind
• It would protect my family and friends
• It would help reduce the number of deaths among older people
• A sense of responsibility to the wider community
• It might stop the epidemic
• Other (please indicate in the field below):

In Italy, we gave "I worry the government would use this as an excuse for greater surveillance during the epidemic" (not just after) as an additional reason against, while in the US, we gave "I don't believe other people will install it" as an additional reason against. In addition, in every non-UK country, we used "I don't want to activate Bluetooth" rather than "I don't want the [health services] to have access to my location data" as a reason against.
What would be your main reasons against installing the app (you may click up to five)?

D.3.2 Voluntary Installation -Specific
Participants were only shown this entire section if they did not select "Definitely install" in response to the question "How likely would you be to install, or not install, the app on your phone?" at the beginning of the main questionnaire.
Many people in the UK worry about the effect of the virus on their community and on their family and friends.
Suppose someone in your community had been infected with the virus. How likely would you then be to install, or not install, the app on your phone?
• Definitely install
• Probably install
• May or may not install
• Probably won't install
• Definitely won't install
• Don't know

Only participants who did not select "Definitely install" in response to the previous question were shown this screen.
Now suppose someone you personally know had been infected with the virus. How likely would you then be to install, or not install, the app on your phone?
• Definitely install
• Probably install
• May or may not install
• Probably won't install
• Definitely won't install
• Don't know

In Italy, France and Germany, as well as in areas under a stay-at-home order in the US, we dropped the "Imagine the government would introduce..." intro and instead only said: "Imagine the government decided to lift the restrictions of the current lockdown for those people for whom the app showed an all clear message. This means they would be able to leave their homes even without an essential reason."

Imagine the government decided to introduce a full lockdown as in Italy to limit the spread of the coronavirus. This would mean that only essential stores, like supermarkets, would remain open, and you would only be allowed to leave your house in exceptional circumstances. Imagine that this restriction would be lifted for those people for whom the app showed an "all clear" message.
In this situation, how likely would you be to install, or not install, the app on your phone?
In the US version, we asked an additional, direct question about which installation regime respondents would prefer.
Imagine that the federal government introduced such an app -which of the two installation regimes described above would you prefer?
• Voluntary installation
• Automatic installation (with an option to uninstall)

D.7 Debrief
In each country, we referred participants to the coronavirus information website of the relevant government agency.
Thank you very much! If you have any questions about the app or any feedback, please let us know by writing them into the field below. You can also email the researchers at johannes.abeler@economics.ox.ac.uk.
If you want to know more about the coronavirus and how to protect yourself and your family, please click this link to the NHS coronavirus website: https://www.nhs.uk/coronavirus

Please click the button below to finish the survey.