
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/55694.
Design Guidelines for Improving Mobile Sensing Data Collection: Prospective Mixed Methods Study


Original Paper

1Computer Science Department, Brigham Young University—Hawaii, Laie, HI, United States

2Information and Computer Sciences Department, University of Hawaii at Manoa, Honolulu, HI, United States

3Division of Cancer Prevention & Control, Department of Internal Medicine, Wexner Medical Center, Ohio State University, Columbus, OH, United States

4Arthur G James Cancer Hospital, The Ohio State University Comprehensive Cancer Center, Columbus, OH, United States

Corresponding Author:

Christopher Slade, MS

Computer Science Department

Brigham Young University—Hawaii

55-220 Kulanui Street #1919

Laie, HI, 96762

United States

Phone: 1 8086753471

Email: christopher.slade@byuh.edu


Background: Machine learning models that predict outcomes captured through ecological momentary assessments (EMA) often use passively recorded sensor data streams as inputs. Despite the growth of mobile data collection, challenges persist in obtaining proper authorization to send notifications, receive background events, and perform background tasks.

Objective: We investigated challenges faced by mobile sensing apps in real-world settings to develop design guidelines. For active data, we compared 2 prompting strategies: setup prompting, where the app requests authorization during its initial run, and contextual prompting, where authorization is requested when an event or notification occurs. Additionally, we evaluated 2 passive data collection paradigms: collection during scheduled background tasks and persistent reminders that trigger passive data collection. We investigated the following research questions (RQs): (RQ1) How do setup prompting and contextual prompting affect scheduled notification delivery and the response rate of notification-initiated EMA? (RQ2) Which authorization paradigm, setup or contextual prompting, is more successful in leading users to grant authorization to receive background events? and (RQ3) Which polling-based method, persistent reminders or scheduled background tasks, completes more background sessions?

Methods: We developed mobile sensing apps for iOS and Android devices and tested them through a 30-day user study asking college students (n=145) about their stress levels. Participants responded to a daily EMA question to test active data collection. The sensing apps collected background location events, polled for passive data with persistent reminders, and scheduled background tasks to test passive data collection.

Results: For RQ1, setup and contextual prompting yielded no significant difference (ANOVA F1,144=0.0227; P=.88) in EMA compliance, with an average of 23.4 (SD 7.36) out of 30 assessments completed. However, qualitative analysis revealed that contextual prompting on iOS devices resulted in inconsistent notification deliveries. For RQ2, contextual prompting for background events was 55.5% (χ²1=4.4; P=.04) more effective in gaining authorization. For RQ3, users demonstrated resistance to installing the persistent reminder, but when installed, the persistent reminder performed 266.5% more background sessions than traditional background tasks.

Conclusions: We developed design guidelines for improving mobile sensing on consumer mobile devices based on our qualitative and quantitative results. Our qualitative results demonstrated that contextual prompts on iOS devices resulted in inconsistent notification deliveries, unlike setup prompting on Android devices; we therefore recommend setup prompting for EMA when possible. We found that contextual prompting is more effective for authorizing background events; we therefore recommend contextual prompting for passive sensing. Finally, we conclude that developing a persistent reminder and requiring participants to install it provides an additional way to poll for sensor and user data and could improve data collection to support adaptive interventions powered by machine learning.

J Med Internet Res 2024;26:e55694

doi:10.2196/55694

Introduction

Mobile and ubiquitous devices are valuable tools for gathering patient-generated health data in real-world settings [1-11]. Prior to the advent of the smartphone, mobile devices were predominantly used to collect data actively, usually through ecological momentary assessments (EMA). EMA involves “repeated sampling of participants’ current behaviors and experiences in real-time in the participants’ natural environment” [12-14]. EMA can include completing journals, diaries, and survey questions [15], providing audio or video samples [16,17], or participating in digital or physical tests [18]. As mobile devices evolved to include more sensors and access to health data, mobile apps started to use passive data collection methods [19-22], where mobile devices collect data without involving the end user. Today, passively recorded sensor data streams often serve as inputs to machine learning models that predict health outcomes or events captured through EMA [23-29] (Figure 1).

Figure 1. Machine learning models often use passively recorded sensor streams as inputs to predict outcomes captured through EMA. We explore the feasibility of collecting both passive and active data on consumer mobile devices to answer the following research questions (RQs): (RQ1) How do contextual prompting and setup prompting affect scheduled notification delivery and the response rate of notification-initiated EMA? (RQ2) Which authorization paradigm, setup or contextual prompting, is more successful in leading users to grant authorization to receive background events? and (RQ3) Which polling-based method, persistent reminders or scheduled background tasks, completes more background sessions? EMA: ecological momentary assessment.

Despite the growth of mobile data collection in health research, several challenges for mobile sensing on consumer mobile devices persist. These challenges stem from implementation decisions made by the developers of the major mobile operating systems (ie, iOS and Android) to protect users’ privacy, preserve battery life, and minimize distractions. Passive data collection requires access to private user data, whereas active data collection needs to interrupt users to initiate an EMA. In this work, we studied how various user interface (UI) decisions for obtaining authorization at the app level affect the success of both active and passive mobile sensing. Specifically, we tested the following authorization scenarios: (1) users explicitly granting authorization to receive notifications to initiate EMA, (2) users explicitly granting authorization to access background events, and (3) the system implicitly granting background runtime to collect data through polling.

The specific authorization procedure for the first 2 scenarios varies depending on the device. An Android device obtains authorization through setup prompting, where the user is prompted during the initial app launch. On the other hand, iOS devices use contextual prompting, where the user is prompted when the first event or notification occurs.

Background runtime for passive sensing is often obtained implicitly. Android and iOS systems determine when to run background tasks based on user actions and battery status. Another way to obtain background runtime is through a persistent reminder. A persistent reminder is a permanent UI feature, like a home screen widget or persistent notification, that receives background runtime to update its UI.

To explore these active and passive sensing implementation scenarios, we developed mobile sensing apps for Android and iOS devices that logged both passive and active data. We tested our apps with a user study to answer the following research questions (RQs; Figure 1): (RQ1) how do contextual prompting and setup prompting affect scheduled notification delivery and the response rate of notification-initiated EMA? We hypothesize that contextual prompting will lead to better EMA compliance because participants will not have to respond to a setup prompt and will have more context when authorizing notifications; (RQ2) which authorization paradigm, setup or contextual prompting, is more successful in leading users to grant authorization to receive background events? We hypothesize that the contextual prompts will improve background event authorization because the added context will help participants understand and feel safe approving the permission prompt; and (RQ3) which polling-based method, persistent reminders or scheduled background tasks, completes more background sessions? We hypothesize that persistent reminders will poll for data more often than background tasks because the mobile operating system (OS) is willing to expend resources to keep the UI up-to-date.

Methods

Overview

This section is organized as follows. First, we introduce the authorization procedures that can affect the success of mobile sensing studies. Next, we describe how we used these procedures in our mobile sensing app. Finally, we describe a user study we performed to answer our RQs. A summary of the authorization methods used for each RQ is found in Table 1.

Table 1. Summary of authorization methods used in each research question (RQ).
RQ1a

Android: setup prompting for notifications

iOS: contextual prompting for notifications

RQ2b

Android: setup prompting for background events

iOS: contextual prompting for background events

RQ3c

Background tasks: user actions and device status imply consent

Persistent reminders: installation implies consent

aRQ1 compares authorization methods for notification-initiated ecological momentary assessment.

bRQ2 compares authorization methods for event-driven data collection.

cRQ3 compares polling-based collection methods.

Authorization Procedures

For both Android and iOS, mobile apps must gain explicit authorization before sending notifications or receiving events. Explicit authorization is when a user grants authorization through a permission prompt. At the time of writing, Android devices use setup prompting for explicit authorization, where the user is prompted to grant authorization during setup. The setup prompt can be displayed during the initial launch or when the feature requiring authorization is enabled. iOS devices, on the other hand, use contextual prompting for explicit authorization, where the user is prompted when the first event or notification occurs.

Contextual prompting provides the user with more context, enabling a more informed decision. For example, with setup prompting, the user is asked for background location access immediately after consenting, not knowing when or where the location will be accessed. With contextual prompting, the user is prompted when the location is first accessed, helping them understand when their location is used. This additional context could help users feel more comfortable granting permissions, knowing that their location will be accessed only in specific circumstances. With setup prompting, on the other hand, we hypothesize that the user might be overwhelmed during setup and deny the request for location or notifications.

While explicit authorization procedures are required for notifications, implicit authorization is needed to grant the background runtime for sensing apps to automatically collect passive sensor and usage data. Implicit authorization is when the mobile OS implies authorization based on user actions and device status. Gaining background runtime for scheduled tasks depends on user actions such as app use, swiping an app out of the recent app switcher, and enabling power-saving mode. Device status indicators, such as the battery level, charging state, and network connectivity, are additional factors determining when background runtime is granted. iOS devices always use implicit authorization to determine when to grant runtime. Android devices vary by model and version. However, modern devices include the “adaptive battery” [30] feature, which implicitly grants background runtime.

Installing a permanent UI feature, or persistent reminder, is another way to gain background runtime through implicit authorization. A persistent reminder is part of an app that is always displayed on the mobile device’s home screen, lock screen, or notification center. A persistent reminder gains background runtime to keep its UI updated. In this case, the installation of the persistent reminder implicitly grants authorization to run in the background. The persistent reminder can also remind the participant to complete EMA tasks.
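To make this concrete, the following Swift sketch shows how a home screen widget can double as a polling mechanism on iOS, assuming a hypothetical StudyLog store; the type names are ours, not the study app’s. The OS wakes the widget extension and calls getTimeline whenever it refreshes the UI, so recording an event inside that call turns each refresh into a background data collection session.

```swift
import Foundation
import WidgetKit

// Hypothetical app-side logger (stub for illustration only).
final class StudyLog {
    static let shared = StudyLog()
    private(set) var daysCompleted = 0
    func record(event: String, at date: Date) { /* persist to shared app group storage */ }
}

struct StudyEntry: TimelineEntry {
    let date: Date
    let daysCompleted: Int
}

struct StudyProgressProvider: TimelineProvider {
    func placeholder(in context: Context) -> StudyEntry {
        StudyEntry(date: Date(), daysCompleted: 0)
    }

    func getSnapshot(in context: Context, completion: @escaping (StudyEntry) -> Void) {
        completion(StudyEntry(date: Date(), daysCompleted: StudyLog.shared.daysCompleted))
    }

    func getTimeline(in context: Context, completion: @escaping (Timeline<StudyEntry>) -> Void) {
        // Each refresh runs in the background; logging here is the polling step.
        StudyLog.shared.record(event: "widgetRefresh", at: Date())

        let entry = StudyEntry(date: Date(), daysCompleted: StudyLog.shared.daysCompleted)
        let tomorrow = Calendar.current.date(byAdding: .day, value: 1, to: Date())!
        // Ask for the next refresh in roughly 24 hours; actual timing is at the OS's discretion.
        completion(Timeline(entries: [entry], policy: .after(tomorrow)))
    }
}
```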

Mobile Sensing Apps

We developed a mobile sensing app for iOS and Android devices designed to collect active data through EMA and passive data through polling-based and event-based data collection. The native languages of each OS, Swift for iOS and Kotlin for Android, were used for development. Besides native UI differences and authorization procedures outlined above, iOS and Android apps were designed to appear and function identically. The apps featured a home screen widget displaying the study’s progress and serving as a persistent reminder. The main screen, EMA screen, and home screen widget are shown in Figure 2.

Figure 2. Mobile sensing app. Left: the main screen displays the study progress. Middle: the EMA screen, which asks participants about their stress levels. Participants were asked to complete one assessment per day. Right: the home screen widget is used as a persistent reminder. EMA: ecological momentary assessment.

Development Process

We first developed the iOS app and then developed the Android app to match the look and functionality of the iOS app. We tested the apps simultaneously to ensure that Android and iOS devices reported the same data and functioned similarly. We ensured they reported the same number of location events, sent notifications at the same time, and otherwise behaved similarly.

The authorization methods we used in this study reflect each mobile OS’ preferred method for gaining authorization. At the time of writing, Android devices do not support contextual prompts for notifications or background events. iOS does not directly support setup prompting for background events and discourages setup prompting for notifications. We chose the OS’ preferred authorization method because users should be familiar with the procedure. Figure 3 highlights the authorization differences.

Figure 3. Overview of the difference between authorization procedures on Android and iOS devices. On iOS devices, both notification and background location authorization prompts are received in context. The notification authorization prompt is presented with the first EMA reminder notification, and the background location authorization is presented when the first location event occurs. For Android devices, those prompts are present during the initial launch of the app. EMA: ecological momentary assessment.

We originally planned to upload all data to secure storage through the background task. However, during testing, we noticed an inconsistency in uploading the data using only the background task. Thus, we also synced data whenever the app was launched in the foreground. To ensure all data were collected, the collection was confirmed before presenting a “Study Finished” screen along with instructions to take a screenshot.
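A minimal sketch of this foreground-sync fallback, assuming a SwiftUI app and a hypothetical SyncService, is shown below; the app observes the scene phase and uploads pending logs whenever it becomes active.

```swift
import SwiftUI

// Hypothetical stand-in for the app's cloud upload logic.
final class SyncService {
    static let shared = SyncService()
    func uploadPendingLogs() { /* push any unsent logs to secure cloud storage */ }
}

@main
struct SensingApp: App {
    @Environment(\.scenePhase) private var scenePhase

    var body: some Scene {
        WindowGroup {
            Text("Study Progress") // placeholder for the app's main screen
        }
        .onChange(of: scenePhase) { phase in
            // Background tasks alone proved unreliable, so also sync on every foreground launch.
            if phase == .active {
                SyncService.shared.uploadPendingLogs()
            }
        }
    }
}
```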

RQ1: Notification-Initiated EMA

RQ1 compares setup and contextual prompting for notification authorization to measure their effect on EMA compliance. We compared EMA compliance using the iOS notification system, which uses contextual prompting, against Android’s notification system, which uses setup prompting. To answer RQ1, the apps used notification-initiated EMA to collect active data. We performed a simple single-question EMA once per day, for which both Android and iOS apps reminded users to complete their EMA through a notification. The Android setup prompt and iOS contextual prompt are displayed in Figure 4.

Figure 4. Notification permission prompts. Left: contextual prompt on an iOS device. Right: setup prompt on an Android device. With setup prompting, users are asked to approve notifications during the initial run of the app. Using contextual prompting, users are asked to approve notifications when the first notification arrives.
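On iOS, one way to realize this contextual flow, though not necessarily the study app’s exact implementation, is to defer the authorization request until the first reminder is scheduled. In the sketch below, the helper name and notification text are illustrative; the 7:30 AM daily trigger mirrors the study protocol. Under setup prompting, the same requestAuthorization call would instead be made during the app’s initial launch.

```swift
import UserNotifications

// Sketch of notification-initiated EMA with contextual prompting: the
// permission dialog appears only when the first reminder is being set up.
func scheduleDailyEMAReminder() {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound, .badge]) { granted, _ in
        guard granted else { return } // denial means no EMA reminders will arrive

        let content = UNMutableNotificationContent()
        content.title = "Daily Check-In"
        content.body = "How did you manage your stress today?"

        var time = DateComponents()
        time.hour = 7
        time.minute = 30
        let trigger = UNCalendarNotificationTrigger(dateMatching: time, repeats: true)

        center.add(UNNotificationRequest(identifier: "dailyEMA",
                                         content: content,
                                         trigger: trigger))
    }
}
```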

Notification systems on Android and iOS have other features that could affect compliance. iOS 15 introduced “Focus” [31], which implements notification deferral, as well as notification summaries [32]. Android devices also implement notification deferral through a focus mode [33], pausing notifications from selected apps when activated. These features might reduce EMA compliance [34,35].

RQ2: Event-Based Passive Data Collection

RQ2 explores how setup and contextual prompting affect the success of event-based passive data collection. In the event-based collection, an event, such as a location change, phone call, message, or health alert, initiates the data collection. Sensing apps need explicit authorization to receive background events. Our apps used location changes as the event of interest and monitored a geofence, or circular region, where participants frequently entered and exited, triggering a location event. iOS devices used contextual prompting to authorize receipt of background location events, and Android devices used setup prompting.
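The following sketch illustrates this event-based design on iOS; the coordinates, radius, and identifier are placeholders rather than the study’s actual geofence, and event handling is reduced to a print statement. Calling requestAlwaysAuthorization initiates the contextual authorization flow, and iOS may defer the full “Always” prompt until location is first used in the background.

```swift
import CoreLocation

// Sketch of event-based passive data collection with a circular geofence.
final class GeofenceMonitor: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization() // begins the contextual prompting flow

        let center = CLLocationCoordinate2D(latitude: 21.6408, longitude: -157.9252) // placeholder
        let region = CLCircularRegion(center: center, radius: 150, identifier: "campusLibrary")
        region.notifyOnEntry = true
        region.notifyOnExit = true
        manager.startMonitoring(for: region) // entry/exit events arrive even in the background
    }

    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        print("entered \(region.identifier) at \(Date())") // a real app would persist this event
    }

    func locationManager(_ manager: CLLocationManager, didExitRegion region: CLRegion) {
        print("exited \(region.identifier) at \(Date())")
    }
}
```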

RQ3: Polling-Based Passive Data Collection

RQ3 explores the ability of traditional background tasks and persistent reminders to collect data passively. Both Android and iOS apps implemented and scheduled a daily background task. The background task logged the time and uploaded all logs to cloud storage. We also uploaded the data upon app launch to ensure that all data were collected. For persistent reminders, we implemented a home screen widget, shown in Figure 2, and requested a daily refresh, which was also logged.
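For reference, a daily background task on iOS might be scheduled as in the sketch below; the task identifier is a placeholder, and the same identifier must be declared in the app’s Info.plist. Because the OS grants this runtime implicitly, submission is only a request, and the task may run late or not at all.

```swift
import BackgroundTasks
import Foundation

// Sketch of a daily scheduled background task on iOS (implicit authorization).
enum BackgroundPoller {
    static let taskID = "com.example.sensing.refresh" // placeholder; list under BGTaskSchedulerPermittedIdentifiers

    // Call early during app launch, before the app finishes launching.
    static func register() {
        _ = BGTaskScheduler.shared.register(forTaskWithIdentifier: taskID, using: nil) { task in
            handle(task: task as! BGAppRefreshTask)
        }
    }

    static func schedule() {
        let request = BGAppRefreshTaskRequest(identifier: taskID)
        request.earliestBeginDate = Date(timeIntervalSinceNow: 24 * 60 * 60) // ask for roughly daily runs
        try? BGTaskScheduler.shared.submit(request)
    }

    static func handle(task: BGAppRefreshTask) {
        schedule() // re-arm the next run before doing any work
        print("background session at \(Date())") // the study logged this time and uploaded pending data
        task.setTaskCompleted(success: true)
    }
}
```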

We explored users’ willingness to install the persistent reminder. Each participant had an equal chance of being assigned to either (1) a control group that did not receive prompts or notifications to install the widget or (2) an experimental group that did receive prompts and notifications. A separate subgroup of participants received verbal instructions for installing the widget.

Alternative Study Designs

We used the mobile OS’ default or preferred authorization procedures to test their effectiveness in the wild, similar to other studies that test the feasibility of cross-platform mobile sensing [36-38]. The strength of this approach is that it does not limit the study to users of one particular device (iOS or Android), increasing the diversity of the participants. It also works to understand mobile sensing in the wild, revealing insights into potential pitfalls while developing cross-platform mobile sensing frameworks. However, this limits our ability to isolate study variables because other differences between the mobile OS implementations or device hardware could impact our results.

Studies focusing on a single mobile OS sometimes sacrifice user diversity in favor of isolating variables. These studies are favored when testing new features [39,40] or focusing on specific variables [41]. Android tends to be used in these studies due to its more open programming interface.

User Study

We tested our sensing apps through a user study that aimed to identify the impact of screen time and study time on college students’ stress levels. Participants installed our apps on their mobile devices and participated in our study for 30 days. To test event-based collection, a geofence that contained the school library and classrooms was used to calculate study time. Background tasks purported to collect screen time statistics to test polling-based collection.

Active data were collected through a simple EMA that asked participants how they managed their stress levels, as shown in Figure 2. A notification was sent each morning at 7:30 AM to remind users to complete their EMA. Students could complete their daily EMA until the end of the day (midnight). Participants could respond with a thumbs up, thumbs down, or neutral.

Ethical Considerations

The University of Hawaii Institutional Review Board approved the study under protocol #2022-0722, and the Brigham Young University—Hawaii Institutional Review Board approved it under protocol #22-72. All participants consented to participate in the study through the sensing app and received extra credit in their courses for participating. The following measures were implemented to ensure user privacy and participant safety: (1) after installation, the mobile app required the participants to electronically consent before collecting any data; (2) students who did not want to participate were given an alternative extra credit assignment, representing the same time commitment as completing the study; (3) the app anonymized all data collected before being uploaded to cloud storage, so the instructors could not identify the participants’ data, including their study habits; (4) follow-up paper surveys were collected by a volunteer, not the class instructor; and (5) at the end of the study, students were instructed by the app to submit a screenshot to their courses’ learning management system, and all students were provided with the same amount of extra credit regardless of EMA compliance or the amount of data collected.

Recruitment

We recruited 145 college students in Computer Science, Information Technology, Business, and Science classes at Brigham Young University—Hawaii. The students were offered extra credit incentives to install our app on their mobile devices and actively engage in the study for 30 days.

To assure students that their professors could not identify their study habits, specific demographic information was not collected, and the app anonymized all data before uploading logs to cloud storage. However, the general demographics of the recruitment base included college students aged 18-26 years, with around 60% being female and 40% being male. Students represented a variety of races and cultures, with a distribution of about 40% White (non-Hispanic), 15% Native Hawaiian and Other Pacific Islanders, 25% Asian, 15% two or more races, and 5% other.

Exit Survey

Upon completion of the study, a follow-up survey was provided to participants, which 48 participants completed. The survey was provided on paper during the course’s final exam. To ensure privacy, a volunteer, not the instructor, distributed and collected the survey from participants. To preserve privacy, we did not correlate the survey with the data collected by the mobile app. The survey allowed us to gather qualitative insights regarding the differing success rates across the authorization paradigms. The participants were asked (1) what reminded them to complete their assessment, (2) their general thoughts on granting background location access, (3) whether they knew about and installed the home screen widget and their thoughts on installing a widget, and (4) suggestions to improve the app.

Results

Data Cleaning and Analysis

We initially logged 178 (126 iOS and 52 Android) users who installed the app and consented to participate in the study. We removed users from the study if the logs showed they participated for fewer than 5 days, meaning they deleted the app within 5 days of starting. We did not detect any users who deleted the app after participating for 5 or more days. This yielded 145 participants, representing 105 iOS and 40 Android users. Each participant had an equal chance of being assigned to the widget or control group by the app. A total of 52 individuals (34 iOS and 18 Android) were randomly assigned to the widget group, where the app prompted them to install the widget. In total, 82 participants were randomly assigned to the control group, which did not receive the prompts, and 11 participants (8 iOS and 3 Android) received verbal instructions to install the widget on their home screens.

To conduct our qualitative analysis, we coded survey responses based on general themes that emerged in at least 5 responses. The questions and the number of themed responses are listed in Table 2.

Table 2. Summary of exit survey responses for each question, separated by device. At the end of the study, participants were administered a paper survey during their final exam, which 48 participants (33 iOS and 15 Android) completed. Responses were coded based on general themes that emerged in at least 5 responses; themes with fewer than 5 responses are not listed. Values are n (%) for iOS (n=33) and Android (n=15) users.

Question 1: What reminded you to check in?

Notifications: iOS 12 (36); Android 9 (60)

Set alarm: iOS 9 (27); Android 0 (0)

Widget: iOS 5 (15); Android 0 (0)

Saw app: iOS 5 (15); Android 3 (20)

Question 2: Did you allow the app to use your location? Why or why not?

Yes: iOS 25 (76); Android 15 (100)

No: iOS 2 (6)a; Android 0 (0)

It would help the study: iOS 7 (21); Android 6 (40)

It was required: iOS 4 (12); Android 3 (20)

Safe: iOS 4 (12); Android 1 (7)

Question 3: Did you install the widget? Why or why not?

Yes: iOS 15 (45); Android 2 (13)

No: iOS 15 (45); Android 13 (87)

Did not know about it: iOS 7 (21); Android 4 (27)

Helped me remember to check in: iOS 6 (18); Android 1 (7)

Did not want to change the home screen or do not use widgets: iOS 4 (12); Android 2 (13)

Question 4: Do you have any comments or suggestions on improving the app?

Better notifications: iOS 15 (45); Android 3 (20)

Select notification time: iOS 4 (12); Android 2 (13)

aOne iOS participant was worried about tracking.

RQ1: Notification-Initiated EMA

To answer RQ1, we compared notification authorization procedures, contextual prompting implemented in iOS versus setup prompting implemented in Android, by measuring the number of completed EMA. On average, participants across all conditions completed 23.4 (SD 7.38) assessments out of 30. iOS users completed 23.46 (SD 7.02) assessments, and Android users completed 23.25 (SD 7.9) assessments on average. We observed no statistically significant difference in the number of completed assessments when examining device-specific differences as shown by an ANOVA test (F1,144=0.0227; P=.88) or when comparing the widget group to the control group (F1,144=1.33; P=.27).

Although our quantitative results demonstrated no difference between devices, our qualitative results indicated that contextual prompting, notification deferral, and notification summaries on iOS devices affected notification delivery. Nine Android users reported that notifications were the primary method of reminding them to complete their EMA. Despite 9 iOS users stating that the notifications helped them remember to complete the task, 15 iOS users surveyed said the notifications did not appear consistently. Nine iOS participants even set their own alarms or reminders. For example, P32 stated, “Notifications weren’t working, so I had an event in my calendar to remind me.”

RQ2: Event-Driven Passive Data Collection Results

In RQ2, we compared the effectiveness of contextual prompting (iOS) and setup prompting (Android) in gaining authorization to access background location events. A total of 28% (11/40) of Android and 49% (51/105) of iOS users enabled background location permission. Contextual and setup prompting differed significantly (χ²1=4.4; P=.04). Thus, contextual prompting was 55.5% more effective than setup prompting in gaining authorization for background location events.

Our poststudy survey asked participants about their willingness to share their location with the sensing app. A total of 40 surveyed participants said they authorized background location access versus 2 participants who reported not sharing their location. Only 1 participant, P10 (iOS), expressed apprehension about sharing their location, stating, “I was afraid it could track me.” However, 13 participants surveyed expressed a willingness to share their location to contribute to the study’s objectives and thought they granted the needed authorization to access background location events. P46 (Android) exemplified this sentiment: “Yes, if it helps the study then I don’t really mind if it has my location.” Users’ willingness to share location data for nonresearch reasons will most likely vary, so we emphasize that these results apply only to the context of research studies.

RQ3: Background Tasks and Persistent Reminders

RQ3 compared traditional background tasks to persistent reminders for polling-based data collection. First, we present our results on how willing users were to install the persistent reminder, and then we present results comparing persistent reminder refreshes to background tasks completed. Finally, we compare background tasks completed by device type and present our qualitative results.

We studied how willing participants were to install the persistent reminder by randomly assigning users to an experimental group where they were notified by the app to install the widget and a control group that did not receive such prompts. In addition, a subgroup of 11 users were given verbal instructions to install the widget. Of 52 participants in the widget group, only 18 (35%) participants installed the home screen widget, with 16 being iOS users and 2 being Android users. One iOS participant in the control group installed the widget. Of the 11 participants who received verbal instructions to install the widget, 5 (46%) participants complied accordingly, of whom 4 were iOS users and 1 was an Android user.

Limiting our data to only users who installed the persistent reminder, on average, the persistent reminder was refreshed 61.2 (SD 16.7) times throughout the study, and the devices performed an average of 7.2 (SD 12.5) background tasks. The two Android devices performed 23 and 42 widget refreshes, with 8 and 6 background tasks, respectively. When users installed the widget, persistent reminders refreshed 266.5% more than background tasks were executed.

Focusing on the difference between Android and iOS, we also observed a significant difference in background tasks completed when comparing devices. iOS devices completed 7.73 (SD 16.32) background sessions during the study. Android devices completed 25.7 (SD 13.96) background sessions. Using data from all participants, an ANOVA test revealed a significant difference in the total number of background sessions between iOS devices and Android devices (F1,144=37.8; P<.001), showing that Android devices were more permissive in granting background runtime. Most iOS devices completed fewer than 10 background sessions, and many did not execute a single background task. In addition, almost half of the Android devices did not complete a daily background task. This demonstrates our difficulty in consistently gaining background runtime, especially on iOS devices.

Our qualitative analysis showed mixed results regarding the installation of the home screen widget. Seven participants indicated that the widget helped them remember to complete their daily assessment. For example, P37 (iOS) stated, “The widget helped me to remember to check in every day and what the study was about.” However, 6 users opted not to use the widget because they did not want to modify their home screen or preferred not to use widgets. P22 (iOS) mentioned, “My home screen was already full and organized,” while P35 (Android) stated, “I just don’t want to change my home screen.” iOS users may be more willing to add the widget to their home screen because iOS allows for widget stacking, a feature not found in most Android devices. This functionality enables the rotation of different widgets within a stack, eliminating the need to reorganize the home screen to install the widget.

Effects of Stress Level

Because students experiencing a high stress level might be more motivated to authorize data collection and complete assessments, we divided the participants into 2 groups: a low-stress group (n=117) whose EMA responses averaged better than neutral and a high-stress group (n=28) whose EMA responses averaged lower than neutral. We then compared the 2 groups’ authorization and data collection rates. There was no statistical difference between the groups in the number of EMA responses (F1,144=0.928; P=.34), the number of background tasks executed (F1,144=1.54; P=.22), the number of participants that enabled location (χ²1=0.4; P=.52), or whether they installed the widget (χ²1=0.5; P=.57). The stress level of the participants did not significantly impact their willingness to grant authorization, provide EMA responses, or install the widget.

Discussion

Based on our results, we present design principles to enhance the success of mobile passive-sensing research studies.

Notification-Initiated EMA

Setup prompts should generally be used for notifications. Although our quantitative results showed no difference between the iOS and Android notification systems, our qualitative analysis revealed that many iOS users did not consistently receive notifications to remind them to complete their EMA. Nine iOS users reported setting their alarms or calendar reminders to remember to complete their EMA. We conjecture that if this study had required participants to complete multiple tasks per day or to follow a strict time frame for EMA tasks, then EMA compliance on iOS devices would have suffered.

We hypothesized that contextual prompts would work more effectively. However, they lose their context when notification deferrals and summaries are enabled. Only users who searched through their notification summaries would have found the contextual prompt and authorized future notifications. With setup prompting, by contrast, users authorize the notifications immediately after consenting.

Our qualitative results show that notification deferral and notification summaries can cause delayed and missed responses to EMA notifications on iOS devices. New methods should be developed to help participants remember to complete their assessments in a timely manner. iOS allows developers to mark notifications as time-sensitive, which increases their visibility to the user and should be used for time-sensitive EMA. This study did not use time-sensitive notifications because our assessment did not require a time-sensitive response. Future work could analyze the effectiveness of time-sensitive notifications versus basic notifications. However, even time-sensitive notifications could be delayed or ignored based on the user’s settings. We recommend that mobile sensing researchers review notification settings during onboarding. Persistent reminders offer an additional way to remind users to complete tasks. Seven participants (6 iOS and 1 Android) reported that the persistent reminder helped them to remember to complete their assessment and could be an area to explore further.
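As an illustration, marking a reminder as time-sensitive on iOS 15 or later requires a single property on the notification content, plus the Time Sensitive Notifications capability; the wording below is ours.

```swift
import UserNotifications

// Sketch of a time-sensitive EMA reminder (iOS 15+).
func timeSensitiveReminderContent() -> UNMutableNotificationContent {
    let content = UNMutableNotificationContent()
    content.title = "Assessment Closing Soon"
    content.body = "Please complete today's check-in."
    content.interruptionLevel = .timeSensitive // surfaces through Focus and notification summaries
    return content
}
```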

Besides notification deferral, notification summaries, and contextual prompting, other notification system advances can potentially disrupt notification-initiated EMA. Because notifications disrupt users, sometimes causing stress [42-44], several modifications to notification systems could be introduced to consumer mobile devices. Lin et al [45] demonstrated how notification summaries could be improved by letting users determine the order of the notifications. Pejovic et al [46] explored user contexts to understand user interruptability, leading to the development of intelligent notification systems [34,47]. Kandappu et al [48] also explored intelligently interrupting users. Mobile sensing apps must adjust to these advanced notification systems or find a different way to initiate EMA. For example, Zhang et al [49] explored using a persistent reminder on the lock screen to initiate EMA.

Event-Driven Passive Data Collection

For event-driven data collection, use carefully designed contextual prompts. Our qualitative analysis revealed that most participants were willing to authorize background location access for research studies and were under the impression that they enabled background location access. However, our quantitative results revealed that many participants did not authorize background location access. Many failed to grant authorization for background events even though they intended to authorize it. As we hypothesized, contextual prompting on iOS was 55.5% more effective in gaining authorization for background location events. This coincides with previous work that shows additional context helps users make better privacy decisions [39,50,51].

Even with contextual prompts, less than half of the participants authorized access to background location events. Challenges arise when users misinterpret authorization prompts, limiting the success of event-based collection [39,41,51-53]. To improve contextual prompt accuracy, Wijesekera et al [40] developed a machine learning–based, context-aware permission model that improves the permission accuracy rate beyond contextual prompts, which should be considered if it becomes available on consumer mobile devices. However, the current model suggests providing “generic but well-formed data” when an app is denied access. In the context of health sensing apps, this can skew results and create adverse health interventions. We recommend that such systems be designed to communicate denials to the app and that apps be constructed to handle denials to ensure data fidelity. Further research must be performed to balance the collection abilities of mobile sensing apps and protect the privacy and security of their users [54].
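As a sketch of what handling denials could look like on iOS, the hypothetical check below flags an unavailable location stream so that analyses can treat the data as missing rather than silently absent.

```swift
import CoreLocation

// Hypothetical denial check: surface missing permissions instead of skewing data.
func auditLocationAuthorization(_ manager: CLLocationManager) {
    switch manager.authorizationStatus { // instance property, iOS 14+
    case .authorizedAlways:
        break // background location events can arrive as expected
    case .authorizedWhenInUse, .denied, .restricted:
        print("location stream unavailable; mark these data as missing, not zero")
    case .notDetermined:
        print("authorization not yet requested")
    @unknown default:
        break
    }
}
```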

Polling-Based Passive Data Collection

For polling, implement a persistent reminder that can be used both as a means of collecting data and as a reminder to complete EMA tasks. Collecting data through background tasks presented significant challenges due to the implied consent model for granting background runtime that is inherent to mobile OSs, as described in previous work [36,55]. Although Android devices are generally more permissive, too many factors are under consideration to consistently guarantee background runtime. However, as we hypothesized, using a persistent reminder as a secondary means to poll for data yielded more successful data collection. Installing a persistent reminder on the user’s home screen signals to the OS the intent to allocate the necessary resources to maintain the reminder’s regular updates, which can also be used for passive data collection.

Many users did not comply with installing the widget when prompted by the app, especially among Android users. Verbal prompts to install the widget did help to improve compliance on Android and iOS. Our qualitative analysis indicated some resistance to installing the widget because users did not want to change their home screen layout, and some were unfamiliar with how to install widgets or did not use widgets.

Several methods are available to overcome the resistance to using widgets. First, participants who more directly benefit from the data collection might be more willing to install a widget. For instance, participants affected by a disease might be more willing to install a widget, especially if the app offers just-in-time, adaptive interventions. In addition, study incentives can also be increased for participants who install the widget. To assist users unfamiliar with widgets, the app can provide a video tutorial on installing the widget, or instructions can be provided during study onboarding.

Limitations

Differences between iOS and Android devices, and between the users of those devices, beyond the factors we tested could act as confounders. In addition, we studied passive sensing for research studies, and the results do not necessarily apply to passive sensing in other contexts such as crowdsourcing or commercial purposes.

This study involved college students, and while they represented diverse academic disciplines, races, and countries of origin, they were all aged 19-26 years and tended to be more familiar with mobile devices. Further work should be conducted to observe how these results would translate to a larger, more heterogeneous population. Individuals less familiar with mobile devices, such as geriatric patients, may exhibit more difficulty granting and declining authorization due to different privacy preferences or familiarity with smartphone technology.

For persistent reminders, only two Android users installed the widget, with one device refreshing more than once per day and the other refreshing 23 of the 30 days. These preliminary results are promising. However, further work is required to ensure the results are consistent with a larger sample size.

Future Work

Additional modifications to the mobile apps could be implemented and tested in follow-up studies. Although Android does not allow contextual prompts, contextual prompts could be mimicked by randomly sending a notification asking for background location access or notification access. Such a study would eliminate potential confounding factors that could have influenced the results.

Our sensing apps could be used in additional health studies incorporating other populations with different motivations to comply with study procedures. Comparing our results with studies involving more diverse health issues would be an interesting avenue for future work.

Comparison With Prior Work

Numerous studies have contributed to a comprehensive understanding of EMA compliance across research fields [56]. In a review of EMA studies, Wrzus and Neubauer [57] found that compliance cannot be predicted by the number of assessments or the length of the study but that financial incentives did improve compliance rates. Murray et al [58] explored the role of participants’ emotional states in affecting compliance. In the related field of crowdsourcing [59], gamification can improve response rates [60]. Other efforts to improve EMA compliance include the work by Schneider et al [61] on just-in-time, adaptive EMA to reduce the burden on participants. Zhang et al [49] explored unlock journaling, where users unlock their devices by completing an EMA. Our work builds on these efforts by examining how notification permissions affected EMA compliance.

Mobile sensing depends upon gaining proper authorization to collect data and interrupting participants to initiate an EMA. Research into permissions on mobile devices has shown that users often misinterpret permission prompts [52,53]. Wijesekera et al [39,40] showed that contextual prompts help users correctly interpret permission prompts. Alsoubai et al [41] profiled users to help understand differing privacy strategies, which helps improve intelligent permission systems [40]. Our mobile sensing apps explored contextual prompts and permissions and their role in passive data collection.

Prior work has demonstrated that collecting consistent data across iOS and Android devices remains challenging [62]. Most mobile sensing studies use Android devices due to their more open programming interface, but some work has been conducted to improve mobile sensing on iOS devices. The AWARE-iOS research team [55] explored background data collection methods on iOS and developed guidelines for sustainable data collection on iOS. AWARE-iOS has successfully collected passive data on iOS devices [63]. RADAR-base [64,65] is an open-source mobile health platform for collecting and analyzing large-scale data from various devices including passive and active mobile apps. We expand upon this inspirational prior work by adding design guidelines for consistent active and passive data collection across Android and iOS devices. Consistent data collection is needed to support just-in-time adaptive interventions on consumer mobile devices, necessitating further research [66].

Conclusions

We developed and tested mobile sensing apps for iOS and Android to answer our RQs: (RQ1) How do contextual prompting and setup prompting affect scheduled notification delivery and the response rate of notification-initiated EMA? (RQ2) Which authorization paradigm, setup or contextual prompting, is more successful in leading users to grant authorization to receive background events? and (RQ3) Which polling-based method, persistent reminders or scheduled background tasks, completes more background sessions? Although contextual prompts for notification authorization on iOS devices did not impact EMA compliance rates compared to setup prompts on Android devices, many iOS users reported not receiving notifications. For background event authorization, contextual prompts on iOS devices were 55.5% more effective in gaining authorization than setup prompts on Android devices. Finally, persistent reminders completed background sessions 266.5% more often than traditional background tasks did. However, we observed some user resistance to installing persistent reminders. Although mobile sensing on consumer mobile devices continues to exhibit challenges, our results suggest that persistent reminders and proper authorization procedures can improve user compliance.

Acknowledgments

The project described was supported by award number 2406251 from the National Science Foundation (NSF) under the Smart Health and Biomedical Research in the Era of Artificial Intelligence and Advanced Data Science program. This research was partially funded by the National Institute of General Medical Sciences grant U54GM138062 and the Medical Research Award fund of the Hawaii Community Foundation grant MedRes_2023_00002689.

Data Availability

The datasets generated or analyzed during this study are not publicly available due to institutional review board restrictions but are available from the first author upon reasonable request.

Conflicts of Interest

None declared.

Multimedia Appendix 1

iCHECK Checklist for Digital Health Implementations.

PDF File (Adobe PDF File), 172 KB

  1. Jongs N, Jagesar R, van Haren NEM, Penninx BWJH, Reus L, Visser PJ, et al. A framework for assessing neuropsychiatric phenotypes by using smartphone-based location data. Transl Psychiatry. 2020;10(1):211. [FREE Full text] [CrossRef] [Medline]
  2. Swain VD, Gao L, Wood WA, Matli SC, Abowd GD, De Choudhury M. Algorithmic power or punishment: information worker perspectives on passive sensing enabled AI phenotyping of performance and wellbeing. 2023. Presented at: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems; April 19, 2023:1-17; Hamburg, Germany. [CrossRef]
  3. Rooksby J, Morrison A, Murray-Rust D. Student perspectives on digital phenotyping: the acceptability of using smartphone data to assess mental health. 2019. Presented at: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; May 02, 2019:1-14; Glasgow, Scotland Uk. [CrossRef]
  4. Kline A, Voss C, Washington P, Haber N, Schwartz H, Tariq Q, et al. Superpower glass. GetMobile: Mobile Comput Commun. 2019;23(2):35-38. [CrossRef]
  5. Washington P, Voss C, Kline A, Haber N, Daniels J, Fazel A, et al. SuperpowerGlass: a wearable aid for the at-home therapy of children with autism. Proc ACM Interact Mobile Wearable Ubiquitous Technol. 2017;1(3):1-22. [CrossRef]
  6. Washington P, Kline A, Mutlu O, Leblanc E, Hou C, Stockham N, et al. Activity recognition with moving cameras and few training examples: applications for detection of autism-related headbanging. 2021. Presented at: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems; May 8, 2021:1-7; New York, NY. [CrossRef]
  7. Washington P, Voss C, Haber N, Tanaka S, Daniels J, Feinstein C, et al. A wearable social interaction aid for children with autism. 2016. Presented at: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems; May 7, 2016:2348-2354; San Jose, CA. [CrossRef]
  8. Voss C, Schwartz J, Daniels J, Kline A, Haber N, Washington P, et al. Effect of wearable digital intervention for improving socialization in children with autism spectrum disorder: a randomized clinical trial. JAMA Pediatr. 2019;173(5):446-454. [FREE Full text] [CrossRef] [Medline]
  9. Suruliraj B, Bessenyei K, Bagnell A, McGrath P, Wozney L, Orji R, et al. Mobile sensing apps and self-management of mental health during the COVID-19 pandemic: web-based survey. JMIR Form Res. 2021;5(4):e24180. [FREE Full text] [CrossRef] [Medline]
  10. Sun Y, Kargarandehkordi A, Slade C, Jaiswal A, Busch G, Guerrero A, et al. Personalized deep learning for substance use in Hawaii: protocol for a passive sensing and ecological momentary assessment study. JMIR Res Protocol. 2024;13:e46493. [FREE Full text] [CrossRef] [Medline]
  11. Kargarandehkordi A, Slade C, Washington P. Personalized AI-driven real-time models to predict stress-induced blood pressure spikes using wearable devices: proposal for a prospective cohort study. JMIR Res Protocol. 2024;13:e55615. [FREE Full text] [CrossRef] [Medline]
  12. Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1-32. [CrossRef] [Medline]
  13. van Berkel N, Ferreira D, Kostakos V. The experience sampling method on mobile devices. ACM Comput Surv. 2017;50(6):1-40. [CrossRef]
  14. Doherty K, Balaskas A, Doherty G. The design of ecological momentary assessment technologies. Interact Comput. 2020;32(1):278. [CrossRef]
  15. Doherty K, Marcano-Belisario J, Cohn M, Mastellos N, Morrison C, Car J, et al. Engagement with mental health screening on mobile devices: results from an antenatal feasibility study. 2019. Presented at: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; May 2, 2019:1-15; Glasgow, Scotland. [CrossRef]
  16. Huang YN, Zhao S, Rivera M, Hong JI, Kraut R. Predicting well-being using short ecological momentary audio recordings. 2021. Presented at: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems; May 8, 2021:1-7; Yokohama, Japan. [CrossRef]
  17. Kalantarian H, Washington P, Schwartz J, Daniels J, Haber N, Wall D. A gamified mobile system for crowdsourcing video for autism research. 2018. Presented at: IEEE International Conference on Healthcare Informatics (ICHI); July 26, 2018:350-352; New York, NY. [CrossRef]
  18. Omberg L, Chaibub Neto E, Perumal TM, Pratap A, Tediarjo A, Adams J, et al. Remote smartphone monitoring of Parkinson's disease and individual response to therapy. Nat Biotechnol. 2022;40(4):480-487. [CrossRef] [Medline]
  19. Lane ND, Miluzzo E, Lu H, Peebles D, Choudhury T, Campbell AT. A survey of mobile phone sensing. IEEE Commun Mag. 2010;48(9):140-150. [CrossRef]
  20. Meegahapola L, Gatica-Perez D. Smartphone sensing for the well-being of young adults: a review. IEEE Access. 2021;9:3374-3399. [CrossRef]
  21. Asselbergs J, Ruwaard J, Ejdys M, Schrader N, Sijbrandij M, Riper H. Mobile phone-based unobtrusive ecological momentary assessment of day-to-day mood: an explorative study. J Med Internet Res. 2016;18(3):e72. [FREE Full text] [CrossRef] [Medline]
  22. Wang W, Mirjafari S, Harari G, Ben-Zeev D, Brian R, Choudhury T, et al. Social sensing: assessing social functioning of patients living with schizophrenia using mobile phone sensing. 2020. Presented at: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems; April 23, 2020:1-15; New York, NY. [CrossRef]
  23. Adler DA, Wang F, Mohr DC, Choudhury T. Machine learning for passive mental health symptom prediction: generalization across different longitudinal mobile sensing studies. PLoS One. 2022;17(4):e0266516. [FREE Full text] [CrossRef] [Medline]
  24. Assi K, Meegahapola L, Droz W, Kun P, De GA, Bidoglia M, et al. Complex daily activities, country-level diversity, and smartphone sensing: a study in Denmark, Italy, Mongolia, Paraguay, and UK. 2023. Presented at: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems; April 19, 2023:1-23; New York, NY. [CrossRef]
  25. Doryab A, Villalba DK, Chikersal P, Dutcher JM, Tumminia M, Liu X, et al. Identifying behavioral phenotypes of loneliness and social isolation with passive sensing: statistical analysis, data mining and machine learning of smartphone and Fitbit data. JMIR Mhealth Uhealth. 2019;7(7):e13209. [FREE Full text] [CrossRef] [Medline]
  26. LiKamWa R, Liu Y, Lane ND, Zhong L. Moodscope: building a mood sensor from smartphone usage patterns. 2013. Presented at: Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services; June 25, 2013:389-402; New York, NY. [CrossRef]
  27. Nepal S, Wang W, Vojdanovski V, Huckins J, daSilva A, Meyer M, et al. COVID Student Study: a year in the life of college students during the COVID-19 pandemic through the lens of mobile phone sensing. 2022. Presented at: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems; April 28, 2022:1-19; New York, NY. [CrossRef]
  28. Xu X, Chikersal P, Dutcher JM, Sefidgar YS, Seo W, Tumminia MJ, et al. Leveraging collaborative-filtering for personalized behavior modeling. Proc ACM Interact Mobile Wearable Ubiquitous Technol. 2021;5(1):1-27. [CrossRef]
  29. Lind MN, Byrne ML, Wicks G, Smidt AM, Allen NB. The effortless assessment of risk states (EARS) tool: an interpersonal approach to mobile sensing. JMIR Ment Health. 2018;5(3):e10334. [FREE Full text] [CrossRef] [Medline]
  30. Birney A. Android adaptive battery: everything you need to know. Android Authority. Feb 26, 2024. URL: https://www.androidauthority.com/android-adaptive-battery-explained-3223097/# [accessed 2024-11-12]
  31. Use focus on your iPhone or iPad. Apple support. URL: https://support.apple.com/en-us/HT212608 [accessed 2023-11-13]
  32. Auda J, Weber D, Voit A, Schneegass S. Understanding user preferences towards rule-based notification deferral. 2018. Presented at: Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems; April 20, 2018:1-6; New York, NY. [CrossRef]
  33. Get into "Focus mode" at work with help from Android. Google Blog. URL: https://tinyurl.com/2rx37d59 [accessed 2023-12-13]
  34. Li T, Haines J, De Eguino MFR, Hong J, Nichols J. Alert now or never: understanding and predicting notification preferences of smartphone users. ACM Trans Comput-Hum Interact. 2023;29(5):1-33. [CrossRef]
  35. Mehrotra A, Pejovic V, Vermeulen J, Hendley R, Musolesi M. My phone and me: understanding people's receptivity to mobile notifications. 2016. Presented at: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems; May 07, 2016:1021-1032; New York, NY. [CrossRef]
  36. Boonstra TW, Nicholas J, Wong QJ, Shaw F, Townsend S, Christensen H. Using mobile phone sensor technology for mental health research: integrated analysis to identify hidden challenges and potential solutions. J Med Internet Res. 2018;20(7):e10131. [FREE Full text] [CrossRef] [Medline]
  37. Boonstra TW, Werner-Seidler A, O'Dea B, Larsen ME, Christensen H. Smartphone app to investigate the relationship between social connectivity and mental health. Annu Int Conf IEEE Eng Med Biol Soc. 2017;2017:287-290. [CrossRef] [Medline]
  38. Boonstra TW, Larsen ME, Townsend S, Christensen H. Validation of a smartphone app to map social networks of proximity. PLoS One. 2017;12(12):e0189877. [FREE Full text] [CrossRef] [Medline]
  39. Wijesekera P, Baokar A, Hosseini A, Egelman S, Wagner D, Beznosov K. Android permissions remystified: a field study on contextual integrity. 2015. Presented at: Proceedings of the 24th USENIX Conference on Security Symposium; August 12, 2015; Washington, DC. URL: https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/wijesekera
  40. Wijesekera P, Reardon J, Reyes I, Tsai L, Chen JW, Good N, et al. Contextualizing privacy decisions for better prediction (and protection). 2018. Presented at: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems; April 21, 2018:1-13; New York, NY. [CrossRef]
  41. Alsoubai A, Anaraky RG, Li Y, Page X, Knijnenburg B, Wisniewski PJ. Permission vs. app limiters: profiling smartphone users to understand differing strategies for mobile privacy management. 2022. Presented at: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems; April 29, 2022:1-18; New York, NY. [CrossRef]
  42. Yoon SH, Lee SS, Lee JM, Lee K. Understanding notification stress of smartphone messenger app. 2014. Presented at: CHI '14 Extended Abstracts on Human Factors in Computing Systems; April 26, 2014:1735-1740; New York, NY. [CrossRef]
  43. Kang S, Park CY, Kim A, Cha N, Lee U. Understanding emotion changes in mobile experience sampling. 2022. Presented at: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems; April 29, 2022:1-14; New Orleans, LA. [CrossRef]
  44. Chan L, Swain VD, Kelley C, de Barbaro K, Abowd GD, Wilcox L. Students' experiences with ecological momentary assessment tools to report on emotional well-being. Proc ACM Interact Mobile Wearable Ubiquitous Technol. 2018;2(1):1-20. [CrossRef]
  45. Lin TC, Su YS, Yang EH, Chen YH, Lee HP, Chang YJ. “Put it on the Top, I’ll Read it Later”: investigating users’ desired display order for smartphone notifications. 2021. Presented at: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems; May 7, 2021:1-13; Yokohama, Japan. [CrossRef]
  46. Pejovic V, Musolesi M, Mehrotra A. Investigating the role of task engagement in mobile interruptibility. 2015. Presented at: MobileHCI '15: Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct; August 24, 2015:1100-1105; Copenhagen, Denmark. [CrossRef]
  47. Mehrotra A, Musolesi M. Intelligent notification systems: a survey of the state of the art and research challenges. ArXiv. Preprint posted online on November 28, 2017. [CrossRef]
  48. Kandappu T, Mehrotra A, Misra A, Musolesi M, Cheng S, Meegahapola L. PokeME: applying context-driven notifications to increase worker engagement in mobile crowd-sourcing. 2020. Presented at: CHIIR '20: Proceedings of the 2020 Conference on Human Information Interaction and Retrieval; March 14, 2020:3-12; Vancouver, BC. [CrossRef]
  49. Zhang X, Pina LR, Fogarty J. Examining unlock journaling with diaries and reminders for in situ self-report in health and wellness. Proc SIGCHI Conf Hum Factor Comput Syst. 2016;2016:5658-5664. [FREE Full text] [CrossRef] [Medline]
  50. Barth A, Datta A, Mitchell J, Nissenbaum H. Privacy and contextual integrity: framework and applications. 2006. Presented at: IEEE Symposium on Security and Privacy (S&P'06); May 21-24, 2006:1-15; Berkeley/Oakland, CA. [CrossRef]
  51. Bonné B, Peddinti S, Bilogrevic I, Taft N. Exploring decision making with Android's runtime permission dialogs using in-context surveys. 2017. Presented at: Proceedings of the Thirteenth USENIX Conference on Usable Privacy and Security; July 12-14, 2017:195-210; Santa Clara, CA. URL: https://tinyurl.com/2s37s4ss
  52. Kelley P, Consolvo S, Cranor L, Jung J, Sadeh N, Wetherall D. A conundrum of permissions: installing applications on an Android smartphone. 2012. Presented at: Workshop on Usable Security (USEC '12); March 2, 2012:68-79; Berlin, Heidelberg. [CrossRef]
  53. Tan J, Nguyen K, Theodorides M, Negrón-Arroyo H, Thompson C, Egelman S, et al. The effect of developer-specified explanations for permission requests on smartphone user behavior. 2014. Presented at: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; April 26, 2014:91-100; Toronto, ON. [CrossRef]
  54. Dehling T, Gao F, Schneider S, Sunyaev A. Exploring the far side of mobile health: information security and privacy of mobile health apps on iOS and Android. JMIR Mhealth Uhealth. 2015;3(1):e8. [FREE Full text] [CrossRef] [Medline]
  55. Nishiyama Y, Ferreira D, Eigen Y, Sasaki W, Okoshi T, Nakazawa J, et al. iOS crowd-sensing won't hurt a bit!: AWARE framework and sustainable study guideline for iOS platform. 2020. Presented at: Distributed, Ambient and Pervasive Interactions: 8th International Conference, DAPI 2020, Held as Part of HCII 2020; July 19-24, 2020:223-243; Copenhagen, Denmark. URL: https://link.springer.com/chapter/10.1007/978-3-030-50344-4_17
  56. Stone AA, Schneider S, Smyth JM. Evaluation of pressing issues in ecological momentary assessment. Annu Rev Clin Psychol. 2023;19:107-131. [FREE Full text] [CrossRef] [Medline]
  57. Wrzus C, Neubauer AB. Ecological momentary assessment: a meta-analysis on designs, samples, and compliance across research fields. Assessment. 2023;30(3):825-846. [FREE Full text] [CrossRef] [Medline]
  58. Murray AL, Brown R, Zhu X, Speyer LG, Yang Y, Xiao Z, et al. Prompt-level predictors of compliance in an ecological momentary assessment study of young adults' mental health. J Affect Disord. 2023;322:125-131. [FREE Full text] [CrossRef] [Medline]
  59. Jaimes LG, Vergara-Laurens IJ, Raij A. A survey of incentive techniques for mobile crowd sensing. IEEE Internet Things J. 2015;2(5):370-380. [CrossRef]
  60. Morschheuser B, Hamari J, Koivisto J. Gamification in crowdsourcing: a review. 2016. Presented at: 49th Hawaii International Conference on System Sciences (HICSS); January 5-8, 2016:4375-4384; Koloa, HI. [CrossRef]
  61. Schneider S, Junghaenel DU, Smyth JM, Fred Wen CK, Stone AA. Just-in-time adaptive ecological momentary assessment (JITA-EMA). Behav Res Methods. 2024;56(2):765-783. [FREE Full text] [CrossRef] [Medline]
  62. Blunck H, Kjærgaard M, Bouvin N, Lukowicz P, Franke T, Wüstenberg M, et al. On heterogeneity in mobile sensing applications aiming at representative data collection. 2013. Presented at: UbiComp '13 Adjunct: Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication; September 8, 2013:1087-1098; Zurich, Switzerland. [CrossRef]
  63. Nishiyama Y, Ferreira D, Sasaki W, Okoshi T, Nakazawa J, Dey A, et al. Using iOS for inconspicuous data collection: a real-world assessment. 2020. Presented at: UbiComp/ISWC '20 Adjunct: Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers; September 12, 2020:261-266; Virtual Event, Mexico. [CrossRef]
  64. Ranjan Y, Rashid Z, Stewart C, Conde P, Begale M, Verbeeck D, Hyve, et al. RADAR-Base: open source mobile health platform for collecting, monitoring, and analyzing data using sensors, wearables, and mobile devices. JMIR Mhealth Uhealth. 2019;7(8):e11734. [FREE Full text] [CrossRef] [Medline]
  65. Sun S, Folarin AA, Ranjan Y, Rashid Z, Conde P, Stewart C, et al. Using smartphones and wearable devices to monitor behavioral changes during COVID-19. J Med Internet Res. 2020;22(9):e19992. [FREE Full text] [CrossRef] [Medline]
  66. Teepe GW, Da Fonseca A, Kleim B, Jacobson NC, Sanabria AS, Tudor Car L, et al. Just-in-time adaptive mechanisms of popular mobile apps for individuals with depression: systematic app search and literature review. J Med Internet Res. 2021;23(9):e29412. [FREE Full text] [CrossRef] [Medline]


Abbreviations

EMA: ecological momentary assessment
OS: operating system
RQ: research question
UI: user interface


Edited by A Mavragani; submitted 20.12.23; peer-reviewed by TAR Sure, S Mitra, E Larson; comments to author 06.04.24; revised version received 29.04.24; accepted 07.09.24; published 18.11.24.

Copyright

©Christopher Slade, Roberto M Benzo, Peter Washington. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 18.11.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.