Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/14904.

Counting Bites With Bits: Expert Workshop Addressing Calorie and Macronutrient Intake Monitoring

Viewpoint

1Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, United States

2Department of Computer Science, Northwestern University School of Engineering, Evanston, IL, United States

3Department of Electrical and Computer Engineering, Northwestern University McCormick School of Engineering, Evanston, IL, United States

4School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States

5Department of Materials Science and Engineering, Northwestern University McCormick School of Engineering, Evanston, IL, United States

6Department of Biomedical Engineering, Northwestern University McCormick School of Engineering, Evanston, IL, United States

Corresponding Author:

Nabil Alshurafa, PhD

Department of Preventive Medicine

Northwestern University Feinberg School of Medicine

680 N Lake Shore Drive, Suite 1400

Chicago, IL

United States

Phone: 1 3125034517

Email: nabil@northwestern.edu


Abstract

Background: Conventional diet assessment approaches such as the 24-hour self-reported recall are burdensome, suffer from recall bias, and are inaccurate in estimating energy intake. Wearable sensor technology, coupled with advanced algorithms, is increasingly showing promise in its ability to capture behaviors that provide useful information for estimating calorie and macronutrient intake.

Objective: This paper aimed to summarize current technological approaches to monitoring energy intake on the basis of expert opinion from a workshop panel and to make recommendations to advance technology and algorithms to improve estimation of energy intake.

Methods: A 1-day invitational workshop sponsored by the National Science Foundation was held at Northwestern University. A total of 30 participants, including population health researchers, engineers, and intervention developers, from 6 universities and the National Institutes of Health participated in a panel discussing the state of evidence with regard to monitoring calorie intake and eating behaviors.

Results: Calorie monitoring using technological approaches can be characterized by 3 domains: (1) image-based sensing (eg, wearable and smartphone-based cameras combined with machine learning algorithms); (2) eating action unit (EAU) sensors (eg, to measure feeding gestures and chewing rate); and (3) biochemical measures (eg, serum and plasma metabolite concentrations). We discussed how each domain functions, provided examples of promising solutions, and highlighted potential challenges and opportunities in each domain. Image-based sensor research requires improved ground truth (context and known information about the foods), accurate food image segmentation and recognition algorithms, and reliable methods of estimating portion size. Research in the EAU domain is limited by a poor understanding of when systems (device and inference algorithm) succeed and fail, by the need for privacy-protecting methods of capturing ground truth, and by uncertainty in food categorization. Although biochemical sensing is an exciting novel technology, its challenges include a lack of robustness to environmental effects (eg, temperature change) and mechanical impact, instability of wearable sensor performance over time, and single-use design.

Conclusions: Conventional approaches to calorie monitoring rely predominantly on self-reports. These approaches can gain contextual information from the image-based and EAU-based domains, which can map automatically captured food images to a food database and detect proxies that correlate with food volume and caloric intake. Although the continued development of advanced machine learning techniques will improve the accuracy of such wearables, biochemical sensing provides an electrochemical analysis of sweat using soft bioelectronics on human skin, enabling noninvasive measurement of chemical compounds that provide insight into the digestive and endocrine systems. Future computing research should focus on reducing the burden of wearable sensors, aligning data across multiple devices, automating methods of data annotation, increasing rigor in studying system acceptability, increasing battery lifetime, and rigorously testing the validity of the measures. Such research requires moving promising technological solutions from the controlled laboratory setting to the field.

J Med Internet Res 2019;21(12):e14904

doi:10.2196/14904

Introduction

The marked rise in obesity, particularly in the United States, is a complex sociodemographic and public health problem that is largely driven by poor diet, excessive caloric intake, and insufficient caloric expenditure [1]. Weight loss interventions in clinical and research settings have sought to curb this growing health concern by providing recommendations on decreasing caloric intake and increasing caloric expenditure [2]. Subjective diet assessments, including food records, 24-hour dietary recalls, and food frequency questionnaires (FFQs), are often used in weight loss interventions to evaluate diet adherence and behavior change, although they are burdensome and prone to biased measurement of dietary intake and physical activity [3,4]. There is increasing interest in using health and fitness wearable devices to measure eating behaviors because they address the limitations of subjective diet assessments; such devices are projected to become a US $48.2 billion market by 2023 [5]. Broad deployment of wearable activity trackers and heart rate monitors in the last decade has coincided with the need to reduce errors and improve our understanding of diet behaviors, calorie counts, and nutrient intake. Using wearable technologies not only improves our understanding of diet behaviors but also aids the design of novel interventions to prevent overeating. Although emerging data suggest that diet and exercise programs are more successful at achieving weight loss and healthy behavior change when they are delivered as mobile health (mHealth) interventions (ie, via a mobile phone) than as non-mHealth interventions (controls) [6], the effectiveness of combining mHealth-based interventions with wearable technologies to produce dietary change has yet to be properly studied.

To date, 3 technology-enabled wearable domains for calorie and nutrient monitoring have emerged: (1) image-sensing technology (eg, cameras coupled with novel algorithms that detect and analyze foods in an image using a food database); (2) eating action unit (EAU)–based technology (eg, wrist-worn sensors to capture eating and diet behaviors); and (3) biochemical measures (eg, sweat-sensing wearable technology that measures nutrient status). Although their impact on improving care and health outcomes remains untested, the validity of such devices is a prominent concern among researchers. Prior narrative reviews and surveys have focused on describing existing technologies [7-9] and algorithms [10], along with advantages and disadvantages of each type of wearable. Here, we have described the outcomes of a 1-day invitational workshop that identified challenges in developing technology-enabled, automated calorie-monitoring methods and proposed opportunities for future computing research in this field. We have also discussed how technology and objective measurements can support conventional subjective diet assessment approaches.


Methods

An expert, consensus-building 1-day workshop, supported by the National Science Foundation and organized by Northwestern University, was held on June 20, 2017, in Chicago, Illinois. The primary aim of the workshop was to discuss the development, evaluation, and use of technology to detect and understand diet behaviors and estimate calorie and macronutrient intake.

A total of 30 participants from 6 universities and from the National Institutes of Health were selected to participate in the workshop. To capture varying perspectives across multiple fields, participants included population health researchers, such as behavioral scientists, nutritionists, obesity epidemiologists, and intervention developers (Bonnie Spring, Lisa Neff, Kevin Hall, and Marilyn Cornelius); computer scientists (Nabil Alshurafa, Adam Hoover, Edward Delp, and Mingui Sun); and engineers in biomedical, materials science, and computer technology (Roozbeh Ghaffari, John Rogers, Veena Misra, Adam Hauke, Andrew Jajack, and Jason Heikenfeld). Multimedia Appendix 1 provides a list of participants at the workshop. Owing to the exploratory nature of this workshop, the organizers did not apply a theoretical framework.

Teams of at least 2 participants were organized to lead discussions on the following topics: types of technology-enabled calorie and macronutrient monitoring, potential research gaps and technical challenges to advance the capture of energy intake, and methods for how technology can assist conventional subjective diet assessments. Workshop participants were also randomly separated into 2 groups to delineate key topics for future research. Overall, there was consensus regarding the need to refine technology-supported calorie- and macronutrient-monitoring approaches. The primary deliverable was a set of presentations delineating current gold standards for measuring energy intake and an appraisal of the state of research related to calorie- and macronutrient-monitoring technology. Experts within each technology-enabled wearable domain identified new insights and opportunities from these presentations and conversations, which were used to inform the final recommendations presented in this paper. A final review of the recommendations was performed by the authors of this paper.


Results

Technology-enabled measures can reduce participant burden and increase granularity of diet data collection through automated measures [11]. We have explained each technology-enabled domain and identified key challenges to advancing the technology in the sections that follow (summarized in Figure 1). Each section also highlights potential research opportunities to advance technology-enabled devices in measuring calorie and macronutrient intake.

Image-Based Sensing

Image-based sensing systems [12,13] that combine wearable or smartphone-based cameras with advanced computational machine learning models, in particular deep learning [14], have the capability to identify pixels in an image that represent foods (this is known as food segmentation), provide accurate timestamps when a meal is consumed (this is known as the metadata associated with the image), estimate consumption duration and frequency, and ascertain geographic eating locations (these 2 are commonly referred to as contextual information of the eating event). To enable these functions, researchers have focused on using image-based systems to identify food types [15]. These systems combine image-processing techniques and big data analytics to estimate energy contents for a meal [16,17] from food and nutrient databases, such as the Nutrition Data System for Research and the Food and Nutrient Database for Dietary Studies (FNDDS) [18]. Thus, image-based systems are unique as they use known foods in a database to guide the estimation of calorie intake and can provide a fairly accurate analysis of the consumed food types. However, accurate estimation of energy and nutrients in an image relies on the system’s ability to distinguish foods from the image background and to identify (or label) food items. Although there are promising advancements, several challenges remain in automating the estimation of calorie intake from cameras (Table 1).
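To make this pipeline concrete, the following minimal Python sketch shows how the pieces described above compose: segmented and recognized food regions, each with an estimated portion size, are mapped to energy through a nutrient database lookup. The FoodRegion structure and the per-100-g energy values are illustrative placeholders, not components or entries from any cited system or database.

```python
# A minimal sketch of the image-to-calories pipeline: segmentation and
# recognition produce labeled regions with portion estimates, which are
# then mapped to energy via a nutrient database lookup. Values are
# illustrative placeholders in the spirit of an FNDDS lookup.
from dataclasses import dataclass

KCAL_PER_100G = {"apple": 52.0, "pizza": 266.0, "rice": 130.0}

@dataclass
class FoodRegion:
    label: str    # food type from the recognition model
    grams: float  # portion size from the portion estimation model

def estimate_meal_energy(regions: list[FoodRegion]) -> float:
    """Map recognized food regions to total energy (kcal)."""
    return sum(KCAL_PER_100G[r.label] / 100.0 * r.grams for r in regions)

# Example: one segmented image containing two recognized foods.
meal = [FoodRegion("apple", 150.0), FoodRegion("rice", 200.0)]
print(f"Estimated intake: {estimate_meal_energy(meal):.0f} kcal")  # ~338 kcal
```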

Figure 1. Overview of challenges in developing technology-enabled, automated caloric-monitoring methods.
Table 1. Challenges and research opportunities in adopting image-based sensing methods.

Challenge: Lack of publicly available large-scale food image datasets with comprehensive ground truth labels. Research opportunity: develop feasible, scalable methods to annotate food images crawled from the Web or collected from nutrition studies.

Challenge: Inaccurate food image–segmentation algorithms. Research opportunity: reduce the burden of requiring fine-grained pixel-level training data for image segmentation and leverage accurate image-level or region-level information to improve food segmentation performance.

Challenge: Nonrobust food image–recognition systems. Research opportunity: design deep neural network–based models to capture structures in the image that are associated with specific foods and incorporate contextual information to improve robustness.

Challenge: Inaccurate food portion size estimation in images. Research opportunity: develop methods that can directly link food images to portion size; explore 3-dimensional information from newer camera sensors on mobile devices.
Challenge: Lack of Publicly Available Large-Scale Food Image Datasets With Comprehensive Ground Truth Labels

A ground truth label, derived from observable data, is the objective verification of particular properties of a digital image, used to test the accuracy of automated image analysis processes. The ground truth of food images includes known, fixed features such as the pixels representing food objects in the image (used for food segmentation), food types (used for food recognition), and food portion sizes in grams (used for portion size estimation). Such information is necessary to train the image analysis system to accurately estimate calorie intake from the image. Several publicly available food datasets contain substantial numbers of food images [18] and provide general labels of different food types, but none provide information on portion size, segmented food items, or additional context of the image source. To reliably build a machine learning model that automatically maps images to calories, image-based systems need large collections of food images (ie, food image datasets) with the necessary ground truth labels to improve the learned models. These ground truth labels should clearly delineate different food items even if they are on a single plate and should include portion size information for each food item.
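As an illustration of what comprehensive ground truth entails, the sketch below shows one hypothetical per-image annotation record; the field names are ours, not a published schema, but the record carries exactly the three label types named above (segmentation, food type, and portion size).

```python
# A hedged sketch of a per-image ground truth record; field names and
# values are illustrative, not from any existing dataset.
annotation = {
    "image_id": "meal_0001.jpg",
    "source": "nutrition_study",  # context of image capture
    "items": [
        {
            "food_type": "grilled chicken",  # label for recognition
            "segmentation": [[112, 40], [180, 46], [175, 120]],  # polygon (pixels)
            "portion_g": 85.0,  # ground truth weight in grams
        },
        {
            "food_type": "white rice",
            "segmentation": [[30, 140], [95, 150], [60, 210]],
            "portion_g": 160.0,
        },
    ],
}
```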

Research Opportunities

To address the need for constructing large-scale food datasets with food images that provide comprehensive ground truth information, a solution is to merge food images sourced from the internet or from nutrition studies with manual annotation from crowd-sourcing platforms. Amazon Mechanical Turk (AMT) has been used for food image collection and annotation tasks [19,20], although AMT is not tailored for building large food image datasets efficiently with proper labels. This inefficiency may be partly attributed to its high cost and dependency on crowdsource workers unfamiliar with the context in which the data were collected (eg, restaurant food vs homemade meal). There are opportunities to develop novel tools that not only label foods in the image but also remove irrelevant images to aid crowdsource workers in accurately labeling necessary data. These approaches can be developed using a combination of crowd input and advanced automatic image analysis techniques [21].

Challenge: Inaccurate Food Image–Segmentation Algorithms

Image segmentation is the process of algorithmically partitioning an image into disjoint, coherent regions on the basis of prespecified features. Food image segmentation is important for multifood images, in which subsequent analysis, such as recognition and portion size estimation, depends on having an accurate segmentation of each food in the image. Owing to the complexity of food images (eg, occlusion, hidden or mixed foods, and shadows), accurate food image segmentation is a difficult task and affects the ability of image-based sensing systems to identify food types. Previous studies [22-26] have used image-segmentation methods such as contour-to-region, graph-based, and superpixel-based approaches. A segmentation method based on deep neural networks has been proposed [27] to reliably build a model that automatically segments foods. However, these models require pixel-level food labels or labeled bounding boxes to indicate regions containing foods, which are time-consuming and computationally expensive to produce.
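As a concrete example of one family cited above, the sketch below oversegments a meal photo into superpixels with SLIC from scikit-image; downstream models would then merge and classify these regions as food or non-food. The image path is a placeholder, and the parameter values are illustrative.

```python
# A minimal sketch of superpixel-based preprocessing for food image
# segmentation, using SLIC from scikit-image. The image path is a
# placeholder; n_segments and compactness are illustrative settings.
from skimage import io
from skimage.segmentation import slic, mark_boundaries

image = io.imread("meal_photo.jpg")

# Oversegment the plate into ~200 perceptually coherent superpixels;
# a downstream classifier then merges and labels them.
segments = slic(image, n_segments=200, compactness=10, start_label=1)

overlay = mark_boundaries(image, segments)
io.imsave("meal_superpixels.png", (overlay * 255).astype("uint8"))
```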

Research Opportunities

Owing to the inefficiency and high expense of pixel-level annotation, some studies have applied graph-based methods to select regions containing foods [28] or have explored techniques in which only image-level labels indicating the presence or absence of foods are required instead of pixel-level labels of food objects [29]. Opportunities remain to advance such weakly supervised methods and, above all, to create efficient segmentation algorithms that can supply pixel-level labels and training data for each image in a food dataset.

Challenge: Nonrobust Food Image Recognition Systems

Research [30-32] in food recognition has analyzed multiple features and classification algorithms (aimed at identifying foods) that are effective but mainly restricted to a known food dataset established a priori. Researchers [27,33,34] use either an end-to-end deep neural network or image features with variants of the support vector machine (SVM) [35] classification algorithm to optimize food recognition. However, many studies on food recognition assume that only 1 food item is present in an image and apply a multiclass classification algorithm to identify the food. In real-world scenarios, there is typically more than 1 food item in an image; each food item is first segmented and then described by handcrafted or deep features and classified, for example, by an SVM [22,36].
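The per-segment classification step described above can be sketched as follows: each segmented region is represented by a feature vector (handcrafted or taken from a deep network) and classified with an SVM. The features and labels below are synthetic placeholders, so the reported accuracy is near chance; with real food features, the same pipeline applies.

```python
# A hedged sketch of per-segment food classification with an SVM.
# Features and labels are synthetic stand-ins for real segment features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 128))   # one 128-dim feature vector per segment
y = rng.integers(0, 3, size=300)  # 3 food classes (synthetic, so accuracy ~ chance)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```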

Research Opportunities

Robust and accurate food image recognition remains a challenge because many foods have a deformable appearance and thus lack rigid structure and because there often exist only subtle differences in visual features among different food categories. Factors such as food preparation and personal preferences can also affect the appearance of food ingredients. Deep neural network–based approaches provide opportunities to improve the robustness and accuracy of food recognition systems but depend heavily on well-constructed training datasets and proper selection of neural network architectures. There is also an opportunity to provide contextual information, such as environmental cues and previous diet history, to the food recognition algorithm. There have been advances in restaurant-specific food recognition [23,37,38] in which location and menu information are used to assist with recognition. Others [39] have integrated recipe and cuisine information as context and prior knowledge to aid automatic food recognition. Food patterns across time and dietary preferences are increasingly being shown to improve food classification accuracy [40]. Incorporating contextual cues can be essential to advancing the robustness of food recognition algorithms.

Challenge: Inaccurate Portion Size Estimation in Images

Estimating food portion size from an image is challenging as preparation and consumption impose large variations on food shape and appearance. Several food portion estimation techniques based on reconstructing the 3-dimensional (3D) models of the foods have been developed, which require users to take multiple images or videos or to modify mobile devices [41-45] to enable reconstruction. These approaches work well for irregularly shaped foods, but they do not work well when there are no strongly matched features (ie, corresponding sets of points) occurring on multiple frames. These approaches also require users to capture multiple images from different angles, making them tedious and unsuitable for long-term health monitoring and data collection. Others have focused on developing methods to estimate food portion size from a single-view image [17,46-49]. These methods use geometric model–based techniques that require food labels and food segmentation masks (ie, pixel location of foods in the image). Errors from automatic food classification and image segmentation can propagate into the final portion estimation. In addition, existing methods have only examined small model libraries consisting of foods with simple geometric shapes (eg, apples, burgers, and pizza). Further research is required to develop more comprehensive model libraries capable of dealing with irregularly shaped foods.

Research Opportunities

Despite some promising results from existing approaches, the performance of current portion estimation methods is not yet satisfactory. More recently, several groups [27,50] have developed portion estimation methods using deep learning. However, these techniques estimate food volumes rather than food energy. With food volumes estimated, food density is still required to compute weights, which can then be mapped to food energy using a food composition resource, such as the FNDDS [18]. Therefore, new approaches [16] that can directly link food images to food energy in the image are desirable.
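The volume-to-energy chain described above reduces to simple arithmetic once the intermediate quantities are available, as the following sketch shows; the density and energy-density values are illustrative, in the spirit of an FNDDS lookup rather than exact database entries.

```python
# A worked sketch of the chain: estimated volume -> weight via density
# -> energy via a food composition table. Values are illustrative.

def energy_from_volume(volume_cm3: float, density_g_per_cm3: float,
                       kcal_per_g: float) -> float:
    grams = volume_cm3 * density_g_per_cm3  # volume -> weight
    return grams * kcal_per_g               # weight -> energy

# Example: ~240 cm^3 of cooked rice (assumed ~0.9 g/cm^3, ~1.3 kcal/g)
print(f"{energy_from_volume(240, 0.9, 1.3):.0f} kcal")  # ~281 kcal
```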

Depth sensors and dual-camera configurations are quickly gaining popularity on consumer mobile devices, so more 3D information can be collected without significantly adding to a user's burden in capturing the eating scene. For example, mobile phones equipped with depth sensors enable simultaneous capture of image depth and a standard RGB image. Dual-camera systems capture at least 2 images from slightly different angles, enabling multiview 3D reconstruction techniques. The additional information captured by these mobile devices may improve the accuracy of food portion estimation by providing additional 3D information on food objects.
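As a sketch of how depth data adds 3D information, the following back-projects a depth map to a metric point cloud through a pinhole camera model; points inside a segmented food region could then feed a volume estimate (eg, via a fitted geometric model or a convex hull). The intrinsic parameters and depth map are illustrative, not those of any particular device.

```python
# A minimal sketch: back-project an HxW depth map (meters) to 3D points
# using illustrative pinhole intrinsics (fx, fy, cx, cy).
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Return an HxWx3 point cloud from a depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

depth = np.full((480, 640), 0.45)  # synthetic flat scene 45 cm away
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (480, 640, 3)
```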

Eating Action Unit–Based Sensing

Although existing imaging technologies have shown reasonable success in estimating calorie intake and nutrients from images, there is growing interest in capturing proxies of calorie intake with sensor modalities that raise fewer wearer privacy concerns. EAUs (ie, fine-grained activity units that occur during eating) are a mechanism to understand calorie intake patterns and behaviors. Accelerometer- and gyroscope-based inertial measurement units [51,52] are examples of sensors used to detect EAUs and assess eating patterns. These techniques have been developed as a result of observing feeding gestures (or bites) and their correlation with calorie intake [53,54]. The underlying assumption of EAUs is that by counting the number of bites and estimating average calories per bite, we can provide a reasonable estimate of overall calories consumed, map the number of bites to calories to determine over- or underconsumption, and enable users to automatically quantify their calorie intake using EAU-based devices [55].
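That underlying arithmetic can be sketched directly: detect bite-like peaks in a wrist-rotation signal and scale the count by an assumed average energy per bite. The signal below is synthetic, and the 25 kcal/bite figure is a placeholder assumption, not a validated constant.

```python
# A hedged sketch of the EAU premise: count feeding-gesture peaks in a
# wrist gyroscope signal, then multiply by an assumed kcal-per-bite.
import numpy as np
from scipy.signal import find_peaks

fs = 20  # Hz, assumed wrist gyroscope sampling rate
t = np.arange(0, 60, 1 / fs)
# Synthetic wrist-roll velocity: one bite-like burst every ~6 seconds.
signal = np.sin(2 * np.pi * t / 6) ** 21 \
    + 0.05 * np.random.default_rng(0).normal(size=t.size)

# Require prominent peaks at least 3 s apart (one bite gesture each).
bites, _ = find_peaks(signal, prominence=0.5, distance=3 * fs)

KCAL_PER_BITE = 25.0  # placeholder average, not a validated constant
print(f"{bites.size} bites detected, ~{bites.size * KCAL_PER_BITE:.0f} kcal")
```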

Capturing EAUs enables actionable insight, where information generated can be used by wearers, clinicians, and dietitians in a timely manner. They also enable interventionists to test the efficacy of calorie-informed, just-in-time interventions in close proximity to eating episodes. Although the detection of EAUs has shown promise, several challenges prevent these systems from being adopted in clinical and population settings (Table 2).

Table 2. Challenges and research opportunities in adopting eating action unit–based sensing methods.

Challenge: Limited understanding of the context surrounding system successes and failures. Research opportunity: use wearable video cameras to validate contextual information surrounding when sensor–algorithm pairings fail in real-world settings.

Challenge: Privacy protection in ground truth data collection methods. Research opportunity: identify novel ways of protecting bystanders and other sensitive information in the field of view of cameras, both in hardware and in software, to ensure that wearer privacy concerns are addressed, thereby increasing the likelihood of capturing naturally occurring behavior.

Challenge: Inability to accurately distinguish between food categories. Research opportunity: define the food categories that are most useful for clinicians and researchers for diet interventions and food recalls.
Challenge: Insufficient Understanding of the Context of System Success and Failure

Several systems have shown promise in free-living populations but fall short of delineating the contexts in which they succeed and fail, which prevents others from building on previous work to advance EAU systems under challenging scenarios. For example, gesture EAUs are confounded by smoking action units; however, few studies attempt to consider other challenging contextual scenarios that can confound eating behaviors [56]. Attempts to advance eating detection while considering challenging contexts are limited primarily because confounding contexts are not clearly delineated. A preliminary study of 8 participants who wore a wrist-worn sensor for a few hours in a free-living situation, validated against wearable video cameras, demonstrated an approximately 38% false-discovery rate, with false detections typically corresponding to phone-related gestures [57]. The false-discovery rate could be mitigated by integrating phone usage information with the system. However, data on the contexts in which one device outperforms another are limited, restricting our ability to advance EAU-based eating detection systems.
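For reference, the false-discovery rate used above is the fraction of detected eating gestures that were not true eating; the counts in this sketch are illustrative, not data from the cited study.

```python
# False-discovery rate: FDR = FP / (FP + TP). Counts are illustrative.
true_positives, false_positives = 62, 38
fdr = false_positives / (false_positives + true_positives)
print(f"FDR = {fdr:.0%}")  # 38%
```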

Research Opportunity

Additional data are needed on the strengths and limitations that lead systems to succeed or fail in free-living populations. Understanding the context of success or failure enables identification of the strengths and weaknesses of various systems and advances both the hardware and the algorithms used to solve these challenges. Studies are beginning to identify the contexts in which sensor-algorithm pairings fail; however, they lack validation through visual confirmation. Apart from a few recent studies [57-62], researchers have rarely incorporated wearable video cameras in the field to provide such validation. Researchers should continue using wearable video cameras in free-living populations and clearly state the contexts within which their systems succeed or fail.

Challenge: Inability to Protect Privacy When Using Wearable Video Cameras in the Field

Evaluation of an eating detection system necessitates testing against visually confirmed (with video) ground truth (ie, a means of validating the activity in a real-world setting). Visual confirmation of eating behaviors is the strongest form of ground truth available for EAUs but also one of the most burdensome for the participant and researcher. Many researchers limit their studies to controlled settings, primarily because of the limited robustness of the sensors and the time required to manually label video streams to produce ground truth. There is significant time and cost associated with designing a fail-safe device that can function in free-living environments [63]. Moreover, people are generally unwilling to wear video cameras in real-world settings owing to privacy concerns, and the likelihood of behaving naturally in the presence of a video camera is limited [64]. Recent research has shown that, given acceptable incentives and proper orientation of the camera, participants are willing to don wearable video cameras in real-world settings for a short period of time. Some researchers bypass the use of wearable video cameras by combining other sources of information, such as food journaling and sensor data [65], commercial electroencephalography (EEG) sensors to produce labels for a wearable EEG device [66], and a GoPro Hero 3 camera mounted on the chest facing the jaw [58].

Research Opportunity

Ensuring a long-lived, minimally intrusive method for capturing ground truth is necessary for capturing realistic data and rigorously testing interventions. As many participants report concern for bystanders as their main reason for being unwilling to wear a camera all day [64,67,68], researchers may be able to design video cameras that are privacy preserving yet still aid in validating other body-worn sensors by providing video confirmation of EAUs. Importantly, this will ensure that deployed sensors adequately validate the behaviors they claim to capture in the settings where they are most needed.

Challenge: Inability to Accurately Distinguish Between Food Categories

Similar to image-based approaches, EAU-based sensors have shown some success in distinguishing between coarse food categories. Chewing crunchy chips produces a very different sensor signal than chewing a banana, and researchers are beginning to capitalize on these variations to distinguish between food items and types. Automatically determining solid versus liquid ingestion has provided some utility in identifying sources of ingestion behavior and in intervention design, allowing researchers to investigate this phenomenon [69-71]. This could be useful in a hospital setting, where the foods provided to patients are known a priori, narrowing the food search space. However, such constraints limit utility in free-living populations, and the challenge remains to increase external system validity.

Research Opportunity

Knowledge of an individual's diet and food environment may narrow the search space, enabling sensors to automatically distinguish between foods. An opportunity exists to categorize foods in a way that would be most useful for researchers and clinicians in improving calorie intake estimates. Although distinguishing between liquid and solid consumption may have research utility, it is unknown what other types of food categories may be distinguishable. Within-subject variability of nutrient intake is influenced by gender, age, and education [72]. Thus, this research opportunity would be most helpful for populations with low-variability diets, particularly elderly participants or patients on restricted diets.

Biochemical Measure–Based Sensing

There are concerted efforts underway to characterize the biochemical changes in the body that result from food and calorie intake. Novel classes of biochemical and electrochemical sensing systems could be used to analyze changes in metabolic activity observed in interstitial fluid, saliva, or sweat [73,74]. In this section, we have reviewed wearable sweat monitoring systems that have been deployed recently in remote environments (Table 3).

Table 3. Challenges and research opportunities in adopting biochemical measure–based sensing methods.

Challenge: On-body biochemical monitoring. Research opportunity: apply wearable biochemical sensors to monitor electrolytes, metabolites, and proteins in biological fluids (eg, saliva, sweat, and interstitial fluid).

Challenge: Stability of wearable sensors under different environmental conditions for metabolites, electrolytes, and proteins. Research opportunity: develop stable biochemical tests to determine concentrations (bioassays) of glucose, lactate, cortisol, ammonium, sodium, chloride, and potassium that require limited handling and refrigeration, using dehydration or freeze-drying methods.

Challenge: Reusable versus single-use wearable sensors. Research opportunity: develop low-cost battery and energy-harvesting solutions to enable single-use and multiuse modes of operation.
Challenge: On-Body Biochemical Monitoring

Recent efforts have focused on biochemical analysis of eccrine sweat using wearable devices [75-77], which leverage both colorimetric and electronic sensors that collect sweat directly from skin pores and measure biomarker concentrations and dynamics (eg, sweat loss and sweat rate) in real time. This opens new possibilities for characterizing electrolyte and metabolite loss during daily activities, which can be correlated with blood metabolites, hormones, proteins, pathogens, and drugs [78,79].

Continuously monitoring biomarkers in sweat requires highly sensitive techniques for extraction and electrochemical analysis. Researchers have devised strategies to reduce the contamination effects of skin in contact with the device, while increasing sweat collection volume. This new class of wearable biochemical sensors could provide viable pathways for creating noninvasive and remote analysis of diet, wellness, and health [75]. However, shelf-life stability of biochemical sensors, susceptibility to contamination, and fundamental limitations in capturing sufficient volumes of sweat remain problematic.
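To illustrate how raw electrochemical readings become concentrations, the sketch below fits a linear calibration curve from standards of known concentration and inverts it for field readings; all values are illustrative, not from any cited device.

```python
# A hedged sketch of linear sensor calibration: fit current vs known
# concentration standards, then invert for new readings. All values
# are illustrative placeholders.
import numpy as np

# Calibration standards: sweat glucose concentration (uM) vs current (nA).
conc_uM = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
current_nA = np.array([1.8, 6.1, 10.3, 19.0, 36.5])

slope, intercept = np.polyfit(conc_uM, current_nA, 1)  # current ~ slope*conc + intercept

def to_concentration(i_nA: float) -> float:
    """Invert the calibration line for a field reading."""
    return (i_nA - intercept) / slope

print(f"{to_concentration(12.4):.0f} uM")  # ~62 uM
```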

Research Opportunity

Key opportunities lie in the design and deployment of biochemical-sensing devices that can endure temperature changes owing to environmental factors and mechanical impact while maintaining signal quality without degradation over time. Beyond device resiliency, comparisons of sweat and blood analyte levels must be tested across healthy and sick populations to determine validity and applicability of on-body sweat sensing.

Challenge: Enabling Robust Onboard Enzymatic and Chemical Assays (Biochemical Tests and Assays) Under All Environmental Conditions

The rich, heterogeneous blend of electrolytes, metabolites, and proteins in sweat represents a unique set of noninvasively collected data. These biomarkers have been shown to correspond to physiologic state and may serve as the basis for understanding cognitive impairment in the field. To date, most studies have focused on characterizing electrolytes and metabolites (eg, glucose or lactate) using bioassays in controlled laboratory settings. Metabolic biomarkers could change with physical stress and diet during daily activity outside of controlled laboratory settings. Maintaining the stability of wearable biochemical sensors over extended time periods in real-world settings is thus crucial.

Research Opportunity

Wearable biochemical sensors that employ onboard dehydrated reagents or buffers that reduce degradation could lead to broad-scale deployment of these systems. Refrigeration protects against bioassay degradation, but it requires special instructions and specialized equipment for proper handling and operation. The development of new classes of wearable devices that require limited handling and refrigeration and that can support enzyme-linked immunosorbent assays and protein-based analyses, using dehydration and freeze-drying steps to promote chemical stability, represents an area of enormous potential for robust remote deployment of wearable technologies.

Challenge: Reusable Versus Single-Use Sensors

Continuous monitoring of sweat biomarkers requires flexible electronics modules, memory storage, and onboard batteries to facilitate data capture, signal processing, and transmission. Significant practical considerations, such as sensor corrosion at the interface with ionic fluids, the need for cleaning, and the resulting signal degradation over time, limit the utility of reusable systems. Electrochemical sensing systems consisting of a reusable electronics module and single-use electrochemical sensors provide compelling routes to address these challenges. Single-use designs may circumvent the challenges of long-term wear, the fluid-device interface, and signal degradation. However, disposable devices must be carefully engineered at sufficiently low cost to warrant single-use deployment.

Research Opportunity

Sweat is a corrosive biofluid that necessitates significant device cleaning to facilitate reuse. Thus, single-use wearable biochemical-sensing systems address important limitations of reusable systems, provided cost constraints are met. Hybrid designs, in which a reusable module mechanically couples to a single-use biochemical sensor, may mitigate the limitations of both reusable and single-use systems. Although hybrid systems tend to cost more, they offer significantly greater signal processing and battery capacity for long-term continuous monitoring.

Challenges Across Sensing Modalities

Regardless of the type of sensing modality, calorie estimation techniques share a set of common challenges because of the unique role that eating and nutrition play in everyone’s lives. These challenges are related to sensor development, validation, and refinement both in controlled and free-living settings.

Challenge: Burden of Multimodal Systems

Although many researchers have studied eating detection using a single wearable device, several are beginning to combine multiple sensors and contextual signals across multiple wearable devices to improve the overall accuracy of an eating detection system. Mirtchouk et al [80] showed that using in-ear audio with head and wrist sensors improved accuracy from 67.8% with audio alone to 82.7% and 76.2% when audio was combined with head and wrist sensors, respectively. However, these approaches were mainly tested in a laboratory setting, not in a free-living environment, and it is unknown how well the findings translate to free-living populations. Multimodal sensor studies that attempt to determine the utility of sensors in real-world settings are sorely needed.

Wrist sensors coupled with other sensor modalities (eg, GPS and respiratory plethysmography) may aid in distinguishing among smoking, eating, and other activities. Examples of multimodal systems include a jaw motion sensor combined with a hand gesture sensor and an accelerometer [81] and an airflow sensor combined with a respiratory plethysmography chest sensor and a wrist-worn sensor [82]. However, the burden of wearing all these sensors is significant. Thus, novel ways of combining less burdensome sensors and devices, or integrating noncontact or noninvasive devices, are needed while still advancing accuracy in detecting calorie and macronutrient intake or their proxies.

Challenge: Time Asynchronization Across Sensors

A multidevice system brings challenges in coordinating and synchronizing activities and sensing across devices. Because each device manages its own internal clock, this network of clocks can become unsynchronized following a power failure or reset. Most devices are designed to be standalone and use an internal clock, as opposed to time stamping their data using a nearby smartphone or body sensor unit. Time synchronization in real time is a long-studied problem; however, automated time-synchronization methods applied after data collection can enable researchers to test multiple devices simultaneously, without the need to reengineer each device to use a central hub. Without reliable millisecond-level time-synchronization techniques, annotations from one sensor stream (eg, a video camera) are not transferable to another sensor stream.
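One post hoc alignment strategy is to estimate the clock offset between two streams that observed the same events by maximizing their cross-correlation, as in the minimal sketch below (synthetic signals and an assumed common resampling rate).

```python
# A minimal sketch of post hoc time alignment via cross-correlation:
# the lag of the correlation peak estimates the clock offset.
import numpy as np

fs = 50  # Hz, assumed common resampled rate
rng = np.random.default_rng(1)
reference = rng.normal(size=fs * 60)  # eg, wrist accelerometer magnitude
true_offset = 137                     # samples (2.74 s)
other = np.roll(reference, true_offset) + 0.1 * rng.normal(size=reference.size)

xcorr = np.correlate(other - other.mean(),
                     reference - reference.mean(), mode="full")
lag = xcorr.argmax() - (reference.size - 1)
print(f"Estimated offset: {lag / fs:.2f} s")  # ~2.74 s
```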

Challenge: Lack of Automation in Data Labeling

Once a sensor is deployed in a real-world setting, a supervised learning model, which learns to categorize data from previously labeled instances (ie, supervised training), is designed to process the data and determine system viability. However, one challenge in building a supervised learning model is providing sufficient annotated instances, or labels, to train the model. Prior studies in real-world settings have depended on self-reported annotations [59,83-86], which are burdensome and rarely timely. More recently, studies have used wearable cameras worn by participants to provide annotations through visual activity confirmation after the study [57-62]. For visual confirmation, a data labeler is hired to watch the video and mark the points in time when the activity occurred, which is time-consuming and prone to error. Computer scientists are beginning to design tools that annotate automatically using active learning systems, which attempt to reduce the number of annotations required to build a reliable machine learning model. However, current systems focus on models that process fixed-duration samples (eg, an image or a minute of data). Active learning systems designed to handle activities of varied duration (eg, eating episodes, feeding gestures, and chewing durations) can fill this gap.
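The active learning idea above can be sketched as pool-based uncertainty sampling: train on the few labeled windows, query the window the model is least sure about, and repeat. The data below are synthetic, and the oracle lookup stands in for a human annotator.

```python
# A hedged sketch of pool-based active learning with uncertainty
# sampling. Data are synthetic; y_pool stands in for a human labeler.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(2000, 16))                     # unlabeled sensor windows
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # hidden truth (eating or not)

labeled = list(rng.choice(len(X_pool), size=20, replace=False))  # small seed set
for _ in range(10):  # 10 annotation rounds
    clf = LogisticRegression().fit(X_pool[labeled], y_pool[labeled])
    proba = clf.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labeled] = np.inf        # do not re-query labeled windows
    query = int(uncertainty.argmin())    # most uncertain window
    labeled.append(query)                # "ask the annotator"

print(f"Model trained with only {len(labeled)} labels")
```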

Challenge: Unknown Acceptability of System by Users

For a system to succeed in real-world settings, it must be acceptable to the population of interest. Although several surveys have been designed to assess the wearability of systems, there is no validated standard survey or approach for assessing willingness to wear and use a device in the nutrition context. Current systems deploy devices for a variable amount of time (eg, 1-2 days, 1 week, or 1 month) and then report comfort on a Likert scale. Habits regarding technology adoption are not properly understood until at least 1 week of use, the point at which most individuals stop using an app or device [87]. As a result, claims of acceptability must be limited to the number of days the system was actually tested in free-living populations. An important contributor to system acceptability is battery lifetime, which is closely tied to device burden (ie, frequency of recharge).

Challenge: Short System Battery Lifetime

Long battery lifetime is essential for wearable technology to ensure high sensitivity and recall in detecting eating episodes in free-living populations, and it is reported to be the feature rated most important by mobile device users [88-90]. Battery lifetime becomes critically important in longitudinal studies, where reducing user burden is key to gathering more data and encouraging habituation; if users must recharge a device multiple times a day, data collection suffers. Moreover, battery lifetime enhancements enable use by populations who may otherwise not be able to manage a device (eg, pediatric or geriatric populations). Several software approaches can increase battery lifetime, including duty-cycling high-powered sensors or triggering them with low-power sensors. Reducing computational complexity and designing for specificity also reduce wasted energy. New materials enable batteryless sensing devices powered by energy harvested from the environment, wearer motion, or Wi-Fi gateways. Although these sensors show promise, they are not without challenges, as reliability can suffer when ambient energy is not readily available. Wearable sensors are increasingly being developed to last several months [91,92], but most commercial sensors last <12 hours [93] when collecting continuous inertial measurement unit data. Low-maintenance sensor solutions must be designed, and battery lifetime must be carefully considered in every phase of system and study design.
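The low-power-trigger pattern mentioned above can be sketched in a few lines: a cheap, always-on sensor gates when the expensive one wakes. The threshold and the synthetic motion burst are illustrative assumptions.

```python
# A minimal sketch of low-power triggering: an always-on accelerometer
# gates the high-power sensor. Threshold and data are illustrative.
import numpy as np

rng = np.random.default_rng(2)
low_power = rng.normal(0, 0.2, size=6000)  # cheap accelerometer magnitude
low_power[2000:2400] += 1.5                # a burst of hand-to-mouth motion

THRESHOLD = 0.8
high_power_on = np.abs(low_power) > THRESHOLD  # wake high-power sensing here

print(f"High-power sensor active {high_power_on.mean():.1%} of the time")
```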

Challenge: Limited Rigor of System Testing

Although technology development is an important contribution to the health community, reproducibility of results is essential to establish construct validity, internal reliability, and test-retest reliability and to increase confidence in the potential of a system to work in real-world settings. Most existing wearable sensors and systems have shown success in the hands of their principal investigators but have not been tested beyond the laboratories in which they were developed. To prevent bias in reporting, researchers need to disseminate their systems (hardware, software, and datasets) to other teams for independent testing and review. Such rigor in testing sensing platforms is needed across all sensor modalities.


How Technology Can Assist Conventional Subjective Diet Assessments

Conventional diet assessments comprise subjective and objective (eg, doubly labeled water and metabolic chamber) approaches [94]. Although both approaches measure calorie and macronutrient intake, subjective diet assessments are more commonly implemented in research and clinical settings [95], in part owing to greater convenience and reduced cost [96]. Subjective diet assessments are not constrained by battery lifetimes and are acceptable to target populations. However, each type of subjective diet assessment introduces unique types of measurement error depending on how the diet data are collected. A description of each assessment, with its strengths and limitations, is presented in Table 4. We posit that technology, such as image-based sensing, EAU-based sensing, and biochemical measures, can assist subjective diet assessments in capturing habitual dietary intake and eating behaviors, as the systematic and random errors associated with the two approaches are fundamentally independent. Replacing subjective diet assessments with technology outright may be difficult, as other limitations could arise, such as ambiguity in identifying foods from distorted images or uncertainty over whether the food was truly consumed [97]. Combining information from technology and subjective diet assessments improves the validity of dietary intake data because the 2 approaches complement each other's strengths and limitations.

Table 4. Conventional subjective measurements of energy and macronutrient intake.

24-hour diet recalls
Description: Inquiry about everything one had to eat and drink during the previous day (usually midnight to midnight); probes often used to collect more detail and standardize the interview.
Strengths: Open-ended, enabling greater detail about intake and food preparation; good for culturally diverse diets; less burdensome.
Limitations: Memory dependent; error prone in quantifying portion sizes; requires intensive interviewer effort, which can decrease motivation to collect accurate data; repeated measures needed to capture usual intake; can alter eating behaviors if recalls are scheduled in advance.

Food records
Description: Detailed list of all foods and drinks consumed over a specified amount of time, written by the respondent, ideally using weight scales or measuring tools to determine portion size; provides data about actual intake.
Strengths: Open-ended; does not rely on memory if records are completed on time; allows for self-monitoring.
Limitations: Requires intensive respondent effort, which can decrease motivation to collect accurate data or lead to a poor response rate; burdensome on staff to analyze data owing to entering and coding items; repeated measures needed to capture usual intake; can alter eating behaviors because respondents are monitoring their diets.

Food frequency questionnaire
Description: Questionnaire asking whether a food item was consumed during a specified period of time; contains 2 components (food list and frequency response question); provides data about relative intake.
Strengths: Measures usual intake; less burdensome on respondent and research staff.
Limitations: Memory dependent; food list is fixed and may not capture usual intake, particularly in a culturally diverse diet; may be difficult to quantify food portions without food images; difficult to inquire about mixed dishes; respondent may have difficulty interpreting the questions.

A primary concern with subjective diet assessments is intentional or unintentional misreporting of dietary intake [98]. Specifically, 24-hour dietary recalls and FFQs rely on memory, which can depend on age, education, attention during eating, and consistency of diet patterns [94,99]. Many individuals underestimate portion sizes of foods and beverages [100] and are sometimes provided with household items, food scales, and/or 2-dimensional food images with anchors to improve portion size accuracy [94]. However, these instruments create additional burden and decrease motivation to accurately capture caloric and macronutrient intake. These limitations have far-reaching implications, as investigators cannot be certain whether subjective diet assessments accurately characterize true dietary intake [101], whether they correctly identify participants' adherence to specific diet interventions [102], and whether they introduce bias when investigating diet-disease associations [101]. Image-based sensing and EAU-based approaches can minimize misreporting by objectively capturing food images and by identifying the timing of eating episodes, allowing individuals and researchers to corroborate information from subjective diet assessments. Furthermore, image-based and EAU-based measures can also serve as visual or verbal cues to assist recall when conducting subjective diet assessments.

A limitation specific to 24-hour dietary recall and food records is the measurement of acute intake and not the usual diet [94]. Multiple measures are needed to capture usual intake, and data collection must occur for every day of the week [94]. However, increasing the number of subjective diet assessments creates greater burden for the individual and the research team. Image-sensing and EAU measures can reduce the number of 24-hour dietary recalls and food records needed to capture the best estimates of absolute dietary intake, while automating the analysis of dietary data. To this end, more research is needed to evaluate the number of images and EAU measures needed to provide the best approximation of absolute dietary intakes.

Biochemical measures can also determine the nutrient status of the body. However, biochemical concentrations are not direct markers of dietary intake; they also reflect how the body absorbs, transports, metabolizes, and excretes a nutrient [103]. Therefore, biochemical measures cannot replace subjective diet assessments, as it would be unclear whether nutrient status reflects dietary intake or in vivo processes. Recent advances in statistical approaches, such as prediction models that use data from both technology and conventional subjective approaches, account for measurement errors and can provide more accurate results [104,105].
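As a sketch of such a prediction model, the following regresses a reference intake measure on both a technology-derived proxy (here a hypothetical bite count) and a self-reported value, letting each source correct part of the other's error; all data are synthetic and the error structure is an assumption for illustration.

```python
# A hedged sketch of a prediction model combining a sensor proxy with
# self-report to estimate intake. Data and noise levels are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
true_kcal = rng.normal(2200, 400, size=500)                    # reference measure
self_report = true_kcal * 0.8 + rng.normal(0, 250, size=500)   # underreporting + noise
bite_count = true_kcal / 25 + rng.normal(0, 8, size=500)       # sensor proxy + noise

X = np.column_stack([self_report, bite_count])
model = LinearRegression().fit(X, true_kcal)
print(f"R^2 vs reference: {model.score(X, true_kcal):.2f}")
```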

Technology-enabled devices that measure calories and nutrients can also have far-reaching implications in clinical practice. A recent study reported that providers perceive health-tracking technologies as very useful when reviewing patient data, managing medical visits, and facilitating patient–provider communication [106]. A growing number of patients are also engaging with health technology. According to the National Cancer Institute’s 2017 Health Information Trends Survey, 34% reported owning an electronic monitoring device to track their health behaviors [107]. A growing opportunity remains in developing efficient strategies to merge technology with subjective diet assessments toward obesity prevention and treatment efforts.


Conclusions

Sensors from the 3 domains presented (image-based, EAU-based, and biochemical measure–based) have the potential to identify markers that improve estimates of calorie intake. However, these technologies still require considerable input from the end user, scientist, or clinician, who may have to label or segment images or sensor metrics to train a machine learning system. Fully automating technology-enabled calorie and nutrient monitoring would open the possibility of providing highly informed and validated information to augment recall methods and advance estimates of calorie intake for clinicians and patients.

Wearable sensing modalities focused on biochemical processes offer a route to understanding food nutrients. However, more technical expertise is needed to merge conformal, low-power, secure, and valid technology with appropriate caloric assumptions. Once a stronger correlation can be drawn between the biochemical products analyzed and the calories consumed, biochemical-based wearables may hold promise for future automated calorie estimation systems. However, as with most wearable technologies, adherence to wearing the device remains problematic; this may be overcome if such technologies prove their value.

Although calories and nutrients can be consumed and monitored, it is essential to understand the behavioral choices that drive these decisions and whether those behaviors can provide insight into calorie intake. All 3 systems have the potential to provide such information, from cortisol levels during stress-related eating (using biochemical sensors) to late-night snacking (using image-based, physical, and behavioral sensors) and beyond. The biggest challenge then becomes how to use the reliable big data collected from these devices to drive an actionable outcome, such as lowering calorie consumption or identifying eating behaviors that increase calorie intake.

It is our view that the research opportunities defined here for calorie intake monitoring are the most promising and may move the science toward a future in which such monitoring is ubiquitous. Nonetheless, challenges remain in translating such solutions into the health impact that clinicians and patients alike expect. At present, the next logical step is for scientists to improve the functionality of these devices, for human-computer interaction experts to improve their usability, and for clinical teams and behavioral scientists to assess what information can improve health behavior interventions given these new advanced technological tools. This translational, multidomain effort will demonstrate whether calorie intake monitoring can enable a higher quality of life and thereby help address the public health crisis of obesity.

Acknowledgments

This work was funded by a Smart and Connected Health grant from the National Science Foundation to BS (award #1545751) and a K25 award from the National Institutes of Health National Institute of Diabetes and Digestive and Kidney Diseases to NA (1K25DK113242). AWL is supported by a grant from the National Institutes of Health National Cancer Institute (T32CA193193). Sponsors had no role in review or approval of the paper for publication. The authors would like to thank the attendees of the workshop. The authors would also like to thank Krystina Neuman, PhD, for her constructive feedback in the preparation of the paper.

Authors' Contributions

All authors drafted the paper or critically revised it for intellectual content, gave approval for the final version to be published, and agreed to be accountable for all aspects of the work.

Conflicts of Interest

None declared.

Multimedia Appendix 1

List of attendees including faculty, postdoctoral researchers, and graduate students who participated in the discussions.

DOCX File , 14 KB

  1. Williams EP, Mesidor M, Winters K, Dubbert PM, Wyatt SB. Overweight and obesity: prevalence, consequences, and causes of a growing public health problem. Curr Obes Rep 2015 Sep;4(3):363-370. [CrossRef] [Medline]
  2. Dietary Assessment Primer - National Cancer Institute. Principles Underlying Recommendations.   URL: https://dietassessmentprimer.cancer.gov/approach/principles.html [accessed 2019-05-05]
  3. Poslusna K, Ruprich J, de Vries JH, Jakubikova M, van't Veer P. Misreporting of energy and micronutrient intake estimated by food records and 24 hour recalls, control and adjustment methods in practice. Br J Nutr 2009 Jul;101(Suppl 2):S73-S85. [CrossRef] [Medline]
  4. Kirkpatrick S, Subar A, Douglass D, Zimmerman T, Thompson F, Kahle L, et al. Performance of the Automated Self-Administered 24-hour Recall relative to a measure of true intakes and to an interviewer-administered 24-h recall. Am J Clin Nutr 2014 Jul;100(1):233-240 [FREE Full text] [CrossRef] [Medline]
  5. Nahum-Shani I, Hekler EB, Spruijt-Metz D. Building health behavior models to guide the development of just-in-time adaptive interventions: A pragmatic framework. Health Psychol 2015 Dec;34S:1209-1219 [FREE Full text] [CrossRef] [Medline]
  6. McCarroll R, Eyles H, Mhurchu C. Effectiveness of mobile health (mHealth) interventions for promoting healthy eating in adults: a systematic review. Prev Med 2017 Dec;105:156-168. [CrossRef] [Medline]
  7. Kalantarian H, Alshurafa N, Sarrafzadeh M. A survey of diet monitoring technology. IEEE Pervasive Comput 2017 Jan;16(1):57-65. [CrossRef]
  8. Kalantarian H, Alshurafa N, Sarrafzadeh M. A Survey of Diet Monitoring Technology. IEEE Pervasive Comput 2017 Jan;16(1):57-65. [CrossRef]
  9. Magrini ML, Minto C, Lazzarini F, Martinato M, Gregori D. Wearable devices for caloric intake assessment: state of art and future developments. Open Nurs J 2017;11:232-240 [FREE Full text] [CrossRef] [Medline]
  10. Vu T, Lin F, Alshurafa N, Xu W. Wearable food intake monitoring technologies: a comprehensive review. Computers 2017;6(1):4. [CrossRef]
  11. Schembre SM, Liao Y, O'Connor SG, Hingle MD, Shen S, Hamoy KG, et al. Mobile ecological momentary diet assessment methods for behavioral research: systematic review. JMIR Mhealth Uhealth 2018 Nov 20;6(11):e11170 [FREE Full text] [CrossRef] [Medline]
  12. Six BL, Schap TE, Zhu FM, Mariappan A, Bosch M, Delp EJ, et al. Evidence-based development of a mobile telephone food record. J Am Diet Assoc 2010 Jan;110(1):74-79 [FREE Full text] [CrossRef] [Medline]
  13. Zhu F, Bosch M, Woo I, Kim S, Boushey CJ, Ebert DS, et al. The use of mobile devices in aiding dietary assessment and evaluation. IEEE J Sel Top Signal Process 2010 Aug;4(4):756-766 [FREE Full text] [CrossRef] [Medline]
  14. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015 May 28;521(7553):436-444. [CrossRef] [Medline]
  15. Boushey CJ, Spoden M, Zhu FM, Delp EJ, Kerr DA. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods. Proc Nutr Soc 2017 Aug;76(3):283-294. [CrossRef] [Medline]
  16. Fang S, Shao Z, Mao R, Fu C, Kerr D, Boushey C. Single-View Food Portion Estimation: Learning Image-to-Energy Mappings Using Generative Adversarial Networks. In: Proceedings of the 2018 25th IEEE International Conference on Image Processing. 2018 Presented at: ICIP'18; October 7-10, 2018; Athens, Greece. [CrossRef]
  17. Fang S, Liu C, Zhu F, Delp E, Boushey C. Single-View Food Portion Estimation Based on Geometric Models. In: 2015 IEEE International Symposium on Multimedia. 2015 Presented at: ISM'15; December 14-16, 2015; Miami, FL, USA. [CrossRef]
  18. Montville JB, Ahuja JK, Martin CL, Heendeniya KY, Omolewa-Tomobi G, Steinfeldt LC, et al. USDA Food and Nutrient Database for Dietary Studies (FNDDS), 5.0. Procedia Food Sci 2013;2:99-112. [CrossRef]
  19. Kawano Y, Yanai K. Automatic expansion of a food image dataset leveraging existing categories with domain adaptation. In: Computer Vision - ECCV 2014 Workshops. Cham, Switzerland: Springer; 2015:3-17.
  20. Rabbi M, Costa J, Okeke F, Schachere M, Zhang M, Choudhury T. An Intelligent Crowd-worker Selection Approach for Reliable Content Labeling of Food Images. In: Proceedings of the conference on Wireless Health. 2015 Presented at: WH'15; October 14-16, 2015; Bethesda, Maryland. [CrossRef]
  21. Fang S, Liu C, Tahboub K, Zhu F, Delp E, Boushey C. cTADA: The Design of a Crowdsourcing Tool for Online Food Image Identification and Segmentation. In: Proceedings of the 2018 IEEE Southwest Symposium on Image Analysis and Interpretation. 2018 Presented at: SSIAI'18; April 8-10, 2018; Las Vegas, NV, USA. [CrossRef]
  22. Zhu F, Bosch M, Khanna N, Boushey CJ, Delp EJ. Multiple hypotheses image segmentation and classification with application to dietary assessment. IEEE J Biomed Health Inform 2015 Jan;19(1):377-388 [FREE Full text] [CrossRef] [Medline]
  23. Bettadapura V, Thomaz E, Parnami A, Abowd G, Essa I. Leveraging Context to Support Automated Food Recognition in Restaurants. In: 2015 IEEE Winter Conference on Applications of Computer Vision. 2015 Presented at: WACV'15; January 6-8, 2015; Waikoloa Beach, Hawaii. [CrossRef]
  24. Matsuda Y, Hoashi H, Yanai K. Recognition of Multiple-Food Images by Detecting Candidate Regions. In: 2012 IEEE International Conference on Multimedia and Expo. 2012 Presented at: ICME'12; July 9-13, 2012; Melbourne, VIC, Australia. [CrossRef]
  25. He Y, Xu C, Khanna N, Boushey C, Delp E. Food image analysis: segmentation, identification and weight estimation. Proc (IEEE Int Conf Multimed Expo) 2013 Jul [FREE Full text] [CrossRef] [Medline]
  26. Wang Y, Liu C, Zhu F, Boushey C, Delp E. Efficient Superpixel Based Segmentation for Food Image Analysis. In: Proceedings of the 2016 IEEE International Conference on Image Processing. 2016 Presented at: ICIP'16; September 25-28, 2016; Phoenix, AZ, USA. [CrossRef]
  27. Myers A, Johnston N, Rathod V, Korattikara A, Gorban A, Silberman N. Im2Calories: Towards an Automated Mobile Vision Food Diary. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. 2015 Presented at: ICCV'15; December 7-13, 2015; Santiago, Chile. [CrossRef]
  28. Shimoda W, Yanai K. CNN-Based Food Image Segmentation Without Pixel-Wise Annotation. In: New Trends in Image Analysis and Processing -- ICIAP 2015 Workshops. Cham, Switzerland: Springer; 2015:449-457.
  29. Wang Y, Zhu F, Boushey C, Delp E. Weakly Supervised Food Image Segmentation Using Class Activation Maps. In: 2017 IEEE International Conference on Image Processing. 2017 Presented at: ICIP'17; September 17-20, 2017; Beijing, China. [CrossRef]
  30. Bossard L, Guillaumin M, Van Gool L. Food-101—Mining discriminative components with random forests. In: Computer Vision – ECCV 2014. Cham, Switzerland: Springer; 2014:446-461.
  31. Kawano Y, Yanai K. FoodCam-256: A Large-scale Real-time Mobile Food Recognition System employing High-Dimensional Features and Compression of Classifier Weights. In: Proceedings of the 22nd ACM international conference on Multimedia. 2014 Presented at: MM'14; November 3-7, 2014; Orlando, Florida, USA p. 761-762. [CrossRef]
  32. Bosch M, Zhu F, Khanna N, Boushey C, Delp E. Combining Global and Local Features for Food Identification in Dietary Assessment. In: Proceedings of the 18th IEEE International Conference on Image Processing. 2011 Presented at: ICIP'11; September 11-14, 2011; Brussels, Belgium. [CrossRef]
  33. Ao S, Ling C. Adapting New Categories for Food Recognition with Deep Representation. In: Proceedings of the IEEE International Conference on Data Mining Workshop. 2015 Presented at: ICDMW'15; November 14-17, 2015; Atlantic City, NJ, USA.
  34. Hassannejad H, Matrella G, Ciampolini P, de Munari I, Mordonini M, Cagnoni S. Food Image Recognition Using Very Deep Convolutional Networks. In: Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management. 2016 Presented at: MADiMa'16; October 16, 2016; Amsterdam, The Netherlands p. 41-49. [CrossRef]
  35. Cortes C, Vapnik V. Support-vector networks. Mach Learn 1995 Sep;20(3):273-297. [CrossRef]
  36. He Y, Xu C, Khanna N, Boushey C, Delp E. Analysis of Food Images: Features and Classification. In: 2014 IEEE International Conference on Image Processing. 2014 Presented at: ICIP'14; October 27-30, 2014; Paris, France.
  37. Aguilar E, Remeseiro B, Bolanos M, Radeva P. Grab, pay, and eat: semantic food detection for smart restaurants. IEEE Trans Multimedia 2018;20(12):3266-3275. [CrossRef]
  38. Beijbom O, Joshi N, Morris D, Saponas S, Khullar S. Menu-Match: Restaurant-Specific Food Logging from Images. In: 2015 IEEE Winter Conference on Applications of Computer Vision. 2015 Presented at: WACV'15; January 5-9, 2015; Waikoloa, HI, USA. [CrossRef]
  39. Min W, Bao B, Mei S, Zhu Y, Rui Y, Jiang S. You are what you eat: exploring rich recipe information for cross-region food analysis. IEEE Trans Multimedia 2018 Apr;20(4):950-964. [CrossRef]
  40. Wang Y, He Y, Boushey CJ, Zhu F, Delp EJ. Context based image analysis with application in dietary assessment and evaluation. Multimed Tools Appl 2018 Aug;77(15):19769-19794 [FREE Full text] [CrossRef] [Medline]
  41. Kong F, Tan J. DietCam: Automatic dietary assessment with mobile camera phones. Pervasive Mob Comput 2012;8(1):147-163. [CrossRef]
  42. Puri M, Zhu Z, Yu Q, Divakaran A, Sawhney H. Recognition and volume estimation of food intake using a mobile device. In: Proceedings of the 2009 Workshop on Applications of Computer Vision. 2009 Presented at: WACV'09; December 7-8, 2009; Snowbird, UT, USA.
  43. Dehais J, Shevchik S, Diem P, Mougiakakou S. Food Volume Computation for Self Dietary Assessment Applications. In: Proceedings of the 13th IEEE International Conference on BioInformatics and BioEngineering. 2013 Presented at: BIBE'13; November 10-13, 2013; Chania, Greece. [CrossRef]
  44. Sun M, Fernstrom JD, Jia W, Hackworth SA, Yao N, Li Y, et al. A wearable electronic system for objective dietary assessment. J Am Diet Assoc 2010 Jan;110(1):45-47 [FREE Full text] [CrossRef] [Medline]
  45. Shang J, Duong M, Pepin E, Zhang X, Sandara-Rajan K, Mamishev A. A Mobile Structured Light System for Food Volume Estimation. In: Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops. 2011 Presented at: ICCV Workshops'11; November 6-13, 2011; Barcelona, Spain. [CrossRef]
  46. Chen H, Jia W, Li Z, Sun Y, Sun M. 3D/2D Model-to-image Registration for Quantitative Dietary Assessment. In: Proceedings of the 38th Annual Northeast Bioengineering Conference. 2012 Presented at: NEBEC'12; March 16-18, 2012; Philadelphia, PA, USA.
  47. Pouladzadeh P, Shirmohammadi S, Al-Maghrabi R. Measuring calorie and nutrition from food image. IEEE Trans Instrum Meas 2014;63(8):1947-1956. [CrossRef]
  48. Zhang W, Yu Q, Siddiquie B, Divakaran A, Sawhney H. 'Snap-n-Eat': Food recognition and nutrition estimation on a smartphone. J Diabetes Sci Technol 2015 May;9(3):525-533 [FREE Full text] [CrossRef] [Medline]
  49. Aizawa K, Maruyama Y, Li H, Morikawa C. Food balance estimation by using personal dietary tendencies in a multimedia food log. IEEE Trans Multimedia 2013;15(8):2176-2185. [CrossRef]
  50. Ege T, Yanai K. Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions. In: Proceedings of the Thematic Workshops of ACM Multimedia 2017. 2017 Presented at: Thematic Workshops'17; October 23-27, 2017; Mountain View, California, USA p. 367-375. [CrossRef]
  51. Zhang S, Alharbi R, Stogin W, Pfammatter A, Spring B, Alshurafa N. Food Watch: Detecting and Characterizing Eating Episodes Through Feeding Gestures. In: Proceedings of the 11th EAI International Conference on Body Area Networks. 2016 Presented at: BodyNets'16; December 15-16, 2016; Turin, Italy. [CrossRef]
  52. Thomaz E, Essa I, Abowd G. A practical approach for recognizing eating moments with wrist-mounted inertial sensing. Proc ACM Int Conf Ubiquitous Comput 2015 Sep;2015:1029-1040 [FREE Full text] [CrossRef] [Medline]
  53. Dong Y, Hoover A, Scisco J, Muth E. A new method for measuring meal intake in humans via automated wrist motion tracking. Appl Psychophysiol Biofeedback 2012 Sep;37(3):205-215 [FREE Full text] [CrossRef] [Medline]
  54. Scisco JL, Muth ER, Hoover AW. Examining the utility of a bite-count-based measure of eating activity in free-living human beings. J Acad Nutr Diet 2014 Mar;114(3):464-469. [CrossRef] [Medline]
  55. Turner-McGrievy GM, Wilcox S, Boutté A, Hutto BE, Singletary C, Muth ER, et al. Erratum: the dietary intervention to enhance tracking with mobile devices (DIET Mobile) study: a 6-month randomized weight loss trial. Obesity (Silver Spring) 2017 Dec;25(12):2156-2157 [FREE Full text] [CrossRef] [Medline]
  56. Parate A, Chiu M, Chadowitz C, Ganesan D, Kalogerakis E. RisQ: Recognizing smoking gestures with inertial sensors on a wristband. MobiSys 2014 Jun;2014:149-161 [FREE Full text] [CrossRef] [Medline]
  57. Zhang S, Alharbi R, Nicholson M, Alshurafa N. When Generalized Eating Detection Machine Learning Models Fail in the Field. In: Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. 2017 Presented at: UbiComp'17; September 11-15, 2017; Maui, Hawaii p. 613-622.
  58. Bedri A, Li R, Haynes M, Kosaraju RP, Grover I, Prioleau T, et al. EarBit: Using wearable sensors to detect eating episodes in unconstrained environments. Proc ACM Interact Mob Wearable Ubiquitous Technol 2017 Sep;1(3) [FREE Full text] [CrossRef] [Medline]
  59. Chun KS, Bhattacharya S, Thomaz E. Detecting eating episodes by tracking jawbone movements with a non-contact wearable sensor. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018;2(1):1-21. [CrossRef]
  60. Hamatani T, Elhamshary M, Uchiyama A, Higashino T. FluidMeter: Gauging the human daily fluid intake using smartwatches. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Sep;2(3):1-25. [CrossRef]
  61. Fortuna C, Giraud-Carrier C, West J. Hand-to-Mouth Motion Tracking in Free-Living Conditions for Improved Weight Control. In: Proceedings of the IEEE International Conference on Healthcare Informatics. 2016 Presented at: ICHI'16; October 4-7, 2016; Chicago, IL. [CrossRef]
  62. Bi S, Caine K, Halter R, Sorber J, Kotz D, Wang T, et al. Auracle: Detecting eating episodes with an ear-mounted sensor. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Sep;2(3):1-27. [CrossRef]
  63. Starner T. The challenges of wearable computing: Part 2. IEEE Micro 2001;21(4):54-67. [CrossRef]
  64. Alharbi R, Stump T, Vafaie N, Pfammatter A, Spring B, Alshurafa N. I Can't Be Myself: Effects of wearable cameras on the capture of authentic behavior in the wild. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018 Sep;2(3):1-40. [CrossRef]
  65. Papapanagiotou V, Diou C, Zhou L, van den Boer J, Mars M, Delopoulos A. A novel chewing detection system based on PPG, audio, and accelerometry. IEEE J Biomed Health Inform 2017 May;21(3):607-618. [CrossRef]
  66. Zhang R, Amft O. Monitoring chewing and eating in free-living using smart eyeglasses. IEEE J Biomed Health Inform 2018;22(1):23-32. [CrossRef]
  67. Dimiccoli M, Marín J, Thomaz E. Mitigating bystander privacy concerns in egocentric activity recognition with deep learning and intentional image degradation. Proc ACM Interact Mob Wearable Ubiquitous Technol 2018;1(4):1-18. [CrossRef]
  68. Nebeker C, Lagare T, Takemoto M, Lewars B, Crist K, Bloss CS, et al. Engaging research participants to inform the ethical conduct of mobile imaging, pervasive sensing, and location tracking research. Transl Behav Med 2016 Dec;6(4):577-586 [FREE Full text] [CrossRef] [Medline]
  69. Amft O. A Wearable Earpad Sensor for Chewing Monitoring. In: Proceedings of the SENSORS, 2010 IEEE Conference. 2010 Presented at: SENSORS'10; November 1-4, 2010; Kona, HI, USA p. 222-227. [CrossRef]
  70. Alshurafa N, Kalantarian H, Pourhomayoun M, Sarin S, Liu J, Sarrafzadeh M. Non-invasive Monitoring of Eating Behavior Using Spectrogram Analysis in a Wearable Necklace. In: Proceedings of the 2014 IEEE Healthcare Innovation Conference. 2015 Presented at: HIC'14; October 8-10, 2014; Seattle, WA, USA. [CrossRef]
  71. Kalantarian H, Alshurafa N, Pourhomayoun M, Sarin S, Le T, Sarrafzadeh M. Spectrogram-Based Audio Classification of Nutrition Intake. In: Proceedings of the IEEE EMBS Healthcare Innovations & Point of Care Technologies. 2014 Presented at: HIPT'14; October 8-10, 2014; Seattle, WA, USA. [CrossRef]
  72. Palaniappan U, Cue R, Payette H, Gray-Donald K. Implications of day-to-day variability on measurements of usual food and nutrient intakes. J Nutr 2003 Jan;133(1):232-235. [CrossRef] [Medline]
  73. Oncescu V, O'Dell D, Erickson D. Smartphone based health accessory for colorimetric detection of biomarkers in sweat and saliva. Lab Chip 2013 Aug 21;13(16):3232-3238. [CrossRef] [Medline]
  74. Sonner Z, Wilder E, Heikenfeld J, Kasting G, Beyette F, Swaile D, et al. The microfluidics of the eccrine sweat gland, including biomarker partitioning, transport, and biosensing implications. Biomicrofluidics 2015 May;9(3):031301 [FREE Full text] [CrossRef] [Medline]
  75. Ray TR, Choi J, Bandodkar AJ, Krishnan S, Gutruf P, Tian L, et al. Bio-integrated wearable systems: a comprehensive review. Chem Rev 2019 Apr 24;119(8):5461-5533. [CrossRef] [Medline]
  76. Koh A, Kang D, Xue Y, Lee S, Pielak RM, Kim J, et al. A soft, wearable microfluidic device for the capture, storage, and colorimetric sensing of sweat. Sci Transl Med 2016 Nov 23;8(366):366ra165 [FREE Full text] [CrossRef] [Medline]
  77. Gao W, Emaminejad S, Nyein HY, Challa S, Chen K, Peck A, et al. Fully integrated wearable sensor arrays for multiplexed in situ perspiration analysis. Nature 2016 Jan 28;529(7587):509-514 [FREE Full text] [CrossRef] [Medline]
  78. Choi J, Ghaffari R, Baker LB, Rogers JA. Skin-interfaced systems for sweat collection and analytics. Sci Adv 2018 Feb;4(2):eaar3921 [FREE Full text] [CrossRef] [Medline]
  79. Lee H, Choi TK, Lee YB, Cho HR, Ghaffari R, Wang L, et al. A graphene-based electrochemical device with thermoresponsive microneedles for diabetes monitoring and therapy. Nat Nanotechnol 2016 Jun;11(6):566-572. [CrossRef] [Medline]
  80. Mirtchouk M, Merck C, Kleinberg S. Automated Estimation of Food Type and Amount Consumed From Body-worn Audio and Motion Sensors. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 2016 Presented at: UbiComp'16; September 12-16, 2016; Heidelberg, Germany p. 451-462. [CrossRef]
  81. Fontana JM, Farooq M, Sazonov E. Automatic ingestion monitor: a novel wearable device for monitoring of ingestive behavior. IEEE Trans Biomed Eng 2014 Jun;61(6):1772-1779 [FREE Full text] [CrossRef] [Medline]
  82. Sazonov E, Lopez-Meyer P, Tiffany S. A wearable sensor system for monitoring cigarette smoking. J Stud Alcohol Drugs 2013 Nov;74(6):956-964 [FREE Full text] [CrossRef] [Medline]
  83. Sharma S, Jasper P, Muth E, Hoover A. Automatic Detection of Periods of Eating Using Wrist Motion Tracking. In: 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies. 2016 Presented at: CHASE'16; June 27-29, 2016; Washington, DC. [CrossRef]
  84. Zhang R, Amft O. Free-living Eating Event Spotting Using EMG-monitoring Eyeglasses. In: Proceedings of the 2018 IEEE EMBS International Conference on Biomedical & Health Informatics. 2018 Presented at: BHI'18; March 4-7, 2018; Las Vegas, NV. [CrossRef]
  85. Blechert J, Liedlgruber M, Lender A, Reichenberger J, Wilhelm F. Unobtrusive electromyography-based eating detection in daily life: a new tool to address underreporting? Appetite 2017 Nov 1;118:168-173. [CrossRef] [Medline]
  86. Doulah A, Farooq M, Yang X, Parton J, McCrory MA, Higgins JA, et al. Meal microstructure characterization from sensor-based food intake detection. Front Nutr 2017;4:31 [FREE Full text] [CrossRef] [Medline]
  87. Gouveia R, Karapanos E, Hassenzahl M. How Do We Engage With Activity Trackers? A Longitudinal Study of Habito. In: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 2015 Presented at: UbiComp'15; September 7-11, 2015; Osaka, Japan p. 1305-1316. [CrossRef]
  88. WRAL TechWire. 2018. Survey says: Wearables use growing, accuracy top priority.   URL: https://www.wraltechwire.com/2018/11/30/survey-says-wearables-use-growing-accuracy-top-priority/ [accessed 2019-05-05]
  89. Arthur C. The Guardian. 2014. Your smartphone's best app? Battery life, say 89% of Britons.   URL: https://www.theguardian.com/technology/2014/may/21/your-smartphones-best-app-battery-life-say-89-of-britons [accessed 2019-05-05]
  90. PR Newswire. New Survey from TE Connectivity Uncovers America's Desire for Wearables.   URL: https://www.prnewswire.com/news-releases/new-survey-from-te-connectivity-uncovers-americas-desire-for-wearables-300089552.html [accessed 2019-05-05]
  91. Hester J, Peters T, Yun T, Peterson R, Skinner J, Golla B. Amulet: An Energy-Efficient, Multi-Application Wearable Platform. In: Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems. 2016 Presented at: SenSys'16; November 14-16, 2016; Stanford, CA, USA p. 216-229. [CrossRef]
  92. Sorber J, Shin M, Peterson R, Cornelius C, Mare S, Prasad A. An Amulet for Trustworthy Wearable mHealth. In: Proceedings of the Twelfth Workshop on Mobile Computing Systems & Applications. 2012 Presented at: HotMobile '12; February 28-29, 2012; San Diego, California. [CrossRef]
  93. Merck C, Maher C, Mirtchouk M, Zheng M, Huang Y, Kleinberg S. Multimodality sensing for eating recognition. In: Proceedings of the 10th EAI International Conference on Pervasive Computing Technologies for Healthcare. 2016 Presented at: PervasiveHealth'16; May 16-19, 2016; Cancun, Mexico p. 130-137. [CrossRef]
  94. Willett W. Nutritional Epidemiology. Third Edition. Oxford, England: Oxford University Press; 2012.
  95. Trabulsi J, Schoeller DA. Evaluation of dietary assessment instruments against doubly labeled water, a biomarker of habitual energy intake. Am J Physiol Endocrinol Metab 2001;281(5):E891-E899. [CrossRef]
  96. Hill RJ, Davies PS. The validity of self-reported energy intake as determined using the doubly labelled water technique. Br J Nutr 2001 Apr;85(4):415-430. [CrossRef] [Medline]
  97. Arab L, Estrin D, Kim DH, Burke J, Goldman J. Feasibility testing of an automated image-capture method to aid dietary recall. Eur J Clin Nutr 2011 Oct;65(10):1156-1162 [FREE Full text] [CrossRef] [Medline]
  98. Food and Agriculture Organization of the United Nations. 2018. Dietary Assessment: A Resource Guide to Method Selection and Application in Low Resource Settings.   URL: http://www.fao.org/3/i9940en/I9940EN.pdf [accessed 2019-10-10]
  99. Livingstone M, Black A. Markers of the validity of reported energy intake. J Nutr 2003 Mar;133(Suppl 3):895S-920S. [CrossRef] [Medline]
  100. Almiron-Roig E, Solis-Trapala I, Dodd J, Jebb SA. Estimating food portions. Influence of unit number, meal type and energy density. Appetite 2013 Dec;71:95-103 [FREE Full text] [CrossRef] [Medline]
  101. Subar AF, Freedman LS, Tooze JA, Kirkpatrick SI, Boushey C, Neuhouser ML, et al. Addressing current criticism regarding the value of self-report dietary data. J Nutr 2015 Dec;145(12):2639-2645 [FREE Full text] [CrossRef] [Medline]
  102. Natarajan L, Pu M, Fan J, Levine RA, Patterson RE, Thomson CA, et al. Measurement error of dietary self-report in intervention trials. Am J Epidemiol 2010 Oct 1;172(7):819-827 [FREE Full text] [CrossRef] [Medline]
  103. Stipanuk MH, Caudill MA. Biochemical, Physiological, and Molecular Aspects of Human Nutrition. Third Edition. Amsterdam, Netherlands: Elsevier; 2012.
  104. Freedman LS, Midthune D, Carroll RJ, Tasevska N, Schatzkin A, Mares J, et al. Using regression calibration equations that combine self-reported intake and biomarker measures to obtain unbiased estimates and more powerful tests of dietary associations. Am J Epidemiol 2011 Dec 1;174(11):1238-1245 [FREE Full text] [CrossRef] [Medline]
  105. Mossavar-Rahmani Y, Shaw PA, Wong WW, Sotres-Alvarez D, Gellman MD, Van Horn L, et al. Applying recovery biomarkers to calibrate self-report measures of energy and protein in the Hispanic Community Health Study/Study of Latinos. Am J Epidemiol 2015 Jun 15;181(12):996-1007 [FREE Full text] [CrossRef] [Medline]
  106. Holtz B, Vasold K, Cotten S, Mackert M, Zhang M. Health care provider perceptions of consumer-grade devices and apps for tracking health: a pilot study. JMIR Mhealth Uhealth 2019 Jan 22;7(1):e9929 [FREE Full text] [CrossRef] [Medline]
  107. Patel VJ. HealthIT. 2018. Individuals' Use of Online Medical Records and Technology for Health Needs.   URL: https://www.healthit.gov/sites/default/files/page/2018-03/HINTS-2017-Consumer-Data-Brief-3.21.18.pdf [accessed 2019-10-15]


3D: 3-dimensional
AMT: Amazon Mechanical Turk
EAU: eating action unit
EEG: electroencephalography
FFQ: food frequency questionnaire
FNDDS: Food and Nutrient Database for Dietary Studies
mHealth: mobile health
RGB: red green blue
SVM: support vector machine


Edited by G Eysenbach; submitted 01.06.19; peer-reviewed by A Lucas, P Lindgreen; comments to author 16.07.19; revised version received 07.09.19; accepted 24.09.19; published 04.12.19

Copyright

©Nabil Alshurafa, Annie Wen Lin, Fengqing Zhu, Roozbeh Ghaffari, Josiah Hester, Edward Delp, John Rogers, Bonnie Spring. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.12.2019.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.