Published on 28.11.2024 in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/54557.
Artificial Intelligence Applications to Measure Food and Nutrient Intakes: Scoping Review


Authors of this article:

Jiakun Zheng1; Junjie Wang2; Jing Shen3; Ruopeng An4

Review

1School of Economics and Management, Shanghai University of Sport, Shanghai, China

2School of Kinesiology and Health Promotion, Dalian University of Technology, Dalian, China

3Department of Physical Education, China University of Geosciences (Beijing), Beijing, China

4Silver School of Social Work, New York University, New York, NY, United States

*these authors contributed equally

Corresponding Author:

Jiakun Zheng, PhD

School of Economics and Management

Shanghai University of Sport

399 Changhai Road, Yangpu District

Shanghai, 200438

China

Phone: 86 13817507993

Email: zhengjiakun07@163.com


Background: Accurate measurement of food and nutrient intake is crucial for nutrition research, dietary surveillance, and disease management, but traditional methods such as 24-hour dietary recalls, food diaries, and food frequency questionnaires are often prone to recall error and social desirability bias, limiting their reliability. With the advancement of artificial intelligence (AI), there is potential to overcome these limitations through automated, objective, and scalable dietary assessment techniques. However, the effectiveness and challenges of AI applications in this domain remain inadequately explored.

Objective: This study aimed to conduct a scoping review to synthesize existing literature on the efficacy, accuracy, and challenges of using AI tools in assessing food and nutrient intakes, offering insights into their current advantages and areas of improvement.

Methods: This review followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. A comprehensive literature search was conducted in 4 databases—PubMed, Web of Science, Cochrane Library, and EBSCO—covering publications from the databases’ inception to June 30, 2023. Studies were included if they used modern AI approaches to assess food and nutrient intakes in human subjects.

Results: The 25 included studies, published between 2010 and 2023, involved sample sizes ranging from 10 to 38,415 participants. These studies used a variety of input data types, including food images (n=10), sound and jaw motion data from wearable devices (n=9), and text data (n=4), with 2 studies combining multiple input types. AI models applied included deep learning (eg, convolutional neural networks), machine learning (eg, support vector machines), and hybrid approaches. Applications were categorized into dietary intake assessment, food detection, nutrient estimation, and food intake prediction. Food detection accuracies ranged from 74% to 99.85%, and nutrient estimation errors varied between 10% and 15%. For instance, the RGB-D (Red, Green, Blue-Depth) fusion network achieved a mean absolute error of 15% in calorie estimation, and a sound-based classification model reached up to 94% accuracy in detecting food intake based on jaw motion and chewing patterns. In addition, AI-based systems provided real-time monitoring capabilities, improving the precision of dietary assessments and demonstrating the potential to reduce recall bias typically associated with traditional self-report methods.

Conclusions: While AI demonstrated significant advantages in improving accuracy, reducing labor, and enabling real-time monitoring, challenges remain in adapting to diverse food types, ensuring algorithmic fairness, and addressing data privacy concerns. The findings suggest that AI has transformative potential for dietary assessment at both individual and population levels, supporting precision nutrition and chronic disease management. Future research should focus on enhancing the robustness of AI models across diverse dietary contexts and integrating biological sensors for a holistic dietary assessment approach.

J Med Internet Res 2024;26:e54557

doi:10.2196/54557


Measuring food and nutrient intake is foundational in nutrition research, dietary surveillance, and clinical practice [1]. Traditional methods, such as 24-hour dietary recalls, food diaries, and food frequency questionnaires, have been the cornerstones of such endeavors [2]. However, these self-reported tools frequently encounter issues associated with recall error, where individuals inadvertently omit, underreport, or exaggerate certain food items or quantities [3]. Social desirability bias further complicates matters, with respondents potentially altering their reports to reflect what they perceive as more socially acceptable or healthier dietary habits [4]. While clinical measures in controlled environments, such as laboratories, offer higher accuracy, they have drawbacks [5]. These objective measures often entail labor-intensive processes, significant costs, and potential intrusiveness for participants [6]. Such constraints render them less suitable for large-scale, population-level studies or individuals seeking to personally monitor their food and nutrient intake for disease management and other health-related objectives [6]. In light of these challenges, there is an escalating interest in leveraging artificial intelligence (AI) to enhance the accuracy and feasibility of dietary intake assessment [7].

AI, a branch of computer science focusing on developing algorithms that simulate human cognitive functions, has shown transformative potential across diverse sectors [8]. In health-related research, AI’s ability to process vast amounts of data at incredible speeds and its adeptness at pattern recognition have driven substantial strides in medical imaging, predictive modeling of disease outbreaks, and personalized medicine [9,10]. In the context of dietary assessment, AI offers several distinct advantages. First, it can potentially mitigate the biases inherent in self-reported methods by using image recognition to identify and quantify food items with minimal input from the user [11]. Advanced machine learning algorithms can analyze photographs of meals and provide instant, objective assessments of portion sizes and nutrient content [11,12]. In addition to image-based methods, AI techniques also use sound, jaw motion from wearable devices, and text data for dietary assessment. These methods provide diverse approaches to capture dietary intake, enhancing the accuracy and comprehensiveness of assessments. Second, AI can offer continuous, real-time monitoring, bridging the temporal gap in methods like 24-hour recalls [13]. Finally, while laboratory-based clinical measures are costly and labor-intensive, once developed, AI-driven tools can be scaled up relatively inexpensively, making them more feasible for large population studies and individual dietary tracking [14]. Given these attributes, AI emerges as a promising candidate to revolutionize the landscape of food and nutrient intake measurement.

While numerous reviews have covered objective measures of dietary intake, our review specifically focuses on the application of AI technologies in this field. This scoping review provides a comprehensive synthesis of recent advancements, highlights the unique challenges faced by AI methodologies, and identifies critical gaps that future research should address. Our work adds to the existing literature by providing a detailed analysis of AI’s role in improving the accuracy and efficiency of dietary assessment.

To the best of our knowledge, a comprehensive scoping review that delves into the applications of AI for measuring food and nutrient intakes has not yet been conducted. This gap in the literature underlines the novelty and urgency of our investigation. The primary objective of this review is to explore and map out the current landscape of AI applications in dietary assessment, detailing methodologies, tools, and their associated findings.

This endeavor holds transformative potential for several reasons. First, by consolidating and synthesizing the vast yet dispersed body of knowledge, researchers, clinicians, and policy makers can gain a cohesive understanding of the current state-of-the-art and its implications for the future. Second, the review will spotlight any existing limitations or gaps in the current AI methodologies, paving the way for targeted advancements in technology and research design. Finally, given the paramount importance of accurate dietary assessment in myriad health outcomes and policy decisions, our findings can directly inform best practices, promote technology adoption in clinical and research settings, and guide future funding and priorities in technological and nutritional research sectors.


Methods

Overview

This scoping review followed the guidelines of the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews; see Multimedia Appendix 1) [15].

Study Selection Criteria

Predefined inclusion and exclusion criteria were established and applied to all identified studies during the screening process. Textbox 1 provides a detailed overview of the inclusion and exclusion criteria, outlining the study characteristics considered for eligibility in this review.

Textbox 1. Inclusion and exclusion criteria for study selection.

Inclusion criteria:

  • Study design: Experimental studies (eg, randomized controlled trials [RCTs], pre-post interventions) and observational studies (eg, cross-sectional, longitudinal).
  • Analytic approach: Modern AI approaches, including machine learning (ML), deep learning (DL), and reinforcement learning (RL).
  • Participants: Individuals of all ages.
  • Data type: Input data, including food images, plate images, etc.
  • Outcome: Measures on food and nutrient intakes.
  • Article type: Original, empirical, peer-reviewed journal publications.
  • Language: Articles written in English.
  • Search time frame: From the inception of electronic bibliographic databases to June 30, 2023.

Exclusion criteria

  • Study design: Studies that do not involve human subjects, observational or experimental design.
  • Analytic approach: Studies using rule-based (“hard-coded”) approaches instead of example-based ML, DL, or RL.
  • Participants: Non-human subjects.
  • Data type: Studies not using dietary input data.
  • Outcome: Studies without outcomes related to food and nutrient intakes.
  • Article type: Letters, editorials, study or review protocols, case reports, or review articles.
  • Language: Non–English-language articles.
  • Search time frame: Studies published after June 30, 2023.

Search Strategy

A comprehensive search was performed in 4 electronic bibliographic databases: PubMed, Web of Science, Cochrane Library, and EBSCO. The search strategy used a combination of controlled vocabulary (eg, MeSH terms in PubMed) and free-text keywords. The search terms were structured around two main concepts: (1) AI and (2) nutrition or dietary intake. The AI-related terms included: “artificial intelligence,” “machine learning,” “deep learning,” “neural networks,” “natural language processing,” “computer vision,” “algorithms,” “data mining,” “big data,” “predictive modeling,” and “automated pattern recognition.” The nutrition-related terms included: “nutrition,” “dietetics,” “nutritional sciences,” “diet,” “dietary behavior,” “beverage intake,” “food intake,” “nutrient intake,” and “healthy eating.” These keywords were combined using Boolean operators (AND, OR) to ensure a comprehensive search. The complete search strategy, including database-specific modifications and detailed search strings, is provided in Multimedia Appendix 2. After the initial search, 2 coauthors independently screened the titles and abstracts for the articles found through the keyword search, obtained potentially relevant articles, and reviewed their full texts. The inter-rater agreement between these two authors was evaluated using Cohen κ (κ=0.85). Disagreements were resolved through discussion.
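To make the query construction concrete, the short sketch below (an illustration only; the exact, database-specific strings are provided in Multimedia Appendix 2) shows how the two concept blocks can be combined, with OR within a concept and AND across concepts.

```python
# Sketch: assembling the Boolean query from the two concept blocks described above.
# The database-specific strings (eg, MeSH expansions) are in Multimedia Appendix 2;
# this only illustrates the OR-within / AND-across combination of terms.

ai_terms = [
    "artificial intelligence", "machine learning", "deep learning", "neural networks",
    "natural language processing", "computer vision", "algorithms", "data mining",
    "big data", "predictive modeling", "automated pattern recognition",
]
nutrition_terms = [
    "nutrition", "dietetics", "nutritional sciences", "diet", "dietary behavior",
    "beverage intake", "food intake", "nutrient intake", "healthy eating",
]

def or_block(terms):
    """Join quoted terms with OR inside parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = f"{or_block(ai_terms)} AND {or_block(nutrition_terms)}"
print(query)
```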

Data Extraction and Synthesis

The following methodological and outcome variables were collected from each study using a standardized data extraction form: authors, year of publication, country or region, study objective, sample size, sample characteristics, AI models used, tasks and applications, type of input data, outcome measures, and perceived usefulness of AI technologies. No meta-analysis was feasible, given the substantial heterogeneity of the models, outcome measures, and applications. Therefore, we synthesized the study findings narratively and categorized them into distinct themes.


Results

Identification of Studies

Figure 1 illustrates the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram, outlining the structured literature search and selection procedure. The initial database search identified 6132 articles. After removing duplicates, 5499 unique articles were retained for preliminary screening based on their titles and abstracts. From this collection, 5456 articles were evaluated as irrelevant and, consequently, excluded from the review. Applying the study selection criteria to the remaining 43 articles resulted in the further exclusion of 18 studies due to various reasons, including lack of AI technology adoption (n=7), absence of food and nutrient intake measurements (n=6), being a commentary rather than original empirical research (n=3), and a focus on smartphone-based apps (n=2). Ultimately, 25 studies met the relevance criteria and were included in the review [12,14,16-38].

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram illustrating the study selection process.

Study Characteristics

Table 1 reports the characteristics, type of input data, outcome measures, and main findings of the 25 studies incorporated in the review (more details in Multimedia Appendix 3). The studies spanned a range of publication years, with the earliest appearing in 2010 [16] and single studies published in 2013 [17], 2015 [18], and 2023 [38]. Publications in later years were more frequent, with 2 studies each in 2016 [19,20], 2018 [21,22], and 2020 [14,27], 5 in 2019 [12,23-26], 3 in 2021 [28-30], and 7 in 2022 [31-37]. The geographical spread of the studies was diverse, with research conducted in several different countries: 14 in the United States [12,16,17,19-22,24-27,30,31,37], 4 in Switzerland [14,18,23,29], 2 in France [32,36], and 1 each in Canada [33], China [38], Denmark [34], the Philippines [35], and Slovenia [28].

Table 1. Geographic location, sample size, sample characteristics, artificial intelligence models, type of input data, task, outcome measures, and main findings in the studies included in the review.
| Author, year | Country or region | Sample size | Sample characteristics | AI^a models | Type of input data | Task | Outcome measures |
|---|---|---|---|---|---|---|---|
| Lopez-Meyer et al, 2010 [16] | United States | 18 | Healthy adults (BMI: 28.01, SD 6.35) | SVM^b, RBFk^c | Sound, strain signal | Classification | Food intake |
| Fontana et al, 2013 [17] | United States | 12 | Healthy adults (BMI: 24.39, SD 3.81) | RF^d | Jaw motion signal, hand gesture signal, body acceleration | Classification | Food intake |
| Anthimopoulos et al, 2015 [18] | Switzerland | 144 | Images of dishes | CV^e, SVM | Image | Regression | Carbohydrate counting |
| Farooq and Sazonov, 2016 [19] | United States | 10 | Healthy adults (BMI: 27.87, SD 5.51) | SVM, DT^f | Jaw motion signal, body acceleration signal | Classification | Food intake |
| Hezarjaribi et al, 2016 [20] | United States | 10 | —^g | SRM^h, NLP^i, SMM^j | Audio signal | Regression | Calorie intake |
| Goldstein et al, 2018 [21] | United States | 12 | Adults with overweight/obesity (BMI: 33.60, SD 5.66) | RF, DT, Logit.Boot, BN^k, Bagging, Random subspace | Text | Regression | Dietary lapses |
| Hezarjaribi et al, 2018 [22] | United States | 30 | — | NLP, LA^l | Audio signal | Regression | Calorie intake |
| Lu et al, 2019 [23] | Switzerland | 644 | Meal images (pixel: 640×480) | MTNnet^m, DTM^n, RANSAC algorithm | Image | Regression | Nutrient intake |
| Fang et al, 2019 [12] | United States | 4190 | Food images (pixel: 224×224) | GAN^o, CNN^p | Image | Regression | Food energy |
| Jia et al, 2019 [24] | United States | 38,415 | Images (pixel: 640×480) | CNN | Image | Regression | Dietary assessment |
| Chin et al, 2019 [25] | United States | 567 | Food descriptions | LASSO^q, Ridge, FFNN^r, XGB^s models | Text | Regression | Amount of lactose |
| Farooq et al, 2019 [26] | United States | 40 | Healthy adults (BMI: 26.1, SD 5.2) | NNC^t | Hand gesture, jaw motion, body acceleration | Classification | Food intake |
| Heremans et al, 2020 [27] | United States | 126 | Adults with dyspepsia | ANN^u | Heart rate variability signal | Classification | Food intake |
| Lu et al, 2021 [14] | Switzerland | 644 | Meal images (pixel: 640×480) | MTCNet^v, FSLBC^w, 3D-SCA^x | Image | Regression | Nutrient intake |
| Mezgec and Koroušić Seljak, 2021 [28] | Slovenia | 520 | Food images (pixel: 512×512) | DNN^y | Image | Classification | Dietary assessment |
| Papathanail et al, 2021 [29] | Switzerland | 866 | Meal images (pixel: 640×480) | CNN, PSPNet^z, DeepLabv3 network | Image | Regression | Energy, nutrient intake |
| Taylor et al, 2021 [30] | United States | 34 | Healthy adults (mean BMI: 24) | CNN, SMM | Text, voice data | Regression | Energy intake |
| Ghosh and Sazonov, 2022 [31] | United States | 17 | Adolescents and adults | Time-CNN, ResNet^aa, FCN^ab, IM^ac, MLP^ad | Accelerometer, optical sensor data | Classification | Food intake |
| Van Wymelbeke-Delannoy et al, 2022 [32] | France | 22,544 | Dish images | DNN | Food image | Regression | Food item |
| Pfisterer et al, 2022 [33] | Canada | 689 | Plate images (pixel: 640×480) | Deep-CNN | Plate image | Regression | Food intake |
| Pedersen et al, 2022 [34] | Denmark | 100 | Adults with normal weight | RF | Psychophysiological responses | Regression | Food intake |
| Siy Van et al, 2022 [35] | Philippines | 618 | Children | RF, SVM, LDA^ae, LR^af | Text | Regression | Undernutrition |
| Granal et al, 2022 [36] | France | 375 | Adults with chronic kidney disease | BN, BTANN^ag | Text | Regression | Dietary potassium intake |
| Nguyen et al, 2022 [37] | United States | 36 | Adolescents | Pop-socket | Image, text | Regression | Dietary intake |
| Shao et al, 2023 [38] | China | 5920 | Food images | RGB-D^ah fusion network | Image | Regression | Energy, nutrient intake |

^a AI: artificial intelligence.
^b SVM: support vector machine.
^c RBFk: radial basis function kernels.
^d RF: random forest.
^e CV: computer vision.
^f DT: decision tree.
^g Not applicable.
^h SRM: speech recognition model.
^i NLP: natural language processing.
^j SMM: string matching module.
^k BN: Bayesian network.
^l LA: Levenshtein algorithm.
^m MTNnet: multi-task neural network.
^n DTM: Delaunay triangulation method.
^o GAN: generative adversarial networks.
^p CNN: convolutional neural network.
^q LASSO: least absolute shrinkage and selection operator.
^r FFNN: feed-forward neural network.
^s XGB: eXtreme gradient boosting.
^t NNC: neural network classifier.
^u ANN: artificial neural network.
^v MTCNet: multi-task contextual network.
^w FSLBC: few-shot learning-based classifier.
^x SCA: surface construction algorithm.
^y DNN: deep neural network.
^z PSPNet: pyramid scene parsing network.
^aa ResNet: residual neural network.
^ab FCN: fully convolutional neural network.
^ac IM: inception network.
^ad MLP: multilayer perceptron.
^ae LDA: linear discriminant analysis.
^af LR: logistic regression.
^ag BTANN: Bayesian tree-augmented naive network.
^ah RGB-D: Red, Green, Blue-Depth.

The studies varied in sample sizes, ranging from 10 to 38,415. Specifically, 10 studies had sample sizes between 10 and 99 [16,17,19-22,26,30,31,37], 3 had between 100 and 199 [18,27,34], and the remaining 12 had sample sizes exceeding 300. Among the 25 studies, while all involved human subjects, 10 studies focused on analyzing food images, dish images, or plate images to estimate dietary intake [12,14,18,23,24,28,29,32,33,38], 4 targeted healthy adults dealing with obesity or overweight [16,19,21,26], 3 focused on adults with normal weight [17,30,34], 3 engaged with children and adolescents [31,35,37], and 2 addressed adults with diseases [27,36]. Over the years, there have been notable advancements in AI-based dietary assessment. Early studies primarily focused on developing basic image recognition algorithms. More recent studies have integrated advanced machine learning models, such as deep learning and convolutional neural networks, which have significantly improved the accuracy of food recognition and nutrient estimation.

Among the 25 studies, 10 used image data, 9 used sound or jaw motion data from wearable devices, 4 used text data, and the remaining 2 combined multiple types of input data for dietary assessment. We classified the applications into 4 categories: dietary intake assessment, food detection, nutrient estimation, and food intake prediction.

Applications in Dietary Intake Assessment

Our review identified several critical steps involved in the processing of dietary intake assessment systems, specifically for image-based methods. These steps include (1) identifying images with food, (2) identifying the foods, (3) separating the foods into separate parts, (4) estimating portion sizes served and remaining to estimate intake, and (5) estimating nutrient intake. Each of these steps involves distinct AI methodologies with varying degrees of accuracy and potential errors.

Identifying Images With Food

AI models, particularly convolutional neural networks (CNNs), are widely used for recognizing the presence of food in images. Studies, such as those by Fang et al (2019) [12] and Jia et al (2019) [24], have demonstrated high accuracy in detecting food presence using end-to-end image-based automatic food energy estimation techniques and real-world egocentric images, respectively.
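As a minimal illustration of this step (our own sketch, not the exact architecture or data of the cited studies), the snippet below fine-tunes a pretrained CNN as a binary food versus no-food classifier; the dataset path, backbone choice, and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for the pretrained backbone.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Hypothetical folder layout: images/train/food and images/train/no_food
train_set = datasets.ImageFolder("images/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: food vs no food

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one illustrative pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```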

Identifying the Foods

Once food is identified in an image, the next step is to classify and recognize different food items. Techniques such as support vector machines (SVMs) and deep learning models, including GANs (Generative Adversarial Networks) and advanced CNNs, are used for this purpose. For example, the GoCARB system developed by Anthimopoulos et al [18] used computer vision to estimate carbohydrate content by recognizing different food items from smartphone images.

Separating Foods Into Separate Parts

Segmenting individual food items within an image is crucial for accurate portion size estimation. Techniques such as image segmentation using deep neural networks (DNNs) have been effective in this regard. The study by Mezgec and Koroušić Seljak [28] showcased the use of DNNs for image-based dietary assessment with a classification accuracy of 86.72%.
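The sketch below illustrates the segmentation step with an off-the-shelf pretrained network; the reviewed systems fine-tune such models on food-specific datasets, which is omitted here, and the input file name is a placeholder.

```python
import torch
from PIL import Image
from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

# Pretrained general-purpose segmentation network; food-specific retraining
# used by the reviewed systems is not shown here.
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()

preprocess = weights.transforms()
image = Image.open("meal.jpg").convert("RGB")  # hypothetical meal photograph
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]  # (num_classes, H, W) per-pixel scores
mask = logits.argmax(0)              # per-pixel class labels
print(mask.shape, mask.unique())
```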

Estimating Portion Sizes Served and Remaining

Estimating the portion sizes of served and remaining food requires precise volume and area measurements, which can be challenging due to varying presentation and occlusion of food items. AI models using RGB-D (Red, Green, Blue-Depth) imagery, as seen in the work by Shao et al [38], have shown promise in improving the precision of such estimations by using depth information to enhance the accuracy of food volume assessments.
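A simplified sketch of depth-based portion estimation follows: given a food segmentation mask, a depth map, and the depth of the empty plate surface, food volume is approximated by summing per-pixel height times per-pixel footprint. The camera parameters and synthetic values are illustrative assumptions; deployed systems rely on calibrated cameras and learned correction models.

```python
import numpy as np

def estimate_volume_cm3(depth_m, mask, plate_depth_m, fx, fy):
    """depth_m: (H, W) depth in meters; mask: boolean food mask;
    plate_depth_m: depth of the empty plate surface; fx, fy: focal lengths (px)."""
    height_m = np.clip(plate_depth_m - depth_m, 0, None)  # food height above plate
    # Per-pixel footprint on the plate: (z / fx) * (z / fy) square meters.
    pixel_area_m2 = (depth_m / fx) * (depth_m / fy)
    volume_m3 = np.sum(height_m[mask] * pixel_area_m2[mask])
    return volume_m3 * 1e6  # cubic centimeters

# Toy example with synthetic values
depth = np.full((480, 640), 0.50)   # plate surface 50 cm from the camera
depth[200:280, 300:380] = 0.47      # a 3 cm tall food region
mask = depth < 0.50
print(round(estimate_volume_cm3(depth, mask, 0.50, 600.0, 600.0), 1), "cm^3")
```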

Estimating Nutrient Intake

The final step involves estimating the nutrient intake based on the identified and quantified food items. This step often leverages databases such as the US Department of Agriculture (USDA) nutritional database to map food items to their nutrient profiles. The integration of AI for this purpose is exemplified by systems like the S2NI platform, which combines speech recognition and natural language processing to monitor dietary composition from spoken data, achieving high accuracy in nutrient computation.
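The mapping step can be illustrated with the toy sketch below, in which identified foods and estimated masses are converted to energy and macronutrients using per-100-g reference values; the entries shown are illustrative placeholders rather than actual USDA records.

```python
# Toy nutrient lookup: per-100-g values below are illustrative placeholders.
NUTRIENTS_PER_100G = {
    "white rice, cooked": {"kcal": 130, "carb_g": 28.0, "protein_g": 2.7, "fat_g": 0.3},
    "grilled chicken":    {"kcal": 165, "carb_g": 0.0,  "protein_g": 31.0, "fat_g": 3.6},
}

def estimate_intake(items):
    """items: list of (food_name, estimated_grams) pairs from the earlier steps."""
    totals = {"kcal": 0.0, "carb_g": 0.0, "protein_g": 0.0, "fat_g": 0.0}
    for name, grams in items:
        per100 = NUTRIENTS_PER_100G[name]
        for key in totals:
            totals[key] += per100[key] * grams / 100.0
    return totals

print(estimate_intake([("white rice, cooked", 180), ("grilled chicken", 120)]))
```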

Non–image-based dietary assessment methods, including those using sound, jaw motion from wearable devices, and text analysis, can also be categorized similarly. These methods contribute to various steps, particularly in identifying food intake and estimating nutrient content. For instance, the use of jaw motion signals analyzed by SVMs, as studied by Lopez-Meyer et al [16], provides high accuracy in detecting food intake.

Applications in Food Detection

Food detection refers to the identification and recognition of food items using AI technologies. AI applications have become increasingly important in automating food detection, providing foundational advancements crucial for accurate nutrient estimation and food intake prediction. SVMs and random forests are highlighted as prevalent machine learning models across the studies, aiming to achieve high food detection accuracy [16,17,19]. One random forest study emphasized the importance of time- and frequency-domain features for detecting food intake with wearable sensor systems, focusing predominantly on jaw motion and accelerometer signals [17].
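To illustrate this sensor-based approach (with generic features and synthetic data, not the cited studies' exact pipelines), the sketch below summarizes fixed-length signal windows with simple time- and frequency-domain features and classifies them with a random forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(window, fs=100):
    """Generic time- and frequency-domain summary of one signal window."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return [window.mean(), window.std(), np.ptp(window),
            np.mean(np.abs(np.diff(window))), dominant]

rng = np.random.default_rng(0)
fs, n_windows, win_len = 100, 200, 500  # 5-second windows at 100 Hz
X, y = [], []
for i in range(n_windows):
    eating = i % 2
    t = np.arange(win_len) / fs
    # Synthetic stand-in: chewing adds a ~1.5 Hz oscillation to baseline noise.
    signal = rng.normal(0, 0.3, win_len) + eating * np.sin(2 * np.pi * 1.5 * t)
    X.append(window_features(signal, fs))
    y.append(eating)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```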

Another essential facet in this AI-infused dietary landscape is the integration of image-based assessments [28,33]. The development and validation of deep neural networks like NutriNet for food and beverage image recognition have showcased the ability of image-based approaches to identify multiple food or beverage items in a single image. Moreover, incorporating FCNs and deep residual networks (ResNet) magnifies the efficacy of segmenting food images, presenting a robust method in automated dietary assessments. Notably, Pfisterer et al [33] offered insights into the application of deep convolutional encoder-decoder food networks with depth-refinement (EDFN-D) in long-term care settings, providing an automated imaging system for quantifying food intake with high precision and objectivity, addressing the existing limitations in these settings [33].

A noticeable trend across the studies is the use of wearable and mobile devices, demonstrating the integration of technology with daily human activities for real-time and accurate data collection [19,24,32,37]. Wearable devices, such as the Automatic Ingestion Monitor (AIM) and other novel devices with sensors on the temporalis muscle and accelerometers, have shown potential in reducing the influence of motion artifacts and speech on food intake detection accuracy [19]. Furthermore, mobile AI technologies, such as FRANI (Food Recognition Assistance and Nudging Insights), illustrate their feasibility and reliability in resource-constrained settings, offering a comparable alternative to traditional methods like weighed records (WRs) [37].

DNN and CNN are central in recognizing and detecting food items from images, providing an automated approach to food detection and segmentation. The FoodIntech system, using a DNN-based approach, has demonstrated reliability in recognizing a variety of dishes and assessing food consumption [32]. Similarly, algorithms designed for egocentric images from wearable cameras have achieved substantial accuracy in food detection, addressing concerns related to data processing burdens and privacy [24].

Combining AI with RGB-D imagery is an evolving approach, showing promise in refining the precision of food nutrition estimation. The use of RGB-D fusion networks has revealed advancements in performing multimodal and multiscale feature fusion, offering a refined accuracy in nutrient analysis [38]. This approach successfully estimated calories and mass with a lower percentage mean absolute error and effectively visualized the estimation results of 4 nutrients [38].

Despite the advancements, there is a discernible disparity in the reported accuracy and reliability among the studies, with accuracy ranging from 74% to 99.85% [19,24]. This variance reflects the diverse methodologies, sensor modalities, ML algorithms, and the nature of features extracted for analysis. The ongoing refinements in methods and technologies showcase the evolving nature of AI applications in food detection, signaling a step forward in automating dietary assessment in varied environments and demographic settings.

Applications in Nutrient Estimation

AI has been used to address the challenges associated with accurate nutrient intake assessment and dietary management for various medical conditions and patient demographics. The GoCARB system [18] exemplifies how AI can assist individuals with type 1 diabetes in carbohydrate counting, using computer vision on smartphone images to automate the estimation process, hence aiding in optimal insulin dosage estimations. This application relies on the segmentation and recognition of food items, calculating the carbohydrate content based on food volumes and the USDA nutritional database, demonstrating a mean absolute percentage error in carbohydrate estimation of approximately 10%.
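The underlying arithmetic can be sketched as follows: estimated volume is converted to mass via an assumed density, mass to carbohydrate via a per-100-g value, and performance is summarized with the mean absolute percentage error. All numbers below are illustrative placeholders, not GoCARB outputs.

```python
def carbs_from_volume(volume_cm3, density_g_per_cm3, carb_per_100g):
    """Volume -> mass (assumed density) -> carbohydrate grams."""
    grams = volume_cm3 * density_g_per_cm3
    return grams * carb_per_100g / 100.0

def mape(estimates, references):
    """Mean absolute percentage error against reference values."""
    return 100.0 * sum(abs(e - r) / r for e, r in zip(estimates, references)) / len(estimates)

est = [carbs_from_volume(220, 0.72, 28.0),   # eg, a rice portion
       carbs_from_volume(150, 0.60, 15.0)]   # eg, a pasta portion
ref = [48.0, 14.0]                            # hypothetical dietitian reference values
print([round(x, 1) for x in est], "g carbohydrate; MAPE =", round(mape(est, ref), 1), "%")
```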

In addressing the nutrition assessment needs of hospitalized patients, an AI-based system has been developed [23,29] that uses RGB-D image pairs to estimate nutrient intake. These applications offer a means to counter malnutrition risks in hospital settings by delivering more accurate and automated nutrient intake assessments. The systems segment images into different food components, estimate the volume consumed, and calculate energy and macronutrient intake, showing a 15% estimation error [23] and improved agreement with expert estimations compared to standard clinical procedures [29].

Efforts have also been made to estimate food energy values using GAN architecture [12]. By mapping food images to their energy distributions, the technology has shown promise in improving the accuracy of dietary assessments, with an average error of 209 kcal per eating occasion in a real-world study setting.

In the context of 24-hour food recalls, machine learning models and database matching have been instrumental in estimating nutrients not directly outputted by specific dietary assessment tools [25]. For instance, lactose was relatively accurately estimated using models like XGB regressor and database matching methods.

Meanwhile, studies on the interplay between behavioral and physiologic variables in predicting food intake [34] have provided foundational insights. However, the predictive capability of combined or separate measures of food reward or biometric responses has not outperformed traditional models in clinical settings. The approach, however, lays the groundwork for further exploration of behavioral nutrition and personalized nutrition strategies.

Furthermore, the development of predictive tools leveraging AI for patients with chronic kidney disease has exhibited the potential to estimate dietary potassium intake, emphasizing the role of AI in clinical and therapeutic management [36]. This application has been noteworthy for its ability to classify dietary potassium intake into 3 classes of potassium excretion with 74% accuracy, focusing more on clinical characteristics and renal pathology than on the potassium content of the ingested food.

Using mobile platforms that incorporate speech and natural language processing to convert spoken data to nutrient information offers a lens into the transformative potential of voice-based solutions [20,22,30]. These solutions, such as S2NI, Speech2Health, and the COCO Nutritionist app, achieve substantial accuracy in computing calorie intake, emphasizing the importance of real-time and pervasive monitoring. They demonstrate an integrated approach to capture dietary information more frequently, revealing the user preference toward voice-based interfaces over text-based and image-based nutrition monitoring due to their ease of use and accessibility.
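A deliberately simple sketch of the text-processing step is given below: a transcribed description is split into quantity-food pairs and matched against a small calorie table. The cited systems use full speech recognition and NLP pipelines with large food databases; the parsing rules and values here are illustrative only.

```python
import re

# Illustrative per-serving calorie table (placeholder values).
KCAL_PER_SERVING = {"slice of bread": 80, "cup of milk": 150, "banana": 105}
NUMBER_WORDS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3}

def singularize(phrase):
    # Crude plural handling, good enough for this toy example.
    return " ".join(w[:-1] if w.endswith("s") and len(w) > 3 else w for w in phrase.split())

def calories_from_text(text):
    total = 0
    for part in re.split(r",| and ", text.lower()):
        m = re.search(r"(\d+|one|two|three|a|an)\s+(.+)", part.strip())
        if not m:
            continue
        qty_word, food = m.groups()
        qty = int(qty_word) if qty_word.isdigit() else NUMBER_WORDS[qty_word]
        food = singularize(food)
        for name, kcal in KCAL_PER_SERVING.items():
            if name in food:
                total += qty * kcal
                break
    return total

print(calories_from_text("I had two slices of bread and a cup of milk"))  # 2*80 + 150
```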

Applications in Food Intake Prediction

Food intake prediction involves estimating the amount and type of food consumed based on detected items. Advancements in AI are significantly shaping the landscape of food intake prediction by offering various innovative solutions and techniques. For instance, ML techniques in predicting dietary lapses during weight loss interventions have demonstrated the potential to augment adherence to dietary guidelines and offer real-time interventions, providing a comprehensive perspective on combining individual and group-level data to enrich predictions [21].

The adaptability and efficiency of ML are further highlighted in the studies focusing on detecting food intake using various sensor technologies and algorithms. Developing and validating sensor-based food intake detection methods, such as AIM, have illustrated high accuracy and reliability, presenting a promising future for food intake monitoring in unconstrained environments [26,31]. SVMs have been effectively used in monitoring ingestive behavior, yielding up to 94% accuracy in detecting food intake by analyzing chews and swallows [16].

In particular, the utility of DL algorithms, like ResNet and Fully Convolutional Neural Network (FCN), is revealed to be paramount in differentiating food intake from other activities using sensor signals. The competitive performance of these algorithms indicates the significance of selecting appropriate methods for precise classifications in real-world scenarios, establishing their importance in the evolving field of dietary monitoring and health interventions [31].

The exploration of DNN in automatic food intake detection through dynamic analysis of heart rate variability has opened avenues for addressing meal-related disorders. The notable accuracy of DNN, especially in neuromodulation treatments for conditions like obesity and diabetes, establishes the potential of ML in contributing to varied health care settings [27].

Furthermore, the studies using ML algorithms like the random forest have provided a robust method for identifying and comparing nutritional risk, offering valuable insights into developing targeted nutritional interventions and effectively addressing undernutrition. Such approaches are crucial in considering local dietary culture and delivering more nuanced and culturally competent health care solutions [35].



Discussion

Principal Findings

The increasing intersection of AI with dietary assessment has emerged as a transformative trend, as evidenced by our scoping review. Our literature search revealed 25 pertinent studies published between 2010 and 2023. These studies spanned several nations, diverse demographics, and a spectrum of methodologies. At its core, AI has primarily been used in 3 domains: food detection, nutrient estimation, and food intake prediction. Machine learning models like SVMs and random forests and deep learning models like CNNs have proved instrumental in enhancing the accuracy of food detection and nutrient estimation, often integrated with wearable devices and mobile platforms. Another observation was the use of AI in designing user-friendly interfaces, such as voice-based inputs, to improve adherence to dietary tracking. User experience with AI-based dietary assessment tools varies, but studies indicate generally positive feedback regarding ease of use and convenience. Users appreciate the real-time feedback and reduced burden of manual input. However, there are concerns about accuracy and privacy. Enhanced user training and transparent data privacy policies could improve user trust and interaction with these tools. The collective findings underscore the potential of AI to revolutionize dietary assessment, providing robust accuracy and user-centric solutions. This amalgamation of technology and nutrition research addresses the inherent limitations of traditional methods and charts a path for more personalized, accurate, and real-time dietary assessments in varied settings.

As illustrated by the reviewed studies, integrating AI into food and nutrient intake assessments showcases a marked advancement over traditional methodologies commonly used in nutritional science [14,37]. Historically, methods such as 24-hour recalls, food frequency questionnaires, and dietary records have been the mainstay of dietary assessment [2]. While these methods have provided invaluable insights, they have inherent limitations like recall bias, inaccuracies stemming from self-reporting, and the logistical challenges of frequent, detailed data recording [3]. The reviewed studies, however, highlighted the significant potential of AI to alleviate some of these concerns. For instance, AI-backed systems such as FRANI have been shown to offer a reliable alternative to weighed records, which, although thorough, can be burdensome for participants [37]. Similarly, tools like the GoCARB system automate carbohydrate counting, which, if done manually, demands meticulous attention and can be prone to errors, especially for individuals with conditions like diabetes [18].

Furthermore, the versatility of AI applications across various nutritional assessments is evident from the reviewed literature. For instance, SVMs and random forests, when deployed in monitoring ingestive behaviors, have demonstrated high accuracy in detecting food intake by analyzing nuances such as chews and swallows [16]. This level of precision is difficult to attain through manual observation or self-reports. Applying DNNs to recognize food items from images underscores another leap, automating a process that traditionally demands human expertise. Furthermore, the intersection of AI with RGB-D imagery suggests an improved accuracy in nutrient analysis, an area where traditional methods may not always yield precise results [38]. However, it is crucial to note the variability in reported accuracy among studies, which underscores the importance of refining methodologies and recognizing the evolving nature of AI applications. Despite this, the current trajectory indicates that AI is poised to bring a paradigm shift in automating dietary assessment, melding accuracy with efficiency [36,37]. Wearable technology that detects food intake based on chews and swallows offers significant benefits in real-time dietary monitoring, particularly in clinical and research settings. These devices can be integrated with mobile applications and other wearable sensors to provide comprehensive dietary assessments. While continuous camera use may not be practical for all users, advancements in discreet wearable sensors and intermittent image capture can enhance user compliance and accuracy.

While AI’s promise in food and nutrient intake measurement is evident, its application comes with intrinsic challenges and limitations. The reviewed studies, as well as the broader literature, highlight some consistent concerns. First, the AI models heavily depend on the quality and breadth of training data [39]. A model trained on a limited dataset may not recognize diverse food items, particularly those from various global cuisines or those prepared using unique methods [40]. This can lead to inaccuracies in nutrient estimation. Common biases include algorithmic biases resulting from non-diverse training datasets that fail to represent global food diversity. In addition, limitations in image-based recognition systems often stem from varying image quality and presentation, which can affect the accuracy of food and nutrient estimations. The variability in food presentation, portion sizes, and the physical environment in which the food is captured (eg, lighting conditions) can pose challenges for image-based recognition systems [41,42]. Furthermore, while tools like FRANI and GoCARB show promise, they also underscore the current limitations in recognizing mixed dishes or deciphering layered foods with multiple ingredients [18,37]. It is also worth noting that AI systems, while reducing human biases, introduce computational biases that may arise from algorithmic designs or training datasets [43,44]. These challenges highlight the need for more comprehensive datasets and improved image processing techniques to enhance AI model reliability. Finally, a potential digital divide exists, where populations without access to advanced technology or those not adept at using it might be excluded from AI-based dietary assessments, thereby limiting its universal applicability [45,46].

Many AI-based dietary assessment tools rely on dietitians to validate and estimate dietary intake from images due to the complexities involved in accurate food identification and portion size estimation. With the constant addition of new food items, maintaining up-to-date nutrient databases is challenging. Some studies have focused narrowly on estimating energy intake or working with a limited set of foods under controlled conditions, which limits the generalizability of their findings. Future research should focus on developing scalable AI models that can handle a broader range of foods and integrate real-time updates to nutrient databases. In addition, enhancing the collaboration between AI technologies and dietitians can help improve the accuracy and applicability of these tools.

Current objective methods face significant limitations, including inaccuracies in nutrient composition tables, the complexity of multi-ingredient dishes, and variability in nutrient composition of commercially available foods. In addition, these methods do not account for individual metabolic differences in nutrient processing. Integrating biological sensors with AI technologies could offer a more definitive approach by providing real-time data on circulating nutrients and individual metabolic responses, thereby improving the accuracy of dietary assessments.

The sequential nature of AI-based dietary assessment introduces cumulative errors, where inaccuracies at each stage—from food detection to nutrient estimation—can compound, leading to significant overall errors. Biological sensors that measure circulating nutrients in real-time offer a promising solution to overcome these limitations, as they provide direct data on nutrient absorption and metabolism, reducing reliance on intermediate estimations and improving overall accuracy.

Our search strategy, while comprehensive, may not have captured all studies involving AI and dietary assessment. Despite significant advancements, several gaps remain in the application of AI for dietary assessment. Future research should focus on enhancing the diversity of training datasets to reduce algorithmic biases and improve the accuracy of AI models in recognizing a wide variety of food items. In addition, integrating real-time metabolic data with dietary assessments could offer more comprehensive insights into individual nutritional statuses. Among the AI tools evaluated, image-based recognition systems like the GoCARB system are highly effective for carbohydrate counting in diabetes management, while wearable devices monitoring jaw motion offer promising real-time intake data, particularly useful in clinical settings.

Ethical considerations in AI-based dietary assessment are paramount. Data privacy concerns arise from the extensive personal data required for accurate assessments, necessitating robust security measures and transparent consent processes. Algorithmic biases can lead to inaccuracies and unfair outcomes, highlighting the need for diverse training datasets. In addition, the digital divide poses a significant challenge, as populations without access to advanced technologies may be excluded from the benefits of AI. Addressing these issues requires comprehensive strategies, including inclusive technology design and stringent ethical standards in data handling and algorithm development.

As AI continues to evolve, there is vast potential for revolutionary enhancements in dietary and nutrient intake measurement. Based on current trajectories in nutrition science and AI advancements, we might anticipate a future where AI systems can recognize food items with high precision and factor in variables like cooking methods, regional variations, and the bioavailability of nutrients. These AI systems could be trained on increasingly diverse datasets, capturing the nuances of global diets and potentially integrating real-time metabolic and physiological data from wearable devices to provide a more comprehensive view of an individual’s nutrient absorption [47,48]. AI could facilitate large-scale dietary assessment studies on a population level, helping researchers discern dietary patterns, nutrient deficiencies, and even epidemiological correlations faster and more accurately [49,50]. With the rise of precision nutrition, AI might enable personalized dietary recommendations, considering an individual's genetic, metabolic, and health profile [51]. This tailored approach could radically improve disease management, particularly for conditions like diabetes or cardiovascular diseases, where dietary interventions play a pivotal role [52].

Conclusion

In conclusion, the scoping review highlighted the burgeoning role of AI in advancing the measurement of food and nutrient intakes, with notable advancements in accuracy and efficiency compared to traditional methods. However, while the potential of AI in this domain is substantial, it is imperative to acknowledge its current limitations and areas requiring refinement. As the nexus between nutrition science and technology continues to strengthen, future research must focus on refining AI methodologies, ensuring their applicability across diverse populations, and integrating them into broader nutritional and health studies. This interdisciplinary collaboration promises a future where dietary assessments are accurate and instrumental in shaping individual and public health outcomes.

Acknowledgments

This research received no external funding.

Data Availability

Data sharing is not applicable to this article as no datasets were generated or analyzed during this study.

Authors' Contributions

JZ and RA contributed to conceptualization. JW contributed to the methodology. JS handled the software. JZ, JW, and JS performed validation. JW and JS conducted the formal analysis. JZ and RA conducted the investigation. RA handled the resources. JW and JS performed data curation. JZ and RA contributed to writing—original draft preparation. JW and JS contributed to writing—review and editing. JW performed visualization. RA performed supervision. RA contributed to project administration.

Conflicts of Interest

None declared.

Multimedia Appendix 1

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist (Note that our paper is a scoping review rather than a systematic review, so some criteria in the checklist may not apply and are thus omitted).

PDF File (Adobe PDF File), 80 KB

Multimedia Appendix 2

Database search algorithms.

DOCX File, 15 KB

Multimedia Appendix 3

Main findings in the studies included in the review.

DOCX File, 21 KB

  1. Kirkpatrick SI, Collins CE. Assessment of nutrient intakes: introduction to the special issue. Nutrients. 2016;8(4):184. [FREE Full text] [CrossRef] [Medline]
  2. Shim JS, Oh K, Kim HC. Dietary assessment methods in epidemiologic studies. Epidemiol Health. 2014;36:e2014009. [FREE Full text] [CrossRef] [Medline]
  3. Ravelli MN, Schoeller DA. Traditional self-reported dietary instruments are prone to inaccuracies and new approaches are needed. Front Nutr. 2020;7:90. [FREE Full text] [CrossRef] [Medline]
  4. Hebert JR, Clemow L, Pbert L, Ockene IS, Ockene JK. Social desirability bias in dietary self-report may compromise the validity of dietary intake measures. Int J Epidemiol. 1995;24(2):389-398. [CrossRef] [Medline]
  5. Masterton S, Hardman CA, Boyland E, Robinson E, Makin HE, Jones A. Are commonly used lab-based measures of food value and choice predictive of self-reported real-world snacking? An ecological momentary assessment study. Br J Health Psychol. 2023;28(1):237-251. [FREE Full text] [CrossRef] [Medline]
  6. Jobarteh ML, McCrory MA, Lo B, Sun M, Sazonov E, Anderson AK, et al. Development and validation of an objective, passive dietary assessment method for estimating food and nutrient intake in households in low- and middle-income countries: a study protocol. Curr Dev Nutr. 2020;4(2):nzaa020. [FREE Full text] [CrossRef] [Medline]
  7. Oliveira Chaves L, Gomes Domingos AL, Louzada Fernandes D, Ribeiro Cerqueira F, Siqueira-Batista R, Bressan J. Applicability of machine learning techniques in food intake assessment: a systematic review. Crit Rev Food Sci Nutr. 2023;63(7):902-919. [CrossRef] [Medline]
  8. Collins C, Dennehy D, Conboy K, Mikalef P. Artificial intelligence in information systems research: a systematic literature review and research agenda. International Journal of Information Management. 2021;60:102383. [FREE Full text] [CrossRef]
  9. Schork NJ. Artificial intelligence and personalized medicine. Cancer Treat Res. 2019;178:265-283. [FREE Full text] [CrossRef] [Medline]
  10. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL. Artificial intelligence in radiology. Nat Rev Cancer. 2018;18(8):500-510. [FREE Full text] [CrossRef] [Medline]
  11. Sudo K, Murasaki K, Kinebuchi T, Kimura S, Waki K. Machine learning-based screening of healthy meals from image analysis: system development and pilot study. JMIR Form Res. 2020;4(10):e18507. [FREE Full text] [CrossRef] [Medline]
  12. Fang S, Shao Z, Kerr DA, Boushey CJ, Zhu F. An end-to-end image-based automatic food energy estimation technique based on learned energy distribution images: protocol and methodology. Nutrients. 2019;11(4):877. [FREE Full text] [CrossRef] [Medline]
  13. Folson GK, Bannerman B, Atadze V, Ador G, Kolt B, McCloskey P, et al. Validation of mobile artificial intelligence technology-assisted dietary assessment tool against weighed records and 24-Hour recall in adolescent females in Ghana. J Nutr. 2023;153(8):2328-2338. [FREE Full text] [CrossRef] [Medline]
  14. Lu Y, Stathopoulou T, Vasiloglou MF, Christodoulidis S, Stanga Z, Mougiakakou S. An artificial intelligence-based system to assess nutrient intake for hospitalised patients. IEEE Trans. Multimedia. 2021;23:1136-1147. [CrossRef]
  15. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467-473. [FREE Full text] [CrossRef] [Medline]
  16. Lopez-Meyer P, Schuckers S, Makeyev O, Sazonov E. Detection of periods of food intake using Support Vector Machines. Annu Int Conf IEEE Eng Med Biol Soc. 2010;2010:1004-1007. [CrossRef] [Medline]
  17. Fontana JM, Farooq M, Sazonov E. Estimation of feature importance for food intake detection based on random forests classification. Annu Int Conf IEEE Eng Med Biol Soc. 2013;2013:6756-6759. [CrossRef] [Medline]
  18. Anthimopoulos M, Dehais J, Shevchik S, Ransford BH, Duke D, Diem P, et al. Computer vision-based carbohydrate estimation for type 1 patients with diabetes using smartphones. J Diabetes Sci Technol. 2015;9(3):507-515. [FREE Full text] [CrossRef] [Medline]
  19. Farooq M, Sazonov E. A novel wearable device for food intake and physical activity recognition. Sensors (Basel). 2016;16(7):1067. [FREE Full text] [CrossRef] [Medline]
  20. Hezarjaribi N, Reynolds CA, Miller DT, Chaytor N, Ghasemzadeh H. S2NI: a mobile platform for nutrition monitoring from spoken data. Annu Int Conf IEEE Eng Med Biol Soc. 2016;2016:1991-1994. [CrossRef] [Medline]
  21. Goldstein SP, Zhang F, Thomas JG, Butryn ML, Herbert JD, Forman EM. Application of machine learning to predict dietary lapses during weight loss. J Diabetes Sci Technol. 2018;12(5):1045-1052. [FREE Full text] [CrossRef] [Medline]
  22. Hezarjaribi N, Mazrouee S, Ghasemzadeh H. Speech2Health: a mobile framework for monitoring dietary composition from spoken data. IEEE J Biomed Health Inform. 2018;22(1):252-264. [CrossRef] [Medline]
  23. Lu Y, Stathopoulou T, Vasiloglou MF, Christodoulidis S, Blum B, Walser T, et al. An artificial intelligence-based system for nutrient intake assessment of hospitalised patients. Annu Int Conf IEEE Eng Med Biol Soc. 2019;2019:5696-5699. [CrossRef] [Medline]
  24. Jia W, Li Y, Qu R, Baranowski T, Burke LE, Zhang H, et al. Automatic food detection in egocentric images using artificial intelligence technology. Public Health Nutr. 2019;22(7):1168-1179. [FREE Full text] [CrossRef] [Medline]
  25. Chin EL, Simmons G, Bouzid YY, Kan A, Burnett DJ, Tagkopoulos I, et al. Nutrient estimation from 24-hour food recalls using machine learning and database mapping: a case study with lactose. Nutrients. 2019;11(12):3045. [FREE Full text] [CrossRef] [Medline]
  26. Farooq M, Doulah A, Parton J, McCrory MA, Higgins JA, Sazonov E. Validation of sensor-based food intake detection by multicamera video observation in an unconstrained environment. Nutrients. 2019;11(3):609. [FREE Full text] [CrossRef] [Medline]
  27. Heremans ERM, Chen AS, Wang X, Cheng J, Xu F, Martinez AE, et al. Artificial neural network-based automatic detection of food intake for neuromodulation in treating obesity and diabetes. Obes Surg. 2020;30(7):2547-2557. [CrossRef] [Medline]
  28. Mezgec S, Koroušić Seljak B. Deep neural networks for image-based dietary assessment. J Vis Exp. 2021;(169):e61906. [CrossRef] [Medline]
  29. Papathanail I, Brühlmann J, Vasiloglou MF, Stathopoulou T, Exadaktylos AK, Stanga Z, et al. Evaluation of a novel artificial intelligence system to monitor and assess energy and macronutrient intake in hospitalised older patients. Nutrients. 2021;13(12):4539. [FREE Full text] [CrossRef] [Medline]
  30. Taylor S, Korpusik M, Das S, Gilhooly C, Simpson R, Glass J, et al. Use of natural spoken language with automated mapping of self-reported food intake to food composition data for low-burden real-time dietary assessment: method comparison study. J Med Internet Res. 2021;23(12):e26988. [FREE Full text] [CrossRef] [Medline]
  31. Ghosh T, Sazonov E. A comparative study of deep learning algorithms for detecting food intake. Annu Int Conf IEEE Eng Med Biol Soc. 2022;2022:2993-2996. [CrossRef] [Medline]
  32. Van Wymelbeke-Delannoy V, Juhel C, Bole H, Sow AK, Guyot C, Belbaghdadi F, et al. A cross-sectional reproducibility study of a standard camera sensor using artificial intelligence to assess food items: the foodIntech project. Nutrients. 2022;14(1):221. [FREE Full text] [CrossRef] [Medline]
  33. Pfisterer KJ, Amelard R, Chung AG, Syrnyk B, MacLean A, Keller HH, et al. Automated food intake tracking requires depth-refined semantic segmentation to rectify visual-volume discordance in long-term care homes. Sci Rep. 2022;12(1):83. [FREE Full text] [CrossRef] [Medline]
  34. Pedersen H, Diaz LJ, Clemmensen KKB, Jensen MM, Jørgensen ME, Finlayson G, et al. Predicting food intake from food reward and biometric responses to food cues in adults with normal weight using machine learning. J Nutr. 2022;152(6):1574-1581. [FREE Full text] [CrossRef] [Medline]
  35. Siy Van VT, Antonio VA, Siguin CP, Gordoncillo NP, Sescon JT, Go CC, et al. Predicting undernutrition among elementary schoolchildren in the Philippines using machine learning algorithms. Nutrition. 2022;96:111571. [CrossRef] [Medline]
  36. Granal M, Slimani L, Florens N, Sens F, Pelletier C, Pszczolinski R, et al. Prediction tool to estimate potassium diet in chronic kidney disease patients developed using a machine learning tool: the univerSel study. Nutrients. 2022;14(12):2419. [FREE Full text] [CrossRef] [Medline]
  37. Nguyen PH, Tran LM, Hoang NT, Trương DTT, Tran THT, Huynh PN, et al. Relative validity of a mobile AI-technology-assisted dietary assessment in adolescent females in Vietnam. Am J Clin Nutr. 2022;116(4):992-1001. [FREE Full text] [CrossRef] [Medline]
  38. Shao W, Min W, Hou S, Luo M, Li T, Zheng Y, et al. Vision-based food nutrition estimation via RGB-D fusion network. Food Chem. 2023;424:136309. [CrossRef] [Medline]
  39. Aldoseri A, Al-Khalifa KN, Hamouda AM. Re-Thinking data strategy and integration for artificial intelligence: concepts, opportunities, and challenges. Applied Sciences. 2023;13(12):7082. [FREE Full text] [CrossRef]
  40. Mezgec S, Eftimov T, Bucher T, Koroušić Seljak B. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment. Public Health Nutr. 2019;22(7):1193-1202. [FREE Full text] [CrossRef] [Medline]
  41. Allegra D, Battiato S, Ortis A, Urso S, Polosa R. A review on food recognition technology for health applications. Health Psychol Res. 2020;8(3):9297. [FREE Full text] [CrossRef] [Medline]
  42. Tahir GA, Loo CK. A comprehensive survey of image-based food recognition and volume estimation methods for dietary assessment. Healthcare (Basel). 2021;9(12):1676. [FREE Full text] [CrossRef] [Medline]
  43. Detopoulou P, Voulgaridou G, Moschos P, Levidi D, Anastasiou T, Dedes V, et al. Artificial intelligence, nutrition, and ethical issues: a mini-review. Clinical Nutrition Open Science. 2023;50:46-56. [FREE Full text] [CrossRef]
  44. Belenguer L. AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry. AI Ethics. 2022;2(4):771-787. [FREE Full text] [CrossRef] [Medline]
  45. Papathanail I, Abdur Rahman L, Brigato L, Bez NS, Vasiloglou MF, van der Horst K, et al. The nutritional content of meal images in free-living conditions-automatic assessment with goFOOD. Nutrients. 2023;15(17):3835. [FREE Full text] [CrossRef] [Medline]
  46. Zhou J, Wang Z, Liu Y, Yang J. Research on the influence mechanism and governance mechanism of digital divide for the elderly on wisdom healthcare: the role of artificial intelligence and big data. Front Public Health. 2022;10:837238. [FREE Full text] [CrossRef] [Medline]
  47. Romero-Tapiador S, Lacruz-Pleguezuelos B, Tolosana R, Freixer G, Daza R, Fernández-Díaz CM, et al. AI4FoodDB: a database for personalized e-Health nutrition and lifestyle through wearable devices and artificial intelligence. Database (Oxford). 2023;2023:baad049. [FREE Full text] [CrossRef] [Medline]
  48. Shei RJ, Holder IG, Oumsang AS, Paris BA, Paris HL. Wearable activity trackers-advanced technology or advanced marketing? Eur J Appl Physiol. 2022;122(9):1975-1990. [FREE Full text] [CrossRef] [Medline]
  49. Sak J, Suchodolska M. Artificial intelligence in nutrients science research: a review. Nutrients. 2021;13(2):322. [FREE Full text] [CrossRef] [Medline]
  50. Kirk D, Kok E, Tufano M, Tekinerdogan B, Feskens EJM, Camps G. Machine learning in nutrition research. Adv Nutr. 2022;13(6):2573-2589. [FREE Full text] [CrossRef] [Medline]
  51. de Toro-Martín J, Arsenault BJ, Després JP, Vohl MC. Precision nutrition: a review of personalized nutritional approaches for the prevention and management of metabolic syndrome. Nutrients. 2017;9(8):913. [FREE Full text] [CrossRef] [Medline]
  52. Livingstone KM, Ramos-Lopez O, Pérusse L, Kato H, Ordovas JM, Martínez JA. Precision nutrition: a review of current approaches and future endeavors. Trends in Food Science & Technology. 2022;128:253-264. [FREE Full text] [CrossRef]


AI: artificial intelligence
AIM: Automatic Ingestion Monitor
CNN: convolutional neural network
DNN: deep neural network
EDFN-D: encoder-decoder food networks with depth-refinement
FCN: Fully Convolutional Neural Network
FRANI: Food Recognition Assistance and Nudging Insights
GAN: Generative Adversarial Network
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews
ResNet: residual networks
RGB-D: Red, Green, Blue-Depth
SVM: support vector machine
USDA: US Department of Agriculture
WR: weighed record


Edited by A Mavragani; submitted 14.11.23; peer-reviewed by TAR Sure, S Kommireddy, M Elbattah, T Davies, T Baranowski; comments to author 30.05.24; revised version received 18.07.24; accepted 08.10.24; published 28.11.24.

Copyright

©Jiakun Zheng, Junjie Wang, Jing Shen, Ruopeng An. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.11.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.