Published in Vol 23, No 6 (2021): June

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/18035.
Health Recommender Systems: Systematic Review

Review

1Department of Computer Science, KU Leuven, Leuven, Belgium

2Faculty of Health Sciences, University of Maribor, Maribor, Slovenia

Corresponding Author:

Robin De Croon, PhD

Department of Computer Science

KU Leuven

Celestijnenlaan 200A

Leuven, 3001

Belgium

Phone: 32 16373976

Email: robin.decroon@kuleuven.be


Background: Health recommender systems (HRSs) offer the potential to motivate and engage users to change their behavior by sharing better choices and actionable knowledge based on observed user behavior.

Objective: We aim to review HRSs targeting nonmedical professionals (laypersons) to better understand the current state of the art and identify both the main trends and the gaps with respect to current implementations.

Methods: We conducted a systematic literature review according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines and synthesized the results. A total of 73 published studies that reported both an implementation and evaluation of an HRS targeted to laypersons were included and analyzed in this review.

Results: Recommended items were classified into four major categories: lifestyle, nutrition, general health care information, and specific health conditions. The majority of HRSs use hybrid recommendation algorithms. Evaluations of HRSs vary greatly; half of the studies only evaluated the algorithm with various metrics, whereas others performed full-scale randomized controlled trials or conducted in-the-wild studies to evaluate the impact of HRSs, thereby showing that the field is slowly maturing. On the basis of our review, we derived five reporting guidelines that can serve as a reference frame for future HRS studies. HRS studies should clarify who the target user is and to whom the recommendations apply, what is recommended and how the recommendations are presented to the user, where the data set can be found, what algorithms were used to calculate the recommendations, and what evaluation protocol was used.

Conclusions: There is significant opportunity for an HRS to inform and guide health actions. Through this review, we promote the discussion of ways to augment HRS research by recommending a reference frame with five design guidelines.

J Med Internet Res 2021;23(6):e18035

doi:10.2196/18035




Research Goals

Current health challenges are often related to our modern way of living. High blood pressure, high glucose levels, and physical inactivity are all linked to a modern lifestyle characterized by sedentary living, chronic stress, or a high intake of energy-dense foods and recreational drugs [1]. Moreover, people usually make poor decisions related to their health for various reasons, for example, busy lifestyles, abundant options, and a lack of knowledge [2]. Practically all modern lifestyle health risks are directly affected by people’s health decisions [3], such as an unhealthy diet or physical inactivity, which can contribute up to three-fourths of all health care costs in the United States [4]. Most risks can be minimized, prevented, or sometimes even reversed with small lifestyle changes. Eating healthily, increasing daily activity, and knowing where to find validated health information could lead to improved health status [5].

Health recommender systems (HRSs) offer the potential to motivate and engage users to change their behavior [6] and provide people with better choices and actionable knowledge based on observed behavior [7-9]. The overall objective of an HRS is to empower people to monitor and improve their health through technology-assisted, personalized recommendations. As one approach of modern health care is to involve patients in the cocreation of their own health, rather than just leaving it in the hands of medical experts [10], we limit the scope of this paper to HRSs that focus on laypersons, that is, non-health care professionals. These HRSs differ from clinical decision support systems, which provide recommendations to health care professionals. However, laypersons also need to understand the rationale of recommendations, as echoed by many researchers and practitioners [11]. This paper therefore also studies the role of the graphical user interface. To guide this study, we define our research questions (RQs) as follows:

RQ1: What are the main applications of the recent HRS, and what do these HRSs recommend?

RQ2: Which recommender techniques are being used across different HRSs?

RQ3: How are the HRSs evaluated, and are end users involved in their evaluation?

RQ4: Is a graphical user interface designed, and how is it used to communicate the recommended items to the user?

Recommender Systems and Techniques

Recommender techniques are traditionally divided into different categories [12,13] and are discussed in several state-of-the-art surveys [14]. Collaborative filtering is the most used and mature technique; it compares the actions of multiple users to generate personalized suggestions. An example of this technique can typically be found on e-commerce sites, such as “Customers who bought this item also bought...” Content-based filtering is another technique that recommends items similar to other items preferred by the specific user. Content-based approaches rely on the characteristics of the items themselves and are therefore likely to be highly relevant to a user’s interests. This makes content-based filtering especially valuable for application domains with large libraries of a single type of content, such as MedlinePlus’ curated consumer health information [15]. Knowledge-based filtering is a third technique that incorporates knowledge through logic inferences. This type of filtering uses explicit knowledge about an item, user preferences, and other recommendation criteria. Knowledge acquisition can also be dynamic and rely on user feedback. For example, a camera recommender system might ask users about their preferences, such as fixed or interchangeable lenses and budget, and then suggest a relevant camera. Hybrid recommender systems combine multiple filtering techniques to increase the accuracy of the recommendations. For example, the “companies you may want to follow” feature on LinkedIn uses both content and collaborative filtering information [16]: collaborative filtering information is used to determine whether a company is similar to the ones a user already followed, whereas content information checks whether the industry or location matches the interests of the user. Finally, recommender techniques are often augmented with additional methods to incorporate contextual information in the recommendation process [17], including recommendations via contextual prefiltering, contextual postfiltering, and contextual modeling [18].
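
To make the first two of these families concrete, the following minimal sketch contrasts user-based collaborative filtering with content-based filtering on a toy rating matrix. All data (ratings and item tags) are invented for illustration and are not taken from any of the reviewed systems.

```python
import numpy as np

# Toy user-item rating matrix (rows = users, columns = health articles).
# 0 means "not rated yet"; all values are illustrative.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between two vectors, guarding against zero norms."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def collaborative_score(user, item):
    """User-based collaborative filtering: weight other users' ratings for
    `item` by how similarly they rated items overall."""
    sims, weighted = [], []
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        s = cosine(ratings[user], ratings[other])
        sims.append(abs(s))
        weighted.append(s * ratings[other, item])
    return sum(weighted) / sum(sims) if sims else 0.0

# Content-based filtering instead compares item feature vectors
# (e.g. topic tags of health articles) with a profile built from
# the items the user already liked.
item_features = np.array([
    [1, 0, 1],   # article 0: nutrition, recipe
    [1, 0, 0],   # article 1: nutrition
    [0, 1, 0],   # article 2: physical activity
    [0, 1, 1],   # article 3: physical activity, recipe
], dtype=float)

def content_score(user, item):
    liked = ratings[user] >= 4                    # items the user rated highly
    if not liked.any():
        return 0.0
    profile = item_features[liked].mean(axis=0)   # average feature profile
    return cosine(profile, item_features[item])

print("CF estimate, user 1 / item 2:", round(collaborative_score(1, 2), 2))
print("CB estimate, user 1 / item 2:", round(content_score(1, 2), 2))
```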

HRSs for Laypersons

Ricci et al [12] define recommender systems as:

Recommender Systems (RSs) are software tools and techniques providing suggestions for items to be of use to a user [13,19,20]. The suggestions relate to various decision-making processes, such as what items to buy, what music to listen to, or what online news to read.

In this paper, we analyze how recommender systems have been used in health applications, with a focus on laypersons. Wiesner and Pfeifer [21] broadly define an HRS as:

a specialization of an RS [recommender system] as defined by Ricci et al [12]. In the context of an HRS, a recommendable item of interest is a piece of nonconfidential, scientifically proven or at least generally accepted medical information.

Researchers have sought to consolidate the vast body of literature on HRSs by publishing several surveys, literature reviews, and state-of-the-art overviews. Table 1 provides an overview of existing summative studies on HRSs that identify existing research and shows the number of studies included, the method used to analyze the studies, the scope of the paper, and their contribution.

Table 1. An overview of the existing health recommender system overview papers.
Review | Papers, n | Method | Scope | Contribution
Sezgin and Özkan (2013) [22] | 8 | Systematic review | Provides an overview of the literature in 2013 | Identifying challenges (eg, cyber-attacks, difficult integration, and data mining can cause ethical issues) and opportunities (eg, integration with personal health data, gathering user preferences, and increased consistency)
Calero Valdez et al (2016) [23] | 17 | Survey | Stresses the importance of the interface and HCI^a of an HRS^b | Providing a framework to incorporate domain understanding, evaluation, and specific methodology into the development process
Kamran and Javed (2015) [24] | 7 | Systematic review | Provides an overview of existing recommender systems with more focus on health care systems | Proposing a hybrid HRS
Afolabi et al (2015) [25] | 22 | Systematic review | Research empirical results and practical implementations of HRSs | Presenting a novel proposal for the integration of a recommender system into smart home care
Ferretto et al (2017) [26] | 8 | Systematic review | Identifies and analyzes HRSs available in mobile apps | Identifying HRSs that do not have many mobile health care apps
Hors-Fraile et al (2018) [27] | 19 | Systematic review | Identifies, categorizes, and analyzes existing knowledge on the use of HRSs for patient interventions | Proposing a multidisciplinary taxonomy, including integration with electronic health records and the incorporation of health promotion theoretical factors and behavior change theories
Schäfer et al (2017) [28] | 24 | Survey | Discusses HRSs to find personalized, complex medical interventions or support users with preventive health care measures | Identifying challenges subdivided into patient and user challenges, recommender challenges, and evaluation challenges
Sadasivam et al (2016) [29] | 15 | Systematic review | Research limitations of current CTHC^c systems | Identifying challenges of incorporating recommender systems into CTHC; proposing a future research agenda for CTHC systems
Wiesner and Pfeifer (2014) [21] | Not reported | Survey | Introduces HRSs and explains their usefulness to personal health record systems | Outlining an evaluation approach and discussing challenges and open issues
Cappella et al (2015) [30] | Not reported | Survey | Explores approaches to the development of a recommendation system for archives of public health messages | Reflecting on theory development and applications

^a HCI: human-computer interaction.

^b HRS: health recommender system.

^c CTHC: computer-tailored health communication.

As can be seen in Table 1, the scope of the existing literature varies greatly. For example, Ferretto et al [26] focused solely on HRSs in mobile apps. A total of 3 review studies focused specifically on the patient side of the HRS: (1) Calero Valdez et al [23] analyzed the existing literature from a human-computer interaction perspective and stressed the importance of a good HRS graphical user interface; (2) Schäfer et al [28] focused on tailoring recommendations to end users based on health context, history, and goals; and (3) Hors-Fraile et al [27] focused on the individual user by analyzing how HRSs can target behavior change strategies. The most extensive study was conducted by Sadasivam et al [29]. In their review, most HRSs used knowledge-based recommender techniques, which might limit individual relevance and the ability to adapt in real time. However, they also reported that HRSs have the opportunity to use a near-infinite number of variables, which enables data-driven tailoring beyond designer-written rules. The most important challenges reported were the cold start [31], where limited data are available at the start of the intervention; limited sample sizes; adherence; and potential unintended consequences [29]. Finally, we observed that these existing summative studies were often restrictive in their final set of papers.

Our contributions to the community are four-fold. First, we analyze a broader set of research studies to gain insights into the current state of the art. We do not limit the included studies to specific devices or patients in a clinical setting but focus on laypersons in general. Second, through a comprehensive analysis, we aim to identify the applications of recent HRS apps and gain insights into actionable knowledge that HRSs can provide to users (RQ1), to identify which recommender techniques have been used successfully in the domain (RQ2), how HRSs have been evaluated (RQ3), and the role of the user interface in communicating recommendations to users (RQ4). Third, based on our extensive literature review, we derive a reference frame with five reporting guidelines for future layperson HRS research. Finally, we collected and coded a unique data set of 73 papers, which is publicly available in Multimedia Appendix 1 [7-9,15,32-100] for other researchers.


Search Strategy

This study was conducted according to the key steps required for systematic reviews following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [101]. A literature search was conducted using the ACM Digital Library (n=2023), IEEE Xplore (n=277), and PubMed (n=93) databases. As mentioned earlier, this systematic review focused solely on HRSs aimed at laypersons. However, many types of systems, algorithms, and devices can be considered an HRS. For example, push notifications in a mobile health app or health tips prompted by web services can also be considered health-related recommendations. To outline the scope, we limited the search terms to include recommender or recommendation, as reported by the authors. The search keywords were as follows, using an inclusive OR: (recommender OR recommendation systems OR recommendation system) AND (health OR healthcare OR patient OR patients).

In addition, a backward search was performed by examining the bibliographies of the survey and review papers discussed in the Introduction section and the reference list of included studies to identify any additional studies. A forward search was performed to search for articles that cited the work summarized in Table 1.

Study Inclusion and Exclusion Criteria

As existing work did not include many studies (Table 1) and focused on a specific medical domain or device, such as mobile phones, this literature review used nonrestrictive inclusion criteria. Studies that met all the following criteria were included in the review: described an HRS whose primary focus was to improve health (eg, food recommenders solely based on user preferences [102] were not included); targeted laypersons (eg, activity recommendations targeted on a proxy user such as a coach [103] were not included); implemented the HRS (eg, papers describing an HRS concept are not included); reported an evaluation, either web-based or offline evaluation; peer-reviewed and published papers; published in English.

Papers were excluded when one of the following was true: the recommendations of HRSs were unclear; the full text was unavailable; or a newer version was already included.

Finally, when multiple papers described the same HRS, only the latest, relevant full paper was included.

Classification

To address our RQs, all included studies were coded for five distinct coding categories.

Study Details

To contextualize new insights, the publication year and publication venue were analyzed.

Recommended Items

HRSs are used across different health domains. To provide details on what is recommended, all papers were coded according to their respective health domains. To not limit the scope of potential items, no predefined coding table was used. Instead, all papers were initially coded by the first author. The resulting recommended items were then clustered into four categories in collaboration with the coauthors, as shown in Multimedia Appendix 2.

Recommender Techniques

This category encodes the recommender techniques that were used: collaborative filtering [104], content-based filtering [105], knowledge-based filtering [106], and their hybridizations [107]. Some studies did not specify any algorithmic details or compared multiple techniques. Finally, when an HRS used contextual information, we coded whether it used pre- or postfiltering or contextual modeling.

Evaluation Approach

This category encodes which evaluation protocols were used to measure the effect of HRSs. We coded whether the HRSs were evaluated through offline evaluations (no users involved), surveys, heuristic feedback from expert users, controlled user studies, deployments in the wild, or randomized controlled trials (RCTs). We also coded the sample size and study duration and whether ethical approval was needed and obtained.

Interface and Transparency

Recommender systems are often perceived as a black box, as the rationale for recommendations is often not explained to end users. Recent research increasingly focuses on providing transparency to the inner logic of the system [11]. We encoded whether explanations are provided and, if so, how such transparency is supported in the user interface. Furthermore, we also classified whether the user interface was designed for a specific platform, categorized as mobile, web, or other.

Data Extraction, Intercoder Reliability, and Quality Assessment

The required information for all included technologies and studies was coded by the first author using a data extraction form. Owing to the large variety of study designs, the included studies were assessed for quality (detailed scores given in Multimedia Appendix 1) using the tool by Hawker et al [108]. Using this tool, the abstract and title, introduction and aims, method and data, sample size (if applicable), data analysis, ethics and bias, results, transferability or generalizability, and implications and usefulness were each allocated a score between 1 and 4, with higher-scoring studies indicating higher quality. A random selection of 14% (10/73) of the papers was listed in a spreadsheet and coded by a second researcher following the defined coding categories and subcategories. The decisions made by the second researcher were compared with those of the first. For the recommended items (Multimedia Appendix 2), there was only one small disagreement, between physical activity and leisure activity [32]; all other recommended items were rated exactly the same. The recommender techniques had a Cohen κ value of 0.71 (P<.001), and the evaluation approach scored a Cohen κ value of 0.81 (P<.001). There was moderate agreement (Cohen κ=0.568; P<.001) between the researchers concerning the quality of the papers. The coding of the interfaces used was in perfect agreement. Finally, the coding data are available in Multimedia Appendix 1.
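
For readers unfamiliar with the agreement statistic used above, the snippet below shows a minimal computation of Cohen κ, that is, observed agreement corrected for chance agreement. The two coders' labels are invented for illustration and do not reproduce the actual coding data.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(coder_a) | set(coder_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical technique codes assigned to 10 papers by two coders.
coder_1 = ["hybrid", "knowledge", "hybrid", "content", "hybrid",
           "collaborative", "knowledge", "hybrid", "content", "hybrid"]
coder_2 = ["hybrid", "knowledge", "knowledge", "content", "hybrid",
           "collaborative", "knowledge", "hybrid", "hybrid", "hybrid"]

print(round(cohen_kappa(coder_1, coder_2), 2))  # 0.7 for this toy example
```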


Study Details

The literature search in the three databases yielded 2340 studies, of which only 23 were duplicates and 53 were full proceedings, leaving 2324 studies to be screened for eligibility. A total of 2161 studies were excluded upon title or abstract screening because they were unrelated to health, were targeted at medical professionals, or did not report an evaluation. Thus, the remaining 163 full-text studies were assessed for eligibility. After the removal of 90 studies that failed the inclusion criteria or met the exclusion criteria, 73 published studies remained. The search process is illustrated in Figure 1.

Figure 1. Flow diagram according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. EC: exclusion criteria; IC: inclusion criteria.

All included papers were published in 2009 or later, following an upward trend of increasing popularity. The publication venues of HRSs are diverse. Only the PervasiveHealth [33-35], RecSys [36,37,109], and WI-IAT [38-40] conferences each published 3 papers that were included in this study. The Journal of Medical Internet Research was the only journal that occurred more frequently in our data set; 5 of the included papers were published there [41-45]. The papers were first rated using the tool by Hawker et al [108]. Owing to the large number of offline evaluations, we did not include the sample size score, to enable a comparison across all included studies. The papers received an average score of 24.32 (SD 4.55, max 32; data set presented in Multimedia Appendix 1). Most studies scored very poorly on reporting ethics and potential biases, as illustrated in Figure 2. However, there is an upward trend over the years toward more adequate reporting of ethical issues and potential biases. The authors also limited themselves to their specific case studies and did not make any recommendations for policy (last box plot in Figure 2). All 73 studies reported the use of different data sets. Although all recommended items were health related, only Asthana et al [46] explicitly mentioned using electronic health record data. Only 14% (10/73) [7,47-55] explicitly reported that they addressed the cold-start problem.

Figure 2. Distribution of the quality assessment using the tool by Hawker et al.

Recommended Items

Overview

The HRSs operated in different domains and thus recommended different items. In this study, four categories of recommended items were identified (not mutually exclusive): lifestyle, 33% (24/73); nutrition, 36% (26/73); general health information, 32% (23/73); and specific health condition-related recommendations, 12% (9/73). The only significant trend we found is the increasing popularity of nutrition advice. Multimedia Appendix 2 shows the distribution of these recommended items.

Lifestyle

Many HRSs, 33% (24/73) of the included studies, suggest lifestyle-related items, but they differ greatly in their exact recommendations. Physical activity is recommended most often and is usually personalized according to personal interests [56] or the context of the user [35]. In addition to physical activities, Kumar et al [32] recommend eating, shopping, and socializing activities. One study analyzes the data and measurements to be tracked for an individual and then recommends the appropriate wearable technologies to stimulate proactive health [46]. A total of 7 studies [7,9,42,53,57-59] more directly try to convince users to alter their behavior: for example, Rabbi et al [7] learn “a user’s physical activity and dietary behavior and strategically suggests changes to those behaviors for a healthier lifestyle.” In another example, both Marlin et al [59] and Sadasivam et al [42] motivate users to stop smoking by providing them with tailored messages, such as “Keep in mind that cravings are temporary and will pass.” Messages could reflect the theoretical determinants of quitting, such as positive outcome expectations and self-efficacy enhancing small goals [42].

Nutrition

The influence of food on health is also clear from the large subset of HRSs dealing with nutrition recommendations. In total, 36% (26/73) of the studies recommend nutrition-related information, such as recipes [50], meal plans [36], restaurants [60], or even help with choosing healthy items from a restaurant menu [61]. Wayman and Madhvanath [37] provide automated, personalized, and goal-driven dietary guidance to users based on grocery receipt data. Trattner and Elsweiler [62] use postfiltering to focus on healthy recipes only and extend them with nutrition advice, whereas Ge et al [48] require users to first enter their preferences for better recommendations. Moreover, Gutiérrez et al [63] propose healthier alternatives through augmented reality when users are shopping. A total of 7 studies specifically recommend healthy recipes [47,48,50,62,64-66]. Most of these HRSs consider the health condition of the user, such as the DIETOS system [67]. Other systems recommend new recipes that are synthesized from existing recipes [64], assist parents in preparing appropriate food for their toddlers [47], or help users choose allergy-safe recipes [65].

General Health Information

Providing access to trustworthy health care information is another common objective, pursued by 32% (23/73) of the included studies. A total of 5 studies focused on personalized, trustworthy information per se [15,55,68-70], whereas 5 others focused on guiding users through health care forums [52,71-74]. In total, 3 studies [55,68,69] provided personalized access to general health information. For example, Sanchez Bocanegra et al [15] targeted health-related videos and augmented them with trustworthy information from the United States National Library of Medicine (MedlinePlus) [110]. A total of 3 studies [52,72,74] related to health care forums focused on finding relevant threads. Cho et al [72] built “an autonomous agent that automatically responds to an unresolved user query by posting an automated response containing links to threads discussing similar medical problems.” In addition, 2 studies [71,73] helped patients find similar patients. Jiang and Yang [71] investigated approaches for measuring user similarity in web-based health social websites, and Lima-Medina et al [73] built a virtual environment that facilitates contact among patients with cardiovascular problems. Both studies aim to help users seek informational and emotional support in a more efficient way. A total of 4 studies [41,75-77] helped patients find appropriate doctors for a specific health problem, and 4 other studies [51,78-80] focused on finding nearby hospitals. A total of 2 studies [78,79] simply focused on the clinical preferences of the patients, whereas Krishnan et al [111] “provide health care recommendations that include Blood Donor recommendations and Hospital Specialization.” Finally, Tabrizi et al [80] considered patient satisfaction as the primary feature for recommending hospitals to the user.

Specific Health Conditions

The last group of studies (9/73, 12%) focused on specific health conditions. However, the recommended items vary significantly. Torrent-Fontbona and Lopez Ibanez [81] built a knowledge-based recommender system to assist patients with diabetes in numerous situations, considering, for example, the estimated carbohydrate intake and past and future physical activity. Pustozerov et al [43] try to “reduce the carbohydrate content of the desired meal by reducing the amount of carbohydrate-rich products or by suggesting variants of products for replacement.” Li and Kong [82] provided diabetes-related information, such as the need for a low-sodium lunch, targeted at American Indians through a mobile app. Other health conditions supported by recommender systems include depression and anxiety [83], mental disorders [45], and stress [34,54,84,85]. Both the mental disorder [45] and the depression and anxiety [83] HRSs recommend mobile apps. For example, the app MoveMe suggests exercises tailored to the user’s mood. The HRSs to alleviate stress include recommending books to read [54] and meditative audio [85].

Recommender Techniques

Overview

The recommender techniques used varied greatly. Table 2 shows the distributions of these recommender techniques.

Table 2. Overview of the different recommender techniques used in the studies.
Main technique^a | Study | Total studies, n (%)
Collaborative filtering | [59,69,76] | 3 (4)
Content-based filtering | [15,32,54,63,72,86,87] | 7 (10)
Knowledge-based filtering | [9,38,44,50,57,64,66,68,79,81,82,84,88-91] | 16 (22)
Hybrid | [7,29,34,36,37,39-41,43,46-48,53,55,56,61,65,67,69,70,73,74,77,78,80,85,92-96,111] | 32 (44)
Context-based techniques | [33,35,58,97] | 4 (5)
Not specified | [45,83,98] | 3 (4)
Comparison between techniques | [8,49,52,60,62,71,75,99] | 8 (11)

^a The papers are classified based on how the authors reported their techniques.

Recommender Techniques in Practice

The majority of HRSs (49/73, 67%) rely on knowledge-based techniques, either directly (17/49, 35%) or in a hybrid approach (32/49, 65%). Knowledge-based techniques are often used to incorporate additional information of patients into the recommendation process [112] and have been shown to improve the quality of recommendations while alleviating other drawbacks such as cold-start and sparsity issues [14]. Some studies use straightforward approaches, such as if-else reasoning based on domain knowledge [9,79,81,82,88,90,100]. Other studies use more complex algorithms such as particle swarm optimization [57], fuzzy logic [68], or reinforcement algorithms [44,84].
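
As an illustration of the straightforward if-else reasoning mentioned above, the sketch below filters candidate items with explicit domain rules. The rules, thresholds, and user profile are invented for illustration and are not taken from any of the reviewed systems.

```python
# A minimal knowledge-based filter in the spirit of if-else reasoning
# over domain knowledge. All rules and data below are illustrative only.
from dataclasses import dataclass

@dataclass
class UserProfile:
    age: int
    conditions: set               # e.g. {"hypertension", "diabetes"}
    activity_minutes_today: int

RECIPES = [
    {"name": "salted pretzel snack", "sodium_mg": 900, "sugar_g": 3},
    {"name": "lentil salad", "sodium_mg": 150, "sugar_g": 2},
    {"name": "fruit smoothie", "sodium_mg": 40, "sugar_g": 35},
]

def recommend(user):
    """Apply explicit domain rules to keep only suitable items."""
    suitable = []
    for recipe in RECIPES:
        if "hypertension" in user.conditions and recipe["sodium_mg"] > 400:
            continue                  # rule: limit sodium for hypertension
        if "diabetes" in user.conditions and recipe["sugar_g"] > 25:
            continue                  # rule: limit sugar for diabetes
        suitable.append(recipe["name"])
    if user.activity_minutes_today < 30:
        suitable.append("take a 20-minute walk")   # lifestyle rule
    return suitable

print(recommend(UserProfile(age=54,
                            conditions={"hypertension", "diabetes"},
                            activity_minutes_today=10)))
```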

In total, 32 studies reported using a combination of recommender techniques and are classified as hybrid recommender systems. Different knowledge-based techniques are often combined. For example, Ali et al [56] used a combination of rule-based reasoning, case-based reasoning, and preference-based reasoning to recommend personalized physical activities according to the user’s specific needs and personal interests. Asthana et al [46] combined the knowledge of a decision tree and demographic information to identify the health conditions. When health conditions are known, the system knows which measurements need to be monitored. A total of 7 studies used a content-based technique to recommend educational content [15,72,87], activities [32,86], reading materials [54], or nutritional advice [63].

Although collaborative filtering is a popular technique [113], it is not used frequently in the HRS domain. Marlin et al [59] used collaborative filtering to personalize future smoking cessation messages based on explicit feedback on past messages. This approach is used more often in combination with other techniques. A total of 2 studies [38,92] combined content-based techniques with collaborative filtering. Esteban et al [92], for instance, switched between content-based and collaborative approaches. The former approach is used for new physiotherapy exercises and the latter, when a new patient is registered or when previous recommendations to a patient are updated.
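
A switching hybrid in the spirit of the approach described above can be sketched as follows; the score functions are placeholders for any concrete collaborative and content-based implementation (such as the earlier sketch), and the thresholds are illustrative assumptions.

```python
# A minimal switching hybrid: fall back to content-based scoring when
# collaborative data are missing (new item or new user), otherwise use
# collaborative filtering. Thresholds are illustrative assumptions.

def hybrid_score(user, item, ratings, collaborative_score, content_score,
                 min_ratings_per_item=3, min_ratings_per_user=3):
    item_ratings = sum(1 for row in ratings if row[item] > 0)
    user_ratings = sum(1 for value in ratings[user] if value > 0)
    if item_ratings < min_ratings_per_item or user_ratings < min_ratings_per_user:
        # Cold start: too little feedback for collaborative filtering,
        # so rely on item features instead.
        return content_score(user, item)
    return collaborative_score(user, item)
```

A weighted hybrid would instead blend the two scores, for example 0.7 times the collaborative score plus 0.3 times the content-based score.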

Context-Based Recommender Techniques

From an HRS perspective, context is described as an aggregate of various information that describes the setting in which an HRS is deployed, such as the location, the current activity, and the available time of the user. A total of 5 studies use contextual information to improve their recommendations but differ in how they incorporate it. A prefilter uses contextual information to select or construct the most relevant data before recommendations are generated. For example, in Narducci et al [75], the set of potentially similar patients was restricted to consultation requests in a specific medical area. Rist et al [33] applied a rule-based contextual prefiltering approach [114] to filter out inadequate recommendations, for example, “if it is dark outside, all outdoor activities, such as ‘take a walk,’ are filtered out” [33] before they are fed to the recommendation algorithm. A postfilter, in contrast, removes recommended items after they are generated, such as filtering out outdoor activities while it is raining. Casino et al [97] used a postfiltering technique by running the recommended items through a real-time constraint checker. Finally, contextual modeling, which was used by 2 studies [35,58], uses contextual information directly in the recommendation function as an explicit predictor of a user’s rating for an item [114].

Location, agenda, and weather are examples of contextual information used by Lin et al [35] to promote the adoption of a healthy and active lifestyle. Cerón-Rios et al [58] used a decision tree to analyze user needs, health information, interests, time, location, and lifestyle to promote healthy habits. Casino et al [97] gathered contextual information through smart city sensor data to recommend healthier routes. Similarly, contextual information was acquired by Rist et al [33] using sensors embedded in the user’s environment.
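
The difference between pre- and postfiltering can be made concrete with a small sketch that reuses the daylight and weather examples above. The activity list, context keys, and placeholder recommend function are illustrative assumptions, not code from the reviewed systems.

```python
# Contrasting contextual pre- and postfiltering (illustrative data only).

ACTIVITIES = [
    {"name": "take a walk", "outdoor": True},
    {"name": "stretching routine", "outdoor": False},
    {"name": "cycle to the park", "outdoor": True},
    {"name": "guided meditation", "outdoor": False},
]

def prefilter(items, context):
    """Remove unsuitable candidates BEFORE the recommender scores them."""
    if context.get("is_dark"):
        items = [i for i in items if not i["outdoor"]]
    return items

def postfilter(recommendations, context):
    """Remove unsuitable items AFTER the recommender produced a ranking."""
    if context.get("is_raining"):
        recommendations = [i for i in recommendations if not i["outdoor"]]
    return recommendations

def recommend(items):
    # Placeholder recommender: any scoring or ranking algorithm could sit here.
    return sorted(items, key=lambda i: i["name"])

context = {"is_dark": True, "is_raining": False}
ranked = postfilter(recommend(prefilter(ACTIVITIES, context)), context)
print([item["name"] for item in ranked])
```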

Comparisons

A total of 8 papers compared different recommender techniques to find the most suitable algorithm for a specific data set, end users, domain, and goal. Halder et al [52] used two well-known health forum data sets (PatientsLikeMe [115] and HealthBoards [116]) to compare 7 recommender techniques (among which collaborative filtering and content-based filtering) and found that a hybrid approach scored best [52]. Another example is the study by Narducci et al [75], who compared four recommendation algorithms: cosine similarity as a baseline, collaborative filtering, their own HealthNet algorithm, and a hybrid of HealthNet and cosine similarity. They concluded that a prefiltering technique that selects similar patients in a specific medical area can drastically improve the recommendation accuracy [75]. Li et al [60] compared the average and SD of the resulting ratings of two collaborative techniques with random recommendations and showed that a hybrid approach, a collaborative filter augmented with the calculated health level of the user, performs better. In their nutrition-based meal recommender system, Yang et al [49] used item-wise and pairwise image comparisons in a two-step process. In conclusion, the 8 studies showed that recommendations can be improved when the benefits of multiple recommender techniques are combined in a hybrid solution [60] or when contextual filters are applied [75].

Evaluation Approach

Overview

HRSs can be evaluated in multiple ways. In this study, we found two categories of HRS evaluations: (1) offline evaluations that use computational approaches to evaluate the HRS and (2) evaluations in which an end user is involved. Some studies used both, as shown in Multimedia Appendix 3.

Offline Evaluations

Of the total studies, 47% (34/73) did not involve users directly in their method of evaluation. The evaluation metrics also vary greatly, as many distinct metrics are reported in the included papers (Multimedia Appendix 3). Precision (18/34, 53%), accuracy (13/34, 38%), performance (12/34, 35%), and recall (11/34, 32%) were the most commonly used offline evaluation metrics. Recall has been used significantly more in recent papers, whereas accuracy also follows an upward trend. Moreover, performance was defined differently across studies. Torrent-Fontbona and Lopez Ibanez [81] reported the “amount of time in the glycaemic target range by reducing the time below the target” as performance. Cho et al [72] compared precision and recall to report the performance. Clarke et al [84] calculated their own reward function to compare different approaches, and Lin et al [35] measured system performance as the number of messages sent in their in-the-wild study. Finally, Marlin et al [59] tested the predictive performance using a triple cross-validation procedure.

Other popular offline evaluation metrics are accuracy-related measurements, such as the mean absolute (percentage) error, 18% (6/34); normalized discounted cumulative gain (nDCG), 18% (6/34); F1 score, 15% (5/34); and root mean square error, 15% (5/34). The remaining metrics were measured inconsistently. For example, Casino et al [97] reported that they measured robustness but did not outline how robustness was operationalized; in practice, they measured the mean absolute error. Torrent-Fontbona and Lopez Ibanez [81] defined robustness as the capability of the system to handle missing values. Effectiveness is also measured with different parameters, such as the ability to make the right classification decisions [75] or in terms of the identification of key opinion leaders [41]. Finally, Li and Zaman [68] measured trust with a proxy: “evaluate the trustworthiness of a particular user in a health care social network based on factors such as role and reputation of the user in the social community” [68].
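
As a reference for how some of these ranking metrics are typically computed in offline evaluations, the sketch below implements precision@k, recall@k, and a binary-relevance nDCG@k for a single user. The recommended list and relevance judgments are invented for illustration.

```python
import math

def precision_recall_at_k(recommended, relevant, k):
    """Precision@k and recall@k for a single user."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def ndcg_at_k(recommended, relevant, k):
    """Binary-relevance nDCG@k: discount hits by their rank position."""
    dcg = sum(1.0 / math.log2(rank + 2)
              for rank, item in enumerate(recommended[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg else 0.0

# Hypothetical held-out data for one user: the recommender ranked five
# items, two of which the user actually interacted with.
recommended = ["recipe_12", "recipe_7", "walk_plan", "recipe_3", "article_9"]
relevant = {"recipe_7", "article_9"}

p, r = precision_recall_at_k(recommended, relevant, k=5)
print(f"precision@5={p:.2f} recall@5={r:.2f} "
      f"nDCG@5={ndcg_at_k(recommended, relevant, k=5):.2f}")
```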

User Evaluations
Overview

Of the total papers, 53% (39/73) included participants in their HRS evaluation, with an average sample size of 59 (SD 84) participants (excluding the outlier of 8057 participants recruited in the study by Cheung et al [83]). On average, studies ran for more than 2 months (mean 68, SD 56 days) and included all age ranges. There is a trend of increasing sample size and study duration over the years. However, only 17 studies reported the study duration; therefore, these trends were not significant. Surveys (12/39, 31%), user studies (10/39, 26%), and deployments in the wild (10/39, 26%) were the most used user evaluations. Only 6 studies used an RCT to evaluate their HRS. Finally, although all the included studies focused on HRSs and were dealing with sensitive data, only 12% (9/73) [9,34,42-45,73,83,95] reported ethical approval by a review board.

Surveys

No universal survey was found, as all the studies deployed a distinct survey. Ge et al [48] used the System Usability Scale and the framework of Knijnenburg et al [117] to explain the user experience of recommender systems. Esteban et al [95] designed their own survey with 10 questions to inquire about user experience. Cerón-Rios et al [58] relied on the ISO/IEC (International Organization for Standardization/International Electrotechnical Commission) 25000 standard to select 7 usability metrics to evaluate usability. Although most studies did not explicitly report the surveys used, user experience was a popular evaluation metric, as in the study by Wang et al [69]. Other metrics include user satisfaction [69,99] and perceived prediction accuracy [59], measured with 4 self-composed questions. Nurbakova et al [98] combined data analytics with surveys to map their participants’ psychological background, including orientations to happiness measured using the Peterson scale [118], personality traits using the Mini-International Personality Item Pool [119], and fear of missing out based on the Przybylski scale [120].

Single-Session Evaluations (User Studies)

A total of 10 studies recruited users and asked them to perform certain tasks in a single session. Yang et al [49] performed a 60-person user study to assess the feasibility and effectiveness of their system. Each participant was asked to rate meal recommendations relative to those made using a traditional survey-based approach. In a study by Gutiérrez et al [63], 15 users were asked to use the health augmented reality assistant to measure the qualities of the recommender system, users’ behavioral intentions, perceived usefulness, and perceived ease of use. Jiang and Xu [77] performed 30 consultations and invited 10 evaluators majoring in medicine and information systems to obtain an average rating score and nDCG. Radha et al [8] used comparative questions to evaluate the feasibility. Moreover, Cheng et al [89] used 2 user studies to compare two degrees of compromise (DOC). A low DOC assigns more weight to the algorithm, and a high DOC assigns more weight to the user’s health perspective. Recommendations with a lower DOC are more efficient for the user’s health, but recommendations with a high DOC could convince users to believe that the recommended action is worth doing. Other approaches used are structured interviews [58], ranking [86,89], asking for unstructured feedback [40,88], and focus group discussions [87]. Finally, 3 studies [15,75,90] evaluated their system through a heuristic evaluation with expert users.

In the Wild

Only 2 of the studies that tested their HRS in the wild recruited patients (people with a diagnosed health condition) in their evaluation. Yom-Tov et al [44] provided 27 sedentary patients with type 2 diabetes with a smartphone-based pedometer and a personal plan for physical activity. They assessed the effectiveness by calculating the amount of activity that the patient performed after the last message was sent. Lima-Medina et al [73] interviewed 45 patients with cardiovascular problems after a 6-month study period to measure (1) social management results, (2) health care plan results, and (3) recommendation results. Rist et al [33] performed an in-situ evaluation in the apartment of an older couple and used the data logs to describe usage, augmented with a structured interview.

Yang et al [49] conducted a field study of 227 anonymous users that consisted of a training phase and a testing phase to assess the prediction accuracy. Buhl et al [99] created three user groups according to the recommender technique used and analyzed log data to compare the response rate, open email rate, and consecutive log-in rate. Similarly, Huang et al [76] compared the ratio of recommended doctors chosen and reserved by patients with the recommended doctors. Lin et al [35] asked 6 participants to use their HRS for 5 weeks, measured system performance, studied user feedback on the recommendations, and concluded with an open user interview. Finally, Ali et al [56] asked 10 volunteers to use their weight management system for a couple of weeks. However, they did not focus on a user-centric evaluation, as “only a prototype of the [...] platform is implemented.”

Rabbi et al [7] followed a single-case design with multiple baselines [121]. Single-case experiments achieve sufficient statistical power with a large number of repeated samples from a single individual. Moreover, Rabbi et al [7] argued that HRSs suit this requirement “since enough repeated samples can be collected with automated sensing or daily manual logging [121].” Participants were exposed to 2, 3, or 4 weeks of the control condition. The study ran for 7-9 weeks to compensate for novelty effects. Food and exercise log data were used to measure changes in food calorie intake and calorie loss during exercise.

Randomized Controlled Trials

Only 6 studies followed an RCT approach. In the RCT by Bidargaddi et al [45], an intervention group (n=192) and a control group (n=195) were asked to use a web-based recommendation service for 4 weeks that recommended mental health and well-being mobile apps. Changes in well-being were measured using the Mental Health Continuum-Short Form [122]. The RCT by Sadasivam et al [42] enrolled 120 current smokers (intervention group: n=74; control group: n=46) as a follow-up to a previous RCT [123] that evaluated their portal, to specifically evaluate the HRS algorithm. Message ratings were compared between the intervention and control groups.

Cheung et al [83] measured app loyalty through the number of weekly app sessions over a period of 16 weeks with 8057 users. In the study by Paredes et al [34], 120 participants had to use the HRS for at least 26 days. Self-reported stress assessment was performed before and after the intervention. Agapito et al [67] used an RCT with 40 participants to validate the sensitivity (true positive rate/[true positive rate+false negative rate]) and specificity (true negative rate/[true negative rate+false positive rate]) of the DIETOS HRS. Finally, Luo et al [93] performed a small clinical trial for more than 3 months (but did not report the number of participants). Their primary outcome measures included two standard clinical blood tests: fasting blood glucose and laboratory-measured glycated hemoglobin, before and after the intervention.
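
As a small aid for the sensitivity and specificity definitions quoted above, the snippet below computes both from raw confusion counts; the counts themselves are invented for illustration.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a binary classification of
# suitable vs unsuitable recommendations (illustrative numbers only).
sens, spec = sensitivity_specificity(tp=18, fn=2, tn=15, fp=5)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # 0.90 / 0.75
```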

Interface
Overview

Only 47% (34/73) of the studies reported implementing a graphical user interface to communicate the recommended health items to the user. As illustrated in Table 3, 53% (18/34) use a mobile interface, usually through a mobile (web) app, whereas 41% (14/34) use a web interface to show the recommended items. Rist et al [33] installed a kiosk in older adults’ homes, as illustrated in Figure 3. Gutiérrez et al [63] used the Microsoft HoloLens to project healthy food alternatives in augmented reality around a physical object that the user holds, as shown in Figure 4.

Table 3. Distribution of the interfaces used among the different health recommender systems (n=34).
Interface | Study | Total studies, n (%)
Mobile | [7,34,35,40,44,48,56,58,66,69,77,78,82-84,86,88,97] | 18 (53)
Web | [9,15,37,41,45,49,61,70,73,75,79,85,90,95] | 14 (41)
Kiosk | [33] | 1 (3)
HoloLens | [63] | 1 (3)
Figure 3. Rist et al installed a kiosk in the home of older adults as a direct interface to their health recommender system.
Figure 4. An example of the recommended healthy alternatives by Gutiérrez et al.
Visualization

A total of 7 studies [33,34,37,63,79,88,97], or approximately one-fifth of the studies with an interface (7/34, 21%), included visualizations. However, the approach used was different for all studies, as shown in Table 4. Star ratings showing the relevance of a recommended item are used only by Casino et al [97] and Gutiérrez et al [63]. Wayman and Madhvanath [37] also used bar charts to visualize the progress toward a health goal, showing the healthy proportions, that is, what the user should eat. Somewhat more complex visualizations are used by Ho and Chen [88], who visualized the user’s ECG zones in a heatmap. Paredes et al [34] presented an emotion graph as an input screen. Rist et al [33] visualized an example of how to perform the recommended activity.

Table 4. Distribution of the visualizations used among the different health recommender systems (n=7).
Visualization technique | Study | Total studies, n (%)
Bar charts | Wayman and Madhvanath [37] and Gutiérrez et al [63] | 2 (29)
Heatmap | Ho and Chen [88] | 1 (14)
Emotion graph | Paredes et al [34] | 1 (14)
Visual example of action | Rist et al [33] | 1 (14)
Map | Avila-Vazquez et al [79] | 1 (14)
Star rating | Casino et al [97] | 1 (14)
Transparency

In the study by Lage et al [87], participants expressed that:

they would like to have more control over recommendations received. In that sense, they suggested more information regarding the reasons why the recommendations are generated and more options to assess them.

A total of 7 studies [7,37,41,45,63,66,82] explained the reasoning behind the recommendations to end users in the user interface. Gutiérrez et al [63] provided recommendations for healthier food products and mentioned that the items (Figure 4) are based on the user’s profile. Ueta et al [66] explained the relationship between the recommended dishes and a person’s health conditions. For example, a person with acne can see the following text: “15 dishes that contained Pantothenic acid thought to be effective in acne a lot became a hit” [66]. Li and Kong [82] showed personalized recommended health actions in a message center, with color codes used to differentiate between reminders, missed warnings, and recommendations. Rabbi et al [7] showed tailored motivational messages to explain why activities are recommended. For example, when the activity “walk near East Ave” is recommended, the app shows the additional message:

1082 walks in 240 days, 20 mins of walk everyday. Each walk nearly 4 min. Let us get 20 mins or more walk here today
[7]

Wayman and Madhvanath [37] first visualized the user’s personal nutrition profile and used the lower part of the interface to explain why the item was recommended. They provided an illustrative example of spaghetti squash. The explanation shows that:

This product is high in Dietary_fiber, which you could consume more of. Try to get 3 servings a week
[37]

Guo et al [41] recommended doctors and showed a horizontal bar chart to visualize the user’s values compared with the average values. Finally, Bidargaddi et al [45] visualized how the recommended app overlaps with the goal set by the users, as illustrated in Figure 5.

Figure 5. A screenshot from the health recommender system of Bidargaddi et al. Note the blue tags illustrating how each recommended app matches the users’ goals.

Principal Findings

HRSs cover a multitude of subdomains, recommended items, implementation techniques, evaluation designs, and means of communicating the recommended items to the target user. In this systematic review, we clustered the recommended items into four groups: lifestyle, nutrition, general health care information, and specific health conditions. There is a clear trend toward HRSs that provide well-being recommendations but do not directly intervene in the user’s medical status. For example, almost 70% (50/73; lifestyle and nutrition) focused on recommendations that are not strictly medical. In the lifestyle group, physical activities (10/24, 42%) and advice on how to potentially change behavior (7/24, 29%) were recommended most often. In the nutrition group, the recommendations focused on nutritional advice (8/26, 31%), diets (7/26, 27%), and recipes (7/26, 27%). A similar trend was observed in the health care information group, where HRSs focused on guiding users to the appropriate environments, such as hospitals (5/23, 22%) and medical professionals (4/23, 17%), or on helping users find high-quality information (5/23, 22%) from validated sources or from the experiences of similar users and patients on health care forums (3/23, 13%). Thus, they only provide general information and do not intervene by recommending, for example, changing medication. Finally, when HRSs targeted specific health conditions, they recommended nonintervening actions, such as meditation sessions [84] or books to read [54].

Although collaborative filtering is commonly the most used technique in other domains [124], only 3 of the included studies reported collaborative filtering as their main technique. Moreover, 44% (32/73) of the studies applied a hybrid approach, showing that HRS data sets might need special attention, which might also be the reason why all 73 studies used distinct data sets. In addition, the HRS evaluations varied greatly and were divided between evaluations in which end users were involved and offline evaluations that did not involve users. Only 47% (34/73) of the studies reported implementing a user interface to communicate recommendations to the user, despite the need to show the rationale of recommendations, as echoed by many researchers and practitioners [11]. Moreover, only 21% (7/34) of these included a (basic) visualization.

Unfortunately, this general lack of agreement on how to report HRSs might introduce researcher bias, as researchers are currently completely unconstrained in defining what and how to measure the added value of an HRS. Therefore, further debate in the health recommender community is needed on how to define and measure the impact of HRSs. On the basis of our review, and as a contribution to this discussion, we put forward a set of essential information that researchers should report in their studies.

Considerations for Practice

The previously discussed results have direct implications in practice and provide suggestions for future research. Figure 6 shows a reference frame of these requirements that can be used in future studies as a quality assessment tool.

Figure 6. A reference frame to report health recommender system studies. On the basis of the results of this study, we suggest that it should be clear what and how items are recommended (A), who the target user is (B), which data are used (C), and which recommender techniques are applied (D). Finally, the evaluation design should be reported in detail (E).
Define the Target User

As shown in this review, HRSs are used in a plethora of subdomains and each domain has its own experts. For example, in nutrition, the expert is most likely a dietician. However, the user of an HRS is usually a layperson without the knowledge of these domain experts, who often have different viewing preferences [125]. Furthermore, each user is unique. All individuals have idiosyncratic reasons for why they act, think, behave, and feel in a certain way at a specific stage of their life [126]. Not everybody is motivated by the same elements. Therefore, it is important to know the target user of the HRS. What is their previous knowledge, what are their goals, and what motivates them to act on a recommended item?

Show What Is Recommended (and How)

Researchers have become aware that accuracy is not sufficient to increase the effectiveness of a recommender system [127]. In recent years, research on human factors has gained attention. For example, He et al [11] surveyed 24 existing interactive recommender systems and compared their transparency, justification, controllability, and diversity. However, none of these 24 papers discussed HRSs, which indicates the gap between HRSs and recommender systems in other fields. Human factors have gained interest in the recommender community by “combining interactive visualization techniques with recommendation techniques to support transparency and controllability of the recommendation process” [11]. However, in this study, only 10% (7/73) explained the rationale of recommendations, and only 10% (7/73) included a visualization to communicate the recommendations to the user. We do not argue that all HRSs should include a visualization or an explanation. However, researchers should pay attention to the delivery of these recommendations. Users need to understand, believe, and trust the recommended items before they can act on them.

To compare and assess HRSs, researchers should unambiguously report what the HRS is recommending. After all, typical recommender systems act like a black box, that is, they show suggestions without explaining the provenance of these recommendations [11]. Although this approach is suitable for typical e-commerce applications that involve little risk, transparency is a core requirement in higher risk application domains such as health [128]. Users need to understand why a recommendation is made, to assess its value and importance [12]. Moreover, health information can be cumbersome and not always easy to understand or situate within a specific health condition [129]. Users need to know whether the recommended item or action is based on a trusted source, tailored to their needs, and actionable [130].

Report the Data Set Used

All 73 studies used a distinct data set. Furthermore, some studies combine data from multiple databases, making it even more difficult to judge the quality of the data [35]. Moreover, most studies use self-generated data sets, which makes it difficult to compare and externally validate HRSs. Therefore, we argue that researchers should clarify the data used and, where possible, indicate whether these data are publicly available. However, health data are often highly privacy sensitive and cannot always be shared among researchers.

Outline the Recommender Techniques

The results show that there is no panacea for which recommender technique to use. The included studies range from logic filters to traditional recommender techniques, such as collaborative filtering and content-based filtering, to hybrid solutions and self-developed algorithms. However, with 44% (32/73), there is a strong trend toward the use of hybrid recommender techniques. The low number of collaborative filtering techniques might be related to the fact that the evaluation sample sizes were also relatively small. Unfortunately, some studies did not fully disclose the techniques used and only reported the main algorithm. It is remarkable that studies published in high-impact journals, such as those by Bidargaddi et al [45] and Cheung et al [83], did not provide information on the recommender technique used. Nonetheless, disclosing the recommender technique allows other researchers not only to build on empirically tested technologies but also to verify whether key variables are included [29]. User data and behavior data can be identified to augment theory-based studies [29]. Researchers should prove that the algorithm is capable of providing valid and trustworthy recommendations to the user based on their available data set.

Elaborate on the Evaluation Protocols

HRSs can be evaluated using different evaluation protocols. However, the protocol should be determined mainly by the research goals of the authors. On the basis of the papers included in this study, we differentiate between two approaches. In the first approach, the authors aim to influence their users’ health, for example, by providing personalized diabetes guidelines [81] or prevention exercises for users with low back pain [95]. In this case, the end user should always be involved in both the design and evaluation processes. However, only 8% (6/73) performed an RCT and 14% (10/73) deployed their HRS in the wild. This lack of user involvement has been noted previously by researchers and has been identified as a major challenge in the field [27,28]. Nonetheless, in other domains, such as job recommenders [131] or agriculture [132], user-centered design has been proposed as an important methodology in the design and development of tools used by end users, with the purpose of gaining trust and promoting technology acceptance, thereby increasing adoption by end users. Therefore, we recommend that researchers evaluate their HRSs with actual users. A potential model for a user-centric approach to recommender system evaluation is the framework proposed by Knijnenburg et al [117].

Research protocols need to be elaborated and approved by an ethical review board to prevent any adverse impact on users. Authors should report how they informed their users and how they safeguarded the users' privacy. This is in line with modern journal and conference guidelines. For example, the editorial policies of the Journal of Medical Internet Research state that "when reporting experiments on human subjects, authors should indicate IRB (Institutional Rese[a]rch Board, also known as REB) approval/exemption and whether the procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation" [133]. However, only 12% (9/73) of the studies reported approval by an ethical review board. Acquiring review board approval will help the field mature and transition from small incremental studies to larger studies with representative users that yield more reliable and valid findings.

In the second approach, the authors aim to design a better algorithm, where better is again defined by the authors; for example, the algorithm might perform faster, be more accurate, or be more efficient in its use of computing power. Although the F1 score, the mean absolute error, and the nDCG are well defined and known within the recommender domain, other parameters are more ambiguous. For example, performance or effectiveness can be assessed using different measurements: it can be a monitored health parameter, such as the duration that a user remains within healthy ranges [81], or a predictive parameter, such as improved precision and recall used as a proxy for performance [72]. Unfortunately, this difference makes it difficult to compare health recommendation algorithms, and this inconsistency in measurement variables makes it infeasible to report in this systematic review which recommender techniques should be preferred. Therefore, we argue that evaluations of HRS algorithms should always be reported in enough detail for other researchers to validate the results, if needed.
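As a point of reference for these well-defined metrics, the following minimal sketch (our own illustration; the recommendation lists, ratings, and relevance scores are hypothetical) shows how the F1 score, the mean absolute error, and the nDCG can be computed for a single user.

```python
import numpy as np

def f1_score(recommended, relevant):
    """F1 = harmonic mean of precision and recall over a set of recommended items."""
    hits = len(set(recommended) & set(relevant))
    if hits == 0:
        return 0.0
    precision = hits / len(recommended)
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

def mean_absolute_error(predicted, actual):
    """Mean absolute error between predicted and observed ratings."""
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(actual))))

def ndcg(ranked_relevances, k=None):
    """nDCG: discounted cumulative gain of the ranking, normalized by the ideal ordering."""
    rel = np.asarray(ranked_relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))   # positions 1..k discounted by log2(i+1)
    dcg = np.sum(rel * discounts)
    idcg = np.sum(np.sort(rel)[::-1] * discounts)
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: top-5 recommendations evaluated against known relevance judgments.
print(f1_score(recommended=[1, 2, 3, 4, 5], relevant=[2, 3, 9]))
print(mean_absolute_error(predicted=[4.2, 3.1, 5.0], actual=[4, 3, 4]))
print(ndcg(ranked_relevances=[3, 2, 0, 1, 0], k=5))
```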

Limitations

This study has some limitations that affect its contribution. Although an extensive scoping search was conducted in scientific databases and the most relevant health care informatics journals, some relevant literature in other domains might have been excluded. The keywords used in the search string could also have impacted the results. First, we did not include domain-specific constructs of health, such as asthma, pregnancy, and iron deficiency. Many studies may implicitly report healthy computer-generated recommendations when they research the impact of a new intervention; however, building an HRS is often not the goal of these studies, and they were therefore excluded from this review. Second, we searched for papers that reported studying an HRS; nonincluded studies might have built an HRS but did not report it as such. Considering our RQs, we deemed it important that authors explicitly reported their work as a recommender system. To conclude, in this study, we provide a large cross-domain overview of health recommender techniques targeted at laypersons and deliver a set of recommendations that could help the field of HRSs mature.

Conclusions

This study presents a comprehensive report on the use of HRSs across domains. We have discussed the different subdomains in which HRSs are applied, the different recommender techniques used, the different manners in which they are evaluated, and, finally, how they present the recommendations to the user. On the basis of this analysis, we have provided research guidelines toward a consistent reporting of HRSs. We found that although most applications are intended to improve users' well-being, there is a significant opportunity for HRSs to inform and guide users' health actions. Although many of the studies lack a user-centered evaluation approach, some studies performed full-scale RCT evaluations or conducted in-the-wild studies to validate their HRS, showing that the field is slowly maturing. On the basis of this study, we argue that it should always be clear what the HRS is recommending and for whom these recommendations are intended. Graphical assets should be added to show how recommendations are presented to users. Authors should also report which data sets and algorithms were used to calculate the recommendations. Finally, detailed evaluation protocols should be reported.

We conclude that the results motivate the creation of richer applications in future design and development of HRSs. The field is maturing, and interesting opportunities are being created to inform and guide health actions.

Acknowledgments

This work was part of the research project PANACEA Gaming Platform with project HBC.2016.0177, which was financed by Flanders Innovation & Entrepreneurship and research project IMPERIUM with research grant G0A3319N from the Research Foundation-Flanders (FWO) and the Slovenian Research Agency grant ARRS-N2-0101. Project partners were BeWell Innovations and the University Hospital of Antwerp.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Coded data set of all included papers.

XLS File (Microsoft Excel File), 50 KB

Multimedia Appendix 2

Overview of recommended items by 73 studies.

PNG File , 323 KB

Multimedia Appendix 3

Overview of evaluation approaches.

PNG File , 375 KB

References

  1. Stevens G, Mascarenhas M, Mathers C. Global health risks: progress and challenges. Bull World Health Organ 2009 Sep;87(9):646 [FREE Full text] [CrossRef] [Medline]
  2. de Choudhury M, Sharma S, Kiciman E. Characterizing Dietary Choices, Nutrition, and Language in Food Deserts via Social Media. In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing. New York, United States: Association for Computing Machinery; 2016 Mar 2 Presented at: CSCW'16; Feb 27–March 2, 2016; San Francisco, CA p. 1157-1170. [CrossRef]
  3. Glanz K, Rimer B, Viswanath K. Health Behavior and Health Education: Theory, Research, and Practice. San Francisco: John Wiley & Sons; 2008.
  4. Woolf SH. The power of prevention and what it requires. J Am Med Assoc 2008 May 28;299(20):2437-2439. [CrossRef] [Medline]
  5. Elsweiler D, Ludwig B, Said A, Schaefer H, Trattner C. Engendering Health with Recommender Systems. In: Proceedings of the 10th ACM Conference on Recommender Systems. New York, United States: Association for Computing Machinery; 2016 Presented at: RecSys'16; September 15-19, 2016; New York, NY. [CrossRef]
  6. Yürüten O. Recommender Systems for Healthy Behavior Change Internet. École polytechnique fédérale de Lausanne. 2017.   URL: https://infoscience.epfl.ch/record/231155/files/EPFL_TH7973.pdf [accessed 2021-06-04]
  7. Rabbi M, Aung MH, Zhang M, Choudhury T. MyBehavior: Automatic Personalized Health Feedback from User Behaviors and Preferences Using Smartphones. In: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York, United States: Association for Computing Machinery; 2015 Presented at: UbiComp'15; September 7-11, 2015; Osaka, Japan. [CrossRef]
  8. Radha M, Willemsen MC, Boerhof M, IJsselsteijn WA. Lifestyle Recommendations for Hypertension through Rasch-based Feasibility Modeling. In: Proceedings of the 2016 Conference on User Modeling Adaptation Personalization. New York, United States: Association for Computing Machinery; 2016 Presented at: UMAP'16; July 13–17, 2016; Halifax, Nova Scotia, Canada p. 239-247. [CrossRef]
  9. Hidalgo JI, Maqueda E, Risco-Martín JL, Cuesta-Infante A, Colmenar JM, Nobel J. glUCModel: a monitoring and modeling system for chronic diseases applied to diabetes. J Biomed Inform 2014 Apr;48:183-192 [FREE Full text] [CrossRef] [Medline]
  10. Grönvall E, Verdezoto N, Bagalkot N, Sokoler T. Concordance: A Critical Participatory Alternative in Healthcare IT. In: Proceedings of The Fifth Decennial Aarhus Conference on Critical Alternatives. 2015 Oct 5 Presented at: CA'15; August 17-21, 2015; Aarhus, Denmark p. 21-24. [CrossRef]
  11. He C, Parra D, Verbert K. Interactive recommender systems: a survey of the state of the art and future research challenges and opportunities. Expert Syst Appl 2016 Sep;56:9-27. [CrossRef]
  12. Ricci F, Rokach L, Shapira B. Recommender Systems Handbook. USA: Springer US; 2015:1003.
  13. Kobsa A, Nejdl W. Hybrid web recommender systems. In: Brusilovsky P, Kobsa A, Nejdl W, editors. The Adaptive Web Methods and Strategies of Web Personalization. Berlin: Springer Berlin Heidelberg; 2007:377-480.
  14. Adomavicius G, Tuzhilin A. Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. IEEE Trans Knowl Data Eng 2005 Jun;17(6):734-749. [CrossRef]
  15. Bocanegra CL, Sevillano JL, Rizo C, Civit A, Fernandez-Luque L. HealthRecSys: a semantic content-based recommender system to complement health videos. BMC Med Inform Decis Mak 2017 May 15;17(1):63 [FREE Full text] [CrossRef] [Medline]
  16. Wu L, Shah S, Choi S, Tiwari M, Posse C. The Browsemaps: Collaborative Filtering at LinkedIn. In: Proceedings of the 6th Workshop on Recommender Systems and the Social Web. 2014 Presented at: RecSys'14; October 6-10, 2014; Foster City, CA, USA   URL: http://ceur-ws.org/Vol-1271/Paper3.pdf
  17. Verbert K, Duval E, Lindstaedt SN, Gillet D. Context-aware recommender systems. J Univers Comput Sci 2010;16(16):2175-2178. [CrossRef]
  18. Verbert K, Manouselis N, Ochoa X, Wolpers M, Drachsler H, Bosnic I, et al. Context-aware recommender systems for learning: a survey and future challenges. IEEE Trans Learn Technol 2012 Oct;5(4):318-335. [CrossRef]
  19. Mahmood T, Ricci F. Improving Recommender Systems With Adaptive Conversational Strategies. In: Proceedings of the 20th ACM Conference on Hypertext Hypermedia. USA: Association for Computing Machinery; 2009 Presented at: ACM'09; June 29–July 1, 2009; Torino, Italy p. 73-85. [CrossRef]
  20. Resnick P, Varian HR. Recommender systems. Commun ACM 1997 Mar;40(3):56-58. [CrossRef]
  21. Wiesner M, Pfeifer D. Health recommender systems: concepts, requirements, technical basics and challenges. Int J Environ Res Public Health 2014 Mar 3;11(3):2580-2607 [FREE Full text] [CrossRef] [Medline]
  22. Sezgin E, Özkan S. A Systematic Literature Review on Health Recommender Systems. In: E-Health and Bioengineering Conference (EHB).: IEEE; 2013 Presented at: EHB'13; November 21-23, 2013; Iasi, Romania p. 1. [CrossRef]
  23. Valdez AC, Ziefle M, Verbert K, Felfernig A, Holzinger A. Recommender systems for health informatics: state-of-the-art and future perspectives. In: Holzinger A, editor. Machine Learning for Health Informatics. Lecture Notes in Computer Science. Cham, Switzerland: Springer International Publishing; 2016:391-414.
  24. Kamran M, Javed A. A survey of recommender systems and their application in healthcare. Tech J 2015;20(4):111-119 [FREE Full text]
  25. Afolabi AO, Toivanen P, Haataja K, Mykkänen J. Systematic literature review on empirical results and practical implementations of healthcare recommender systems: lessons learned and a novel proposal. Int J Healthcare Inform Syst Inf 2015;10(4):1-21 [FREE Full text] [CrossRef]
  26. Ferretto LR, Cervi CR, de Marchi AC. Recommender systems in mobile apps for health a systematic review. In: 12th Iberian Conference on Information Systems and Technologies (CISTI).: IEEE; 2017 Presented at: CISTI'17; June 21-24, 2017; Lisbon, Portugal p. 1-6. [CrossRef]
  27. Hors-Fraile S, Rivera-Romero O, Schneider F, Fernandez-Luque L, Luna-Perejon F, Civit-Balcells A, et al. Analyzing recommender systems for health promotion using a multidisciplinary taxonomy: a scoping review. Int J Med Inform 2018 Jun;114:143-155. [CrossRef] [Medline]
  28. Schäfer H, Hors-Fraile S, Karumur RP, Valdez AC, Said A, Torkamaan H, et al. Towards Health (Aware) Recommender Systems. In: Proceedings of the 2017 International Conference on Digital Health. USA: ACM; 2017 Presented at: DH'17; July 2-5, 2017; London p. 157-161. [CrossRef]
  29. Sadasivam RS, Cutrona SL, Kinney RL, Marlin BM, Mazor KM, Lemon SC, et al. Collective-intelligence recommender systems: advancing computer tailoring for health behavior change into the 21st century. J Med Internet Res 2016 Mar 7;18(3):e42 [FREE Full text] [CrossRef] [Medline]
  30. Cappella JN, Yang S, Lee S. Constructing recommendation systems for effective health messages using content, collaborative, and hybrid algorithms. Ann Am Acad Pol Soc Sci 2015 Apr 9;659(1):290-306. [CrossRef]
  31. Lika B, Kolomvatsos K, Hadjiefthymiades S. Facing the cold start problem in recommender systems. Expert Syst Appl 2014 Mar;41(4):2065-2073. [CrossRef]
  32. Kumar G, Jerbi H, Gurrin C, O’Mahony M. Towards Activity Recommendation from Lifelogs. In: Proceedings of the 16th International Conference on Information Integration and Web-based Applications & Services. 2014 Presented at: CIWAS'14; December 4-6 2014; Hanoi, Viet Nam. [CrossRef]
  33. Rist T, Seiderer A, Hammer S, Mayr M, André E. CARE - Extending a Digital Picture Frame with a Recommender Mode to Enhance Well-Being of Elderly People. In: Proceedings of the 9th International Conference on Pervasive Computing Technologies for Healthcare. 2015 Presented at: IEEE'115; May 20-23, 2015; Istanbul, Turkey. [CrossRef]
  34. Paredes P, Gilad-Bachrach R, Czerwinski M, Roseway A, Rowan K, Hernandez J. PopTherapy: Coping With Stress Through Pop-culture. In: Proceedings of the 8th International Conference on Pervasive Computing Technologies for Healthcare. 2014 Presented at: PCTH'14; May 20-23, 2014; Oldenburg, Germany. [CrossRef]
  35. Lin Y, Jessurun J, de Vries B, Timmermans H. Motivate: towards Context-Aware Recommendation Mobile System for Healthy Living. In: 5th International ICST Conference on Pervasive Computing Technologies for Healthcare. 2011 Presented at: PCTH'11; May 23-26, 2011; Dublin, Republic of Ireland. [CrossRef]
  36. Elsweiler D, Harvey M. Towards Automatic Meal Plan Recommendations for Balanced Nutrition. In: Proceedings of the 9th ACM Conference on Recommender Systems. 2015 Presented at: ACM'15; September 16–20, 2015; Vienna, Austria. [CrossRef]
  37. Wayman E, Madhvanath S. Nudging Grocery Shoppers to Make Healthier Choices. In: Proceedings of the 9th ACM Conference on Recommender Systems. 2015 Presented at: ACM'15; September 16-20, 2015; Vienna, Austria. [CrossRef]
  38. Subagdja B, Tan AH. Coordinated Persuasion with Dynamic Group Formation for Collaborative Elderly Care. In: 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. 2015 Presented at: WI-IAT'15; December 6-9, 2015; Singapore. [CrossRef]
  39. Lafta R, Zhang J, Tao X, Li Y, Tseng VS. An Intelligent Recommender System Based on Short-Term Risk Prediction for Heart Disease Patients. In: 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. 2015 Presented at: WI-IAT'15; December 6-9, 2015; Singapore. [CrossRef]
  40. Takama Y, Sasaki W, Okumura T, Yu C, Chen L, Ishikawa H. Walking Route Recommendation System for Taking a Walk as Health Promotion. In: 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology. 2015 Presented at: WI-IAT'15; December 6-9, 2015; Singapore. [CrossRef]
  41. Guo L, Jin B, Yao C, Yang H, Huang D, Wang F. Which doctor to trust: a recommender system for identifying the right doctors. J Med Internet Res 2016 Jul 7;18(7):e186 [FREE Full text] [CrossRef] [Medline]
  42. Sadasivam RS, Borglund EM, Adams R, Marlin BM, Houston TK. Impact of a collective intelligence tailored messaging system on smoking cessation: the perspect randomized experiment. J Med Internet Res 2016 Nov 8;18(11):e285 [FREE Full text] [CrossRef] [Medline]
  43. Pustozerov E, Popova P, Tkachuk A, Bolotko Y, Yuldashev Z, Grineva E. Development and evaluation of a mobile personalized blood glucose prediction system for patients with gestational diabetes mellitus. JMIR Mhealth Uhealth 2018 Jan 9;6(1):e6 [FREE Full text] [CrossRef] [Medline]
  44. Yom-Tov E, Feraru G, Kozdoba M, Mannor S, Tennenholtz M, Hochberg I. Encouraging physical activity in patients with diabetes: intervention using a reinforcement learning system. J Med Internet Res 2017 Oct 10;19(10):e338 [FREE Full text] [CrossRef] [Medline]
  45. Bidargaddi N, Musiat P, Winsall M, Vogl G, Blake V, Quinn S, et al. Efficacy of a web-based guided recommendation service for a curated list of readily available mental health and well-being mobile apps for young people: randomized controlled trial. J Med Internet Res 2017 May 12;19(5):e141 [FREE Full text] [CrossRef] [Medline]
  46. Asthana S, Megahed A, Strong R. A Recommendation System for Proactive Health Monitoring Using IoT and Wearable Technologies. In: 2017 IEEE International Conference on AI & Mobile Services. 2017 Presented at: AIMS'17; 25-30 June, 2017; Honolulu, HI, USA. [CrossRef]
  47. Ng YK, Jin M. Personalized Recipe Recommendations for Toddlers Based on Nutrient Intake and Food Preferences. In: Proceedings of the 9th International Conference on Management of Digital EcoSystems. 2017 Presented at: CMDE'17; November 7–10, 2017; Bangkok, Thailand. [CrossRef]
  48. Ge M, Elahi M, Fernaández-Tobías I, Ricci F, Massimo D. Using Tags and Latent Factors in a Food Recommender System. In: Proceedings of the 5th International Conference on Digital Health 2015. 2015 Presented at: CDH'15; May 18–20, 2015; Florence, Italy. [CrossRef]
  49. Yang L, Hsieh C, Yang H, Pollak J, Dell N, Belongie S, et al. Yum-me: a personalized nutrient-based meal recommender system. ACM Trans Inf Syst 2017 Aug;36(1):1-31 [FREE Full text] [CrossRef] [Medline]
  50. Pawar KR, Ghorpade T, Shedge R. Constraint Based Recipe Recommendation Using Forward Checking Algorithm. In: International Conference on Advances in Computing, Communications and Informatics. 2016 Presented at: ICACCI'16; September 21-24, 2016; Jaipur, India. [CrossRef]
  51. Krishna S, Venkatesh P, Swagath S, Valliyammai C. A Trust Enhanced Recommender System for Medicare Applications. In: Sixth International Conference on Advanced Computing. 2014 Presented at: ICoAC'14; December 17-19, 2014; Chennai, India. [CrossRef]
  52. Halder K, Kan MY, Sugiyama K. Health Forum Thread Recommendation Using an Interest Aware Topic Model. In: Proceedings of the 2017 ACM on Conference on Information Knowledge Management. 2017 Presented at: ACM'17; November 6–10, 2017; Singapore. [CrossRef]
  53. Farrell RG, Danis CM, Ramakrishnan S, Kellogg WA. Intrapersonal Retrospective Recommendation: Lifestyle Change Recommendations Using Stable Patterns of Personal Behavior. In: Proceedings of the First International Workshop on Recommendation Technologies for Lifestyle Change. 2012 Presented at: LIFESTYLE'12; September 13, 2012; Dublin, Ireland.
  54. Xin Y, Chen Y, Jin L, Cai Y, Feng L. TeenRead: An Adolescents Reading Recommendation System Towards Online Bibliotherapy. In: IEEE International Congress on Big Data. 2017 Presented at: BigData Congress'17; June 25-30, 2017; Honolulu, HI, USA. [CrossRef]
  55. Zaman N, Li J. Semantics-Enhanced Recommendation System for Social Healthcare. In: 28th International Conference on Advanced Information Networking and Applications. 2014 Presented at: IEEE'14; May 13–16, 2014; Victoria, Canada. [CrossRef]
  56. Ali R, Afzal M, Hussain M, Ali M, Siddiqi MH, Lee S, et al. Multimodal hybrid reasoning methodology for personalized wellbeing services. Comput Biol Med 2016 Feb 1;69:10-28. [CrossRef] [Medline]
  57. Pop CB, Chifu VR, Salomie I, Cozac A, Mesaros I. Particle Swarm Optimization-based Method for Generating Healthy Lifestyle Recommendations. In: 9th International Conference on Intelligent Computer Communication and Processing. 2013 Presented at: ICCP'13; September 5-7, 2013; Cluj-Napoca, Romania. [CrossRef]
  58. Cerón-Rios G, López DM, Blobel B. Architecture and user-context models of cocare: a context-aware mobile recommender system for health promotion. Stud Health Technol Inform 2017;237:140-147. [Medline]
  59. Marlin BM, Adams RJ, Sadasivam R, Houston TK. Towards collaborative filtering recommender systems for tailored health communications. AMIA Annu Symp Proc 2013;2013:1600-1607 [FREE Full text] [Medline]
  60. Li H, Zhang Q, Lu K. Integrating Mobile Sensing and Social Network for Personalized Health-care Application. In: Proceedings of the 30th Annual ACM Symposium on Applied Computing. 2015 Presented at: ACM'15; April 13-17, 2015; Salamanca, Spain. [CrossRef]
  61. Ntalaperas D, Bothos E, Perakis K, Magoutas B, Mentzas G. An Intelligent System for Personalized Nutritional Recommendations in Restaurants. In: Proceedings of the 19th Panhellenic Conference on Informatics. 2015 Presented at: PCI'15; October 1-3, 2015; Athens, Greece. [CrossRef]
  62. Trattner C, Elsweiler D. Investigating the Healthiness of Internet-Sourced Recipes: Implications for Meal Planning and Recommender Systems. In: Proceedings of the 26th International Conference on World Wide Web. 2017 Presented at: WWW'17; April 3–7, 2017; Perth, Australia. [CrossRef]
  63. Gutiérrez F, Cardoso B, Verbert K. PHARA: A Personal Health Augmented Reality Assistant to Support Decision-making at Grocery Stores. In: Proceedings of the International Workshop on Health Recommender Systems co-located with ACM. 2017 Presented at: RecSys'17; August 1-4, 2017; Como, Italy. [CrossRef]
  64. Lo YW, Zhao Q, Chen RC. Automatic Generation and Recommendation of Recipes Based on Outlier Analysis. In: 7th International Conference on Awareness Science and Technology. 2015 Presented at: iCAST'15; May 6-9, 2015; Qinhuangdao, China. [CrossRef]
  65. Ooi C, Iiba C, Takano C. Ingredient Substitute Recommendation for Allergy-safe Cooking Based on Food Context. In: Pacific Rim Conference on Communications, Computers and Signal Processing. 2015 Presented at: PACRIM'15; August 24-26, 2015; Victoria, BC, Canada. [CrossRef]
  66. Ueta T, Iwakami M, Ito T. Implementation of a Goal-Oriented Recipe Recommendation System Providing Nutrition Information. In: International Conference on Technologies and Applications of Artificial Intelligence. 2011 Presented at: TAAI'11; November 11-13, 2011; Chung Li, Taiwan. [CrossRef]
  67. Agapito G, Simeoni M, Calabrese B, Caré I, Lamprinoudi T, Guzzi PH, et al. DIETOS: a dietary recommender system for chronic diseases monitoring and management. Comput Methods Programs Biomed 2018 Jan;153:93-104. [CrossRef] [Medline]
  68. Li J, Zaman N. Personalized Healthcare Recommender Based on Social Media. In: Proceedings of the 2014 IEEE 28th International Conference on Advanced Information Networking and Applications. 2014 Presented at: AINA'14; May 13-16, 2014; Victoria, BC, Canada. [CrossRef]
  69. Wang S, Chen YL, Kuo AM, Chen H, Shiu YS. Design and evaluation of a cloud-based mobile health information recommendation system on wireless sensor networks. Comput Electr Engg 2016 Jan;49:221-235. [CrossRef]
  70. Wang CS, Li CY. Integrated Baby-care Recommender Platform Based on Hybrid Commonsense Reasoning and Case-based Reasoning Algorithms. In: The 6th International Conference on Networked Computing and Advanced Information Management. 2010 Presented at: NCAIM'10; August 16-18, 2010; Seoul, Korea (South).
  71. Jiang L, Yang CC. User recommendation in healthcare social media by assessing user similarity in heterogeneous network. Artif Intell Med 2017 Sep;81:63-77. [CrossRef] [Medline]
  72. Cho JH, Sondhi P, Zhai C, Schatz BR. Resolving Healthcare Forum Posts via Similar Thread Retrieval. In: Proceedings of the 5th ACM Conference on Bioinformatics, Computational Biology, Health Informatics. 2014 Presented at: ACM'14; September 20–23, 2014; Newport Beach, CA, USA. [CrossRef]
  73. Lima-Medina E, Loques O, Mesquita C. 'Minha Saude' a Healthcare Social Network for Patients With Cardiovascular Diseases. In: 3rd International Conference on Serious Games and Applications for Health (SeGAH). 2014 Presented at: SeGAH'14; May 14-16, 2014; Rio de Janeiro, Brazil. [CrossRef]
  74. Wang J, Man C, Zhao Y, Wang F. An Answer Recommendation Algorithm for Medical Community Question Answering Systems. In: IEEE International Conference on Service Operations and Logistics, and Informatics. 2016 Presented at: SOLI'16; July 10-12, 2016; Beijing, China. [CrossRef]
  75. Narducci F, Lops P, Semeraro G. Power to the patients: the HealthNetsocial network. Information Systems 2017 Nov;71:111-122. [CrossRef]
  76. Huang Y, Liu P, Pan Q, Lin J. A Doctor Recommendation Algorithm Based on Doctor Performances and Patient Preferences. In: International Conference on Wavelet Active Media Technology and Information Processing. 2012 Presented at: ICWAMTIP'12; December 17-19, 2012; Chengdu, China. [CrossRef]
  77. Jiang H, Xu W. How to Find Your Appropriate Doctor: an Integrated Recommendation Framework in Big Data Context. In: IEEE Symposium on Computational Intelligence in Healthcare and e-health. 2014 Presented at: CICARE'14; December 9-12, 2014; Orlando, FL, USA. [CrossRef]
  78. Chen TT, Chiu M. Mining the preferences of patients for ubiquitous clinic recommendation. Health Care Manag Sci 2020 Jun 6;23(2):173-184. [CrossRef] [Medline]
  79. Ávila-Vázquez D, Lopez-Martinez M, Maya D, Olvera X, Guzmán G, Torres M, et al. Geospatial Recommender System for the Location of Health Services. In: 2014 9th Iberian Conference on Information Systems and Technologies. 2014 Presented at: CISTI'14; June 18-21, 2014; Barcelona, Spain. [CrossRef]
  80. Tabrizi TS, Khoie MR, Sahebkar E, Rahimi S, Marhamati N. Towards a Patient Satisfaction Based Hospital Recommendation System. In: 2016 International Joint Conference on Neural Networks. 2016 Presented at: IJCNN'16; July 24-29, 2016; Vancouver, BC, Canada. [CrossRef]
  81. Torrent-Fontbona F, Lopez B. Personalized adaptive CBR bolus recommender system for type 1 diabetes. IEEE J Biomed Health Inform 2019 Jan;23(1):387-394. [CrossRef] [Medline]
  82. Li J, Kong J. Cell Phone-based Diabetes Self-management and Social Networking System for American Indians. In: IEEE 18th International Conference on e-Health Networking, Applications and Services. 2016 Presented at: Healthcom'16; September 14-16, 2016; Munich, Germany. [CrossRef]
  83. Cheung K, Ling W, Karr CJ, Weingardt K, Schueller SM, Mohr DC. Evaluation of a recommender app for apps for the treatment of depression and anxiety: an analysis of longitudinal user engagement. J Am Med Inform Assoc 2018 Aug 1;25(8):955-962 [FREE Full text] [CrossRef] [Medline]
  84. Clarke S, Jaimes LG, Labrador MA. Mstress: a Mobile Recommender System for Just-in-time Interventions for Stress. In: 14th IEEE Annual Consumer Communications & Networking Conference. 2017 Presented at: CCNC'17; January 8-11, 2017; Las Vegas, NV, USA. [CrossRef]
  85. Zaini N, Latip MF, Omar H, Mazalan L, Norhazman H. Online Personalized Audio Therapy Recommender Based on Community Ratings. In: International Symposium on Computer Applications and Industrial Electronics. 2012 Presented at: ISCAIE'12; December 3-4, 2012; Kota Kinabalu, Malaysia. [CrossRef]
  86. Bravo-Torres JF, Ordoñez-Ordoñez JO, Gallegos-Segovia PL, Vintimilla-Tapia PE, López-Nores M, Blanco-Fernández Y. A Context-aware Platform for Comprehensive Care of Elderly People: Proposed Architecture. In: CHILEAN Conference on Electrical, Electronics Engineering, Information and Communication Technologies. 2017 Presented at: CHILECON'17; October 18-20, 2017; Pucon, Chile. [CrossRef]
  87. Lage R, Durao F, Dolog P, Stewart A. Applicability of Recommender Systems to Medical Surveillance Systems. In: Proceedings of the Second International Workshop on Web Science and Information Exchange in the Medical Web. 2011 Presented at: WSIE'11; October 28, 2011; Glasgow, Scotland, UK. [CrossRef]
  88. Ho TC, Chen X. ExerTrek: a Portable Handheld Exercise Monitoring, Tracking and Recommendation System. In: 11th International Conference on e-Health Networking, Applications and Services. 2009 Presented at: Healthcom'09; December 16-18, 2009; Sydney, NSW, Australia. [CrossRef]
  89. Cheng CY, Qian X, Tseng SH, Fu LC. Recommendation Dialogue System Through Pragmatic Argumentation. In: 26th IEEE International Symposium on Robot and Human Interactive Communication. 2017 Presented at: RO-MAN'17; August 29-30, 2017; Lisbon, Portugal. [CrossRef]
  90. Wongpun S, Guha S. Elderly Care Recommendation System for Informal Caregivers Using Case-based Reasoning. In: 2nd Advanced Information Technology, Electronic and Automation Control Conference. 2017 Presented at: IAEAC'17; March 25-26, 2017; Chongqing, China. [CrossRef]
  91. Gai K, Qiu M, Jayaraman S, Tao L. Ontology-Based Knowledge Representation for Secure Self-Diagnosis in Patient-Centered Telehealth with Cloud Systems. In: 2015 IEEE 2nd International Conference on Cyber Security and Cloud Computing. 2015 Presented at: IEEE'15; November 3-5, 2015; New York, NY, USA. [CrossRef]
  92. Phanich M, Pholkul P, Phimoltares S. Food Recommendation System Using Clustering Analysis for Diabetic Patients. In: International Conference on Information Science and Applications. 2010 Presented at: CISA'10; April 21-23, 2010; Seoul, Korea (South). [CrossRef]
  93. Luo Y, Ling C, Schuurman J, Petrella R. GlucoGuide: An Intelligent Type-2 Diabetes Solution Using Data Mining and Mobile Computing. In: IEEE International Conference on Data Mining Workshop.: IEEE; 2014 Presented at: IEEE'14; December 14, 2014; Shenzhen, China. [CrossRef]
  94. Mustaqeem A, Anwar SM, Khan AR, Majid M. A statistical analysis based recommender model for heart disease patients. Int J Med Inform 2017 Dec;108:134-145. [CrossRef] [Medline]
  95. Esteban B, Tejeda-Lorente, Porcel C, Arroyo M, Herrera-Viedma E. TPLUFIB-WEB: a fuzzy linguistic web system to help in the treatment of low back pain problems. Know Based Syst 2014 Sep;67:429-438. [CrossRef]
  96. Lee C, Wang M, Lan S. Adaptive personalized diet linguistic recommendation mechanism based on type-2 fuzzy sets and genetic fuzzy markup language. IEEE Trans Fuzzy Syst 2015 Oct;23(5):1777-1802. [CrossRef]
  97. Casino F, Patsakis C, Batista E, Borras F, Martinez-Balleste A. Healthy routes in the smart city: a context-aware mobile recommender. IEEE Softw 2017 Nov;34(6):42-47. [CrossRef]
  98. Nurbakova D, Laporte L, Calabretto S, Gensel J. Users Psychological Profiles for Leisure Activity Recommendation: User Study. In: Proceedings of the International Workshop on Recommender Systems for Citizens. 2017 Presented at: RSC'17; August 27, 2017; Como, Italy p. 1-4. [CrossRef]
  99. Buhl M, Famulare J, Glazier C, Harris J, McDowell A, Waldrip G, et al. Optimizing Multi-channel Health Information Delivery for Behavioral Change. In: IEEE Systems and Information Engineering Design Symposium. 2016 Presented at: SIEDS'16; April 29, 2016; Charlottesville, VA, USA. [CrossRef]
  100. Chen RC, Ting YH, Chen JK, Lo YW. The Nutrients of Chronic Diet Recommended Based on Domain Ontology and Decision Tree. In: Conference on Technologies and Applications of Artificial Intelligence. 2015 Presented at: TAAI'15; November 20-22, 2015; Tainan, Taiwan. [CrossRef]
  101. Moher D, Liberati A, Tetzlaff J, Altman D, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097 [FREE Full text] [CrossRef] [Medline]
  102. Massimo D, Elahi M, Ge M, Ricci F. Item Contents Good, User Tags Better: Empirical Evaluation of a Food Recommender System. In: Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization. USA: ACM; 2017 Presented at: UMAP'17; July 9-12, 2017; Bratislava, Slovakia p. 373-374. [CrossRef]
  103. Pilloni P, Piras L, Carta S, Fenu G, Mulas F, Boratto L. Recommendation in Persuasive Ehealth Systems: an Effective Strategy to Spot Users’ Losing Motivation to Exercise. In: 2nd International Workshop on Health Recommender Systems. 2017 Presented at: HealthRecSys'17; August 14, 2017; Singapore. [CrossRef]
  104. Herlocker J, Konstan J, Terveen L, Riedl J. Evaluating collaborative filtering recommender systems. ACM Trans Inf Syst 2004 Jan;22(1):5-53. [CrossRef]
  105. Pazzani M, Billsus D. Content-based recommendation systems. In: Brusilovsky P, Nejdl W, editors. The Adaptive Web: Methods and Strategies of Web Personalization. Berlin: Springer Berlin Heidelberg; 2007:325-341.
  106. Burke R. Knowledge-based recommender systems. Encyclopedia of library and information science 2000;69:1-23.
  107. Burke R. Hybrid recommender systems: survey and experiments. User Model User-Adap Inter 2002;12:331-370. [CrossRef]
  108. Hawker S, Payne S, Kerr C, Hardey M, Powell J. Appraising the evidence: reviewing disparate data systematically. Qual Health Res 2002 Nov;12(9):1284-1299. [CrossRef] [Medline]
  109. Adams R, Sadasivam RS, Balakrishnan K, Kinney RL, Houston TK, Marlin BM. PERSPeCT: Collaborative Filtering for Tailored Health Communications. In: Proceedings of the 8th ACM Conference on Recommender Systems. 2014 Presented at: ACM'14; October 6–10, 2014; Foster City, Silicon Valley, CA, USA. [CrossRef]
  110. MedlinePlus - Health Information from the National Library of Medicine. US National Library of Medicine.   URL: https://medlineplus.gov/ [accessed 2019-01-27]
  111. Krishnan S, Patel J, Franklin M, Goldberg K. A Methodology for Learning, Analyzing, Mitigating Social Influence Bias in Recommender Systems. In: Proceedings of the 8th ACM Conference on Recommender Systems. 2014 Presented at: ACM'14; October 6–10, 2014; Foster City, Silicon Valley, CA, USA. [CrossRef]
  112. Wang Z, Huang H, Cui L, Chen J, An J, Duan H, et al. Using natural language processing techniques to provide personalized educational materials for chronic disease patients in china: development and assessment of a knowledge-based health recommender system. JMIR Med Inform 2020 Apr 23;8(4):e17642 [FREE Full text] [CrossRef] [Medline]
  113. Cacheda F, Carneiro V, Fernández D, Formoso V. Comparison of collaborative filtering algorithms. ACM Trans Web 2011 Feb;5(1):1-33. [CrossRef]
  114. Adomavicius G, Tuzhilin A. Context-aware recommender systems. In: Ricci F, Rokach L, Shapira B, editors. Recommender Systems Handbook. New York, USA: Springer; 2015.
  115. Patients Like Me.   URL: https://www.patientslikeme.com/ [accessed 2020-01-29]
  116. Health Boards.   URL: https://www.healthboards.com/ [accessed 2020-01-29]
  117. Knijnenburg BP, Willemsen MC, Gantner Z, Soncu H, Newell C. Explaining the user experience of recommender systems. User Model User-Adap Inter 2012 Mar 10;22(4-5):441-504. [CrossRef]
  118. Peterson C, Park N, Seligman ME. Orientations to happiness and life satisfaction: the full life versus the empty life. J Happiness Stud 2005 Mar;6(1):25-41. [CrossRef]
  119. Donnellan MB, Oswald FL, Baird BM, Lucas RE. The mini-IPIP scales: tiny-yet-effective measures of the Big Five factors of personality. Psychol Assess 2006 Jun;18(2):192-203. [CrossRef] [Medline]
  120. Przybylski AK, Murayama K, DeHaan CR, Gladwell V. Motivational, emotional, and behavioral correlates of fear of missing out. Comput Hum Behav 2013 Jul;29(4):1841-1848. [CrossRef]
  121. Dallery J, Cassidy RN, Raiff BR. Single-case experimental designs to evaluate novel technology-based health interventions. J Med Internet Res 2013 Feb 8;15(2):e22 [FREE Full text] [CrossRef] [Medline]
  122. Keyes CL. Mental health in adolescence: is America's youth flourishing? Am J Orthopsychiatry 2006 Jul;76(3):395-402. [CrossRef] [Medline]
  123. Houston TK, Sadasivam RS, Allison JJ, Ash AS, Ray MN, English TM, et al. Evaluating the QUIT-PRIMO clinical practice ePortal to increase smoker engagement with online cessation interventions: a national hybrid type 2 implementation study. Implement Sci 2015 Nov 2;10(154):1-16 [FREE Full text] [CrossRef] [Medline]
  124. Ricci F, Rokach L, Shapira B. In: Ricci F, Rokach L, Shapira B, Kantor P, editors. Introduction to Recommender Systems Handbook. Boston, MA: Springer; 2011.
  125. Uzor S, Baillie L, Htun NN, Smit P. Inclusively designing IDA: effectively communicating falls risk to stakeholders. In: Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2018 Presented at: International Conference on Human-Computer Interaction with Mobile Devices and Services; September 3–6, 2018; Barcelona, Spain p. 281-287. [CrossRef]
  126. Ferro LS, Walz SP, Greuter S. Towards Personalised, Gamified Systems: an Investigation Into Game Design, Personality and Player Typologies. In: Proceedings of the 9th Australasian Conference on Interactive Entertainment: Matters of Life and Death. 2013 Presented at: ACIE'13; October 1, 2013; Melbourne, VIC, Australia p. 1-6. [CrossRef]
  127. Swearingen K, Sinha R. Beyond Algorithms: An HCI Perspective on Recommender Systems. In: ACM SIGIR Workshop on Recommender Systems. 2001 Presented at: ACM SIGIR'01; April 9-11, 2001; New Orleans Louisiana USA.
  128. Sinha R, Swearingen K. The Role of Transparency in Recommender Systems. In: Extended Abstracts on Human Factors in Computing Systems. 2002 Presented at: CHCS'02; April 20-25, 2002; Minneapolis, Minnesota, USA. [CrossRef]
  129. Nielsen-Bohlman L, Panzer AM, Kindig DA, editors. Health Literacy: A Prescription to End Confusion. New York, USA: National Academies Press; 2004.
  130. Marx P. Providing Actionable Recommendations. Florida, USA: Josef Eul Verlag GmbH; 2013.
  131. Gutiérrez F, Charleer S, De Croon R, Htun NN, Goetschalckx G, Verbert K. Explaining and Exploring Job Recommendations: a User-driven Approach for Interacting With Knowledge-based Job Recommender Systems. In: Proceedings of the 13th ACM Conference on Recommender Systems. 2019 Presented at: ACM'19; September 16–20, 2019; Copenhagen, Denmark. [CrossRef]
  132. Gutiérrez F, Htun NN, Schlenz F, Kasimati A, Verbert K. A review of visualisations in agricultural decision support systems: an HCI perspective. Comput Electr Agri 2019 Aug;163:104844. [CrossRef]
  133. Editorial Policies. Journal of Medical Internet Research.   URL: https://www.jmir.org/about/editorialPolicies [accessed 2021-06-04]


DOC: degrees of compromise
HRS: health recommender system
ISO/IEC: International Organization for Standardization/International Electrotechnical Commission
nDCG: normalized discounted cumulative gain
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial
RQ: research question


Edited by G Eysenbach; submitted 29.01.20; peer-reviewed by A Calero Valdez, J Jiang; comments to author 10.03.20; revised version received 20.05.20; accepted 24.05.21; published 29.06.21

Copyright

©Robin De Croon, Leen Van Houdt, Nyi Nyi Htun, Gregor Štiglic, Vero Vanden Abeele, Katrien Verbert. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 29.06.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.