
Tracking of Nutritional Intake Using Artificial Intelligence

Caring is Sharing – Exploiting the Value in Data for Health and Innovation
M. Hägglund et al. (Eds.)
© 2023 European Federation for Medical Informatics (EFMI) and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/SHTI230340
Marko PETKOVIĆ a,b,1, Joyce MAAS c, Milan PETKOVIĆ a
a Eindhoven University of Technology, The Netherlands
b Data Science Department, 5M ICT, Serbia
c Center for Eating Disorders, GGZ Oost Brabant, The Netherlands
ORCiD ID: Marko Petković https://orcid.org/0009-0000-4918-6027
Abstract. In this short communication, we present the results we achieved for automated calorie intake measurement for patients with obesity or eating disorders. We demonstrate the feasibility of applying deep-learning-based image analysis to a single picture of a food dish to recognize food types and estimate their volume.
Keywords. Nutrition Measurement, Deep Learning, Eating Disorders
1. Introduction
An increasing number of people are affected by obesity and eating disorders. According
to the World Health Organization around two billion adults are overweight, more than
650 million are obese, and around 9% of the population is affected by eating disorders.
Most of these conditions can severely affect a person's health, leading to cardiovascular disease or even death, and they are expensive to treat. To better manage people with these
conditions, it is important to have a solution for automatic but accurate nutrition intake
tracking. In this paper, we propose a novel approach, which can accurately recognize and
estimate the calorie content of 101 different food types. To this end, we use several deep learning techniques, integrated into our FitSprite Nutrition smartphone app, which protects privacy by running inference on the device. The solution also provides a web
portal used by clinicians or dietitians where they can find an overview of their patients’
eating habits. This allows them to manage consumers/patients more efficiently.
2. Methods
To calculate the calorie content of a dish from a single food picture, we propose a framework
consisting of three Convolutional Neural Networks (CNNs). The three different models
we use are: (i) a CNN for food prediction; (ii) a CNN with U-Net architecture for food
segmentation, and (iii) a CNN with U-Net architecture for depth map prediction.
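The way the three model outputs can be combined into a calorie estimate is sketched below. This is an illustrative outline only, not the authors' implementation: the function names, the toy mask and depth values, and the kcal-per-cm³ lookup table are all hypothetical stand-ins.

```python
# Illustrative sketch of the three-CNN pipeline. All stubs below are
# hypothetical stand-ins for the real models, not the actual FitSprite code.

def predict_food_class(image):
    # Stand-in for the EfficientNet food classifier: returns a food label.
    return "french_fries"

def segment_food(image):
    # Stand-in for the segmentation U-Net: per-pixel mask (1 = food).
    return [[0, 1, 1],
            [0, 1, 1],
            [0, 0, 0]]

def predict_depth(image):
    # Stand-in for the depth U-Net: per-pixel distance from camera, in cm.
    return [[5.0, 2.0, 2.5],
            [5.0, 1.5, 2.0],
            [5.0, 5.0, 5.0]]

# Hypothetical lookup: kcal per cm^3 for each recognized food class.
KCAL_PER_CM3 = {"french_fries": 1.2}

def estimate_calories(image, pixel_area_cm2=1.0, plate_depth_cm=5.0):
    """Combine the three model outputs into a single calorie estimate."""
    label = predict_food_class(image)
    mask = segment_food(image)
    depth = predict_depth(image)
    # Food height above the plate, summed over all masked pixels.
    volume_cm3 = sum(
        (plate_depth_cm - depth[r][c]) * pixel_area_cm2
        for r in range(len(mask))
        for c in range(len(mask[0]))
        if mask[r][c] == 1
    )
    return label, volume_cm3 * KCAL_PER_CM3[label]
```

With the toy values above, the masked food region has a volume of 12 cm³, giving 14.4 kcal for the assumed density.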
To train the model for food prediction, we make use of an EfficientNet model
pretrained on ImageNet. We trained the model on the Food-101 dataset [1], which
consists of 101 different food classes, with 1000 pictures per food item.
1 Corresponding Author: Marko Petković, E-mail: m.petkovic1@tue.nl
For the task of food segmentation, we made use of the UEC-FoodPixComplete dataset [2], which contains 10,000 food pictures with a pixel-wise segmentation mask for each individual food item in the picture (e.g., a different mask for fries and for steak).
We trained a U-Net, using the food prediction model as encoder. To record the
performance of the model, we used the Intersection over Union metric.
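The Intersection over Union metric for binary masks can be computed as in the following minimal sketch (the paper does not describe its exact evaluation code, so this is an illustrative implementation):

```python
def intersection_over_union(pred, target):
    """IoU of two binary masks given as equal-shaped nested lists of 0/1."""
    inter = union = 0
    for pred_row, target_row in zip(pred, target):
        for p, t in zip(pred_row, target_row):
            inter += p and t  # pixel counted when both masks contain food
            union += p or t   # pixel counted when either mask contains food
    return inter / union if union else 1.0  # two empty masks match perfectly
```

For example, a prediction `[[1, 1], [0, 0]]` against a target `[[1, 0], [0, 0]]` overlaps in 1 pixel out of a union of 2, giving an IoU of 0.5.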
For depth estimation, we trained a U-Net using self-supervised monocular depth
prediction. This model was trained on the EPIC-KITCHENS dataset [3], which contains
100 hours of recordings of food preparation.
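The core idea of self-supervised depth training is that a correct depth map lets one frame be warped onto a neighboring frame, so the photometric difference after warping serves as the training signal. The 1-D toy below illustrates only this reprojection principle; the real method operates on 2-D video frames with a jointly predicted camera pose, and the function here is a simplified hypothetical sketch.

```python
def photometric_loss(target, source, disparity):
    """1-D toy of the self-supervised objective: warp the source row by the
    predicted per-pixel (integer) disparity and compare with the target."""
    loss = 0.0
    for x, d in enumerate(disparity):
        sx = x + d  # where this target pixel should appear in the source
        sx = min(max(sx, 0), len(source) - 1)  # clamp at the image border
        loss += abs(target[x] - source[sx])
    return loss / len(target)
```

A disparity field that correctly explains the motion between the two rows drives this loss to zero, which is what lets depth be learned without ground-truth labels.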
3. Results
For the food prediction task, we achieved a top-1 accuracy of 0.81 and a top-5 accuracy
of 0.94. The image segmentation network achieved an Intersection over Union of 0.91,
while combined with the depth estimation network it achieved a mean absolute
percentage error of 11.7% in predicting food volume.
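The metrics reported above can be computed as follows; this is a generic illustrative sketch, since the paper does not include its evaluation code.

```python
def top_k_accuracy(score_lists, true_labels, k):
    """Fraction of samples whose true label is among the k highest scores."""
    hits = 0
    for scores, true in zip(score_lists, true_labels):
        ranked = sorted(range(len(scores)), key=lambda i: scores[i],
                        reverse=True)
        hits += true in ranked[:k]
    return hits / len(true_labels)

def mean_absolute_percentage_error(predicted, actual):
    """MAPE, as used here for the food volume estimates."""
    errors = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return 100.0 * sum(errors) / len(errors)
```

Top-1 accuracy is simply top-k with k = 1, and the 11.7% figure corresponds to the MAPE over predicted versus measured food volumes.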
4. Discussion
Overall, we see that it is possible to make fairly accurate calorie intake estimations using
our proposed models. We also found that by combining the food prediction and
segmentation tasks, we were able to achieve a better performance for food segmentation.
The performance of our food prediction network is better than current methods which
are able to run locally on smartphones [4]. For the image segmentation task, we obtained
solid performance by achieving a higher Intersection over Union than the current best
food segmentation network [5].
5. Conclusion
In this paper, we have shown that automated tracking of nutrition intake is feasible. We completed two pilots: one with users of a fitness club, and one with patients with binge eating disorder and their clinicians at the Center for Eating Disorders in Helmond, the Netherlands. The pilots demonstrated benefits for both sides: users/patients benefit from the automated creation of food diaries, and coaches/clinicians from more efficient patient management.
References
[1] Bossard L, Guillaumin M, Van Gool L. Food-101 – Mining discriminative components with random forests. In: Computer Vision – ECCV 2014, Proceedings, Part VI. Springer. p. 446-461.
[2] Okamoto K, Yanai K. UEC-FoodPix Complete: A large-scale food image segmentation dataset. In: Pattern Recognition, ICPR, Proceedings, Part V, 2021. Springer International Publishing. p. 647-659.
[3] Damen D, Doughty et al. Rescaling egocentric vision: Collection, pipeline and challenges for EPIC-KITCHENS-100. International Journal of Computer Vision. 2022 Jan 1:1-23.
[4] Lo FP, Sun Y, Qiu J, Lo B. Image-based food classification and volume estimation for dietary assessment: A review. IEEE Journal of Biomedical and Health Informatics. 2020 Apr 30;24(7):1926-39.
[5] Ando Y, Ege T, Cho J, Yanai K. DepthCalorieCam: A mobile application for volume-based food calorie estimation using depth cameras. In: MADiMa. p. 76-81. https://doi.org/10.1145/3347448.3357172