2023
DOI: 10.1177/20552076221149320

MyDietCam: Development and usability study of a food recognition integrated dietary monitoring smartphone application

Abstract: Background: Diet monitoring has been linked with improved eating habits and positive health outcomes such as the prevention of obesity. However, it is often unsustainable, because traditional methods place a high burden on both participants and researchers through pen-and-paper recordings and manual nutrient coding respectively. The digitisation of dietary monitoring has greatly reduced these barriers. This paper proposes a diet application with a novel food recognition feature, with a usability study conducted in the r…

Cited by 7 publications (7 citation statements). References: 49 publications.
“…The best performance of the Inception-ResNet-v2 model on identifying the diet images from Weibo users is dish style level, in which top-10's accuracy is 59.945%, and top-1's accuracy is 37.259%. A total of 169,673 images of 251 diets that are currently stored in the food image table of the Diet Information Expansion dataset and the Vireo food-251 26 were used to train the image recognition model.…”
Section: Results (mentioning; confidence: 99%)
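
The statement above reports top-1 and top-10 accuracy for an Inception-ResNet-v2 food classifier. As a reference point only, the short sketch below shows how top-k accuracy is typically computed from a model's per-class scores; the array shapes, the 251-class label space, and the function name are illustrative assumptions, not code from the cited study.

import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    # Indices of the k largest scores per row (order within the top k does not matter).
    top_k = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (top_k == labels[:, None]).any(axis=1)
    return float(hits.mean())

# Toy check: 4 samples over a 251-class label space (matching Vireo Food-251's size).
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 251))
true_labels = rng.integers(0, 251, size=4)
print(top_k_accuracy(scores, true_labels, k=1))
print(top_k_accuracy(scores, true_labels, k=10))

By construction, top-10 accuracy is always at least as high as top-1 accuracy, which is consistent with the 59.945% versus 37.259% figures quoted above.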
“…A smartphone application based on behavior change strategies was created by Kwon et al. 25 Users could enter up to 4 food categories for each diet they eat every day and see the real-time effect on their risk of developing heart disease. Kong et al. 26 developed a dietary monitoring smartphone application, MyDietCam, which could record dietary intake through food image recognition and provide nutrient analyses through visuals. Snap It™, a meal record application developed by FitNow based on a sizable food image library, allows users to take pictures of their food, identify it using image identification technology, and then estimate their diet size to determine the amount of nutrients in it.…”
Section: Introduction (mentioning; confidence: 99%)
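
The recognition-then-lookup flow described for MyDietCam and Snap It™ (classify a meal photo, then map the predicted label to nutrient values) can be sketched roughly as below. All names, the stub classifier, and the per-100 g figures are hypothetical placeholders, not data or APIs from the cited applications.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class NutrientProfile:
    """Per-100 g nutrient values (illustrative placeholders only)."""
    kcal: float
    protein_g: float
    carbs_g: float
    fat_g: float

# Hypothetical label-to-nutrient lookup standing in for a food composition database.
NUTRIENT_TABLE = {
    "fried_rice": NutrientProfile(kcal=163, protein_g=3.2, carbs_g=20.6, fat_g=7.4),
    "chicken_salad": NutrientProfile(kcal=120, protein_g=11.0, carbs_g=4.0, fat_g=6.5),
}

def analyse_meal(image_path: str, classify: Callable[[str], str]) -> Optional[NutrientProfile]:
    """Classify a meal photo and look up nutrients for the predicted label.

    `classify` is any callable mapping an image path to a food label, e.g. a
    wrapper around a fine-tuned CNN; it is injected to keep the sketch model-agnostic."""
    label = classify(image_path)
    return NUTRIENT_TABLE.get(label)

# Usage with a stub classifier in place of a real model:
print(analyse_meal("lunch.jpg", classify=lambda path: "fried_rice"))

The lookup returns None for unrecognised labels, which mirrors the failure mode discussed later: when recognition misfires, the downstream nutrient analysis has nothing to report.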
“…These methods have been used to quantify nutrients of concern, such as folate, iron (ferritin and transferrin), copper, retinol, and zinc, in children, adolescents, and older adult populations [ 62 , 63 ]; however, objective measures of nutrient status have yet to be explored in college student populations experiencing FI. Innovative, noninvasive measures of nutrient status may be used to assess nutrient adequacy, including sensor-based technologies to detect sound and movement associated with eating patterns [ 64 ], wearable image-based devices to capture dietary intake [ 65 ], and spectroscopy-based measurements to measure nutrients in the skin and tissue [ 66 ], among other emerging assessments [ 61 , 67 , 68 ].…”
Section: FI and Dietary Intake (mentioning; confidence: 99%)
“…However, manually entering dietary intake information poses significant usability challenges for mHealth app users [24], potentially resulting in inaccurate or incomplete reporting, and thus undermining the efficacy of managing healthy eating habits [16]. This raises an urgent need to minimize the operational loading of users [25,26] as the ease and effectiveness of food data entry methods has a direct and significant impact on the usability of dietary tracking applications [23].…”
Section: Challenges in Dietary Intake Input (mentioning; confidence: 99%)
“…In addition, errors could occur when participants were required to use voice input to complete the reporting task because the image recognition did not work properly for some dishes. Failed image recognition could also occur due to poor image quality in the uploaded file, inappropriate lighting, the technical algorithm, food recognition technologies [31,40], limited food datasets [25,26], and the contexts of use [41].…”
Section: Accuracy in AIR versus VIR (mentioning; confidence: 99%)