Microwave detection of breast tumours is a non-ionising, potentially lower-cost and more reliable alternative to X-ray mammography. Analogous to ground-penetrating radar (GPR), an antenna array transmits microwaves and records the reflected signals, which contain reflections from any tumours present. The work presented here employs a post-reception synthetically focussed detection method developed for landmine detection (R. Benjamin et al., IEE Proc. Radar, Sonar and Nav., vol. 148, no. 4, pp. 233-240, 2001): each element of the antenna array transmits a broadband signal in turn, and the elements sharing a field of view with the current transmit element record the received signal. By predicting the path delay between the transmit and receive antennas via any desired point in the breast, it is possible to extract and time-align all signals originating from that point. Repeated for every point in the breast, this yields an image in which the distinct dielectric properties of malignant tissue are potentially visible. This contribution presents a theoretical evaluation of the breast imaging system using finite-difference time-domain (FDTD) methods. The FDTD model realistically represents a practical system incorporating wideband antenna elements. One major challenge in breast cancer detection using microwaves is the clutter arising from the skin interface. Deeply located tumours can be detected using windowing techniques (R. Nilavalan et al., Electronics Letters, vol. 39, pp. 1787-1789, 2003); however, tumours closer to the skin interface require additional consideration, as described herein.
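The extract-and-time-align step described above is essentially delay-and-sum focusing. The following is a minimal sketch, not the authors' implementation: it assumes a uniform propagation speed in breast tissue (here a relative permittivity of about 9), impulse-like recorded waveforms, and hypothetical variable names; a real system would also account for the skin layer and dispersion.

```python
import numpy as np

# Assumed propagation speed in breast tissue (relative permittivity ~9).
C_TISSUE = 3e8 / np.sqrt(9.0)

def delay_and_sum(signals, antennas, points, fs, c=C_TISSUE):
    """Post-reception synthetic focusing (delay-and-sum) sketch.

    signals:  dict mapping (tx, rx) antenna-index pairs to 1-D recorded waveforms
    antennas: (N, 3) array of antenna positions in metres
    points:   (P, 3) array of candidate focal points inside the breast
    fs:       sampling rate in Hz
    Returns one intensity value per focal point.
    """
    image = np.zeros(len(points))
    for p, r in enumerate(points):
        total = 0.0
        for (tx, rx), s in signals.items():
            # Two-way path: transmit antenna -> focal point -> receive antenna.
            d = np.linalg.norm(antennas[tx] - r) + np.linalg.norm(antennas[rx] - r)
            n = int(round(d / c * fs))  # predicted path delay in samples
            if n < len(s):
                total += s[n]           # extract the time-aligned sample
        image[p] = total ** 2           # coherent sum -> intensity
    return image
```

A strong scatterer produces samples that align coherently only at its true location, so the summed intensity peaks there while contributions from other candidate points add incoherently.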
Capturing meal images with mobile phone cameras is a promising alternative to traditional dietary assessment methods. Acquiring photos is reasonably simple, but analysing the nutritional content of the images is a major challenge. Automated food identification and portion-size assessment is computationally intensive and demanding of participants, relying on participant feedback for accuracy (1). Dietitian analysis of photos is accurate but time-consuming and expensive (2). Crowdsourcing could offer a rapid, low-cost alternative by utilising the life-long experience that all humans have in identifying food. Previous crowdsourcing methods include the Eatery app, which produces a simple 11-point 'healthiness' scale for each meal (3), and the PlateMate system, which creates a list of all individual foods with portion sizes, energy and macronutrient content (4). While the Eatery produces limited and subjective data on meal content, PlateMate is a complex integrated system of multiple tasks requiring on average 25 workers, costing £2·75 and taking 90 min per image. For feasible data capture in large-scale longitudinal studies, crowdsourcing data from meal photos needs to be cheaper and quicker. We aimed to develop a simpler task and to test its feasibility for crowdsourcing dietary data. We developed FoodFinder, a single task for identifying food groups and portion sizes, using Qualtrics (www.qualtrics.com/), and linked it to the Prolific Academic (https://prolific.ac/) crowdsourcing platform for recruitment and reimbursement of a UK crowd. Thirty meal photos with measured total meal weight (grams) were analysed by a dietitian and by crowds ranging in size from 5 to 50 people. Estimates of total meal weight from different-sized crowds and from the dietitian were compared with the actual meal weight (the gold standard). To establish group consensus, crowd estimates were weighted by majority agreement (5).
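The exact weighting scheme is described in reference (5); one simple interpretation of majority-agreement weighting, sketched here purely for illustration, is to keep the portion estimates only of workers whose food-group identification matches the majority vote. The function name and response format are hypothetical, not the study's protocol.

```python
from collections import Counter
from statistics import mean

def consensus_meal_weight(responses):
    """Illustrative majority-agreement consensus (one possible interpretation).

    responses: list of (food_group, grams) tuples, one per crowd worker.
    Workers whose food-group label matches the majority vote contribute
    to the consensus weight estimate; disagreeing workers are discarded.
    """
    labels = [label for label, _ in responses]
    majority, _ = Counter(labels).most_common(1)[0]  # most frequent label
    agreeing = [g for label, g in responses if label == majority]
    return majority, mean(agreeing)
```

For example, given five responses where four workers identify the meal as pasta and one as rice, the rice worker's portion estimate would be excluded from the consensus.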
Bland-Altman analysis assessed agreement with actual meal weight. A crowd of 5 people underestimated true meal weight by 63 g, equating to 15 % of actual meal weight, with limits of agreement (LOA) from −299 to 174 g. In comparison, the dietitian overestimated by 28 g, equating to 9 % of actual meal weight, with LOA of −158 to 214 g. With a crowd of 5 people, crowdsourcing cost £3·35 and took a mean of 2 min 55 sec (SD 2 min 6 sec) per image. A crowd of 50 had similar accuracy and limits of agreement (underestimate of 65 g; LOA −278 to 149 g) but was more expensive. Further development of FoodFinder is required to make rapid, low-cost analysis of meal photos via crowdsourcing a feasible method for assessing diet.
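The bias and limits of agreement reported above follow the standard Bland-Altman construction: the bias is the mean of the paired differences, and the limits of agreement are the bias plus or minus 1.96 standard deviations of those differences. A minimal sketch (illustrative data only, not the study's measurements):

```python
import numpy as np

def bland_altman(estimated, actual):
    """Bland-Altman agreement statistics for paired measurements.

    Returns (bias, lower_loa, upper_loa), where bias is the mean
    difference (estimated - actual) and the limits of agreement are
    bias +/- 1.96 sample standard deviations of the differences.
    """
    diffs = np.asarray(estimated, float) - np.asarray(actual, float)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A wide interval between the limits of agreement, as seen for both the crowd (−299 to 174 g) and the dietitian (−158 to 214 g), indicates large per-meal variability even when the average bias is modest.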
Pregnancy is considered an underutilised opportunity for promoting lifestyle changes that have health benefits for a woman and her developing child. Changes to maternal diet during pregnancy have, to date, been assessed primarily by food frequency questionnaires, which may be inaccurate. We have tested the feasibility of using a validated method of dietary data capture, via a smartphone application (app), in young (aged 23-25), UK-based, pregnant women. The Remote Food Photography Method (RFPM) collects dietary data using the SmartIntake app (1,2). This method has previously been validated against doubly labelled water in 50 free-living US adults and underestimated energy intake by only 3·7 % (1). Our study fieldworkers were trained in use of the app and practised capturing dietary data themselves for at least 2 full days before training participants. Women were asked to record 6 full days of dietary intake by taking a photo before and after each eating/drinking occasion, and by providing a brief text description of items they thought might be hard to identify from the image. The photos were sent, via e-mail, to a server for analysis by experts. Automatic reminder e-mails, at times of the participant's choosing, were generated and sent to the smartphone. The quality of the photos was monitored in real time, with feedback to participants, for the first day. Data collection began in October 2015 and is ongoing. We present usability findings from the first 6 months. Seventy-three participants were invited to use SmartIntake, of whom 25 (34 %) agreed, with one participant withdrawing consent before capturing any photos. Fieldworkers estimated that training took around 20 minutes. Most participants found installation and set-up (95 %), taking photos of meals (70 %) and receiving reminders (81 %) easy or very easy. Twenty-one (84 %) provided enough dietary data for analysis.
The median number of days on which participants used the app was 6 (IQR 4-6): 14 (58 %) women recorded at least one eating/drinking occasion on 6 days, although 4 (17 %) used the app on 3 days or fewer. The women captured a total of 484 eating/drinking occasions; 66 % of these included a text description. The median number of photos taken per person was 42 (IQR 19-61), with a median of 7 (IQR 5-11) photos per day. The maximum number of photos taken by a single participant was 94. 75 % of eating/drinking occasions were captured in 2 photos, with only 41 (8 %) occasions taking 4 or more photos. The maximum number of photos for one meal was 12 (starter, main course, pudding, drinks). Fieldworkers reported that the main reasons for refusing to use the app were that the women were unable to use a phone at work or were too busy. Of the participants who provided fewer than 4 days of data, three reported that it was too difficult due to family commitments and one gave no reason. Nine other women provided comments on use; all found data capture inconvenient, but 10 women (out of 18 who answered) indicated that they would use the method again. These results show that the phone a...