Aims/Introduction: The progression from prediabetes to type 2 diabetes can be prevented by lifestyle intervention and/or pharmacotherapy in a large fraction of individuals with prediabetes. Our objective was to develop a risk score to screen for prediabetes in the Middle East, where diabetes prevalence is among the highest in the world.

Materials and Methods: In this cross-sectional, case-control study, we used data from 4,895 controls and 2,373 prediabetic adults obtained from the Qatar Biobank cohort. Significant risk factors were identified by logistic regression and other machine learning methods. Receiver operating characteristic analysis was used to calculate the area under the curve, cutoff point, sensitivity, specificity, and positive and negative predictive values. The prediabetes risk score was developed from data on Qatari citizens as well as long-term (≥15 years) residents.

Results: The significant risk factors for the Prediabetes Risk Score in Qatar were age, sex, body mass index, waist circumference and blood pressure. The risk score ranges from 0 to 45. The area under the curve of the score was 80% (95% confidence interval 78-83%), and the cutoff point of 16 yielded sensitivity of 86.2% (95% confidence interval 82.7-89.2%) and specificity of 57.9% (95% confidence interval 65.5-71.4%). The Prediabetes Risk Score in Qatar performed equally well in Qatari nationals and long-term residents.

Conclusions: The Prediabetes Risk Score in Qatar is the first prediabetes screening score developed in a Middle Eastern population. It uses only risk factors measured non-invasively, is simple and cost-effective, and can be easily understood by the general public and health providers. The Prediabetes Risk Score in Qatar is an important tool for early detection of prediabetes, and can help greatly in curbing the diabetes epidemic in the region.
1. Introduction

Twitter is a microblogging social media platform that hosts a wide variety of content. Open access to Twitter data through the Twitter APIs has made it an important area of research. Twitter offers a useful feature called "Trends", which displays hot topics, or trending information, that differs by location. This trending information is derived from the tweets being shared on Twitter in a particular location. However, Twitter limits the trending information to current tweets, because the trend-detection algorithm is focused on generating trends in real time rather than summarizing hot topics on a daily basis. A clear summary of recent trending information is therefore missing and much needed. Latest Twitter Trends, the application discussed in this paper, is built to aggregate hot topics on Twitter for Arab countries and the world. It is a real-time application that summarizes hot topics over time. It enables users to study a summary of Twitter trends by location with the help of a word cloud. The tool also lets the user click on a particular trend, which navigates to Twitter Search, also in real time. The tool additionally overcomes a drawback of Twitter's trending information and trend algorithm: trends differ for different languages in different locations and are often mixed. For example, if #Eid-ul-Adha is trending in Arab countries, عيد الأضحى# is also trending. This application consolidates trends in Arabic and English that have the same meaning and displays only one trending topic, instead of the same topic twice in different languages. The application also estimates the composition of Twitter users in a location by analyzing the percentage of tweets made by male and female users.

2. Trends data gathering

Twitter APIs give developers access to real-time data comprising tweets and trends.
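The Arabic-English consolidation described above can be sketched as a dictionary lookup that maps each trend to a canonical form before deduplication. The mapping table below is a hypothetical example, not the application's actual dictionary:

```python
# Sketch of consolidating Arabic and English trends with the same meaning.
# AR_TO_EN is an illustrative assumption; the real tool maintains its own
# Arabic-English translation dictionary.
AR_TO_EN = {
    "عيد الأضحى": "Eid-ul-Adha",
}

def canonical(trend: str) -> str:
    """Map a trend (hash sign stripped, underscores normalized) to a canonical form."""
    name = trend.lstrip("#").replace("_", " ")
    return AR_TO_EN.get(name, name)

def consolidate(trends):
    """Keep one entry per canonical form, preserving first-seen order."""
    seen, merged = set(), []
    for t in trends:
        key = canonical(t)
        if key not in seen:
            seen.add(key)
            merged.append(key)
    return merged
```

With this sketch, `consolidate(["#Eid-ul-Adha", "#عيد_الأضحى"])` yields a single entry instead of two.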
The Twitter REST API is used by the tool, Latest Twitter Trends, to connect to Twitter and retrieve trending data. The API authenticates and establishes a connection with Twitter and returns the trending data in JSON format. Python scripts are used to gather the data from Twitter. A data-crawling script connects to the Twitter API by authenticating with the credentials generated when an application is created at apps.twitter.com. The Consumer Key, Consumer Secret, Access Token and Access Token Secret are the credentials Twitter uses to perform authentication. The data returned by Twitter is in JSON (JavaScript Object Notation) format, and the Python data-crawling script parses the JSON files and builds a CSV database. This high-level gathering of data comprises the following steps: the Python data-crawling script connects and authenticates with the Twitter API and retrieves the trending-places data in JSON format; this data is stored in the tool's database as a CSV file. The gathered data covers all trending locations/places with their WOEID (Where On Earth ID). The WOEID is then used as a key to get Twitter trending topics location by location, in real time, using the Twitter REST API. The trends for every location are also returned in JSON format, which is again converted to CSV and saved in the tool's database. This trends CSV file is appended each time new trending data is collected from Twitter. Another CSV file in the database holds only the current information for all trending places, for later use. Natural language processing, backed by a translation dictionary, is applied to the trends-by-location CSV data to consolidate Arabic and English trending topics into one. The results are stored in a CSV file and used for hot topic identification.
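The JSON-to-CSV step above can be sketched as follows. The field names (`locations`, `trends`, `name`, `woeid`) follow the general shape of Twitter's `trends/place` response; the exact payload handling and CSV layout in the tool are assumptions:

```python
import csv
import json
from datetime import datetime, timezone

def trends_json_to_rows(payload: str):
    """Flatten a trends/place JSON payload into (location, woeid, trend) rows.

    The payload shape mirrors Twitter's documented trends/place response;
    the field names are assumptions based on that format.
    """
    rows = []
    for block in json.loads(payload):
        for loc in block.get("locations", []):
            for trend in block.get("trends", []):
                rows.append((loc["name"], loc["woeid"], trend["name"]))
    return rows

def append_rows_to_csv(rows, path="Trends.csv"):
    """Append flattened trend rows, timestamped, to the tool's CSV database."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for row in rows:
            writer.writerow([datetime.now(timezone.utc).isoformat(), *row])
```

Appending (rather than overwriting) the trends CSV is what lets the later frequency step estimate how long a topic has been trending.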
3. Hot topic identification

After the high-level data gathering, the CSV files are used as a database for generating a word cloud with D3.js. The trending data is processed by counting occurrences, giving an estimate of how long each topic has been trending. The frequency is taken as the count value for each trending topic, and a word cloud is generated from it. The frequency calculation is implemented as a Python script written specifically for word-cloud data crawling. This script takes the trends-by-location data as input and generates a large database of trends by city as JSON files, with the trend topic as the key and the frequency of its occurrence as the value.

4. Architecture

Figure 1: Latest Trends in Twitter application architecture.

The Python scripts for data crawling and word-cloud crawling are used to connect with Twitter, gather data, and process and store it in a database. D3.js and the Google Fusion Tables API are used for displaying the application's results. The Google Fusion Tables API is used to create a map containing current trends by location, geo-tagged on the map. A dedicated Java program connects to and authenticates with the Google API, deletes the old Fusion Table data, and imports the new, updated rows into the Google Fusion Table, which visualizes the trending information from Twitter. The Python script Tagcloud.py is used to generate cities.json with trending topics from the Trends.csv file. These files form the database for generating the word cloud with D3.js, individually for every city/location.

5. Results

The data-crawling script establishes a connection with Twitter and returns JSON as shown in Fig. 2.
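The occurrence counting behind the word cloud can be sketched as below; the output format ({trend: frequency} JSON consumed by D3.js) follows the description above, while the function and file names are illustrative:

```python
import json
from collections import Counter

def trend_frequencies(trend_names):
    """Count how often each topic appears across crawled snapshots.

    A topic that stays trending is captured in repeated crawls, so its
    count approximates how long it has been trending.
    """
    return Counter(trend_names)

def write_wordcloud_json(trend_names, path):
    """Write {trend: frequency} pairs, the key-value format used for the word cloud."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(dict(trend_frequencies(trend_names)), f, ensure_ascii=False)
```

The frequency then drives the font size of each word when D3.js renders the cloud.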
This data is processed and saved as a CSV into the application database for later use.

Figure 2: Trends data output from Twitter in JSON format.

The word-cloud crawling script generates key-value pairs from the processed trending data in the database, with the trending topic as the key and the frequency of its occurrence as the value. Fig. 3 displays the JSON dataset used for generating the word cloud.

Figure 3: JSON data of the processed trending data.

The word cloud is generated using the D3.js library and displays the summarized trending data to the user. Figure 4 shows the word-cloud result for London.

Figure 4: Word cloud for trending data.
INTRODUCTION

Obesity is one of the major health risk factors behind the rise of non-communicable conditions. Understanding the factors influencing obesity is complex, since many variables can affect the health behaviors leading to it. Nowadays, multiple data sources can be used to study health behaviors, such as wearable sensors for physical activity and sleep, social media, and mobile and health data. In this poster we describe a system that uses an off-the-shelf messaging app coupled with a recommender system to provide tailored health recommendations in Arabic and/or English to mothers of overweight children in Qatar. This work is part of the ICAN project, funded by the Qatar National Research Fund (a member of Qatar Foundation) under project number NPRP X-036-3-013 (Adapted Cognitive Behavioral Approach to Addressing Overweight and Obesity among Qatari Youth). Childhood obesity is a growing epidemic, and with technological advancements, new tools can be used to monitor and analyze the lifestyle factors leading to obesity, which in turn can help with timely health behavior modifications. In this paper we describe a Telegram bot, coupled with a recommender system, that sends educational messages containing health recommendations. The bot automatically sends messages that users can answer; the answers can be ratings on a 1-to-5-star scale or plain text, depending on the message. In a trial held in Qatar, an educational intervention for mothers using Telegram was carried out over a twelve-week period that overlapped with the holy month of Ramadan and the school summer break. The goal was to keep the mothers motivated to actively work towards keeping their children healthy, and our nutritional advice took the religious month of Ramadan into account.
METHODOLOGY

We defined a pool of motivational messages for the mothers associated with different topics and challenges regarding the nutrition and physical activity of their children. The messages were created in English and Arabic. A total of 24 keywords were defined; these keywords link each message to the features that define it (e.g., whether it deals with vegetables, fats, religious quotes regarding healthy eating, healthy recipes, etc.). A special set of 9 messages was specifically designed to be sent when users start the intervention; the answers to these messages define the initial user profile. Once an initial user profile is known, the user starts receiving one tailored message per week, up to a maximum of 77 messages. Users can rate the messages based on their perceived usefulness. This feedback is stored in the user profile (a user-keyword vector), so that subsequent messages contain keywords (from the message-keyword vector) that were included in previously rated messages. A total of 38 mothers, with children between 9 and 12 years old, joined the program to receive tailored messages across the entire summer based on their personal preferences. After each week, the system asked them whether they had completed the challenge, and their opinion about its difficulty and usefulness.

RESULTS

During the 4 months of the intervention, about 500 messages were sent to the participants. The messages included a total of 94 challenges that the participants could promise to accomplish. Of these, 11 challenges were fully completed, 42 were almost completed, and 7 were given up by the mothers after they initially promised to do them. The data also show that, of the 59 challenges rated for difficulty, only 17 were found easy to do, while 39 were found 'just fine' to do; 47 of the 59 were found useful by the mothers.
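The keyword-based tailoring described above can be sketched as a best-match between the user-keyword vector and each candidate message-keyword vector, with ratings feeding back into the profile. The vector encoding, scoring function, and update rule below are illustrative assumptions, not the project's actual implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length keyword vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def pick_next_message(user_vector, candidates):
    """Pick the candidate message whose keyword vector best matches the profile.

    `candidates` maps message id -> message-keyword vector (0/1 per keyword).
    """
    return max(candidates, key=lambda mid: cosine(user_vector, candidates[mid]))

def update_profile(user_vector, message_vector, rating, max_rating=5):
    """Nudge the user-keyword vector toward keywords of well-rated messages."""
    weight = rating / max_rating
    return [u + weight * m for u, m in zip(user_vector, message_vector)]
```

In this sketch, each weekly cycle is: pick the best-matching unsent message, send it, then fold the mother's star rating back into her profile before the next pick.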
CONCLUSIONS

In this study we tested the feasibility of a recommender system that tailors health messages delivered through an off-the-shelf messaging app such as Telegram, and the results suggest the system managed to raise some motivation among the mothers to provide a healthy diet to their children. However, further work is needed to assess how engagement can be increased so that more of the proposed challenges are accepted and completed, and to compare with groups of mothers who do not have the app in a formal randomized controlled trial.