This work presents a study assessing the technology acceptance of a contact tracing app, also proposed by us, which is a hybrid (opportunistic and participatory) crowdsensing application. The goal of the app is to notify users if they have been in contact with infected people. It also allows creating a heat map identifying streets, squares, and commercial locations that infected users visited, enabling more targeted hygiene actions and helping to contain infectious disease outbreaks. Our methodology aimed at finding out whether people would be willing to share their location, as well as their COVID-19-related health status. It comprises a survey to verify interest in the proposed application; a prototype of the application; and the use of the Technology Acceptance Model (TAM). The vast majority of respondents to the first survey were interested in using a contact tracing application, even though they would need to share their location and report when they became infected. In addition, the proposed RISCOVID application proved to be accepted for use by participants in the second survey.
The quality of the hypotheses induced by current machine learning algorithms depends mainly on the quantity and quality of the features and examples used in the training phase. Frequently, hypotheses with low precision are obtained in experiments using large databases with a large number of irrelevant features. Thus, one active research area in machine learning is the investigation of techniques that extend the capacity of machine learning algorithms to process large numbers of examples, features, and classes. To learn concepts from large databases using machine learning algorithms, two approaches can be used. The first is based on the selection of relevant features and examples; the second is the ensemble approach. An ensemble is a set of classifiers whose individual decisions are combined in some way to classify a new case. Although ensembles classify new examples better than each individual classifier, they behave like black boxes, offering the user no explanation for their classifications. The purpose of this work is to consider a form of symbolic classifier combination suitable for large databases. Given a large database, it is randomly divided into small databases of equal size. These small databases are supplied to one or more symbolic machine learning algorithms. After that, the rules from the resulting classifiers are combined into one classifier. To analyse the viability of this proposal, a system called Rule System was implemented in the logic programming language Prolog. This system has two purposes: the first, implemented by the Rule Analysis Module, is to evaluate rules induced by symbolic machine learning algorithms; the second, implemented by the Combination and Explanation Module, is to evaluate several forms of combining symbolic classifiers as well as to explain the ensemble's classification of new examples. Together, these two modules constitute the Rule System.
This work describes the ensemble construction and classifier combination methods found in the literature; the design and documentation of the Rule System; the methodology developed to document the Rule System; and the implementation of the Combination and Explanation Module. Two case studies using the Combination and Explanation Module are described. The first uses an artificial database, through which it was possible to improve several of the heuristics used by the Combination and Explanation Module. A real database was used in the second case study.
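The partition-train-combine idea described above can be sketched in a few lines. The snippet below is a minimal, self-contained illustration: a toy "one-rule" learner (a single feature/threshold rule) stands in for a symbolic machine learning algorithm, the data are split into five equal random parts, and the induced rules are combined by majority vote. The dataset, learner, and part count are illustrative assumptions, not the authors' Rule System.

```python
# Illustrative sketch: split a database into equal random parts, induce
# one symbolic rule per part, and combine the rules by majority vote.
# The one-rule learner and synthetic data are assumptions for this demo.
import random

def train_one_rule(examples):
    # Find the single (feature, threshold) rule with best training accuracy.
    best = None
    n_features = len(examples[0][0])
    for f in range(n_features):
        for x, _ in examples:
            t = x[f]
            acc = sum((xi[f] > t) == yi for xi, yi in examples) / len(examples)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return lambda x: x[f] > t  # symbolic rule: "feature f > threshold t"

def majority_vote(rules, x):
    # Combine the individual rules' decisions by simple majority.
    votes = sum(r(x) for r in rules)
    return votes * 2 > len(rules)

random.seed(0)
xs = [[random.random() for _ in range(4)] for _ in range(300)]
data = [(x, x[2] > 0.5) for x in xs]     # true concept: feature 2 > 0.5

random.shuffle(data)
parts = [data[i::5] for i in range(5)]   # five equal random parts
rules = [train_one_rule(p) for p in parts]
acc = sum(majority_vote(rules, x) == y for x, y in data) / len(data)
```

A real system would of course induce richer rule sets (and, as in the Combination and Explanation Module, use the matched rules to explain each classification), but the division and combination steps follow this shape.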
Distributed edge intelligence is a disruptive research area that enables the execution of machine learning and deep learning (ML/DL) algorithms close to where data are generated. Since edge devices are more limited and heterogeneous than typical cloud devices, many hindrances have to be overcome to fully extract the potential benefits of such an approach (such as data-in-motion analytics). In this paper, we investigate the challenges of running ML/DL on edge devices in a distributed way, paying special attention to how techniques are adapted or designed to execute on these restricted devices. The techniques under discussion pervade the processes of caching, training, inference, and offloading on edge devices. We also explore the benefits and drawbacks of these strategies.
In many real-world prediction problems, a classifier must, or should, assign more than one label to an instance, e.g., prediction of machine failures or musical genre classification. For this kind of problem, multi-label classification methods are needed. One approach frequently used to learn multi-label predictors divides the problem into one or more multi-class classification problems and combines the models constructed for each sub-problem to classify new instances with multiple labels. Although many multi-label learning methods exist, there is a need to explore methods that can improve predictive power. In this work, we propose and evaluate a new method, called RB (Random-Bagging), based on dataset transformation and the combination of classifiers. Six real-world datasets were used to evaluate our method, which was compared with three existing methods. The results were considered promising.
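The problem-transformation strategy mentioned above can be illustrated with the simplest such scheme, binary relevance: one binary classification problem per label, one model per problem, and the per-label predictions combined into a label set. The nearest-centroid base learner and toy two-label dataset below are assumptions for the sketch; this is a generic illustration of dataset transformation plus classifier combination, not the RB (Random-Bagging) method itself.

```python
# Illustrative binary-relevance sketch: transform a multi-label problem
# into one binary problem per label, train a simple model on each, and
# combine the predictions into a label set. Data and base learner are
# toy assumptions, not the RB method from the paper.
import random

def centroid(rows):
    n = len(rows[0])
    return [sum(r[i] for r in rows) / len(rows) for i in range(n)]

def train_binary(X, y):
    # Nearest-centroid stand-in for the per-label base classifier.
    pos = centroid([x for x, yi in zip(X, y) if yi])
    neg = centroid([x for x, yi in zip(X, y) if not yi])
    def predict(x):
        d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return d(pos) < d(neg)
    return predict

def binary_relevance(X, Y):
    # One binary classifier per label column of Y.
    return [train_binary(X, [ys[j] for ys in Y]) for j in range(len(Y[0]))]

def predict_labels(models, x):
    # Combine the per-label models into a multi-label prediction.
    return [m(x) for m in models]

random.seed(1)
X = [[random.random() for _ in range(3)] for _ in range(400)]
Y = [[x[0] > 0.5, x[1] > 0.5] for x in X]  # two toy labels per instance

models = binary_relevance(X, Y)
pred = [predict_labels(models, x) for x in X]
exact = sum(p == y for p, y in zip(pred, Y)) / len(X)  # exact-match rate
```

Binary relevance ignores correlations between labels; methods such as RB aim to do better precisely by transforming the dataset and combining classifiers in less naive ways.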