Artificial intelligence is a broad field comprising a wide range of techniques, of which deep learning currently has the greatest impact. The medical field, where data are both complex and massive and where the decisions made by doctors are of critical importance, is one of the areas in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team of physicians, research methodologists, and computer scientists. This survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The databases included were MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science, and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings, and a set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies from the initial 3493 papers were selected, and 64 were described. Results show that the number of publications on deep learning in medicine is increasing every year, that convolutional neural networks are the most widely used models, and that the most developed area is oncology, where they are applied mainly to image analysis.
The publication of large amounts of open data is an increasing trend, driven by initiatives such as Linked Open Data (LOD), which aims at publishing and linking data sets on the World Wide Web. Linked Data publishers should follow a set of principles when doing so; these principles, described in a 2011 document, identify the reuse of vocabularies as a key consideration. The Linked Open Vocabularies (LOV) project attempts to collect the vocabularies and ontologies commonly used in LOD. These ontologies have been classified by domain according to the criteria of LOV members, with the disadvantage of introducing personal biases. This article presents an automatic classifier of ontologies based on the main categories appearing in Wikipedia. For that purpose, word-embedding models are used in combination with deep learning techniques. Results show that a hybrid model combining regular Deep Neural Networks (DNNs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs) achieves a classification accuracy of 93.57%. A further evaluation of the domain matchings between LOV and the classifier finds possible matches in 79.8% of the cases.
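The hybrid idea above, combining a dense branch, a convolutional branch, and a recurrent branch over word embeddings and fusing them before the classifier, can be illustrated with a minimal numpy sketch. All dimensions, weights, and branch details here are hypothetical; the paper does not publish this exact architecture, and a real system would train the weights rather than sample them randomly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- illustrative only, not the paper's dimensions.
VOCAB, EMB, SEQ, HIDDEN, CLASSES = 50, 8, 6, 4, 3

emb_table = rng.normal(size=(VOCAB, EMB))        # word-embedding lookup table
W_dense = rng.normal(size=(EMB, HIDDEN))         # DNN branch weights
W_conv = rng.normal(size=(HIDDEN, 3, EMB))       # CNN branch: width-3 filters
W_xh = rng.normal(size=(EMB, HIDDEN))            # RNN input weights
W_hh = rng.normal(size=(HIDDEN, HIDDEN))         # RNN recurrent weights
W_out = rng.normal(size=(3 * HIDDEN, CLASSES))   # classifier over fused features

def classify(token_ids):
    x = emb_table[token_ids]                     # (SEQ, EMB) embedded text

    dense = np.tanh(x.mean(axis=0) @ W_dense)    # DNN branch: mean embedding

    # CNN branch: width-3 convolution over time, then max pooling
    windows = np.stack([x[i:i + 3] for i in range(len(token_ids) - 2)])
    cnn = np.tanh(np.einsum("twe,hwe->th", windows, W_conv)).max(axis=0)

    # RNN branch: simple recurrence, keep the last hidden state
    h = np.zeros(HIDDEN)
    for x_t in x:
        h = np.tanh(x_t @ W_xh + h @ W_hh)

    fused = np.concatenate([dense, cnn, h])      # fuse the three branches
    logits = fused @ W_out
    p = np.exp(logits - logits.max())
    return p / p.sum()                           # class probabilities
```

The fusion step is the essence of the hybrid approach: each branch summarizes the embedded text differently (order-free average, local n-gram patterns, sequential state), and the final layer classifies over their concatenation.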
In recent years, Graphics Processing Units have been evolving fast. This has had a big impact on several fields, such as Computer-Aided Design and particularly 3D modeling, allowing the development of software for the creation of more detailed models. Nevertheless, building a 3D model is still a cumbersome and time-consuming task. Another field that is evolving successfully thanks to this increase in computational capacity is Artificial Intelligence. These techniques are characterized, among other things, by their ability to automate tasks performed by humans; for example, reconstructing parts of images has recently become a hot topic. In this paper, a method based on Artificial Intelligence, and in particular on Deep Learning techniques, is proposed to achieve this task. The aim is to automatically restore Greek temples from renders of their ruins obtained from 3D model representations. Results show that adding segmented images to the training dataset yields better results. The overall structure of the temples is restored well, but the detailed elements leave room for improvement.
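A minimal sketch of the data side of such a restoration task, under assumptions not stated in the abstract: an intact render serves as the target, a copy with a blanked-out region plays the role of the ruin, and the segmented image is stacked as an extra input channel. The mask shape and the channel-stacking strategy are illustrative, not the authors' method.

```python
import numpy as np

def make_training_pair(render, top, left, size):
    """Build an (input, target) pair for inpainting-style restoration:
    the intact render is the target and a copy with a square region
    blanked out stands in for the 'ruined' input."""
    target = render.astype(np.float32) / 255.0   # normalise to [0, 1]
    damaged = target.copy()
    damaged[top:top + size, left:left + size] = 0.0
    return damaged, target

def add_segmentation_channel(image, segmentation):
    # The abstract reports better results when segmented images are added;
    # one straightforward option is stacking the segmentation map as an
    # extra input channel alongside the RGB render.
    return np.concatenate([image, segmentation[..., None]], axis=-1)
```

A network trained on such pairs learns to predict the target from the damaged input, which is the general setup behind image-completion approaches.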
The European Union launched the RASFF portal in 1977 to ensure cross-border monitoring and a quick reaction when public health risks are detected in the food chain. There are not enough resources available to guarantee a comprehensive inspection policy, but RASFF data has enormous potential as a preventive tool. However, there are few studies on predicting food and feed risk issues, and none using RASFF data. Although deep learning models are good prediction systems, it remains to be confirmed whether they outperform other machine learning techniques in this field. The encoding of categorical variables as input for numerical models also deserves special study. Results in this paper show that deep learning with entity embedding is the best combination, with accuracies of 86.81%, 82.31%, and 88.94% in each of the three stages of the simplified RASFF process in which the tests were carried out. However, the random forest models with one-hot encoding offer only slightly worse results, so it seems that the encoding has more weight in the quality of the results than the prediction technique does. Our work also demonstrates that probabilistic predictions (an advantage of neural models) can be used to optimize the number of inspections that can be carried out.
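The two encodings compared above can be sketched side by side. This is a generic illustration with made-up category values, not the RASFF variables: one-hot encoding yields a sparse indicator vector per category, while an entity embedding maps each category to a small dense vector that would be learned jointly with the network (here it is only randomly initialised).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative category values -- not the actual RASFF categories.
categories = ["fruit", "meat", "dairy", "fish"]
index = {c: i for i, c in enumerate(categories)}

def one_hot(cat):
    """One-hot encoding: a sparse vector with a single 1."""
    v = np.zeros(len(categories))
    v[index[cat]] = 1.0
    return v

# Entity embedding: a dense lookup table, trained with the rest of the model.
EMB_DIM = 2                      # hypothetical embedding width
emb_table = rng.normal(size=(len(categories), EMB_DIM))

def embed(cat):
    return emb_table[index[cat]]
```

Because the embedding table is trained, related categories can end up close together in the embedding space, which is one reason dense encodings can help numerical models; one-hot vectors carry no such similarity structure.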