Aim: To analyze the relative frequency of different types of odontogenic tumours based on the WHO 2005 histopathological classification of odontogenic tumours, and to compare the data with the published literature. Methods: Records from seven hospitals in the same region of the city (south Chennai) were systematically searched for all cases of odontogenic tumours operated on between 2005 and 2010. The histopathology slides of the tumours were reanalyzed for cross-verification, and the data were checked for duplicates and recurrent cases. Age, gender, and site prevalence were also studied. Results: Of the 107 cases collected with complete records, 60 (56%) were odontomas. The second most common tumour was ameloblastoma (14%), followed by keratocystic odontogenic tumour (13%); the remaining tumours accounted for 17% of the total. Conclusions: A comprehensive tumour database should be established so that cross-referral of cases becomes easier and patients, surgeons, and pathologists can safeguard information about the tumour for future reference. Many private hospitals lack the facilities to store and catalogue histopathological evidence for prolonged periods of time.
Debris detection and classification is an essential function for autonomous floor-cleaning robots: it enables them to identify and avoid hard-to-clean debris, specifically large liquid spillages. This paper proposes a debris detection and classification scheme for an autonomous floor-cleaning robot using a cascade of a deep Convolutional Neural Network (CNN) and a Support Vector Machine (SVM). The SSD (Single-Shot MultiBox Detector) MobileNet CNN architecture classifies solid and liquid spill debris on the floor from the captured image. The SVM model then performs binary classification of the liquid spillage regions based on size, which helps floor-cleaning devices identify the larger liquid spillage regions, considered hard-to-clean debris in this work. The experimental results show that the proposed technique can efficiently detect and classify debris on the floor, achieving 95.5% classification accuracy. The cascaded approach takes approximately 71 milliseconds for the entire detection and classification process, which implies that the proposed technique is suitable for deployment in real-time selective floor-cleaning applications.
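The cascade described above can be sketched in heavily simplified form: a detector (mocked here) returns labelled boxes, and an SVM classifies liquid-spill regions as large or small from size features. All variable names, features, and thresholds below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a CNN-detector -> SVM cascade for spill-size classification.
# The detector output is mocked; in the paper, SSD MobileNet produces it.
from sklearn.svm import SVC
import numpy as np

# Assumed detector output: (label, x, y, w, h) per detected debris region.
detections = [
    ("solid", 10, 10, 20, 20),
    ("liquid", 50, 40, 120, 90),   # a large spill region
    ("liquid", 200, 60, 15, 12),   # a small spill region
]

# Toy SVM trained on (area, aspect-ratio) features; in practice this would
# be fit on annotated spill regions.
X_train = np.array([[100, 1.0], [300, 1.2], [5000, 1.5], [12000, 1.3]])
y_train = np.array([0, 0, 1, 1])  # 0 = small spill, 1 = large spill
svm = SVC(kernel="linear").fit(X_train, y_train)

def classify_spills(dets):
    """Run the second cascade stage on liquid detections only."""
    results = []
    for label, x, y, w, h in dets:
        if label != "liquid":
            continue  # solid debris is handled by normal cleaning
        feats = np.array([[w * h, w / h]])
        size_class = "large" if svm.predict(feats)[0] == 1 else "small"
        results.append(((x, y, w, h), size_class))
    return results

print(classify_spills(detections))
```

The key design point is that the SVM only sees regions the detector already labelled as liquid, so the expensive size decision runs on a small subset of detections.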
Insect detection and control at an early stage are essential in the built environment (human-made physical spaces such as homes, hotels, camps, hospitals, parks, pavements, and food industries) and in agricultural fields. Currently, such insect-control measures are manual, tedious, unsafe, and time-consuming, labor-dependent tasks. With recent advancements in Artificial Intelligence (AI) and the Internet of Things (IoT), several maintenance tasks can be automated, which significantly improves productivity and safety. This work proposes a real-time remote insect-trap monitoring system and insect detection method using IoT and Deep Learning (DL) frameworks. The remote trap-monitoring framework is constructed using IoT and the Faster RCNN (Region-based Convolutional Neural Network) ResNet50 (Residual Network 50) unified object-detection framework. The Faster RCNN ResNet50 detector was trained with images of built-environment insects and farm-field insects and deployed in the IoT system. The proposed system was tested in real time using a four-layer IoT architecture, with built-environment insect images captured through sticky trap sheets; farm-field insects were further tested through a separate insect image database. The experimental results proved that the proposed system could automatically identify built-environment and farm-field insects with an average accuracy of 94%.
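The monitoring loop described above (detect insects per trap image, then report remotely) might be sketched as follows. The detector output is mocked, and the confidence threshold, alert count, and report format are all illustrative assumptions rather than details from the paper.

```python
# Hedged sketch of per-trap aggregation of (mocked) Faster R-CNN detections.
from collections import Counter

CONF_THRESHOLD = 0.5   # assumed: discard low-confidence detections
ALERT_COUNT = 3        # assumed: insects per trap that trigger an alert

# Assumed detector output per trap image: list of (class_name, confidence).
trap_detections = {
    "trap-01": [("cockroach", 0.91), ("ant", 0.42), ("cockroach", 0.77)],
    "trap-02": [("fly", 0.88), ("fly", 0.81), ("fly", 0.64), ("ant", 0.71)],
}

def summarize(traps):
    """Build a remote-monitoring report: counts per species and an alert flag."""
    report = {}
    for trap_id, dets in traps.items():
        kept = [name for name, conf in dets if conf >= CONF_THRESHOLD]
        counts = Counter(kept)
        report[trap_id] = {
            "counts": dict(counts),
            "alert": sum(counts.values()) >= ALERT_COUNT,
        }
    return report

print(summarize(trap_detections))
```

In a deployed system this summary would be what the IoT layer transmits, keeping bandwidth low by sending counts rather than full images.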
This work presents a table cleaning and inspection method using a Human Support Robot (HSR) that can operate in a typical food court setting. The HSR performs a cleanliness inspection and also cleans food litter on the table by combining a deep learning technique with a planner framework. A lightweight Deep Convolutional Neural Network (DCNN) is proposed to recognize food litter on top of the table. In addition, a planner framework is proposed for the HSR to accomplish the table cleaning task: it generates a cleaning path according to the detected food litter, and the cleaning action is then carried out. The effectiveness of the food litter detection module is verified through a cleanliness inspection task on the Toyota HSR, and its detection results are evaluated with standard quality metrics. The experimental results show that the food litter detection module achieves an average of 96% detection accuracy, which makes it suitable for deploying HSR robots for cleanliness inspection and also helps in selecting among different cleaning modes. Further, the planner was tested through table cleaning tasks. The experimental results show that the planner generates the cleaning path in real time, and the generated path is optimal in that it reduces cleaning time through a grouping-based cleaning action for removing food litter from the table. (Sensors 2020, 20, 1698)

Vision-based techniques are widely used in cleaning robots for recognizing litter and computing the cleaning action [14][15][16][17][18][19]. Andersen et al. built a visual cleaning map for cleaning robots using a vision algorithm and a powerful light-emitting diode. The sensor detects the dirty region and generates a dirt map by examining the surface images pixel by pixel using a multi-variable statistical method [15].
David et al. proposed high-level manipulation actions for cleaning dirt from table surfaces using REEM, a humanoid service robot. The authors use a background-subtraction algorithm to recognize dirt on the table, and a Noisy Indeterministic Deictic (NID) rules-based learning algorithm to generate the sequence of cleaning actions [16]. Ariyan et al. developed a planning algorithm for removing stains from non-planar surfaces, using a depth-first branch-and-bound search to generate cleaning trajectories together with the K-means clustering algorithm [17]. Hass et al. demonstrated the use of an unsupervised clustering algorithm and a Markov Decision Process (MDP) for performing the cleaning task: the unsupervised clustering algorithm distinguishes dirt from the surface, the MDP is used to generate the maps, and a transition model derived from the clustered image describes the robot's cleaning action [18]. Nonetheless, these approaches have practical issues and disadvantages for use in food-court table cleaning; the detection ratio relies heavily on textured surfaces, which makes it challenging to identify the litter type as solid...
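The grouping-based cleaning idea that recurs in the work above (cluster litter positions, then visit groups rather than individual spots) can be sketched roughly as follows. This is a hedged illustration under assumed inputs, not any of the cited papers' code: litter coordinates, the number of groups, and the greedy ordering are all assumptions.

```python
# Hedged sketch: cluster litter positions with K-means, then order the
# cluster centres greedily (nearest-neighbour) to form a cleaning path.
import numpy as np
from sklearn.cluster import KMeans

# Assumed litter positions on a table, in metres.
litter = np.array([[0.10, 0.10], [0.12, 0.15], [0.80, 0.80],
                   [0.82, 0.78], [0.50, 0.10]])

k = 3  # assumed number of wipe groups
centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(litter).cluster_centers_

def plan_path(start, targets):
    """Greedy nearest-neighbour ordering of cluster centres."""
    remaining = list(map(tuple, targets))
    path, current = [], start
    while remaining:
        nxt = min(remaining,
                  key=lambda p: np.hypot(p[0] - current[0], p[1] - current[1]))
        remaining.remove(nxt)
        path.append(nxt)
        current = nxt
    return path

print(plan_path((0.0, 0.0), centres))
```

The design point is that wiping k cluster centres instead of every litter spot shortens the motion sequence, which is the intuition behind the grouping-based cleaning action described in the abstract.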
The role of mobile robots for cleaning and sanitation purposes is increasing worldwide. Disinfection and hygiene are two integral parts of any safe indoor environment, and these factors become even more critical in COVID-19-like pandemic situations. Door handles are highly sensitive contact points that are prone to contamination. Automating the door-handle cleaning task is important not only for ensuring safety, but also for improving efficiency. This work proposes an AI-enabled framework for automating cleaning tasks through a Human Support Robot (HSR). The overall cleaning process involves mobile-base motion, door-handle detection, and control of the HSR manipulator for completing the cleaning tasks. The detection part exploits a deep-learning technique to classify the image space and provides a set of coordinates for the robot. The cooperative control between spraying and wiping is developed in the Robot Operating System (ROS). The control module uses the information obtained from the detection module to generate a task/operational space for the robot, along with evaluating the desired position to actuate the manipulators. The complete strategy is validated through numerical simulations and experiments on a Toyota HSR platform.
Aim: The objective of this study was to compare the rate of complications encountered when using different incisions to access the fracture site for open reduction and internal fixation of isolated subcondylar fractures. The parameters evaluated were the occurrence of salivary fistula, infection, and injury to the facial (seventh cranial) nerve; the surgical scar was also assessed. Materials and Methods: Twenty patients who met the inclusion criteria and were willing to participate in the study were placed (five each) into the pre-auricular, submandibular, retromandibular transparotid, or retromandibular transmasseteric group, based on the incision they selected after a description of the operation and an explanation of the possible complications. Results and Conclusion: Comparison of the complications could not establish the superiority of any approach over the others, since the outcomes were not statistically significant. However, judging by the operator's and assistants' subjective assessment, the retromandibular approaches seem to provide a more direct visual field and an almost straight-line access for fixation of the fracture. The transmasseteric approach seems to be safer, since the nerves encountered can be visualized and avoided.