Livestock welfare and management could be greatly enhanced by replacing branding or ear tagging with less invasive visual biometric identification methods. Biometric identification of cattle from muzzle patterns has previously shown promising results, but significant barriers remain in translating these initial findings into a practical precision livestock monitoring system that can be deployed at scale for large herds. The objective of this study was to investigate and address key limitations of autonomous biometric identification of cattle. The contributions of this work are fourfold: (1) provision of a large publicly available dataset of cattle face images (300 individual cattle) to facilitate further research in this field; (2) development of a two-stage YOLOv3-ResNet50 algorithm that first detects and extracts the cattle muzzle region in images and then applies deep transfer learning for biometric identification; (3) evaluation of model performance across a range of cattle breeds; and (4) use of few-shot learning (five images per individual) to greatly reduce both the data collection requirements and the duration of model training. Results indicated excellent model performance: muzzle detection accuracy was 99.13% (1024 × 1024 image resolution) and biometric identification achieved 99.11% testing accuracy. Overall, the proposed two-stage YOLOv3-ResNet50 algorithm has substantial potential to form the foundation of a highly accurate automated cattle biometric identification system applicable in livestock farming. The results also indicate that livestock biometric monitoring could support future agricultural decision support systems for resource management at multiple scales of production, including forecasting acceptable stocking rates for pastures.
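The paper's pipeline fine-tunes a ResNet50 on cropped muzzle images, which requires the trained YOLOv3 and ResNet50 weights. As a minimal self-contained sketch of the few-shot identification idea only, the following assumes each muzzle image has already been mapped to a feature vector and matches a query against per-animal centroids built from five enrollment vectors each (a hypothetical nearest-centroid stand-in, not the authors' classifier head):

```python
# Hypothetical few-shot identification sketch: nearest-centroid matching on
# precomputed embedding vectors (the paper instead fine-tunes ResNet50 on
# five cropped muzzle images per animal; this only illustrates the concept).
import math

def centroid(vectors):
    """Mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def identify(query, gallery):
    """Return the animal ID whose centroid is nearest (Euclidean) to query."""
    best_id, best_dist = None, math.inf
    for animal_id, shots in gallery.items():   # shots: five enrollment vectors
        d = math.dist(query, centroid(shots))
        if d < best_dist:
            best_id, best_dist = animal_id, d
    return best_id

# Toy gallery: two animals, five 3-D "embeddings" each (illustrative values).
gallery = {
    "cow_001": [[1.0, 0.1, 0.0]] * 5,
    "cow_002": [[0.0, 0.9, 1.0]] * 5,
}
print(identify([0.9, 0.2, 0.1], gallery))  # → cow_001
```

With real embeddings the gallery would hold five feature vectors per animal extracted from the enrollment images; enrolling a new animal then requires no retraining, only five more vectors.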
In this article, the development of an autonomous robot trajectory generation system based on a single eye-in-hand webcam, where the workspace map is not known a priori, is described. The system uses image processing methods to identify the locations of obstacles within the workspace and the Quadtree Decomposition algorithm to generate collision-free paths. The shortest path is then automatically chosen as the path to be traversed by the robot end-effector. The method was implemented in MATLAB running on a PC and tested on a two-link SCARA robotic arm. The tests were successful and indicate that the method could feasibly be implemented in many practical applications.
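Quadtree Decomposition recursively splits the workspace image into quadrants until each region is uniformly free or uniformly occupied; the free leaves then serve as nodes for shortest-path search. A minimal sketch on a binary occupancy grid (1 = obstacle, 0 = free; grid side assumed a power of two, and the MATLAB implementation details are not reproduced here):

```python
# Minimal quadtree decomposition of a binary occupancy grid. Homogeneous
# regions become leaves; the free leaves would be the nodes of the
# shortest-path graph used to pick the end-effector trajectory.
def quadtree(grid, x, y, size, leaves):
    """Recursively split until a region is uniformly free or occupied."""
    vals = {grid[r][c] for r in range(y, y + size) for c in range(x, x + size)}
    if len(vals) == 1 or size == 1:                 # homogeneous: emit a leaf
        leaves.append((x, y, size, vals.pop()))
        return
    h = size // 2                                   # split into four quadrants
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree(grid, x + dx, y + dy, h, leaves)

grid = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
leaves = []
quadtree(grid, 0, 0, 4, leaves)
free = [(x, y, s) for x, y, s, v in leaves if v == 0]
print(free)  # → [(0, 0, 2), (0, 2, 2), (2, 2, 2)]
```

Note how the occupied top-right quadrant is isolated as a single leaf while the three free quadrants stay coarse; this is what keeps the path-planning graph small compared with a per-pixel grid.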
Abstract. In this article, image processing from a webcam mounted above a robotic arm is used to navigate an unknown environment. Start and target points are specified for the robot, which then plans a path using Voronoi diagrams. Once a feasible path has been found, the route information is sent to the arm robot, which moves through the workspace while new information is continually processed via the webcam. The program, written in MATLAB, controls the robot's movement through the unknown environment.
Introduction
Today, vision-based sensors such as webcams are falling in price more rapidly than any other sensor. This type of sensor is also richer than a traditional ranging device, providing more data simultaneously [1]. Consequently, visual servo control of robotic manipulators has become an area of rapid research and development over the last two decades. Visual servoing is the use of image data for manipulation and control of robot movement. Typically, an image of the robot workspace is captured, from which a target is identified. The position of the target is then estimated, and the corresponding robot joint angles and velocities are determined to enable the robot to reach its target. In this work, we present the development of a visual servo system which enables a two-link planar robotic manipulator to navigate itself through arbitrarily positioned obstacles. The image of the workspace plane is captured using a webcam. The image is then processed to identify the edges of objects within the workspace. A Voronoi diagram (VD(S)) is then constructed, marking paths that avoid these objects. The optimal path is then computed and used as the robot trajectory.
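The Voronoi diagram places path segments at maximum clearance from the detected obstacle edges. A minimal grid-based sketch of this idea (a brute-force stand-in for the paper's VD(S) construction, using two hypothetical point obstacles): label every cell with its nearest obstacle, and the cells whose neighbours are closest to a different obstacle approximate the Voronoi edges the robot would follow.

```python
# Grid-based sketch of a generalized Voronoi diagram: cells straddling the
# boundary between nearest-obstacle regions lie at maximum clearance, which
# is where collision-avoiding paths are placed. Obstacles are hypothetical.
obstacles = [(0, 2), (6, 2)]          # two point obstacles on a 7x5 grid
W, H = 7, 5

def nearest(cell):
    """Index of the obstacle closest to cell (squared Euclidean distance)."""
    return min(range(len(obstacles)),
               key=lambda i: (cell[0] - obstacles[i][0]) ** 2
                           + (cell[1] - obstacles[i][1]) ** 2)

label = {(x, y): nearest((x, y)) for x in range(W) for y in range(H)}
edges = [
    (x, y)
    for x in range(W) for y in range(H)
    if any(label.get((x + dx, y + dy), label[(x, y)]) != label[(x, y)]
           for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))
]
print(sorted(edges))   # a vertical band of cells midway between the obstacles
```

With obstacles at x = 0 and x = 6, the edge cells form the expected vertical band at x = 3 and x = 4, equidistant from both obstacles; in the full system these edges come from detected object contours rather than points.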
Processing Unknown Environment Strategy
This strategy can efficiently use the available information and reduce the planning time. Navigation in an unknown environment is a more challenging topic. For example, unmanned machines able to navigate unknown environments could perform tasks in many dangerous places that humans would not wish to enter for safety reasons. Navigation in an unknown environment means no
This paper proposes a medical pattern recognition system based on cellular automata (CA). A CA, or cellular machine, is a dynamic mathematical model consisting of many similar, simple units governed by considerably simple local rules. Each cell acts as a simple computing automaton, which allows complex computations to be implemented through uncomplicated methods. However, the CA model requires specific rules to be determined for each application, so a method is needed to extract favorable rules. The Cellular Learning Automata (CLA) model overcomes this problem by extending CA with a Learning Automaton (LA) attached to each cell. Many applications of CA are known today, especially in the field of pattern recognition. In this study, we therefore use CLA to design an automatic system to diagnose images containing cancerous tissue. After applying the required preprocessing to lung Computed Tomography (CT) images, the images are classified with the CLA model, and the proposed methods are evaluated in terms of sensitivity, specificity, and accuracy. The proposed system offers a flexible, low-complexity model. The method has been tested on 22 slices of CT scan images from a real-world dataset and yielded satisfactory results: a low error rate (0.09) and favorable accuracy (95.4%).
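To make the CA idea concrete, here is a minimal sketch of one synchronous update step on a binary image, using a fixed 3×3 majority-vote rule. The rule is a hypothetical stand-in: in the paper the local rules are learned by the LA attached to each cell rather than hand-coded.

```python
# One cellular-automaton step on a binary image: each cell updates by strict
# majority vote of its 3x3 neighbourhood (clipped at the borders). This fixed
# rule is illustrative only; the CLA of the paper *learns* its local rules.
def ca_step(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            votes = sum(img[rr][cc]
                        for rr in range(max(0, r - 1), min(h, r + 2))
                        for cc in range(max(0, c - 1), min(w, c + 2)))
            cells = ((min(h, r + 2) - max(0, r - 1))
                     * (min(w, c + 2) - max(0, c - 1)))
            out[r][c] = 1 if 2 * votes > cells else 0   # strict majority
    return out

noisy = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
print(ca_step(noisy))   # the isolated 0 inside the block of 1s is filled in
```

Even this fixed rule shows the appeal of CA for image work: a purely local, per-cell computation produces a global smoothing effect, and all cells can update in parallel.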