A number of configurable arithmetic structures are investigated for an FPGA architecture aimed at the realisation of low-complexity digital filters. The FPGA is based on the Primitive Operator Design technique, in which digital filters are realised as signal flow graphs comprising low-complexity operations. The authors evaluate the structures on a number of filter examples and compare their performance in terms of speed and area.
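The core idea behind primitive-operator filter design is that a fixed-coefficient multiplication can be decomposed into the low-complexity operations the abstract mentions: shifts and adds. The sketch below is a hypothetical illustration of that principle (it is not the authors' actual FPGA design); the coefficient value 45 is chosen purely as an example.

```python
# Hypothetical sketch: multiplierless constant multiplication, the kind of
# low-complexity operation a primitive-operator signal flow graph is built
# from. The coefficient is decomposed into powers of two, so the "multiply"
# needs only shifts and adds -- cheap to map onto FPGA fabric.

def shift_add_multiply(x: int, coeff: int) -> int:
    """Multiply x by a non-negative constant using only shifts and adds."""
    result = 0
    bit = 0
    while coeff:
        if coeff & 1:
            result += x << bit   # add the shifted partial product
        coeff >>= 1
        bit += 1
    return result

# e.g. 45 = 101101b, so x * 45 = (x << 5) + (x << 3) + (x << 2) + x
y = shift_add_multiply(3, 45)   # -> 135
```

In a hardware realisation each shifted partial product would become a wire routing plus an adder in the signal flow graph, rather than a sequential loop.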
Abstract: In this study, the aim is to extract the attributes of the eye regions of laptop users. To achieve this, the iris and eye corners are detected by processing images captured by the standard internal webcam of a laptop. An artificial neural network (ANN) is used to determine the eye region, within which the iris and eye corners can then be detected. For the study, 107 user images are captured with a laptop's internal camera under different light intensities, environments, viewpoints, and positions; these images are used to train the ANN. Two different methods are used for iris detection. In the first, the circular Hough transform (CHT) is applied directly within the determined eye region. In the second, the right and left iris regions are first determined by two separate ANNs, and CHT is then applied within each region. Higher success rates are achieved by the second method. In the next stage of the study, two different methods, the weighted variance projection function (WVPF) and lowest valued pixels (LVP), are used for the detection of the eye corners. It is demonstrated that the second method outperforms the first.
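The CHT stage described above rests on a simple voting principle: every edge pixel votes for all circle centres that lie at the candidate radius from it, and the accumulator maximum gives the iris centre. The following is a minimal sketch of that voting step for a single known radius (the study's CHT is richer, searching over radii; the synthetic "iris" here is an assumption for illustration).

```python
import numpy as np

def hough_circle_center(edge_points, shape, r):
    """Minimal circular Hough transform for a fixed radius r.

    Each edge point (y, x) votes for every centre lying at distance r
    from it; the accumulator maximum is returned as the circle centre.
    """
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - r * np.sin(thetas)).astype(int)
        cx = np.round(x - r * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # cast the votes
    return np.unravel_index(acc.argmax(), acc.shape)

# Synthetic "iris boundary": edge points on a circle of radius 10
# centred at (30, 40) in a 64 x 64 image.
angles = np.linspace(0, 2 * np.pi, 90, endpoint=False)
pts = [(30 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in angles]
center = hough_circle_center(pts, (64, 64), 10)
```

In practice the edge points would come from an edge detector run on the ANN-selected eye region, and the radius would be swept over a plausible iris-size range.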
This study used two different Artificial Neural Networks (ANNs) to determine the point on a computer screen at which the user is looking. First, an ANN, called ANN1, was developed to identify the eye region of a laptop user from a webcam image. The computer screen was then divided into a 57 × 32 grid of 24 × 24 pixel blocks. One hundred of these blocks were randomly selected, and 20 images were captured by the integrated webcam while the user looked at each block. The eye region was found in each image by ANN1, and this eye region data was used to train a second ANN, called ANN2. Twenty blocks were then selected, and 20 different images were used as the test set. The coordinates of the block at which the user was looking were determined by ANN2. The deviations between the actual block coordinates and those estimated by ANN2 were small, so we conclude that ANN2 was successfully trained to find the viewpoint of the user.
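The grid geometry and the deviation measure described above can be sketched as follows; the helper names are hypothetical (the paper does not specify its implementation), but the 24 × 24 pixel block size is taken directly from the abstract.

```python
# Hypothetical helpers reflecting the grid described above: a screen split
# into 57 x 32 blocks of 24 x 24 pixels. Maps a block index to its pixel
# centre and measures the deviation between an actual and an estimated
# gaze block.

BLOCK = 24  # block size in pixels, from the paper's grid

def block_center(col: int, row: int) -> tuple:
    """Pixel coordinates of the centre of block (col, row)."""
    return (col * BLOCK + BLOCK // 2, row * BLOCK + BLOCK // 2)

def deviation_px(actual: tuple, estimated: tuple) -> float:
    """Euclidean distance in pixels between two block centres."""
    ax, ay = block_center(*actual)
    ex, ey = block_center(*estimated)
    return ((ax - ex) ** 2 + (ay - ey) ** 2) ** 0.5

# An estimate one block off horizontally deviates by exactly one block width.
d = deviation_px((10, 5), (11, 5))   # -> 24.0
```

A deviation of a few tens of pixels on a grid of this size corresponds to being only one or two blocks away from the true gaze point, which is the sense in which the paper's errors are "small".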