Recent results in retinal research have shown that ganglion cell receptive fields tile the mammalian retina in a mosaic arrangement, with negligible overlap in the central fovea. This result disproves the biological relevance of traditional, widely adopted edge-detection algorithms with overlapping, convolution-based operator architectures. However, using traditional filters with non-overlapping operator architectures leads to considerable losses in contour information. This paper introduces a novel, tremor- and drift-based edge-detection algorithm that reconciles these differences between the physiology of the retina and the overlapping architectures used by today's widely adopted algorithms. The algorithm takes data convergence into consideration, as well as the dynamic properties of the retina, by incorporating a model of involuntary eye tremors and drifts together with the impulse responses of ganglion cells. Based on the evaluation of the model, two hypotheses are formulated on the much-debated role of involuntary eye tremors: 1) involuntary eye movements have information-theoretical implications; 2) from an information-processing point of view, the functional role of involuntary eye movements extends beyond the maintenance of action potentials. Involuntary eye movements may compensate for the information losses caused by a non-overlapping receptive-field architecture.
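The central idea of this abstract can be illustrated with a minimal sketch: a static grid of non-overlapping receptive fields can miss an edge that falls between sample points, while small tremor-like shifts of the same grid recover it. This is an illustrative toy, not the paper's actual model; the function name and tremor amplitudes are hypothetical.

```python
import numpy as np

def edge_response(signal, positions):
    # A simple difference operator (edge detector) evaluated only at
    # the given sample positions (non-overlapping receptive fields).
    resp = np.zeros_like(signal, dtype=float)
    for p in positions:
        if 0 < p < len(signal):
            resp[p] = abs(signal[p] - signal[p - 1])
    return resp

# Step edge at index 13; non-overlapping fields sample every 4th pixel.
signal = np.zeros(32)
signal[13:] = 1.0
static = edge_response(signal, range(0, 32, 4))

# Small "tremor" shifts of the sampling grid over successive instants;
# the maximum over shifts accumulates responses across time.
tremor = np.zeros(32)
for shift in (0, 1, 2, 3):  # hypothetical tremor amplitudes in pixels
    tremor = np.maximum(tremor, edge_response(signal, range(shift, 32, 4)))

print(static.max())   # the static grid misses the edge at index 13
print(tremor.max())   # the tremor-shifted grids detect it
```

Here the static grid samples indices 0, 4, 8, 12, ... and never evaluates the discontinuity between indices 12 and 13, whereas the shifted grid with offset 1 lands exactly on it, which is the information-recovery role the abstract attributes to tremors.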
The present paper proposes a model for intelligent image contour detection, strongly based on the architecture and functionality of the mammalian visual cortex. A pixel-to-feature transformation is performed on the input image, yielding a set of abstract image features instead of another set of pixels. The contouring task is performed by a vast and complex network of simple computational units working in parallel. The use of a large number of such simple units yields a clear structure that can be implemented on specialized hardware to allow constant-time computation.
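The pixel-to-feature transformation described above can be sketched as follows: each simple unit looks at one local patch and emits an abstract feature label rather than another pixel value. In hardware, all units would fire simultaneously, giving the constant-time behaviour the abstract mentions. The unit logic and thresholds here are hypothetical, not the paper's actual operators.

```python
import numpy as np

def unit(patch):
    # One simple computational unit: maps a 3x3 pixel patch to an
    # abstract feature label instead of another pixel value.
    gx = patch[1, 2] - patch[1, 0]   # horizontal gradient
    gy = patch[2, 1] - patch[0, 1]   # vertical gradient
    if abs(gx) < 0.5 and abs(gy) < 0.5:
        return "none"
    return "vertical" if abs(gx) > abs(gy) else "horizontal"

# A vertical edge between columns 2 and 3; every interior unit processes
# its own patch independently, so the loop below could run in parallel.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
features = [[unit(img[r - 1:r + 2, c - 1:c + 2]) for c in range(1, 4)]
            for r in range(1, 4)]
```

Units adjacent to the brightness step report `"vertical"`, while units inside uniform regions report `"none"`, so the output is a feature map rather than an image.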
This paper applies new cognitive infocommunication channels in human-machine interaction to develop a new paradigm of robot teaching and supervision. The robot is considered an unskilled worker that is strong and capable of precise manufacturing. It has a special kind of intelligence but is limited in certain respects, which requires it to be supervised. If people can learn how to communicate with this "new worker," they gain a new, capable "colleague." The goal is for the boss to be able to give the daily task to a robot in a similar way as he/she gives jobs to human workers, for example using CAD documentation, gestures, and some verbal explanation. This paper presents an industrial robot supervision system inspired by research results of cognitive infocommunication. The operator can steer the remote manipulator by certain gestures, using a motion capture suit as the input device. Every gesture has its own meaning, which corresponds to a specific movement of the robot. The manipulator interprets and executes the instructions by invoking its on-board artificial intelligence, while feedback through a 3D visualization unit closes the supervisory loop. The system was designed to be independent of the geographical distance between the user and the manipulated environment, allowing control loops to be established across countries and continents. Successful results have been achieved between Norway, France, and Hungary.
This paper presents a visual cortex inspired cognitive model for contour and vertex detection. The model is strongly based on the receptive field characteristics of cortical neurons of the visual cortex. As a step forward compared to the previous version of the model, a new dimension has been added, which replaces the binary signals and operations with operations on real values. The resulting system yields a better approximation of the biological system and provides stronger and more distinct contour lines and vertices. The contour detection and vertex extraction are performed by a vast network of simple computational units simultaneously processing the visual data. The computational units are organized in a special structure, the Visual Feature Array (VFA), which allows the structural representation of complex operations. The goal of the model is to extract abstract information from an image, which in turn may be used as input for the recognition of even more abstract visual objects. To achieve constant-time execution of the model, aspects of the hardware implementation are also treated in this paper.
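The shift from binary to real-valued operations, and the joint detection of contours and vertices, can be sketched as follows. Each cell keeps graded orientation responses instead of a binary edge/no-edge decision: contour strength is the strongest response, and a vertex is signalled where several orientations respond strongly at once. The orientation operators, the cell function, and the threshold are illustrative assumptions, not the paper's actual VFA operators.

```python
import numpy as np

def oriented_responses(img, r, c):
    # Real-valued responses of four orientation-tuned units at (r, c),
    # replacing an earlier binary edge/no-edge decision.
    gx = img[r, c + 1] - img[r, c - 1]
    gy = img[r + 1, c] - img[r - 1, c]
    return np.array([abs(gx), abs(gy),
                     abs(gx + gy) / 2, abs(gx - gy) / 2])

def vfa_cell(img, r, c, threshold=0.5):
    # One cell of a hypothetical Visual Feature Array: contour strength
    # is the strongest orientation response; a vertex is signalled when
    # two or more orientations respond strongly at the same location.
    resp = oriented_responses(img, r, c)
    contour = resp.max()
    vertex = np.count_nonzero(resp > threshold) >= 2
    return contour, vertex

# Corner of a bright square: both gradients are active, so the cell at
# the corner reports a vertex, while a cell on a straight edge does not.
img = np.zeros((5, 5))
img[2:, 2:] = 1.0
corner = vfa_cell(img, 2, 2)
edge = vfa_cell(img, 3, 2)
```

Because the responses are graded rather than binary, weak and strong contours remain distinguishable downstream, which matches the abstract's claim of stronger and more distinct contour lines and vertices.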