The entertainment industry, primarily the video games industry, continues to dictate the development and performance requirements of graphics hardware and computer graphics algorithms. However, despite the enormous progress of the last few years, it is still not possible to meet some of the industry's demands, in particular high-fidelity rendering of complex scenes in real time on a single desktop machine. The realisation that sound/music and other senses are important to entertainment led to an investigation of alternative methods, such as cross-modal interaction, in pursuit of the goal of "realism in real-time". In this paper we investigate the cross-modal interaction between vision and audition to reduce the amount of computation required for visuals by introducing movement-related sound effects. Additionally, we look at the effect of camera movement speed on temporal visual perception. Our results indicate that slow animations are perceived as smoother than fast animations. Furthermore, adding the sound effect of footsteps to walking animations further increased perceived animation smoothness. Consequently, under certain conditions the number of frames rendered each second can be reduced, saving valuable computation time, without the viewer being aware of the reduction. The results presented are another step towards a full understanding of auditory-visual cross-modal interaction and its importance in helping achieve "realism in real-time".
The quality of real-time computer graphics has progressed enormously in the last decade due to rapid developments in graphics hardware and the utilisation of new algorithms and techniques. The computer games industry, with its substantial software and hardware requirements, has been at the forefront of pushing these developments. Despite all the advances, there is still demand for even more computational resources; sound effects, for example, are an integral part of most computer games. This paper presents a method for reducing the effort required to compute the computer graphics aspects of a game by exploiting movement-related sound effects. We conducted a detailed psychophysical experiment investigating how camera movement speed and sound affect the perceived smoothness of an animation. The results show that walking (slow) animations were perceived as smoother than running (fast) animations. We also found that the addition of sound effects, such as footsteps, to a walking/running animation affects the perception of animation smoothness. This means that under certain conditions the number of frames rendered each second can be reduced, saving valuable computation time. Our approach enables the computed frame rate, and thus the computational requirements, to be lowered without any perceivable loss of visual quality.
Clouds provide an attractive infrastructural option for deploying highly scalable distributed applications. Platform as a Service (PaaS) clouds offer a basic software stack and services along with execution containers to simplify the hosting of user applications. However, traditional many-task computing architectures cannot be hosted as-is on current PaaS platforms due to certain limitations. This paper describes a novel modified architecture for master-worker, a well-known many-task computing paradigm, that takes advantage of the fast scalability provided by PaaS. The architecture is transformed into a multi-agent system in which the distributed agents use a message broker for communication and for storing the computation's progress. The agents are capable of dynamically shifting between a master and a worker role based on the information available in a durable message broker. This stateless design makes the agents amenable to a PaaS platform and adds fault tolerance to the system. The experiments illustrate the promising potential of the architecture to efficiently scale computationally intensive tasks on PaaS.
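The role-shifting mechanism described above can be illustrated with a minimal single-process sketch (all names hypothetical, not from the paper). An in-memory broker stands in for the durable message broker; an agent seeds tasks as master only when the broker holds no state, and otherwise acts as a worker:

```python
import queue

class Broker:
    """Stands in for a durable message broker holding tasks and results.
    A real PaaS deployment would use a hosted queue service instead."""
    def __init__(self):
        self.tasks = queue.Queue()
        self.results = queue.Queue()

def agent_step(broker, job=None):
    """One agent iteration: take the master role if the broker shows no
    progress yet, otherwise act as a worker and process one task."""
    if job is not None and broker.tasks.empty() and broker.results.empty():
        # Master role: split the job into tasks and publish them.
        for task in job:
            broker.tasks.put(task)
        return "master"
    try:
        task = broker.tasks.get_nowait()
    except queue.Empty:
        return "idle"
    broker.results.put(task * task)  # example computation: squaring
    return "worker"

broker = Broker()
roles = [agent_step(broker, job=[1, 2, 3])]  # first agent seeds the tasks
while roles[-1] != "idle":
    roles.append(agent_step(broker, job=[1, 2, 3]))

squares = sorted(broker.results.queue)
print(roles)    # ['master', 'worker', 'worker', 'worker', 'idle']
print(squares)  # [1, 4, 9]
```

Because each agent decides its role solely from the broker's visible state, any agent instance can be killed and replaced at any point, which is the source of the fault tolerance the abstract mentions.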
Desktop grids combine arbitrary computational resources connected to a network. However, prevalent interactive rendering algorithms cannot seamlessly handle the variable computational power offered by a desktop grid's nondedicated resources. In this article, a method for achieving interactive high-fidelity rendering on nondedicated machines, such as desktop grids, is developed without the expensive requirement of a dedicated render farm. The proposed algorithm is also fault-tolerant.
In today's ever-growing technological world, the Internet of Things (IoT) is one of the most prevalent technologies, utilized in offices, hospitals, and even homes. However, IoT becomes more powerful and user-interactive when integrated with other technologies. Our system utilizes a microcontroller (NodeMCU) as a Wi-Fi-based gateway that connects different sensors to cloud-based servers. Unlike conventional automation, face recognition is introduced into the IoT framework for room personalization, along with communication among the appliances that goes beyond interaction with the internet. The focus is to minimize human intervention and personalize the system as accurately as possible. Each person accessing a room is associated with an archetype of settings, and the room's settings are altered for that person using facial recognition. The machine learning model distinguishes two classes, known and unknown. When an unknown person approaches a room, a notification is sent to the administrator. The collective data received from the sensors (temperature, pressure, gas, humidity, current, and potentiometer) is monitored and controlled using relays connected to an Arduino, which transfers the data to the cloud through the NodeMCU.
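The gateway's decision logic can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the profile names, settings, and function names are hypothetical, and the real system would feed camera frames to a face recognizer and push readings from the NodeMCU rather than work with in-memory dictionaries:

```python
# Hypothetical per-person room profiles ('known' class members).
PROFILES = {"alice": {"temp_c": 22, "lights": "warm"},
            "bob":   {"temp_c": 19, "lights": "cool"}}

def handle_entry(person_id):
    """Apply a known person's room profile, or flag an unknown visitor
    so a notification can be sent to the administrator."""
    if person_id in PROFILES:              # 'known' class
        return ("apply_settings", PROFILES[person_id])
    return ("notify_admin", person_id)     # 'unknown' class

def publish_readings(readings):
    """Round raw sensor readings for upload to the cloud endpoint."""
    return {name: round(value, 1) for name, value in readings.items()}

print(handle_entry("alice"))    # ('apply_settings', {'temp_c': 22, 'lights': 'warm'})
print(handle_entry("mallory"))  # ('notify_admin', 'mallory')
print(publish_readings({"temperature": 21.37, "humidity": 48.02}))
```

The two-class (known/unknown) branch mirrors the classifier described in the abstract; actuation of the relays would be driven by the returned action tuple.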
Recent developments in Artificial Intelligence (AI) have resulted in breakthroughs in applications such as computer vision, natural language processing, robotics, and data mining. These breakthroughs have been put to use in various military applications such as surveillance, reconnaissance, threat evaluation, underwater mine warfare, cyber security, intelligence analysis, command and control, as well as military education and training. However, such breakthroughs are not easy to achieve. They are subject to a range of challenges: proneness to high risk, shortfalls in robustness and reliability, and the absence of required training, to name a few. The present research work explores these challenges and attempts to study their possible interrelationships using the ISM methodology.