Serious games and game-based learning have received increased attention in recent years as an adjunct to teaching and learning material. This is well echoed in the literature, with numerous articles on the use of games and game theory in education. Despite this, no policy for the incorporation of serious games in education exists to date. This review paper draws from the literature to provide guideline recommendations that would help educators and policymakers take the first step toward this goal.
Recent developments in computational photography have enabled the optical focus of a plenoptic camera to be varied after image exposure, a capability known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfactorily predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, this paper shows that the system's solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings, comparing distance estimates against a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35 % from those of an optical design software. The proposed refocusing estimator assists in predicting object distances, as in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is performed by analyzing a stack of refocused photographs.
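The abstract's central idea, finding where a pair of rays modeled as linear functions intersect, can be sketched as follows. The slope and intercept values below are hypothetical illustration numbers, not the paper's calibrated camera parameters.

```python
# Sketch: estimate the refocused object distance as the intersection of two
# paraxial rays, each modeled as a linear function y(z) = m*z + b, where z is
# the distance along the optical axis and y the ray height.

def ray_intersection(m1, b1, m2, b2):
    """Return (z, y) where the rays y = m1*z + b1 and y = m2*z + b2 meet.

    Raises ValueError for parallel rays, which have no finite intersection.
    """
    if m1 == m2:
        raise ValueError("parallel rays do not intersect")
    z = (b2 - b1) / (m1 - m2)
    y = m1 * z + b1
    return z, y

# Two rays traced toward object space; their intersection marks the plane
# to which the synthesized photograph is refocused (hypothetical values).
z, y = ray_intersection(m1=-0.02, b1=0.05, m2=0.01, b2=-0.04)
print(f"refocusing distance along the axis: {z:.2f}")
```

Here the intersection falls at z = 3.0, in whatever length unit the intercepts use; with metric intercepts this would correspond to a 3 m refocusing distance.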
A numerical analysis of a novel birefringent photonic crystal fiber (PCF) biosensor built on the surface plasmon resonance (SPR) model is presented in this paper. This biosensor configuration utilizes circular air holes to introduce birefringence into the structure. The PCF biosensor model shows promise in the area of multiple detection, using the HE^x_11 and HE^y_11 modes to sense more than one analyte. A numerical study of the biosensor is performed in two interrogation modes: amplitude and wavelength. Sensor resolution values with spectral interrogation yielded 5 × 10⁻⁵ RIU (refractive index units) for the HE^x_11 modes and 6 × 10⁻⁵ RIU for the HE^y_11 modes, whereas 3 × 10⁻⁵ RIU for the HE^x_11 modes and 4 × 10⁻⁵ RIU for the HE^y_11 modes are demonstrated for the amplitude interrogation.
Recent film releases such as Avatar have revolutionized cinema by combining 3D technology, content production, and real actors, leading to the creation of a new genre at the outset of the 2010s. The success of 3D cinema has led several major consumer electronics manufacturers to launch 3D-capable televisions and broadcasters to offer 3D content. Today's 3DTV technology is based on stereo vision, which presents left- and right-eye images through temporal or spatial multiplexing to viewers wearing a pair of glasses. The next step in 3DTV development will likely be a multiview autostereoscopic imaging system, which will record and present many pairs of video signals on a display and will not require viewers to wear glasses.1,2 Although researchers have proposed several autostereoscopic displays, the resolution and viewing positions are still limited. Furthermore, stereo and multiview technologies rely on the brain to fuse two disparate images to create the 3D effect. As a result, such systems tend to cause eye strain, fatigue, and headaches after prolonged viewing, because users are required to focus on the screen plane (accommodation) while converging their eyes to a point in a different plane (convergence), producing unnatural viewing. Recent advances in digital technology have eliminated some of these human factors, but some intrinsic eye fatigue will always exist with stereoscopic 3D technology.3 These facts have motivated researchers to seek alternative means of capturing true 3D content, most notably holography and holoscopic imaging. Because recording holograms requires the interference of coherent light fields, their use is still limited and mostly confined to research laboratories. Holoscopic imaging (also referred to as integral imaging), on the other hand, in its simplest form consists of a lens array mated to a digital sensor, with each lens capturing a perspective view of the scene.
In this case, the light field does not need to be coherent, so holoscopic color images can be obtained with full parallax. This conveniently lets us adopt more conventional live capture and display procedures. Furthermore, 3D holoscopic imaging offers fatigue-free viewing to more than one person, independent of the viewers' positions. Due to recent advances in theory and microlens manufacturing, 3D holoscopic imaging is becoming a practical, prospective 3D display technology and is thus attracting much interest in the 3D area. The 3D Live Immerse Video-Audio Interactive Multimedia (3D Vivant, www.3dvivant.eu) project, funded under the EU FP7 ICT-4-1.5 Networked Media and 3D Internet call, has proposed advances in 3D holoscopic imaging technology for the capture, representation, processing, and display of 3D holoscopic content that overcome most of the aforementioned restrictions faced by traditional 3D technologies. This article presents our work as part of the 3D Vivant project.

3D Holoscopic Content Generation

The 3D holoscopic imaging technique creates and represents a true volume spatial optical model of the objec...
The Standard Plenoptic Camera (SPC) is an innovation in photography that allows two-dimensional images focused at different depths to be acquired from a single exposure. In contrast to conventional cameras, the SPC consists of a micro lens array and a main lens, which projects virtual lenses into object space. For the first time, the present research provides an approach to estimate the distance and depth of refocused images extracted from captures obtained by an SPC. Furthermore, estimates for the positions and baselines of virtual lenses corresponding to an equivalent camera array are derived. On the basis of the paraxial approximation, a ray tracing model employing linear equations has been developed and implemented in Matlab. The optics simulation tool Zemax is used for validation. Experiments with a realistic SPC design demonstrate that a predicted image refocusing distance of 3.5 m deviates by less than 11% from the Zemax simulation, whereas baseline estimations indicate no significant difference. The proposed methodology enables an alternative to traditional depth map acquisition by disparity analysis.
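A paraxial ray tracing model with linear equations, as described above, is commonly expressed with ray-transfer (ABCD) matrices: a ray is a (height, angle) vector, and free-space propagation and thin-lens refraction are linear maps. The following is a minimal sketch of that formalism; the focal lengths and spacing are hypothetical, not the paper's actual SPC design.

```python
import numpy as np

# Paraxial ray-transfer sketch: a ray is (height y, angle u), and each
# optical element is a 2x2 matrix acting on that vector.

def propagate(d):
    """Free-space propagation over axial distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Refraction at a thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Hypothetical two-lens chain: main lens (f = 50 mm), a 52 mm gap, then a
# micro lens (f = 2 mm). Matrices compose right-to-left along the ray path.
system = thin_lens(2.0) @ propagate(52.0) @ thin_lens(50.0)

# A ray entering parallel to the axis at height 1 mm.
ray_in = np.array([1.0, 0.0])
ray_out = system @ ray_in  # (height, angle) after the micro lens
```

With these particular values, the gap places the micro lens near the main lens focus offset by the micro lens focal length, so the exiting ray is again parallel to the axis (angle ≈ 0); chaining such matrices is how linear intersection-based distance estimates can be propagated through the whole camera.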