Lip synchronization of 3D face models is now used in a multitude of important fields. It brings a more human, social and dramatic reality to computer games, films and interactive multimedia, and is growing in use and importance. A high level of realism is demanded in applications such as computer games and cinema, yet authoring lip syncing with complex and subtle expressions remains difficult and fraught with problems. This research proposes a lip-syncing method for a realistic, expressive 3D face model. Animating lips requires a 3D face model capable of representing the myriad shapes the human face assumes during speech, and a method to produce the correct lip shape at the correct time. The paper presents a 3D face model designed to support lip syncing that aligns with an input audio file. The model deforms using a Raised Cosine Deformation (RCD) function grafted onto the input facial geometry, and is based on the MPEG-4 Facial Animation (FA) standard. The paper proposes a method to animate the face model over time, creating lip syncing from a canonical set of visemes for all pairwise combinations of a reduced phoneme set called ProPhone. The proposed method integrates emotion by considering the Ekman model and Plutchik's wheel, and adds emotive eye movements via the Emotional Eye Movements Markup Language (EEMML), to produce a realistic 3D face model.
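The abstract does not give the RCD formula, but a raised-cosine kernel conventionally weights a displacement by a cosine falloff from a control point. A minimal sketch under that assumption (the function names, `radius` parameter, and NumPy formulation are illustrative, not the authors' implementation):

```python
import numpy as np

def rcd_weight(dist, radius):
    """Raised-cosine falloff: 1 at the control point, 0 at distance `radius`."""
    return 0.5 * (1.0 + np.cos(np.pi * np.clip(dist / radius, 0.0, 1.0)))

def deform(vertices, center, displacement, radius):
    """Displace vertices near `center`, weighted by the raised-cosine kernel.

    vertices: (N, 3) array; center, displacement: (3,) arrays.
    """
    dist = np.linalg.norm(vertices - center, axis=1)
    return vertices + rcd_weight(dist, radius)[:, None] * displacement
```

Vertices at the control point receive the full displacement, while vertices at or beyond the radius are left untouched, which keeps the deformation local to the lip region.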
In this study, we attempt to provide a healthcare service to pilgrims. The study describes how a multimedia courseware can make pilgrims aware of the common diseases present in Saudi Arabia during the pilgrimage. The courseware also provides information about the symptoms of these diseases and how each can be treated. It contains a virtual representation of a hospital, videos of actual patient cases, and authentic learning activities intended to enhance health competencies during the pilgrimage. The courseware was examined to study how its elements are applied in real-time learning. Moreover, this research discusses the most dangerous diseases that may occur during the pilgrimage season. The courseware effectively and efficiently informs pilgrims about these diseases, drawing on knowledge accumulated from past experience, particularly in disease diagnosis, medicine and treatment. It was created with the authoring tool ToolBook Instructor to provide pilgrims with a quality service.
Cloth simulation and animation has been a topic of computer graphics research since the mid-1980s. Enforcing incompressibility is very important in real-time simulation; although great progress has been made in this regard, existing methods still spend unnecessary time in certain steps common to real-time applications. This research develops a real-time cloth simulator for a virtual human character (VHC) with wearable clothing. It achieves cloth simulation on the VHC by enhancing the position-based dynamics (PBD) framework with a series of positional constraints that enforce constant density. Self-collision and collision with moving capsules are also implemented to achieve realistic behavior of cloth modelled on animated characters, enabling incompressibility and convergence comparable to raised cosine deformation (RCD) function solvers. In implementation, this research optimizes collision between clothes, synchronizes the animation with the cloth simulation, and tunes the cloth properties to obtain the best possible results. A real-time cloth simulation with believable output on an animated VHC is thereby achieved. We believe the proposed method can complement the game-asset clothing pipeline.
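Position-based dynamics works by directly projecting particle positions onto constraints each timestep. A minimal sketch of one such projection, the stretch (distance) constraint that is the simplest PBD building block (names, the inverse-mass weights, and the stiffness parameter are illustrative; this is not the paper's density-constraint solver):

```python
import numpy as np

def project_distance_constraint(p1, p2, rest_len, w1, w2, stiffness=1.0):
    """One PBD projection step: move p1, p2 toward their rest distance.

    w1, w2 are inverse masses (0 pins a particle in place).
    Returns the corrected positions as a (p1, p2) pair.
    """
    delta = p2 - p1
    d = np.linalg.norm(delta)
    if d < 1e-9 or (w1 + w2) == 0.0:
        return p1, p2  # degenerate or fully pinned: nothing to correct
    corr = stiffness * (d - rest_len) / (w1 + w2) * (delta / d)
    return p1 + w1 * corr, p2 - w2 * corr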
<span>Diabetes is a major and growing health problem, and data-mining techniques have been widely applied to extract knowledge from medical databases. In this work, a medical diagnosis system for diabetes is proposed to diagnose the disease rapidly and cost-effectively. The proposed diabetes diagnosis system (DDS) involves three stages: dataset construction, preprocessing, and classification using a traditional Naïve Bayesian (TNB) and a modified Naïve Bayesian (MNB) classifier. The MNB classifier is a modified NB that enhances diagnostic accuracy by adding a proposed model to help separate the overlapping diagnosis classes. The results showed that the accuracy of the MNB classifier is generally higher than that of the TNB classifier for all feature sets: about 63% for the TNB model versus 100% for the MNB model. The experiments showed that the MNB outperforms the traditional NB on both constructed medical datasets: the first with missing values filled from expert experience, and the second with missing values filled by the K-nearest neighbor (KNN) algorithm.</span>
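The abstract does not detail the baseline classifier, but a traditional Naïve Bayes over numeric medical features typically models each feature with a per-class Gaussian. A minimal sketch under that assumption (function names and the variance-smoothing term are illustrative, not the paper's TNB/MNB code):

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate per-class prior and per-feature Gaussian (mean, variance)."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        cols = list(zip(*rows))
        means = [sum(col) / n for col in cols]
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9  # smoothed variance
                 for col, m in zip(cols, means)]
        model[c] = (n / len(y), means, vars_)
    return model

def predict(model, x):
    """Pick the class maximising log prior + sum of log Gaussian likelihoods."""
    best, best_lp = None, float("-inf")
    for c, (prior, means, vars_) in model.items():
        lp = math.log(prior)
        for v, m, var in zip(x, means, vars_):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

The "naïve" independence assumption is what makes the per-feature sums valid; the paper's MNB adds a further model on top of this to separate overlapping classes.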
Speech is one of the most important interaction methods between humans, so much avatar research focuses on this area. Creating animated speech requires a facial model capable of representing the myriad shapes the human face assumes during speech, together with a method to produce the correct shape at the correct time. One of the main challenges is to create precise lip movements for the avatar and synchronize them with recorded audio. This paper proposes a new lip synchronization algorithm for realistic applications, which can generate facial movements synchronized with audio produced from natural speech or through a text-to-speech engine. The method requires an animator to construct animations using a canonical set of visemes for all pairwise combinations of a reduced phoneme set. These animations are then stitched together smoothly to construct the final animation.
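A common way to stitch viseme animations smoothly is to crossfade vertex positions between consecutive viseme keyframes on the audio timeline. A minimal sketch under that assumption (the cosine ease, keyframe layout, and names here are illustrative, not the paper's stitching scheme):

```python
import math

def blend_visemes(shape_a, shape_b, t):
    """Crossfade two viseme vertex lists; t in [0, 1], cosine-eased."""
    w = 0.5 * (1.0 - math.cos(math.pi * t))  # 0 at t=0, 1 at t=1, smooth ends
    return [(1.0 - w) * a + w * b for a, b in zip(shape_a, shape_b)]

def sample_track(keyframes, time):
    """keyframes: sorted (time, shape) pairs; return the blended shape at `time`."""
    for (t0, s0), (t1, s1) in zip(keyframes, keyframes[1:]):
        if t0 <= time <= t1:
            return blend_visemes(s0, s1, (time - t0) / (t1 - t0))
    return keyframes[-1][1] if time > keyframes[-1][0] else keyframes[0][1]
```

Because the ease-in/ease-out weight has zero slope at both ends, consecutive viseme segments join without the velocity discontinuities that make mouth motion look mechanical.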