Singlet oxygen is a primary cytotoxic agent in photodynamic therapy. We show that CeF3 nanoparticles, pure as well as conjugated through electrostatic interaction with the photosensitizer verteporfin (VP), are able to generate singlet oxygen as a result of UV light and 8 keV X-ray irradiation. The X-ray stimulated singlet oxygen quantum yield was determined to be 0.79 ± 0.05 for the conjugate with 31 verteporfin molecules per CeF3 nanoparticle, the highest conjugation level used. From this result we estimate the singlet oxygen dose generated from CeF3-VP conjugates for a therapeutic dose of 60 Gy of ionizing radiation at energies of 6 MeV and 30 keV to be (1.2 ± 0.7) × 10⁸ and (2.0 ± 0.1) × 10⁹ singlet oxygen molecules per cell, respectively. These values are comparable with the cytotoxic doses of 5 × 10⁷–2 × 10⁹ singlet oxygen molecules per cell reported in the literature for photodynamic therapy using light activation. We confirmed that the CeF3-VP conjugates enhanced cell killing with 6 MeV radiation. This work confirms the feasibility of using X- or γ-ray activated nanoparticle-photosensitizer conjugates, either to supplement the radiation treatment of cancer, or as an independent treatment modality.
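The comparison drawn above can be restated as a trivial numeric check (all values are taken directly from this abstract; the constant and function names are illustrative only):

```python
# Estimated singlet oxygen doses from the abstract (molecules per cell, 60 Gy)
DOSE_6MEV = 1.2e8    # at 6 MeV
DOSE_30KEV = 2.0e9   # at 30 keV

# Cytotoxic range reported in the literature for light-activated PDT
CYTOTOXIC_LO, CYTOTOXIC_HI = 5e7, 2e9

def in_cytotoxic_range(dose):
    """True if a per-cell singlet oxygen dose falls within the reported range."""
    return CYTOTOXIC_LO <= dose <= CYTOTOXIC_HI
```

Both X-ray-stimulated estimates fall inside (or at the upper edge of) the light-activated cytotoxic range, which is the basis of the feasibility claim.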
While the primary focus of affective computing has been on constructing efficient and reliable models of affect, the vast majority of such models are limited to a specific task and domain. This paper, instead, investigates how computational models of affect can be general across dissimilar tasks; in particular, in modeling the experience of playing very different video games. We use three dissimilar games whose players annotated their arousal levels on video recordings of their own playthroughs. We construct models mapping ranks of arousal to skin conductance and gameplay logs via preference learning, and we use a form of cross-game validation to test the generality of the obtained models on unseen games. Our initial results comparing absolute and relative measures of the arousal annotation values indicate that we can obtain more general models of player affect if we process the model output in an ordinal fashion.
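The ordinal treatment of annotations can be illustrated with a minimal sketch (the function name and threshold are illustrative assumptions, not the paper's actual pipeline): an absolute arousal trace is converted into pairwise preferences, which is the form that preference learning consumes.

```python
def arousal_preferences(trace, threshold=0.1):
    """Convert an absolute arousal trace into ordinal preference pairs.

    Returns (i, j) index pairs meaning "window i was more arousing than
    window j", keeping only differences larger than `threshold` so that
    small annotation noise does not create spurious preferences.
    """
    pairs = []
    for i in range(len(trace)):
        for j in range(len(trace)):
            if i != j and trace[i] - trace[j] > threshold:
                pairs.append((i, j))
    return pairs
```

Discarding the absolute values and keeping only the pairwise order is what makes the resulting model comparable across games annotated on different effective scales.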
Quantum walks exhibit many unique characteristics compared to classical random walks. In the classical setting, self-avoiding random walks have been studied as a variation on the usual classical random walk. Here the walker has memory of its previous locations and preferentially avoids stepping back to locations where it has previously resided. Classical self-avoiding random walks have found numerous algorithmic applications, most notably in the modelling of protein folding. We consider the analogous problem in the quantum setting – a quantum walk in one dimension with tunable levels of self-avoidance. We complement a quantum walk with a memory register that records where the walker has previously resided. The walker is then able to avoid returning to previously visited sites, or to apply more general memory-conditioned operations to control the walk. We characterise this walk by examining the variance of the walker's distribution against time, the standard metric for quantifying how quantum or classical a walk is. We parameterise the strength of the memory recording and the strength of the memory back-action on the walker, and investigate their effect on the dynamics of the walk. We find that by manipulating these parameters, which dictate the degree of self-avoidance, the walk can be made to reproduce ideal quantum or classical random walk statistics, or a plethora of more elaborate diffusive phenomena. In some parameter regimes we observe a close correspondence between classical self-avoiding random walks and the quantum self-avoiding walk.
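The variance-versus-time metric mentioned above can be made concrete with a minimal pure-Python sketch of the memoryless baseline: a standard 1-D Hadamard quantum walk, whose variance grows quadratically (ballistically) in time, in contrast to the linear growth of a classical random walk. This illustrates the metric only; it does not implement the authors' memory register or self-avoidance.

```python
import math

def quantum_walk_variance(steps):
    """Variance of a 1-D discrete-time Hadamard quantum walk after `steps` steps."""
    n = 2 * steps + 1
    origin = steps
    # amp[pos][coin] holds the complex amplitude for (position, coin state)
    amp = [[0j, 0j] for _ in range(n)]
    s = 1 / math.sqrt(2)
    amp[origin][0] = s          # symmetric initial coin state (1, i)/sqrt(2)
    amp[origin][1] = 1j * s
    for _ in range(steps):
        new = [[0j, 0j] for _ in range(n)]
        for p in range(n):
            a0, a1 = amp[p]
            c0 = s * (a0 + a1)  # Hadamard coin flip
            c1 = s * (a0 - a1)
            if p - 1 >= 0:      # coin state 0 shifts the walker left
                new[p - 1][0] += c0
            if p + 1 < n:       # coin state 1 shifts the walker right
                new[p + 1][1] += c1
        amp = new
    mean = var = 0.0
    for p in range(n):
        prob = abs(amp[p][0]) ** 2 + abs(amp[p][1]) ** 2
        x = p - origin
        mean += prob * x
        var += prob * x * x
    return var - mean ** 2
```

For an unbiased classical walk the variance after t steps is exactly t; the Hadamard walk's variance grows roughly as (1 − 1/√2)·t², so at t = 30 it already exceeds the classical value many times over, and tuning self-avoidance interpolates between such regimes.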
Traditional search tasks have taught us much about vision and attention. Recently, several groups have begun to use multiple-target search to explore more complex and temporally extended "foraging" behaviour. Many of these new foraging tasks, however, maintain the simplified 2D displays and response demands associated with traditional, single-target visual search. In this respect, they may fail to capture important aspects of real-world search or foraging behaviour. In the current paper, we present a serious game for mobile platforms, developed in Unity3D, in which human participants play the role of an animal foraging for food in a simulated 3D environment. Game settings can be adjusted, so that, for example, custom target and distractor items can be uploaded, and task parameters, such as the number of target categories or the target/distractor ratio, are all easy to modify. We are also making the Unity3D project available, so that further modifications can be made. We demonstrate how the app can be used to address specific research questions by conducting two human foraging experiments. Our results indicate that in this 3D environment, a standard feature/conjunction manipulation does not lead to a reduction in foraging runs, as it is known to do in simple, 2D foraging tasks. Differences in foraging behaviour are discussed in terms of environment structure, task demands and attentional constraints.
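The "foraging runs" measure referred to above can be illustrated with a small helper (an illustrative sketch, not the authors' analysis code): a run is a maximal streak of consecutive selections from the same target category, so fewer runs indicate that the forager tends to exhaust one category before switching.

```python
def count_runs(selections):
    """Count maximal same-category runs in a sequence of target selections."""
    runs = 0
    prev = object()  # sentinel that never equals a real category label
    for cat in selections:
        if cat != prev:
            runs += 1
            prev = cat
    return runs
```

For example, the selection order A, A, B, B contains two runs, while strict alternation A, B, A, B contains four; in 2D tasks a conjunction manipulation typically pushes behaviour toward the former pattern.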
Player believability is often defined as the ability of a game playing character to convince an observer that it is being controlled by a human. The agent's behavior is often assumed to be the main contributor to the character's believability. In this paper we reframe this core assumption and instead focus on the impact of the game environment and aspects of game design (such as level design) on the believability of the game character. To investigate the relationship between game content and believability we crowdsource rank-based annotations from subjects who view playthrough videos of various AI and human controlled agents in platformer levels of dissimilar characteristics. For this initial study we use a variant of the well-known Super Mario Bros game. We build support vector machine models of reported believability based on gameplay and level features which are extracted from the videos. The highest performing model predicts perceived player believability of a character with an accuracy of 73.31%, on average, and implies a direct relationship between level features and player believability.
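Rank-based annotations are commonly fed to a support vector machine by converting ranked items into pairwise difference examples (the RankSVM-style transform), after which an ordinary binary classifier can be trained. The sketch below shows that transform only, as an illustration of the general technique rather than the authors' actual pipeline:

```python
def pairwise_transform(features, ranks):
    """Turn ranked items into binary classification pairs (RankSVM-style).

    Each pair of items with different ranks yields one training example:
    the feature difference, labeled +1 if the first item is ranked higher
    than the second, and -1 otherwise.
    """
    X, y = [], []
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if ranks[i] == ranks[j]:
                continue  # ties carry no preference information
            diff = [a - b for a, b in zip(features[i], features[j])]
            X.append(diff)
            y.append(1 if ranks[i] > ranks[j] else -1)
    return X, y
```

A linear classifier trained on (X, y) then yields a scoring function over the original gameplay/level features whose order reproduces the annotated believability ranks.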