The International Electrotechnical Commission (IEC) established a framework for using the exposure index (EI) to evaluate exposure conditions across digital systems. In this study, we investigated the feasibility of the IEC EI by comparing manually calculated EIs with those displayed on the consoles of two systems, a computed radiography (CR) and a digital radiography (DR) system, using radiation beam qualities RQA 3, 5, 7, and 9. Both systems showed an uncertainty of less than 20% between the calculated and displayed EI for all beam qualities, except for the displayed EI obtained with RQA 3. However, the displayed EI values differed even under identical exposure conditions because of the characteristics of the imaging receptor materials of the two systems, such as BaFI or CsI. Therefore, an operator who intends to use the displayed EI for managing radiation dose must understand the characteristics of the digital system.
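A minimal sketch of the comparison described above, assuming a linearized system where IEC 62494-1's inverse calibration function g(V) has already been applied, so the EI is computed directly from the air kerma at the detector. The example values are illustrative, not measurements from the study.

```python
# Hedged sketch: comparing a manually calculated EI with the console-displayed EI.
# Per IEC 62494-1, EI = c0 * g(V), where g(V) maps the detector value of interest V
# to air kerma at the detector (uGy) and c0 = 100 uGy^-1. We assume g(V) has
# already been applied, i.e. we start from the measured air kerma.

C0 = 100.0  # uGy^-1, scaling constant defined by IEC 62494-1

def calculated_ei(air_kerma_ugy: float) -> float:
    """EI from the air kerma at the detector for a calibrated (linearized) system."""
    return C0 * air_kerma_ugy

def percent_deviation(displayed_ei: float, calc_ei: float) -> float:
    """Relative difference between displayed and calculated EI, in percent."""
    return 100.0 * abs(displayed_ei - calc_ei) / calc_ei

# Hypothetical example values (not data from the study):
ei_calc = calculated_ei(air_kerma_ugy=2.5)                     # -> 250.0
print(percent_deviation(displayed_ei=280.0, calc_ei=ei_calc))  # 12.0% (< 20%)
```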
This paper presents a deep reinforcement learning-based path planning algorithm for a multi-arm robot manipulator operating in a workspace that contains both fixed and moving obstacles. Considering the properties of the problem, such as its high dimensionality and continuous action space, the proposed algorithm employs soft actor-critic (SAC). Moreover, a long short-term memory (LSTM) network is used to explicitly predict the future position of the moving obstacle, and its prediction is incorporated into the SAC-based path planner. To demonstrate the performance of the proposed algorithm, simulation results using GAZEBO and experimental results using real manipulators are presented. The simulation and experimental results show that the success ratio of path generation for arbitrary start and goal points converges to 100%, and confirm that the LSTM successfully predicts the future position of the obstacle.
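A hedged sketch of the LSTM obstacle predictor the abstract describes: it maps a short history of observed obstacle positions to a one-step-ahead position, which would then be appended to the SAC state. The layer sizes, history length, and one-step-ahead formulation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ObstaclePredictor(nn.Module):
    """Predicts a moving obstacle's next 3-D position from its recent trajectory.
    Hidden size and single-layer structure are illustrative assumptions."""

    def __init__(self, pos_dim: int = 3, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=pos_dim, hidden_size=hidden_dim,
                            batch_first=True)
        self.head = nn.Linear(hidden_dim, pos_dim)  # hidden state -> (x, y, z)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len, pos_dim) past obstacle positions
        out, _ = self.lstm(history)
        return self.head(out[:, -1, :])  # predicted next position

# Usage with a dummy 10-step history of 3-D positions:
model = ObstaclePredictor()
past = torch.randn(1, 10, 3)
next_pos = model(past)  # shape (1, 3), fed into the planner's state
```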
The International Electrotechnical Commission introduced the concepts of exposure index (EI), target exposure index (EIT), and deviation index (DI) to manage and optimize patient dose in real time. In this study, we propose a method for setting the EIT based on the Korean national diagnostic reference levels (DRLs). Furthermore, we evaluated the use of clinical EI, EIT, and DI as tools for patient dose optimization in clinical environments by observing how the DI changes as the EIT is updated. Following Korean national exposure conditions, we conducted experiments on three representative radiographic examinations in clinical environments (chest posterior–anterior, chest lateral, and abdomen anterior–posterior). As the exposure conditions and DRLs varied, the clinical EI, EIT, and DI varied accordingly. These results show that the clinical EI, EIT, and DI can be used as tools for optimizing patient dose, provided the EIT is periodically and properly updated.
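For reference, the IEC deviation index relates the achieved EI to the target value as DI = 10·log10(EI/EIT), so DI = 0 means the exposure hit the target, about +1 means roughly 26% over, and about -1 roughly 21% under. A small sketch with illustrative values (not the study's measured data):

```python
import math

def deviation_index(ei: float, ei_t: float) -> float:
    """IEC deviation index: DI = 10 * log10(EI / EI_T)."""
    return 10.0 * math.log10(ei / ei_t)

print(deviation_index(ei=320.0, ei_t=250.0))  # ~ +1.07: moderately overexposed
print(deviation_index(ei=250.0, ei_t=250.0))  #    0.00: on target
```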
Reinforcement learning (RL) trains an agent by maximizing the discounted sum of rewards. Since the discount factor critically affects the learning performance of the RL agent, it is important to choose it properly. When uncertainties are involved in training, the learning performance achievable with a constant discount factor can be limited. To obtain acceptable learning performance consistently, this paper proposes an adaptive rule for the discount factor based on the advantage function, and shows how to use the advantage function in both on-policy and off-policy algorithms. To demonstrate the proposed adaptive rule, it is applied to Proximal Policy Optimization (PPO) on Tetris to validate the on-policy case, and to Soft Actor-Critic (SAC) for the motion planning of a robot manipulator to validate the off-policy case. In both cases, the proposed method performs as well as or better than the best constant discount factors found by exhaustive search. Hence, the proposed adaptive rule automatically finds a discount factor that yields comparable training performance and can be applied to representative deep reinforcement learning problems.
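The paper's exact adaptation rule is not reproduced here; the following is one hypothetical form of an advantage-based rule, sketched only to make the idea concrete: while recent advantage estimates are large in magnitude (value estimates still unreliable), keep the discount factor lower to favor short-horizon credit assignment, and let it grow toward its maximum as the advantages shrink. The function `adapt_gamma` and all constants are illustrative assumptions.

```python
GAMMA_MIN, GAMMA_MAX = 0.90, 0.999  # illustrative bounds on the discount factor

def adapt_gamma(advantages, k: float = 1.0) -> float:
    """Hypothetical rule: map the mean |advantage| of the latest batch to gamma.
    Larger advantages -> smaller gamma; gamma -> GAMMA_MAX as advantages decay."""
    mean_abs_adv = sum(abs(a) for a in advantages) / len(advantages)
    scale = 1.0 / (1.0 + k * mean_abs_adv)
    return GAMMA_MIN + (GAMMA_MAX - GAMMA_MIN) * scale

# Early training (large advantages) vs. late training (small advantages):
print(adapt_gamma([2.0, -1.5, 1.8]))     # ~0.936, closer to GAMMA_MIN
print(adapt_gamma([0.05, -0.02, 0.04]))  # ~0.996, closer to GAMMA_MAX
```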
This paper provides a control strategy for a battery energy storage system (BESS) that suppresses photovoltaic (PV) output fluctuation so that the smoothed power delivered to the grid satisfies the system's maximum ramp requirement. As a robust control solution, model predictive control (MPC) is proposed so that the battery charging and discharging actions can be planned subject to constraints such as battery power and state-of-charge (SoC) limits. A moving average filter (MAF) generates the output reference for the combined PV and BESS system, with the time window selected according to the maximum ramp allowed by the system. Simulation results show that the proposed control strategy tracks the prescribed reference effectively, so the smoothing strategy meets the fluctuation limit while satisfying the battery constraints.
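A hedged sketch of the MAF reference generation step, under the assumption of a simple trailing moving average: the MPC (not shown) would then drive the combined PV-plus-battery output toward this reference. The window length, ramp-rate check, and synthetic PV profile are illustrative, whereas in the paper the window is chosen from the maximum ramp requirement.

```python
import numpy as np

def maf_reference(pv_power: np.ndarray, window: int) -> np.ndarray:
    """Trailing moving average of PV output, used as the grid power reference."""
    kernel = np.ones(window) / window
    padded = np.concatenate([np.full(window - 1, pv_power[0]), pv_power])
    return np.convolve(padded, kernel, mode="valid")

def max_ramp(power: np.ndarray, dt_s: float) -> float:
    """Worst-case ramp rate (per second) of a power profile."""
    return float(np.max(np.abs(np.diff(power)) / dt_s))

# Example: a fluctuating synthetic PV profile sampled every second (kW).
t = np.arange(600)
pv = 500 + 100 * np.sin(t / 30) + 50 * np.random.randn(600)
ref = maf_reference(pv, window=60)
print(max_ramp(pv, 1.0), ">", max_ramp(ref, 1.0))  # smoothing lowers the ramp
```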