Autopilot systems are typically composed of an "inner loop" providing stability and control and an "outer loop" responsible for mission-level objectives, e.g., waypoint navigation. Autopilot systems for UAVs are predominantly implemented using Proportional-Integral-Derivative (PID) control systems, which have demonstrated exceptional performance in stable environments. However, more sophisticated control is required to operate in unpredictable and harsh environments. Intelligent flight control is an active area of research addressing the limitations of PID control, most recently through the use of reinforcement learning (RL), which has had success in other applications such as robotics. However, previous work has focused primarily on using RL at the mission-level controller. In this work, we investigate the performance and accuracy of the inner control loop providing attitude control when using intelligent flight control systems trained with the state-of-the-art RL algorithms Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO), and Proximal Policy Optimization (PPO). To investigate these questions we first developed an open-source high-fidelity simulation environment to train a flight controller for attitude control of a quadrotor through RL. We then use our environment to compare the RL controllers' performance to that of a PID controller to identify whether RL is appropriate in high-precision, time-critical flight control.
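The PID baseline that the RL controllers are compared against can be illustrated with a minimal rate-control sketch. This is a generic illustration, not the paper's implementation: the class name, gains, plant, and `step` interface are all hypothetical.

```python
# Minimal PID controller sketch for a single attitude-rate axis.
# All gains and the integrator plant below are illustrative, not tuned
# values from the paper.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        """One control update: returns the actuator command."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


if __name__ == "__main__":
    # Track a 1 rad/s roll-rate setpoint on a toy integrator plant.
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
    omega = 0.0  # measured roll rate
    for _ in range(2000):  # 20 s of simulated time
        u = pid.step(1.0, omega)
        omega += u * 0.01  # plant: omega_dot = u
    print(round(omega, 3))
```

In the inner loop of a real autopilot, one such controller typically runs per axis (roll, pitch, yaw), with the outer loop supplying the setpoints.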
The purpose of the present research was to develop general guidelines to assist practitioners in setting up operational computerized adaptive testing (CAT) systems based on the graded response model. Simulated data were used to investigate the effects of systematically manipulating various aspects of the CAT procedures for the model. The effects of three major variables were examined: item pool size, the stepsize used along the trait continuum until maximum likelihood estimation could be computed, and the stopping rule employed. The findings suggest three guidelines for graded response CAT procedures: (1) item pools with as few as 30 items may be adequate for CAT; (2) the variable-stepsize method is more useful than the fixed-stepsize methods; and (3) the minimum-standard-error stopping rule will yield fewer cases of nonconvergence, administer fewer items, and produce higher correlations of CAT θ estimates with full-scale estimates and the known θs than the minimum-information stopping rule. The implications of these findings for psychological assessment are discussed. Index terms: computerized adaptive testing, graded response model, item response theory, polychotomous scoring.
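The graded response model underlying these CAT procedures can be sketched in a few lines. This is a generic illustration of Samejima's standard logistic form (without the 1.7 scaling constant); the function name and parameter values are hypothetical, not from the study.

```python
import math

def grm_category_probs(theta, a, b):
    """Samejima graded response model (illustrative sketch).

    theta: trait level; a: discrimination; b: increasing list of
    category thresholds. Returns probabilities for categories
    0..len(b), which sum to 1.
    """
    def pstar(bk):
        # P(score >= k): cumulative logistic boundary curve
        return 1.0 / (1.0 + math.exp(-a * (theta - bk)))

    cum = [1.0] + [pstar(bk) for bk in b] + [0.0]
    # Category probability = difference of adjacent boundary curves
    return [cum[k] - cum[k + 1] for k in range(len(b) + 1)]


if __name__ == "__main__":
    # A 4-category Likert-type item with symmetric thresholds
    print([round(p, 3) for p in grm_category_probs(0.0, 1.0, [-1.0, 0.0, 1.0])])
```

In a CAT loop, these category probabilities feed the likelihood used for θ estimation and the item information used for item selection and the stopping rules the abstract compares.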
We introduce a Markov-model-based framework for Moving Target Defense (MTD) analysis. The framework allows modeling of a broad range of MTD strategies, provides general theorems relating the probability that an adversary defeats an MTD strategy to the amount of time/cost spent by the adversary, and shows how a multi-level composition of MTD strategies can be analyzed by a straightforward combination of the analyses of the individual strategies. Within the proposed framework we define the concept of security capacity, which measures the strength or effectiveness of an MTD strategy; the security capacity depends on MTD-specific parameters and more general system parameters. We apply our framework to two concrete MTD strategies.
Simulated datasets were used to investigate the effects of systematically varying three major variables on the performance of computerized adaptive testing (CAT) procedures for the partial credit model. The three variables studied were the stopping rule for terminating the CATs, item pool size, and the distribution of item difficulty in the pool. Results indicated that the standard-error stopping rule performed better across the variety of CAT conditions than the minimum-information stopping rule. In addition, item pools consisting of as few as 30 items were found to be adequate for CAT, provided that the item pool was of medium difficulty. The implications of these findings for implementing CAT systems based on the partial credit model are discussed.
The two-parameter graded response latent trait model was applied to data collected from a conventionally constructed Likert-type attitude scale. Comparisons were made of both the person latent trait estimates and the item parameter estimates with their counterparts from the conventional scaling method. Also studied were the goodness of fit of the graded response model and the information function feature of the model indicating the precision of measurement at each level of the attitude trait continuum. The results demonstrated that the graded response model could be successfully used to perform attitude measurement for Likert scales.
Simulated data were used to investigate systematically the impact of various orderings of step difficulties on the distribution of item information for the partial credit model. It was found that the distribution of information for an item was a function of (1) the range of the step difficulty values, (2) the number of step difficulties that were out of sequential order, and (3) the distance between the step values that were out of order. Also, by using relative efficiency comparisons, the relationship between the step estimates and the distribution of item information was used to demonstrate the effects of various test revisions (through the addition and/or deletion of items with specific step characteristics) on the resulting test's precision of measurement. The usefulness of item and test information functions for specific measurement applications of the partial credit model is also discussed.

During the last decade, developments in item response theory (IRT) have offered new approaches for solving many practical measurement problems. Birnbaum's (1968) conceptualization of information functions for individual items and tests has been used in many applications of IRT. The primary benefit of information functions is that they allow selection of items for inclusion in a test such that the precision of measurement for the test is maximized at the specific trait (θ) level that is of interest to the examiner. Another benefit is that information functions for two tests can be compared in terms of relative efficiency, which can aid in the selection of the best test for a given measurement situation. Information functions have also been used effectively to determine item selection for computerized adaptive testing. For dichotomously scored items, information functions have primarily been used with the three-parameter IRT model rather than the one-parameter Rasch model.
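The dependence of item information on the ordering of step difficulties can be illustrated numerically. This is a generic sketch of the standard partial credit model formulation (Masters, 1982), not the study's code; the function name is hypothetical.

```python
import math

def pcm_info(theta, deltas):
    """Item information for the partial credit model (illustrative).

    deltas: step difficulties delta_1..delta_m, which need not be in
    increasing order. Information equals the variance of the item
    score at trait level theta.
    """
    # Cumulative sums sum_{j<=k}(theta - delta_j), for k = 0..m
    psi = [0.0]
    for d in deltas:
        psi.append(psi[-1] + (theta - d))
    w = [math.exp(p) for p in psi]
    z = sum(w)
    probs = [wk / z for wk in w]          # category probabilities
    mean = sum(k * p for k, p in enumerate(probs))
    return sum((k - mean) ** 2 * p for k, p in enumerate(probs))


if __name__ == "__main__":
    # Same step values, ordered vs. reversed, evaluated at theta = 0
    print(round(pcm_info(0.0, [-1.0, 1.0]), 3))  # steps in order
    print(round(pcm_info(0.0, [1.0, -1.0]), 3))  # steps reversed
```

Running the sketch shows that reversing two symmetric steps concentrates more information near the center of the trait continuum, consistent with the finding that the distribution of information depends on which steps are out of order and how far apart they are.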
The information an item provides is by definition the square of the ratio of the slope of the item characteristic curve to the conditional standard error of measurement (Lord, 1980). For the three-parameter model, item information functions for items in a test differ from one another because items are assumed to vary in terms of discriminations, difficulties, and the lower asymptotes of the item characteristic curves. For the Rasch model, all item information functions yield the same maximum amount of information; they differ from one another only in terms of the θ level for which the maximum information is provided. This is because the Rasch model assumes that items have equal discriminations and lower asymptotes of 0 for the item characteristic curves. Thus the use of information functions with the simple Rasch model usually provides no additional information beyond the difficulty level of the items. Samejima (1969) extended Birnbaum's formulation of information functions to the case where items are polychotomously scored. By comparing the information yielded by items scored with op
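For the three-parameter logistic model, the definition above reduces to a well-known closed form, I(θ) = a²·((1−P)/P)·((P−c)/(1−c))² (Lord, 1980). The sketch below assumes the logistic parameterization without the 1.7 scaling constant; the function names are illustrative.

```python
import math

def p3pl(theta, a, b, c):
    """Three-parameter logistic item characteristic curve.
    a: discrimination, b: difficulty, c: lower asymptote (guessing)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Item information: I = a^2 * (Q/P) * ((P - c)/(1 - c))^2."""
    p = p3pl(theta, a, b, c)
    return a * a * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2


if __name__ == "__main__":
    # Rasch special case (a = 1, c = 0): information reduces to P*(1-P),
    # so every item peaks at the same maximum (0.25), located at theta = b.
    print(round(info_3pl(0.0, 1.0, 0.0, 0.0), 3))
```

Setting a = 1 and c = 0 makes the formula collapse to P(1−P), which is why, as noted above, all Rasch items share the same maximum information and differ only in where along θ that maximum occurs.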
Auditory discrimination abilities of children with and without attention deficits were investigated to measure the variability due to different response modes (verbal [NU-6] and picture pointing [GFW]) and competing messages (GFW). Results showed no differences between response modes in quiet, but significant differences in noise between groups, with children with ADD showing poorer speech discrimination. Additionally, differential effects between types of competing messages for the same task were not found in the ADD group. These results are discussed in relation to the clinical use of these tests, the relationships seen between results, and implications for educational management.