We investigated why violations of the constant-ratio rule, an assumption of the generalized matching law, occur in procedures that arrange frequent changes in reinforcer ratios. We produced steady-state data and compared them with data from equivalent, frequently changing procedures. Six pigeons responded in a four-alternative concurrent-schedule experiment with an arranged reinforcer-rate ratio of 27:9:3:1. The same four variable-interval (VI) schedules were used in every condition for 50 sessions, and the physical location of each schedule was changed across conditions. The experiment was thus a steady-state version of a frequently changing procedure in which the locations of four VI schedules were changed every 10 reinforcers. Subjects' responding was consistent with the constant-ratio rule in the steady-state procedure. Additionally, local analyses showed that preference after reinforcement was toward the alternative that was likely to produce the next reinforcer, rather than toward the just-reinforced alternative as in frequently changing procedures. This suggests that the effect of a reinforcer on preference is fundamentally different in rapidly changing and steady-state environments. Comparing this finding with the existing literature suggests that choice is more influenced by reinforcer-generated signals when the reinforcement contingencies change often.
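For readers unfamiliar with the generalized matching law, the constant-ratio rule can be illustrated with a small sketch. The response and reinforcer counts below are hypothetical, not data from this study: under strict matching, the pairwise relation log(Bi/Bj) = a·log(Ri/Rj) + log c holds with sensitivity a near 1 for every pair of alternatives, regardless of the reinforcers arranged on the remaining alternatives.

```python
import math

# Hypothetical response counts for four alternatives whose arranged
# reinforcer ratio is 27:9:3:1, as in the experiment described above.
reinforcers = [27, 9, 3, 1]
responses = [270, 90, 30, 10]  # perfect matching, for illustration only

def sensitivity(bi, bj, ri, rj, log_c=0.0):
    """Pairwise generalized-matching sensitivity a, assuming bias c is
    known (here c = 1, so log c = 0):
        log(Bi/Bj) = a * log(Ri/Rj) + log(c)
    """
    return (math.log10(bi / bj) - log_c) / math.log10(ri / rj)

# The constant-ratio rule holds when each pairwise ratio Bi/Bj depends
# only on Ri/Rj; here every pair yields a = 1.
for i in range(len(responses)):
    for j in range(i + 1, len(responses)):
        a = sensitivity(responses[i], responses[j],
                        reinforcers[i], reinforcers[j])
        print(f"alternatives {i} vs {j}: a = {a:.2f}")
```

Violations of the constant-ratio rule appear in this framework as pairwise sensitivities that shift when reinforcer rates on the other alternatives change.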
Behavior reduced as a consequence of extinction or other intervention can relapse. According to behavioral momentum theory, the extent to which behavior persists, and relapses once it has been eliminated, depends on the relative training reinforcement rate across discriminative stimuli. In addition, studies of context renewal reveal that relapse depends on the similarity between the training stimulus context and the test stimulus context following disruption by extinction. In the present experiments with pigeons, we arranged different reinforcement rates in the presence of distinct discriminative stimuli across components of a multiple schedule. Following extinction, we attempted to reinstate responding in the target components with response-independent food presentations. Importantly, the reinstating food presentations occurred either within the target components or in separate components that had been paired with extinction (Experiment 1) or with reinforcement (Experiment 2) during baseline. Reinstatement increased with greater training reinforcement rates when the reinstating food presentations occurred in the target components and in separate components paired with reinforcement during training. Reinstatement was smaller, and was not systematically related to training reinforcement rates in the target components, when the reinstating food presentations occurred in separate components paired with extinction. These findings suggest that relapse depends on the history of reinforcement associated with the discriminative stimuli in which the relapse-inducing event occurs.
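The prediction that persistence grows with baseline reinforcement rate can be sketched with one common formalization of behavioral momentum theory (Nevin and colleagues); the parameter values below are illustrative assumptions, not fits to the experiments described here.

```python
# One common behavioral-momentum formalization of resistance to change:
#   log10(Bx / Bo) = -x / r**b
# where Bx is the response rate under disruption, Bo is the baseline
# rate, x is disrupter magnitude, r is the baseline reinforcement rate,
# and b is a sensitivity parameter. Values of x and b here are
# hypothetical, chosen only to illustrate the ordinal prediction.
def proportion_of_baseline(x, r, b=0.5):
    """Predicted proportion of baseline responding (Bx/Bo)."""
    return 10 ** (-x / r ** b)

# A richer training reinforcement rate (60/hr) predicts greater
# persistence than a leaner one (15/hr) under the same disruption.
for r in (60, 15):
    print(f"r = {r}/hr: Bx/Bo = {proportion_of_baseline(x=2, r=r):.2f}")
```

The same ordering is what the reinstatement results above track: relapse, like persistence, scaled with training reinforcement rate when the reinstating food occurred in the relevant stimulus context.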
"Law of effect models and choice between many alternatives", Journal of the Experimental Analysis of Behavior, 100(2) (2013), 222-256. Date of submission: Nov 26, 2012.
Abstract: Data from five experiments on choice between more than two variable-interval schedules were modeled with different equations for the Law of Effect. Navakatikyan's (2007) component-function models with three, four, and five free parameters were compared with Stevens' (1957), Herrnstein's (1970), and Davison and Hunter's (1976) equations. The latter models are consistent with the generalized-matching principle, whereas Navakatikyan's models are not. Navakatikyan's models performed better than or on par with their competitors, especially in predicting residence-time data and generalized-matching sensitivities for time allocation. The models described well an observed decrease, in several of these data sets, in generalized-matching sensitivity between two alternatives when reinforcer rate increased on the other alternatives; models built on the generalized-matching principle cannot do this. Navakatikyan's models also performed better, though to a lesser extent, than their competitors for data sets that are not obviously inconsistent with generalized matching.
Introduction
Developments in Artificial Intelligence (AI) are being adopted widely in healthcare. However, the introduction and use of AI may come with biases and disparities, raising concerns about healthcare access and outcomes for underrepresented Indigenous populations. In New Zealand, Māori experience significant health inequities compared to the non-Indigenous population. This research explores equity concepts and fairness measures concerning AI for healthcare in New Zealand.
Methods
This research considers data and model bias in New Zealand-based electronic health records (EHRs). Two distinct NZ datasets are used: one obtained from a single hospital and another from multiple GP practices, both collected by clinicians. To ensure research equity and the fair inclusion of Māori, we combine expertise in AI, the New Zealand clinical context, and te ao Māori. Mitigation of inequity needs to be addressed in data collection, model development, and model deployment. In this paper, we analyze data and algorithmic bias in data collection and in model development, training, and testing, using health data collected by experts. We use fairness measures such as disparate impact scores, equal opportunity, and equalized odds to analyze tabular data. Furthermore, token frequencies, statistical significance testing, and fairness measures for word embeddings, such as the WEAT and WEFE frameworks, are used to analyze bias in free-form medical text. The AI models' predictions are also explained using SHAP and LIME.
Results
This research analyzed fairness metrics for NZ EHRs while considering data and algorithmic bias. We show evidence of bias arising from choices made in algorithmic design. Furthermore, we observe unintentional bias due to the underlying pre-trained models used to represent text data.
This research addresses some vital issues while opening up the need and opportunity for future research.
Discussion
This research takes early steps toward developing a model of socially responsible and fair AI for New Zealand's population. We provide an overview of reproducible concepts that can be applied to any NZ population data. Furthermore, we discuss the gaps and future research avenues that will enable more focused development of fairness measures suited to the New Zealand population's needs and social structure. One of the primary focuses of this research was ensuring fair inclusion. As such, we combine expertise in AI, clinical knowledge, and the representation of Indigenous populations. This inclusion of experts will be vital moving forward, providing a stepping stone toward the integration of AI for better outcomes in healthcare.
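As a concrete illustration of one of the tabular fairness measures named in the Methods above, the following sketch computes a disparate impact score from hypothetical binary predictions; the data, group labels, and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions, not results from this study.

```python
def disparate_impact(y_pred, group):
    """Ratio of positive-prediction rates: unprivileged / privileged.

    y_pred : list of 0/1 model predictions
    group  : list of group labels, 0 = unprivileged, 1 = privileged
    """
    unpriv = [p for p, g in zip(y_pred, group) if g == 0]
    priv = [p for p, g in zip(y_pred, group) if g == 1]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# Hypothetical predictions for 10 patients across two groups.
y_pred = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

di = disparate_impact(y_pred, group)
print(f"disparate impact = {di:.2f}")
# A score below 0.8 (the four-fifths rule) is commonly flagged as
# evidence of adverse impact against the unprivileged group.
```

Equal opportunity and equalized odds follow the same pattern but compare true-positive (and, for equalized odds, false-positive) rates across groups against the ground-truth labels rather than raw positive-prediction rates.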