Despite its dominance over the past three decades, model-centric AI has recently come under heavy criticism in favor of data-centric AI. Both promise to improve the performance of AI systems, yet with opposite points of focus. Whereas the former successively upgrades a devised model (algorithm/code) while holding the amount and type of data used in model training fixed, the latter continuously enhances the quality of the deployed data while paying less attention to further model upgrades. Rather than favoring either approach, this paper reconciles data-centric AI with model-centric AI. In so doing, we connect current AI to the fields of cybersecurity and natural language inference and, through the phenomena of ‘adversarial samples’ and ‘hypothesis-only biases’ respectively, showcase the limitations of model-centric AI in terms of algorithmic stability and robustness. We further argue that overcoming these limitations may well require paying extra attention to the alternative data-centric approach; however, this should not reduce interest in model-centric AI. Our position is supported by the notion that successful ‘problem solving’ requires considering both the way we act upon things (the algorithm) and the knowledge derived from data about their states and properties.
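The ‘adversarial sample’ phenomenon invoked above can be made concrete with a minimal sketch, assuming a linear (logistic-regression) classifier and a fast-gradient-sign-style perturbation; the weights, input, and perturbation size below are illustrative assumptions, not from the paper:

```python
import numpy as np

def predict(w, b, x):
    """Logistic-regression probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def adversarial(w, x, eps):
    """Perturb x by eps in the gradient-sign direction that lowers the
    positive-class score (the gradient of the logit w.r.t. x is w)."""
    return x - eps * np.sign(w)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = w.copy()                 # an input the model classifies confidently
p_clean = predict(w, 0.0, x)
p_adv = predict(w, 0.0, adversarial(w, x, eps=0.5))
print(p_clean, p_adv)        # a small sign-aligned shift degrades confidence
```

The point of the sketch is that model performance hinges on properties of the input distribution, not only on the model: a small, data-level perturbation suffices to degrade a confidently correct prediction.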
Background: We investigated how temporal context affects the learning of arbitrary visuo-motor associations. Human observers viewed highly distinguishable fractal objects and learned to choose, for each object, the one motor response (of four) that was rewarded. Some objects were consistently preceded by specific other objects, while others lacked this task-irrelevant but predictive context.

Results: The results of five experiments showed that predictive context consistently and significantly accelerated associative learning. A simple reinforcement-learning model, in which three successive objects inform response selection, reproduced our behavioral results.

Conclusions: Our results imply that not just the representation of the current event, but also the representations of past events, are reinforced during conditional associative learning. These findings are broadly consistent with attractor network models of associative learning, which predict a persistent representation of past objects.
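The modeling idea of three successive objects informing response selection can be sketched as a tabular reinforcement-learning agent whose state is the current object together with the two preceding ones; this is an illustrative sketch, not the authors' implementation, and the object names, reward mapping, and learning parameters are assumptions:

```python
import random
from collections import defaultdict

N_RESPONSES = 4  # one rewarded motor response (of four) per object

def choose(q, state, eps, rng):
    """Epsilon-greedy choice among the four motor responses."""
    if rng.random() < eps:
        return rng.randrange(N_RESPONSES)
    values = [q[(state, r)] for r in range(N_RESPONSES)]
    return values.index(max(values))

def learn(sequence, rewarded, trials=2000, alpha=0.3, eps=0.1, seed=1):
    """Cycle through a fixed object sequence and reward the correct
    response. The state is the tuple of the last three objects, so a
    consistent predecessor context becomes part of the learned
    representation and can inform response selection."""
    rng = random.Random(seed)
    q = defaultdict(float)           # Q[(state, response)] -> value
    recent = [None, None]            # the two preceding objects
    hits = 0
    for t in range(trials):
        obj = sequence[t % len(sequence)]
        state = (recent[0], recent[1], obj)
        r = choose(q, state, eps, rng)
        reward = 1.0 if r == rewarded[obj] else 0.0
        q[(state, r)] += alpha * (reward - q[(state, r)])
        hits += reward == 1.0
        recent = [recent[1], obj]
    return hits / trials

# Objects A..D each map to one rewarded response; the fixed cycle gives
# every object a consistent, predictive predecessor context.
acc = learn(sequence=["A", "B", "C", "D"],
            rewarded={"A": 0, "B": 1, "C": 2, "D": 3})
```

Because past objects are folded into the state, reward updates reinforce their representations too, which is the mechanism the abstract attributes to the accelerating effect of predictive context.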