Recent advances in the field of artificial intelligence have revived longstanding debates about the interaction between humans and technology. These debates have tended to center around the ability of computers to exceed the capacities and understandings of human decisionmakers, and the resulting effects on the future of labor, inequality, and society more generally. These questions have found particular resonance in finance, where computers already play a dominant role. High-frequency traders, quantitative (or "quant") hedge funds, and robo-advisors all represent, to a greater or lesser degree, real-world instantiations of the impact that artificial intelligence is having on the field. This Article, however, takes a somewhat contrarian position. It argues that the primary danger of artificial intelligence in finance is not so much that it will surpass human intelligence, but rather that it will exacerbate human error. It will do so in three ways. First, because current artificial intelligence techniques rely heavily on identifying patterns in historical data, use of these techniques will tend to lead to results that perpetuate the status quo (a status quo that exhibits all the features and failings of the external market). Second, because some of the most "accurate" artificial intelligence strategies are the least transparent or explainable ones, decisionmakers may well give more weight to the results of these algorithms than they are due. Finally, because much of the financial industry depends not just on predicting what will happen in the world, but also on predicting what other people will predict will happen in the world, it is likely that small errors in applying artificial intelligence (either in data, programming, or execution) will have outsized effects on markets. This is not to say that artificial intelligence has no place in the financial industry, or even that it is bad for the industry. 
It clearly is here to stay, and, what is more, has much to offer in terms of efficiency, speed, and cost. But as governments and regulators begin to take stock of the technology, it is worthwhile to consider artificial intelligence's real-world limitations.