“…Nor can it apply the rigorous disciplinary frame of the ontology; we add this in our ontology mapping application. But it can help offer feedback in support of the development of a well-formed clinical narrative to support an educational and medical objective that we would characterize as "critical clinical thinking" (Cope et al., 2022).…”
This paper analyzes the scope of Artificial Intelligence (AI) from the perspective of a multimodal grammar. Its focal point is Generative AI, a technology that puts so-called Large Language Models to work. The first part of the paper analyzes Generative AI, based as it is on the statistical probability of one token (a word or part of a word) following another. If the relation of tokens is meaningful, this is circumstantial and no more, because its mechanisms of statistical analysis eschew any theory of meaning. This is the case not only for the written text that Generative AI leverages, but by extension image and multimodal forms of meaning that it can generate. The AI can only work with non-textual forms of meaning after applying language labels, and to that extent is captive not only to the limits of probabilistic statistics but the limits of written language as well. While acknowledging gains arising from the brute statistical power of Generative AI, in its second part the paper goes on to map what is lost in its statistical and text-bound approaches to multimodal meaning-making. Our measure of these gains and losses is guided by the concept of grammar, defined here as a theory of the elemental patterns of meaning in the world—not just written text and speech, but also image, space, object, body, and sound. Ironically, a good deal of what is lost by Generative AI is computable. The third and final part of the paper briefly discusses educational applications of Generative AI. Given both its power and intrinsic limitations, we have been experimenting with the application of Generative AI in educational settings and the ways it might be put to pedagogical use. How does a grammatical analysis help us to identify the scope of worthwhile application? 
Finally, if more of human experience is computable than can be captured in text-bound AI, how might it be possible at the level of code to create a synthesis in which grammatical and multimodal approaches complement Generative AI?
“…When the medical profession talks about our bodies, it uses the shared rubric of the International Statistical Classification of Diseases and Related Health Problems. Jobs are classified by occupation (SOC: Standard Occupational Classification) [30]. Products for sale are identified by article numbers (EAN: International Article Number) and sorted into kinds of product.…”
Artificial intelligence (AI) is emerging as a defining technology of our time, a source of fear as often as inspiration. Immersed in its practicalities, rarely do we get to ask the question, what is it? How does it impact our lives? How does it extend our human capacities? What are its risks? What are its limits? This paper is a theoretical and historical overview of the nature of binary computing that underpins AI and its relations with human intelligence. It also considers some philosophical questions about the semiotic or sense-creating work of computers. Our argument proceeds in five steps. We begin with an historical background: since Ada Lovelace, we have wondered about the intelligence of machines capable of computation, and the ways in which machine intelligence can extend human intelligence. Second, we ask, in what ways does binary computing extend human intelligence and delimit the scope of AI? Third, we propose a grammar with which to parse the practical meanings that are enabled with and through binary computing. Through this discussion, we raise the question of ontology as a counter-balance to what we will argue has been an over-emphasis on the instrumental reasoning processes of the algorithm. Fourth, we situate binary computing in the context of broad developments in modern societies which we characterize as a series of systems transitions: from industrial, to informational, to a new phase that we term “cyber-social.” Finally, we explore the risks inherent in a pervasively cyber-social system. These are narrowly captured in the technical domain, “cybersecurity.” We set out to reconceive this problem framework as the location for a potential solution, supplementing analyses of cybersecurity risk with a program of cyber-social trust.
“…Such critical epistemological questioning can be more easily incorporated within the social sciences and humanities, but is also essential in technical, hard-science fields, as the chapter 'Maps of Medical Reason: Applying Knowledge Graphs and Artificial Intelligence in Medical Education and Practice' (Cope et al., 2022) discusses. Utilizing both philosophies of ontology and more positivistic technical aspects of biodigital technologies, the authors map the possibilities of using AI within the medical professions with essential humanistic groundings that are, unfortunately, often absent.…”