This review article summarizes the history of the Hungarian Scientific Cloud Infrastructure project. This research infrastructure was officially launched on 1 October 2016, funded by the Hungarian Academy of Sciences. With the support of ELKH, the infrastructure's capacity has been substantially boosted, and the features and workflows it offers to scientists were significantly expanded to mark the arrival of 2022. The article reviews the types of work Hungarian researchers have carried out on the infrastructure, thereby providing an overview of the state of cloud-computing-enabled science in Hungary.
This paper investigates the current wave of Artificial Intelligence Ethics Guidelines (AIGUs). The goal is not to provide a broad survey of the details of such efforts; instead, the reasons for the proliferation of such guidelines are investigated. Two main research questions are pursued. First, what is the justification for the proliferation of AIGUs, and what are the reasonable goals and limitations of such projects? Second, which concerns about AI are so unique that general technology regulation cannot cover them? The paper reveals that the development of AI guidelines is part of a decades-long trend of an ever-increasing, expressed need for stronger social control of technology, and that many of the concerns of the AIGUs are not specific to the technology itself, but are rather about transparency and human oversight. Nevertheless, the positive potential of the situation is that the intense worldwide focus on AIGUs will yield guidelines so profound that the regulation of other technologies may want to follow suit.
This paper elaborates on the connection between the AI regulation fever and the generic concept of Social Control of Technology. According to this analysis, the amplitude of the regulatory efforts may reflect the lock-in potential of the technology in question. Technological lock-in refers to the ability of a limited set of actors to force subsequent generations onto a certain technological trajectory, hence evoking a new interpretation of Technological Determinism. The nature of digital machines amplifies their lock-in potential as the multiplication and reuse of such technology is typically almost cost-free. I sketch out how AI takes this to a new level because it can be software and an autonomous agent simultaneously.
This paper takes stock of the various factors that cause the design-time opacity of autonomous systems' behaviour. These factors include embodiment effects, the design-time knowledge gap, human factors, emergent behaviour, and tacit knowledge. This situation is contrasted with the usual representation of moral dilemmas, which assumes perfect information. Since perfect information is not achievable, the traditional moral-dilemma representations are not valid, and the whole problem of designing ethical autonomous systems proves to be far more empirical than previously understood.
If you see Wikipedia as one of the main places where the knowledge of mankind is concentrated, then DBpedia, which is extracted from Wikipedia, is the best place to find the machine representation of that knowledge. DBpedia constitutes a major part of the semantic data on the web. Its sheer size and wide coverage enable its use in many kinds of mashups: it contains biographical, geographical, and bibliographical data, as well as discographies, movie metadata, technical specifications, links to social media profiles, and much more. Just like Wikipedia, DBpedia is a truly cross-language effort, providing descriptions and other information in many languages. In this chapter we introduce its structure, contents, and connections to outside resources. We describe how the structured information in DBpedia is gathered, what you can expect from it, and what its characteristics and limitations are. We analyze how other mashups exploit DBpedia and present best practices for its usage. In particular, we describe how Sztakipedia, an intelligent writing aid based on DBpedia, can help Wikipedia contributors improve the quality and integrity of articles. DBpedia offers a myriad of ways of accessing the information it contains, ranging from SPARQL queries to bulk download. We compare the pros and cons of these methods. We conclude that DBpedia is an unavoidable resource for applications dealing with commonly known entities such as notable persons and places, and for applications looking for a rich hub connecting other semantic resources.

Introduction

In this section, we take a closer look at Wikipedia itself, and then examine the process by which DBpedia extracts information from it.

Wikipedia

By now, Wikipedia is a vast, ubiquitous collaborative encyclopedia counting over 10 million articles in over 200 languages. Readers are very active: Wikipedia receives over 10 billion page views per month and over 200 thousand edits per day. However, growth in article count and number of contributions no longer seems to be exponential for the largest, English-language edition.¹ For our purposes, contrasting Wikipedia with traditional printed works is not essential, but it allows us to draw attention to some of its key characteristics. Wikipedia is not governed by a formal editorial board, but instead by the community and its self-imposed guidelines, decision-making and escalation processes. Unavoidably, the coverage of articles in a given language edition is biased towards the public interest of the Wikipedians speaking that language. The English-language Wikipedia has been found to be on a par in accuracy with Encyclopaedia Britannica [12], and with peer-reviewed medical journals [25]. Furthermore, Wikipedia has the unmatched ability to cover current events and incorporate changes in near real time. Also, Wikipedia is free to download and hack for everyone. Like all digital documents, it has structural elements, such as lists and tables. Like encyclopedias, it also has a category system. Furthermore, it contains many infoboxes, structured schemas that communica...
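To make the access methods mentioned in the DBpedia chapter concrete, the following is a minimal sketch of retrieving data over DBpedia's public SPARQL endpoint with the SPARQLWrapper Python library; the endpoint URL, the library, and the example resource dbr:Budapest are illustrative assumptions, not details taken from the chapter.

# Minimal sketch (assumptions: public endpoint at https://dbpedia.org/sparql,
# SPARQLWrapper installed via `pip install sparqlwrapper`, and dbr:Budapest
# chosen as an arbitrary example resource).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?abstract WHERE {
        dbr:Budapest dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)

# Execute the query and print the English-language abstract of the resource.
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["abstract"]["value"])

The same information is also available through the bulk dumps the chapter compares; the SPARQL route is simply the lightest-weight way to try DBpedia out.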
This article investigates controversial online marketing techniques that involve buying hundreds or even thousands of fake social media items, such as likes on Facebook, Twitter and Instagram followers, Reddit upvotes, mailing list subscriptions, and YouTube subscribers and likes. The findings presented here are based on an analysis of 7,426 "campaigns" posted on the crowdsourcing platform microworkers.com over a one-year (365-day) period. These campaigns contained a combined 1,856,316 microtasks with a net budget of USD 208,466.