Agent-based computing is a diverse research domain concerned with building intelligent software based on the concept of "agents". In this paper, we use scientometric analysis to examine all sub-domains of agent-based computing. Our data consist of 1,064 journal articles indexed in the ISI Web of Knowledge and published during the twenty-year period 1990-2010. These were retrieved using a topic search with various keywords commonly used in sub-domains of agent-based computing. In our proposed approach, we employ a combination of two applications for analysis, namely Network Workbench and CiteSpace: Network Workbench allowed for analysis of the complex-network aspects of the domain, while detailed visualization-based analysis of the bibliographic data was performed using CiteSpace. Our results include the identification of the largest keyword-based cluster, the timeline of publication of index terms, the core journals, and key subject categories. We also identify the core authors and the top countries of origin of the manuscripts, along with the core research institutes. Finally, our results reveal a strong presence of agent-based computing in a number of non-computing scientific domains, including the life sciences, ecological sciences, and social sciences.
Background: Living systems are associated with social networks, networks made up of nodes, some of which may be more important in various respects than others. While various quantitative measures labeled "centralities" have previously been used in the network analysis community to identify influential nodes in a network, it is debatable how valid these centrality measures actually are. In other words, the research question that remains unanswered is: how exactly do these measures perform in the real world? For example, if the centrality of a particular node identifies it as important, is the node actually important?

Purpose: The goal of this paper is not merely to perform a traditional social network analysis, but rather to evaluate different centrality measures by conducting an empirical study of exactly how network centralities correlate with data from published multidisciplinary network data sets.

Method: We take standard published network data sets, using a random network to establish a baseline. These data sets include Zachary's Karate Club network, a dolphin social network, and the neural network of the nematode Caenorhabditis elegans. Each data set was analyzed in terms of different centrality measures and compared with existing knowledge from the associated published articles to review the role of each centrality measure in determining influential nodes.

Results: Our empirical analysis demonstrates that, in the chosen network data sets, nodes with a high Closeness Centrality also had a high Eccentricity Centrality, and a high Degree Centrality correlated closely with a high Eigenvector Centrality. Betweenness Centrality, by contrast, varied with network topology and did not demonstrate any noticeable pattern. In terms of identifying key nodes, we found that, compared with the other centrality measures, Eigenvector and Eccentricity Centralities were better able to identify important nodes.
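A minimal sketch of the kind of centrality comparison described above, written here with networkx; the abstract does not specify the tooling used, so the library choice is an assumption, while the Zachary's Karate Club data set follows the abstract:

```python
# Compare the five centrality measures named in the abstract on
# Zachary's Karate Club (networkx is an assumed tool, not the study's).
import networkx as nx

G = nx.karate_club_graph()

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
    # Eccentricity is a distance measure; invert it so larger = more central.
    "eccentricity": {n: 1.0 / e for n, e in nx.eccentricity(G).items()},
}

# Rank nodes under each measure and inspect how the top entries overlap.
for name, scores in centralities.items():
    top = sorted(scores, key=scores.get, reverse=True)[:5]
    print(f"{name:>12}: top nodes {top}")
```

Comparing the top-ranked nodes across measures is one simple way to observe the correlations the abstract reports, such as closeness tracking eccentricity.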
Agent-based modeling and simulation tools provide a mature platform for the development of complex simulations. However, they have not been widely applied in mainstream modeling and simulation of computer networks. In this article, we evaluate whether and how these tools can add value in the modeling and simulation of complex networks such as pervasive computing systems, large-scale peer-to-peer systems, and networks involving considerable environment and human/animal/habitat interaction. Specifically, we demonstrate the effectiveness of NetLogo, a tool that has been widely used in the area of agent-based social simulation.
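To illustrate the kind of model such tools support, here is a minimal sketch (not from the article, and written in Python rather than NetLogo for self-containment) of gossip-style message spread over a peer-to-peer overlay; the peer count, attachment parameter, and forwarding probability are illustrative assumptions:

```python
# Agent-style gossip spread on a scale-free peer-to-peer overlay.
# All parameter values are illustrative, not taken from the article.
import random
import networkx as nx

def simulate_gossip(n_peers=100, attach=3, p_forward=0.5, steps=20, seed=42):
    """Spread a message from peer 0 and return the fraction of peers reached."""
    rng = random.Random(seed)
    overlay = nx.barabasi_albert_graph(n_peers, attach, seed=seed)
    informed = {0}  # peer 0 originates the message
    for _ in range(steps):
        newly = set()
        for peer in informed:
            for neighbor in overlay.neighbors(peer):
                if neighbor not in informed and rng.random() < p_forward:
                    newly.add(neighbor)
        informed |= newly
    return len(informed) / n_peers

print(f"coverage after 20 steps: {simulate_gossip():.0%}")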
Lemaitre's continuum damage model is well known in the field of damage mechanics. The anisotropic damage model given by Lemaitre is relatively simple, applicable to non-proportional loads, and uses only four damage parameters. The hypothesis of strain equivalence is used to map the effective stress to the nominal stress. Both the isotropic and anisotropic damage models of Lemaitre are implemented in an in-house implicit finite element code. The damage model is coupled with an elasto-plastic material model using anisotropic plasticity (Hill-48 yield criterion) and strain-rate-dependent isotropic hardening. Because the Lemaitre continuum damage model is based on the small-strain assumption, the model is implemented in an incremental co-rotational framework to make it applicable to large strains. The damage dissipation potential was slightly adapted to incorporate different damage evolution behavior under compression and tension. A tensile test and a low-cycle fatigue test were used to determine the damage parameters. The damage evolution was modified to incorporate strain-rate sensitivity by making two of the damage parameters functions of strain rate. The model is applied to predict failure in a cross-die deep drawing process, which is well known for exhibiting a wide variety of strains and strain-path changes. The failure predictions obtained from the anisotropic damage model are in good agreement with the experimental results, whereas the predictions obtained from the isotropic damage model are slightly conservative. The anisotropic damage model also predicts the crack direction more accurately than predictions based on principal stress directions using the isotropic damage model. The set of damage parameters, determined under uniaxial conditions, gives good failure predictions under other triaxiality conditions.
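For context, a standard statement of the isotropic Lemaitre relations referred to above, in assumed notation; the article's exact formulation, including its tension/compression split of the dissipation potential, may differ:

```latex
% Standard isotropic Lemaitre relations (assumed notation; the article's
% exact forms may differ).
\begin{align}
  \tilde{\sigma} &= \frac{\sigma}{1 - D}
    && \text{(strain equivalence: effective stress)} \\
  \dot{D} &= \left(\frac{Y}{S}\right)^{s} \dot{p}
    \quad \text{for } p \ge p_D, \ \text{until } D = D_c \\
  Y &= \frac{\tilde{\sigma}_{\mathrm{eq}}^{2}\, R_\nu}{2E}, \qquad
  R_\nu = \tfrac{2}{3}(1+\nu)
        + 3(1-2\nu)\left(\frac{\sigma_H}{\sigma_{\mathrm{eq}}}\right)^{2}
\end{align}
```

In this standard form, the four damage parameters mentioned in the abstract would correspond to the damage resistance S, the exponent s, the damage threshold p_D, and the critical damage D_c, though the article may parameterize them differently.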
Background: Computer networks tend to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices, ranging from mobile phones to household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems.

Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks, termed Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of agent-based modeling, as part of the Cognitive Agent-based Computing (CABC) framework, to model a complex communication network problem.

Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we also introduce a company-defined computer usage policy.

Results: The experiments demonstrate two important results: first, a CABC-based modeling approach such as agent-based modeling can be an effective way to model complex problems in the domain of the IoT; second, the specific problem of managing carbon footprint can be solved using a multi-agent system approach.
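A minimal sketch of the per-machine agent idea described above, assuming a simple agent design; the power figures, office hours, and policy rule are illustrative assumptions, not values from the study:

```python
# Per-machine agents track energy use and enforce a usage policy
# (all numbers below are illustrative assumptions).
IDLE_WATTS, ACTIVE_WATTS = 60.0, 120.0
OFFICE_HOURS = range(9, 18)  # assumed company policy window

class MachineAgent:
    """Agent for one machine: accumulates consumption, applies the policy."""
    def __init__(self, name):
        self.name = name
        self.kwh = 0.0

    def tick(self, hour, active):
        watts = ACTIVE_WATTS if active else IDLE_WATTS
        if not active and hour not in OFFICE_HOURS:
            watts = 0.0  # policy: power down idle machines outside office hours
        self.kwh += watts / 1000.0  # one simulated hour

agents = [MachineAgent(f"pc{i}") for i in range(3)]
for hour in range(24):
    for a in agents:
        a.tick(hour, active=(hour in OFFICE_HOURS and hour % 2 == 0))

print(f"fleet consumption: {sum(a.kwh for a in agents):.2f} kWh/day")
```

Comparing the total with and without the policy branch gives a simple baseline for the kind of footprint reduction such an architecture targets.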
We live in a time when electronic gadgets and integrated sensors are all around us, from versatile smartphones and tablets to portable PCs, and from indoor temperature regulators to microwave ovens. We live in a new world, a world of "smart*", where intelligence and connectivity are added to every conceivable object. The vision of the Internet of Things (IoT) by Ashton (2009) appears to have manifested itself, albeit in unexpected ways. This emergence of the IoT in our everyday lives has numerous implications, resulting in a very different environment and society. Considering that the IoT concept is itself quite new, it is understandable that it is difficult to model. Researchers from the communication systems area often focus primarily ...

Abstract: Sensors, coupled with transceivers, have quickly evolved from technologies purely confined to laboratory test beds to workable solutions used across the globe. These mobile and connected devices form the nuts and bolts required to fulfill the vision of the so-called Internet of Things (IoT). This idea has evolved as a result of the proliferation of electronic gadgets fitted with sensors, often uniquely identifiable (for example, through technological solutions such as Radio Frequency Identifiers). While there is a growing need for comprehensive modeling paradigms as well as example case studies for the IoT, currently no standard methodology is available for modeling such real-world, complex IoT-based scenarios. Here, using a combination of complex-networks-based and agent-based modeling approaches, we present a novel approach to modeling the IoT. Specifically, the proposed approach uses the Cognitive Agent-Based Computing (CABC) framework to simulate complex IoT networks. We demonstrate the modeling of several standard complex network topologies, such as lattice, random, small-world, and scale-free networks. To further demonstrate the effectiveness of the proposed approach, we also present a case study and a novel algorithm for autonomous monitoring of power consumption in networked IoT devices. We also discuss and compare the presented approach with previous approaches to modeling. Extensive simulation experiments using several network configurations demonstrate the effectiveness and viability of the proposed approach.
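For illustration, the four standard topologies named in the abstract can be generated as follows with networkx; the library choice and network sizes are assumptions, not the study's implementation:

```python
# Generate the four standard complex network topologies named above
# (sizes and parameters are illustrative assumptions).
import networkx as nx

n = 100
topologies = {
    "lattice":     nx.grid_2d_graph(10, 10),                  # 2-D lattice
    "random":      nx.erdos_renyi_graph(n, p=0.05, seed=1),   # Erdos-Renyi
    "small-world": nx.watts_strogatz_graph(n, k=4, p=0.1, seed=1),
    "scale-free":  nx.barabasi_albert_graph(n, m=2, seed=1),
}
for name, g in topologies.items():
    print(f"{name:>11}: {g.number_of_nodes()} nodes, "
          f"{g.number_of_edges()} edges")
```

Each graph can then serve as the substrate on which simulated IoT device agents exchange and report power-consumption readings.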