This paper introduces Ciw, an open-source Python library for conducting discrete event simulations. The strengths of the library are illustrated in terms of best practice and reproducibility for computational research. An analysis of Ciw's performance and a comparison with several alternative discrete event simulation frameworks are presented.
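Ciw's own API is not reproduced here; as an illustration of the kind of model a discrete event simulation library targets, the following is a minimal hand-rolled M/M/1 queue (Poisson arrivals, exponential service, one FIFO server) in plain Python. All names and parameters are illustrative, not Ciw's.

```python
import random

def simulate_mm1(arrival_rate, service_rate, n_customers, seed=0):
    """Simulate an M/M/1 queue and return each customer's time
    spent waiting in the queue before service begins."""
    rng = random.Random(seed)    # seeded for reproducibility
    arrival = 0.0                # current customer's arrival time
    server_free = 0.0            # when the server next becomes idle
    waits = []
    for _ in range(n_customers):
        arrival += rng.expovariate(arrival_rate)   # next arrival event
        start = max(arrival, server_free)          # wait if server is busy
        server_free = start + rng.expovariate(service_rate)
        waits.append(start - arrival)
    return waits

waits = simulate_mm1(arrival_rate=1.0, service_rate=2.0, n_customers=1000)
mean_wait = sum(waits) / len(waits)
```

Fixing the random seed, as above, is one of the reproducibility practices the paper emphasises: the same seed yields the same trajectory on every run.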
The Axelrod library is an open-source Python package that allows for reproducible game theoretic research into the Iterated Prisoner's Dilemma. This area of research began in the 1980s but suffers from a lack of documentation and test code. The goal of the library is to provide such a resource, with facilities for the design of new strategies and interactions between them, as well as for conducting tournaments and ecological simulations for populations of strategies. With a growing collection of 139 strategies, the library is also a platform for an original tournament that, in itself, is of interest to the game theoretic community. This paper describes the Iterated Prisoner's Dilemma, the Axelrod library and its development, and insights gained from some novel research.
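To make the Iterated Prisoner's Dilemma concrete, here is a standalone sketch of a single iterated match, using the standard payoff values (R=3, T=5, S=0, P=1). This is not the Axelrod library's API; every name below is illustrative.

```python
# Payoff to the row player for (my_move, opponent_move):
# mutual cooperation R=3, temptation T=5, sucker S=0, punishment P=1.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, opp_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    return "C" if not opp_history else opp_history[-1]

def always_defect(my_history, opp_history):
    return "D"

def play_match(strategy_a, strategy_b, rounds=10):
    """Play an iterated match and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Tit For Tat loses only the first round, then both sides defect.
play_match(tit_for_tat, always_defect, rounds=10)  # → (9, 14)
```

A tournament, as run by the library, is then just a round robin of such matches with scores aggregated per strategy.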
Aim: We aimed to compare the performance of machine learning models and neural networks in predicting R0 resection (R0), length of stay > 14 days (LOS), major complication rates at 30 days postoperatively (COMP) and survival greater than 1 year (SURV) for patients undergoing pelvic exenteration for locally advanced and recurrent rectal cancer. Method: A deep learning computer was built and the programming environment established. The PelvEx Collaborative database was used, which contains anonymized data on patients who underwent pelvic exenteration for locally advanced or locally recurrent colorectal cancer between 2004 and 2014. Logistic regression, a support vector machine and an artificial neural network (ANN) were trained. Twenty per cent of the data were held out as a test set for calculating prediction accuracy for R0, LOS, COMP and SURV. Model performance was measured by plotting receiver operating characteristic (ROC) curves and calculating the area under the ROC curve (AUROC). Results: Machine learning models and ANNs were trained on 1147 cases. The AUROC for all outcome predictions ranged from 0.608 to 0.793, indicating modest to moderate predictive ability. The models performed best at predicting LOS > 14 days, with an AUROC of 0.793 using preoperative and operative data. Visualized logistic regression model weights indicate a varying impact of variables on the outcome in question. Conclusion: This paper highlights the potential for predictive modelling of large international databases. Current data allow moderate predictive ability for both complex ANNs and more classic methods.
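The AUROC used to score these models has a simple probabilistic reading: it is the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch of that computation (not the paper's actual evaluation code) via the Mann-Whitney statistic:

```python
def auroc(labels, scores):
    """Area under the ROC curve computed as the probability that a
    random positive is scored above a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairwise "wins" of positives over negatives.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # → 0.75
```

On this scale, 0.5 is chance level and 1.0 is perfect separation, which is why the reported range of 0.608 to 0.793 reads as modest to moderate.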
Cross-lingual embeddings are vector space representations in which word translations tend to be co-located. These representations enable learning transfer across languages, thus bridging the gap between data-rich languages such as English and others. In this paper, we present and evaluate a suite of cross-lingual embeddings for the English–Welsh language pair. To train the bilingual embeddings, a Welsh corpus of approximately 145 M words was combined with an English Wikipedia corpus. We used a bilingual dictionary to frame the problem of learning bilingual mappings as a supervised machine learning task, where a word vector space is first learned independently on a monolingual corpus, after which a linear alignment strategy is applied to map the monolingual embeddings to a common bilingual vector space. Two approaches were used to learn monolingual embeddings, namely word2vec and fastText. Three cross-language alignment strategies were explored, namely cosine similarity, inverted softmax and cross-domain similarity local scaling (CSLS). We evaluated different combinations of these approaches using two tasks: bilingual dictionary induction and cross-lingual sentiment analysis. The best results were achieved using monolingual fastText embeddings and the CSLS metric. We also demonstrated that by including a few automatically translated training documents, the performance of a cross-lingual text classifier for Welsh can be increased by approximately 20 percentage points.
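The CSLS metric that performed best here corrects plain cosine similarity for "hubness": vectors sitting in dense regions are penalised by subtracting each word's average similarity to its k nearest neighbours in the other language's space. A minimal pure-Python sketch of the formula (the paper's own implementation details are not reproduced; vector values and names below are illustrative):

```python
import math

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mean_topk_sim(vec, space, k):
    """Mean cosine similarity of vec to its k nearest neighbours in space."""
    sims = sorted((cosine(vec, w) for w in space), reverse=True)
    return sum(sims[:k]) / k

def csls(x, y, target_space, source_space, k=2):
    """CSLS(x, y) = 2 cos(x, y) - r_T(x) - r_S(y), where r is the mean
    similarity to the k nearest neighbours in the other language's space."""
    return (2 * cosine(x, y)
            - mean_topk_sim(x, target_space, k)
            - mean_topk_sim(y, source_space, k))
```

For dictionary induction, each source word is then matched to the target word maximising this score, rather than raw cosine similarity.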