Elevation in the Andes is a defining factor in the structure of fish assemblages and the ecosystems they inhabit. Assemblages in the Magdalena River are structured by the interaction between ecosystem type, elevation, and the rainfall cycle. This chapter analyzes the ecological information available from different sources on the fishes of the Magdalena River and presents some characteristics of their assemblages. The highest species richness is found in the lowlands of the Magdalena River basin, particularly in lotic ecosystems (rivers and streams), while the greatest differences in species composition between water bodies are observed in areas above 1,600 m a.s.l. This confers particular features on the assemblages in terms of species richness within each trophic guild, of life-history strategies, and of the functional niches around which the patterns of the ichthyofauna of the northwestern Andes are structured. Four major trophic guilds are recognized in the basin: carnivores, detritivores, omnivores, and planktivores. The carnivore guild is the richest in species. The most diverse life-history strategy in this Andean basin is the opportunistic one. These patterns of species richness by ecosystem, trophic guild, and life-history strategy mean that the Magdalena River assemblage is ultimately structured into 25 functional niches and, again, the greatest number of niches is found in areas below 1,200 m a.s.l. Knowledge of fish ecology in the Magdalena River is still incipient, but the available information can already inform decision-making by the different stakeholders in the basin.
Programmers use various software development artifacts in their work, such as programming environments, design documents, and source code. These software artifacts can be studied and improved based on usability and User eXperience (UX) factors. In this paper, we consider programmers to be a specific case of users and analyze different elements that influence their experience in this specific context. We conducted a systematic literature review of papers published over the last ten years related to 1) the definition of the Programmer eXperience (PX); 2) the PX, UX, and usability factors regarding programming environments, design documents, and source code; and 3) sets of heuristics to evaluate the software development artifacts mentioned above. We analyzed 73 articles, and the results show that: 1) the main elements that influence the PX are programmers' motivation and their choice of tools, such as programming environments; 2) most of the identified studies (59%) aimed to evaluate the influence of the PX, UX, and usability on programming environments; 3) the majority of the studies (70%) used methods such as usability tests and/or heuristic evaluation; and 4) four sets of heuristics are used to evaluate software development artifacts related to programming environments, programming languages, and application programming interfaces. The results suggest that further research in this area is necessary to better understand and evaluate the concept of the PX.
Applicability of Giraph and Hadoop for the Processing of Big Graph

This article presents a performance comparison of the tools Hadoop and Giraph for the analysis and processing of large volumes of information, or Big Data, with the aim of showing their usefulness for Big Graph processing. The analysis and processing of large volumes of information represents a real challenge nowadays. Free methodologies and tools for Big Data processing already exist, such as the two compared here: Hadoop for processing large volumes of mainly unstructured data, and Giraph for processing large graphs, or Big Graph. In this comparison, the paper presents an analysis of the practical execution-time cost of implementing the PageRank algorithm, which ranks Web pages according to their relevance, and of algorithms for finding a minimum spanning tree in a graph. Experiments show that using Giraph for Big Graph processing reduces the execution time by 25% with respect to the results obtained using Hadoop.
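To make the benchmarked computation concrete, here is a minimal single-machine sketch of the PageRank iteration on a hypothetical toy graph. This is only an illustration of the algorithm itself, not the distributed Hadoop or Giraph implementation the experiments measure, and the graph and parameter values are assumptions.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """graph: dict mapping each node to a list of its outgoing neighbors."""
    n = len(graph)
    ranks = {node: 1.0 / n for node in graph}
    for _ in range(iterations):
        # Every node keeps the teleportation share (1 - d) / n
        new_ranks = {node: (1.0 - damping) / n for node in graph}
        for node, neighbors in graph.items():
            if neighbors:
                # Split this node's damped rank evenly among its out-links
                share = damping * ranks[node] / len(neighbors)
                for neighbor in neighbors:
                    new_ranks[neighbor] += share
            else:
                # Dangling node: distribute its rank evenly over all nodes
                for other in graph:
                    new_ranks[other] += damping * ranks[node] / n
        ranks = new_ranks
    return ranks

# Toy web graph (illustrative): A links to B and C, B to C, C back to A
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(web)
```

In the distributed setting, each iteration corresponds to one superstep (Giraph) or one MapReduce round (Hadoop), which is where the execution-time difference reported above arises.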
Worldwide, the coronavirus pandemic has intensified the management problems of health services, significantly harming patients. Among the most affected processes have been the prevention, diagnosis, and treatment of cancer patients. Breast cancer is the most affected, with more than 20 million cases and at least 10 million deaths by 2020. Various studies have been carried out to support the management of this disease globally. This paper presents a decision support strategy for health teams based on machine learning (ML) tools and explainability (XAI) algorithms. The main methodological contributions are: first, the evaluation of different ML algorithms for classifying patients with and without cancer from the available dataset; and second, an ML methodology combined with an XAI algorithm, which makes it possible to predict the disease and to interpret the variables and how they affect patients' health. The results show that, first, the XGBoost algorithm has the best predictive capacity, with an accuracy of 0.813 on the training data and 0.81 on the test data; and second, with the SHAP algorithm it is possible to identify the relevant variables and their level of significance in the prediction, and to quantify their impact on the clinical condition of the patients, which will allow health teams to offer early and personalized alerts for each patient.
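The attribution that SHAP provides can be illustrated with exact Shapley values computed by brute force for a tiny model. This is a sketch of the underlying idea only: the paper applies the SHAP library to a trained XGBoost classifier, whereas the toy linear "risk score", its weights, and the baseline below are assumptions chosen so the result is easy to verify by hand.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for predict() at point x.

    Features absent from a coalition are replaced by their baseline
    value; phi_i averages feature i's marginal contribution over all
    coalitions, weighted by the standard Shapley coefficient.
    """
    n = len(x)

    def value(coalition):
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                coalition = set(subset)
                phi += weight * (value(coalition | {i}) - value(coalition))
        phis.append(phi)
    return phis

# Toy linear "risk score" (illustrative): for a linear model the Shapley
# value of feature i reduces to w_i * (x_i - baseline_i)
weights = [2.0, -1.0, 0.5]
predict = lambda z: sum(w * v for w, v in zip(weights, z))
phis = shapley_values(predict, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

The efficiency property holds: the attributions sum to the difference between the prediction at `x` and at the baseline, which is what lets the health team read each feature's contribution to a patient's predicted risk.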
In Chile and worldwide, the supply of medical hours available to provide care has been reduced by the health crisis caused by COVID-19. As of December 2021, the outlook in Chile was critical in both medical and surgical care: 1.7 million people were waiting for care, and the average wait for surgery had risen from 348 to 525 days. This occurs mainly when the demand for care exceeds the supply available in the public system, which has caused serious problems for patients who remain on hold; health teams have responded with prioritization measures so that patients are treated on time. In this paper, we propose a network-based methodology for predicting the prioritization of patients on surgical waiting lists (SWL), combined with a machine learning scheme, for a high-complexity hospital (HCH) in Chile; the prediction is linked to the risk of each waiting patient. The work presents the following contributions. The first is a network method that predicts the priority order of anonymized patients entering the SWL. The second is a dynamic quantification of the risk of waiting patients. The third is a patient selection protocol based on a dynamic update of the SWL that combines prioritization, risk, and clinical criteria. The optimization of the process was measured by simulating the total system times at the HCH. The proposed prioritization strategy saves medical hours, allowing 20% additional surgeries to be performed and thus reducing the SWL by 10%; the risk of waiting patients could drop by up to 8% annually. We hope to implement this methodology in real health care units.
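A dynamic selection protocol of the kind described above can be sketched as a re-scored priority queue: each patient carries a predicted priority plus a waiting-time risk that grows as days accumulate, and the next patients are drawn from a heap on every scheduling cycle. The field names, the risk proxy, and the 0.7/0.3 score weighting below are illustrative assumptions, not the paper's actual model.

```python
import heapq
from dataclasses import dataclass

@dataclass
class Patient:
    pid: str
    priority: float    # ML-predicted priority in [0, 1]; higher = more urgent
    days_waiting: int  # updated dynamically as the patient waits

def risk(p, horizon_days=525):
    # Assumed risk proxy: waiting time relative to the average surgical wait,
    # capped at 1.0
    return min(p.days_waiting / horizon_days, 1.0)

def next_patients(waiting_list, k):
    """Return the k patients with the highest combined priority/risk score."""
    # heapq is a min-heap, so scores are negated; pid breaks ties deterministically
    scored = [(-(0.7 * p.priority + 0.3 * risk(p)), p.pid, p) for p in waiting_list]
    heapq.heapify(scored)
    return [heapq.heappop(scored)[2] for _ in range(min(k, len(scored)))]

# Hypothetical waiting list
swl = [
    Patient("P1", priority=0.9, days_waiting=100),
    Patient("P2", priority=0.4, days_waiting=600),
    Patient("P3", priority=0.8, days_waiting=348),
]
selected = next_patients(swl, k=2)
```

Re-running the selection each cycle with updated `days_waiting` reproduces the dynamic-update behavior: a patient whose clinical priority is moderate can still rise to the top of the list as their waiting risk grows.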