Deep learning is one of the most widely adopted machine learning techniques, used in many applications such as image classification, image analysis, clinical archives, and object recognition. With the extensive use of digital images as clinical information, hospital archives of medical images are growing exponentially. Digital images play a vital role in predicting disease severity, and medical images have broad applications in diagnosis and investigation. Owing to recent developments in imaging technology, classifying medical images automatically remains an open research problem in computer vision. Choosing a suitable classifier is essential for assigning medical images to their relevant classes, since image classification aims to predict the appropriate class or category of unseen images. The main drawbacks of low-level features are their limited discriminative ability and domain-specific categorization; a semantic gap exists between low-level features (machine understanding) and high-level perception (human understanding). In this research, a novel image representation method is proposed in which a model is trained to classify medical images using deep learning. A pre-trained deep convolutional neural network is applied, with a fine-tuning approach on its last three layers. The experimental results show that our method is well suited to classifying medical images of various body organs. In this manner, it can generalize to other medical classification applications, supporting radiologists' efforts to improve diagnosis.
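The fine-tuning setup described above can be sketched conceptually. The snippet below is a minimal NumPy illustration, not the paper's actual network: a frozen random projection stands in for the pretrained convolutional layers, and only a softmax classification head (the retrained "last layers") is updated on toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen projection whose weights
# are never updated. In the paper's setting this would be the early
# layers of a deep CNN pretrained on a large image corpus.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def extract_features(x):
    """Frozen feature extractor (weights fixed during fine-tuning)."""
    return np.maximum(x @ W_frozen, 0.0)  # ReLU activation

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "medical image" data: flattened patches with class-dependent means.
n_classes = 3
X = rng.normal(size=(90, 64)) + np.repeat(np.eye(3), 30, axis=0) @ rng.normal(size=(3, 64))
y = np.repeat(np.arange(3), 30)
Y = np.eye(n_classes)[y]

# Trainable classification head: the only part that is fine-tuned.
W_head = np.zeros((16, n_classes))
feats = extract_features(X)
for _ in range(300):  # gradient descent on the head only
    probs = softmax(feats @ W_head)
    W_head -= 0.05 * feats.T @ (probs - Y) / len(X)

acc = (softmax(feats @ W_head).argmax(axis=1) == y).mean()
```

The design point is that the backbone's weights never receive gradients, so only the small head must be learned from the (typically limited) medical dataset.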
Nonfunctional requirements are often ignored because functional requirements are considered more important, because knowledge and methodologies for accomplishing nonfunctional requirement elicitation are lacking, and because of the nature of the agile software development process. This neglect of nonfunctional requirements causes projects to fail. Furthermore, cloud computing helps to practice the twelve agile principles, including nonfunctional requirement elicitation. This study proposes a semi-automated methodology for eliciting nonfunctional requirements in an agile development and cloud computing environment. The methodology uses an NLP-based automatic NFR extraction approach to speed up the NFR elicitation process. It is evaluated by applying it to the eProcurement dataset, and the results improve on those of existing studies.
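At its simplest, an NLP-based NFR extraction step of this kind can be sketched as keyword matching against per-category lexicons. The categories and word lists below are hypothetical placeholders for illustration only, not the study's actual pipeline:

```python
import re

# Hypothetical indicator lexicon: the study's real NLP model and word
# lists are not specified here, so these entries are illustrative.
NFR_INDICATORS = {
    "performance": {"fast", "response", "latency", "throughput", "seconds"},
    "security":    {"encrypt", "authenticate", "secure", "password"},
    "usability":   {"intuitive", "usable", "accessible", "learn", "interface"},
}

def classify_requirement(text):
    """Tag a requirement sentence with the NFR categories it mentions."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(cat for cat, words in NFR_INDICATORS.items() if tokens & words)

reqs = [
    "The system shall encrypt all stored passwords.",
    "Search results must appear within two seconds.",
    "Users should be able to learn the interface without training.",
]
labels = [classify_requirement(r) for r in reqs]
# labels → [['security'], ['performance'], ['usability']]
```

A production pipeline would replace the lexicon lookup with a trained text classifier, but the input/output contract (requirement sentence in, NFR categories out) stays the same.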
Waxy corn starch was utilized as a substitute for 1,4-BDO at different molar ratios. Incorporation of starch into the polyurethane (PU) was monitored by FTIR spectroscopy, and structural characterization was carried out by NMR spectroscopy. GPC was used to confirm the role of starch as a chain extender. The thermal degradation behavior of the PUs was influenced by the use of starch.
In natural language processing, text summarization is an important application used to extract desired information by reducing large texts. Existing studies use keyword-based algorithms for grouping text, which do not capture the documents' actual theme. Our proposed dynamic corpus creation mechanism combines metadata with summarized extracted text. The proposed approach analyzes a mesh of multiple unstructured documents and generates a linked set of weighted nodes by applying multistage clustering. We generate adjacency graphs to link the clusters of various collections of documents. The approach comprises ten steps: pre-processing, making multiple corpora, first-stage clustering, creating sub-corpora, interlinking sub-corpora, creating a PageRank keyword dictionary for each sub-corpus, second-stage clustering, path creation among clusters of sub-corpora, text processing by forward and backward propagation, and results generation. The outcome of this technique is a set of sub-corpora interlinked through clusters. We applied our approach to a news dataset; processing the interlinked corpus through step-by-step clustering finds the most relevant parts of the corpus at lower cost and time, with improved content detection. We applied six different metadata processing combinations over multiple text queries to compare results during our experimentation. The comparison of text satisfaction shows that PageRank keywords give 38% related text, single-stage clustering gives 46%, two-stage clustering gives 54%, and the proposed technique gives 67% associated text. Furthermore, this approach covers the relevant data over a range from most to least relevant content. It provides a systematic query-relevant corpus processing mechanism that automatically selects the most relevant sub-corpus through dynamic path selection.
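The PageRank keyword dictionary named among the steps above can be sketched as PageRank over a word co-occurrence graph, where words appearing in the same sentence are linked. This is a minimal illustration with a crude length-based stopword filter; the paper's actual tokenization and ranking details are not specified here:

```python
from collections import defaultdict
from itertools import combinations

def pagerank_keywords(sentences, damping=0.85, iters=50):
    """Rank words by PageRank over a sentence-level co-occurrence graph."""
    graph = defaultdict(set)
    for sent in sentences:
        words = {w.lower().strip(".,") for w in sent.split()}
        words = {w for w in words if len(w) > 3}  # crude stopword filter
        for a, b in combinations(words, 2):
            graph[a].add(b)
            graph[b].add(a)
    # Power iteration on the undirected co-occurrence graph.
    rank = {w: 1.0 for w in graph}
    for _ in range(iters):
        rank = {
            w: (1 - damping) + damping * sum(rank[v] / len(graph[v]) for v in graph[w])
            for w in graph
        }
    return sorted(rank, key=rank.get, reverse=True)

docs = [
    "Election results were announced in the capital.",
    "The capital hosted a summit on election security.",
    "Sports fans gathered for the championship final.",
]
keywords = pagerank_keywords(docs)
# "election" and "capital" bridge two sentences, so they rank highest.
```

Words that co-occur across many sentences accumulate rank, which is what makes the resulting dictionary useful for routing queries to the right sub-corpus.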
We used the SHAP model to evaluate the proposed technique, and the evaluation results show that the proposed mechanism improves text processing. Moreover, combining text summarization features yields satisfactory results compared with the summaries generated by general abstractive and extractive summarization models.
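SHAP attributions are based on Shapley values; for a model with only a few features they can be computed exactly by enumerating feature coalitions. The sketch below uses a toy linear scorer as a stand-in for the evaluated text-processing model (for a linear model, the Shapley value of feature i reduces to w_i * (x_i - baseline_i)):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one prediction, by enumerating all
    coalitions; absent features are set to their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (predict(with_i) - predict(without))
    return phi

# Toy linear scorer standing in for the evaluated model.
def predict(z):
    return 2.0 * z[0] + 1.0 * z[1] - 3.0 * z[2]

phi = shapley_values(predict, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# For this linear scorer, phi recovers the weights: [2.0, 1.0, -3.0].
```

The SHAP library approximates these values efficiently for large models; the exhaustive enumeration here is only feasible for a handful of features but makes the definition explicit.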
The success of rules learned through data mining depends highly on their actionability: how useful they are for performing suitable actions in a real business environment. To improve rule actionability, researchers initially presented various Data Mining (DM) frameworks focusing on factors only from the business domain dataset. Afterward, Domain-Driven Data Mining (D3M) frameworks were introduced, focusing on domain knowledge factors from the context of the overall business environment. Despite considering these dataset and domain knowledge factors in different phases of their frameworks, the learned rules still lacked actionability. The objective of our research is to improve the actionability of the learned rules. For this purpose, we analyzed: (1) what overall actions or tasks are performed in the overall business process, (2) in which sequence the different tasks are performed, (3) under what conditions these tasks are performed, (4) by whom the tasks are performed, and (5) what data is provided and produced in performing these tasks. We observed that including rule learning factors only from the dataset or from domain knowledge is not sufficient. Our Process-based Domain-Driven Data Mining-Actionable Knowledge Discovery (PD3M-AKD) framework details its different phases for considering and including additional factors from these five perspectives of the business process. The PD3M-AKD framework also aligns with the existing phases of current DM and D3M frameworks for considering and including dataset and domain knowledge accordingly. Finally, we evaluated and validated our case study results on different real-life scenarios from the education, engineering, and business process domains.

INDEX TERMS Actionable knowledge, business process, data mining, data mining framework, domain-driven data mining framework, data privacy.