“…The current state-of-the-art approaches for research article classification 9 – 14 employ conventional statistical measures such as one-hot encoding, BOW, and TF-IDF. As a result, they do not consider semantics and context, which may affect the classification decision.…”
Section: Methods (mentioning)
confidence: 99%
“…In this approach, they combine the structural information (title, abstract) of a research paper with its citations, which has produced substantial gains in document classification. Sajid et al 14 proposed a fuzzy logic-based classifier for classifying research papers in the Computer Science domain. For their experiments they selected the JUCS dataset because it covers all areas of the Computer Science domain.…”
Section: Literature (mentioning)
confidence: 99%
“…Some renowned publishers, such as ACM and IEEE, have not made their articles publicly available in full. In such scenarios, some scholars have turned to metadata as an alternative means of categorizing research papers 12 – 14 . Metadata of research articles, such as the title, keywords, key terms, authors, and categories, is freely available online.…”
Section: Introduction (mentioning)
confidence: 99%
“…The goal of text representation is to convert unstructured text into a numerical form that can be processed mathematically. The current state-of-the-art approaches use traditional statistical measures such as Term Frequency (TF), Bag of Words (BOW), and Term Frequency-Inverse Document Frequency (TF-IDF) 9 – 14 . As a result, they overlook the semantic and contextual information of keywords, potentially leading to the incorrect categorization of research publications.…”
Section: Introduction (mentioning)
confidence: 99%
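The contrast these snippets draw between count-based measures and semantic representations can be made concrete. Below is a minimal Python sketch using scikit-learn and gensim; the toy corpus, hyperparameters, and printed comparison are illustrative assumptions, not taken from the cited papers. A BOW matrix treats every term as an independent dimension, while Word2Vec learns dense vectors in which related terms can score high cosine similarity.

    # Toy contrast between a bag-of-words matrix and Word2Vec embeddings.
    # Corpus and hyperparameters are illustrative only.
    from sklearn.feature_extraction.text import CountVectorizer
    from gensim.models import Word2Vec

    docs = [
        "deep neural network for image classification",
        "convolutional network applied to image recognition",
        "database query optimization and indexing",
    ]

    # BOW: every term is an independent dimension, so documents that use
    # different but related vocabulary share no signal at all.
    bow = CountVectorizer().fit_transform(docs)
    print(bow.toarray())

    # Word2Vec: terms get dense vectors learned from co-occurrence, so
    # related terms can end up close in the vector space.
    model = Word2Vec([d.split() for d in docs],
                     vector_size=50, window=3, min_count=1, seed=1)
    print(model.wv.similarity("classification", "recognition"))

On a three-document corpus the learned similarity is of course noise; the point is only the difference in representation, sparse orthogonal counts versus dense learned vectors.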
“…In the present state-of-the-art 12 – 14 , researchers either ask domain experts for similarity threshold values or set arbitrary values, and then validate them on the dataset through trial and error, which is a time-consuming operation. Dependence on domain specialists or arbitrary values is insufficient for this purpose.…”
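A data-driven alternative to expert-set thresholds, of the kind the proposed model pursues, can be sketched as follows. This is one plausible instantiation only, assuming the threshold is read off the distribution of cosine similarities between training articles and their known categories; the percentile rule, function name, and arguments are hypothetical, not the authors' exact procedure.

    import numpy as np

    def data_driven_threshold(doc_vecs, cat_vecs, true_cats, percentile=10):
        """Hypothetical rule: set the threshold at a low percentile of the
        cosine similarities between training documents and their known
        (true) category vectors, so no expert-supplied value is needed."""
        sims = []
        for vec, cat in zip(doc_vecs, true_cats):
            c = cat_vecs[cat]
            sims.append(vec @ c / (np.linalg.norm(vec) * np.linalg.norm(c)))
        return float(np.percentile(sims, percentile))

The intuition is that an article should only receive a label if it is at least as similar to that category as genuine members of the category typically are.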
Every year, around 28,100 journals publish 2.5 million research publications. Search engines, digital libraries, and citation indexes are used extensively to search these publications. When a user submits a query, it returns a large number of documents, of which only a few are relevant. Due to inadequate indexing, the resulting documents are largely unstructured. Most publicly available systems index research papers by keywords rather than by subject hierarchy. Numerous methods reported for performing single-label classification (SLC) or multi-label classification (MLC) are based on content and metadata features. Content-based techniques achieve better results because of the richness of available features, but their drawback is that the full text is unavailable in most cases. Metadata-based parameters, such as the title, keywords, and general terms, serve as an alternative to content. However, existing metadata-based techniques exhibit low accuracy because they express textual properties quantitatively with traditional statistical measures, such as BOW, TF, and TFIDF, which cannot establish the semantic context of words. Existing MLC techniques also require a specified threshold value to map articles into predetermined categories, for which domain knowledge is necessary. The objective of this paper is to overcome these limitations of SLC and MLC techniques. To capture the semantic and contextual information of words, the proposed approach leverages the Word2Vec paradigm for textual representation. The proposed model determines threshold values through rigorous data analysis, obviating the need for domain expertise. Experimentation is carried out on two datasets from the field of computer science (JUCS and ACM). In comparison to current state-of-the-art methodologies, the proposed model performed well: experiments yielded average accuracies of 0.86 and 0.84 on JUCS and ACM for SLC, and 0.81 and 0.80 on JUCS and ACM for MLC. On both datasets, the proposed SLC model improved accuracy by up to 4%, while the proposed MLC model improved it by up to 3%.
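To make the abstract's pipeline concrete, here is a hedged end-to-end sketch in Python with gensim: metadata tokens are embedded by averaging Word2Vec vectors (an assumed aggregation; the paper may combine word vectors differently), categories are embedded the same way, and cosine similarity drives both SLC (argmax) and MLC (all categories above the learned threshold). The function names are illustrative, not from the paper.

    import numpy as np
    from gensim.models import Word2Vec

    def embed(tokens, model):
        # Average the Word2Vec vectors of in-vocabulary tokens
        # (averaging is an assumption, not the paper's stated method).
        vecs = [model.wv[t] for t in tokens if t in model.wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def classify(article_tokens, category_tokens, model, threshold):
        doc = embed(article_tokens, model)
        sims = {c: cosine(doc, embed(toks, model))
                for c, toks in category_tokens.items()}
        slc = max(sims, key=sims.get)                  # single best category
        mlc = [c for c, s in sims.items() if s >= threshold]  # all above cutoff
        return slc, mlc

Under this reading, SLC and MLC share one representation and one similarity computation; they differ only in the final decision rule, which is why a data-derived threshold removes the last place where domain expertise would otherwise be required.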