2018
DOI: 10.1007/s10619-018-7245-1
DataSynapse: A Social Data Curation Foundry

Cited by 40 publications
(34 citation statements)
References 13 publications
“…This process scales the dataset to an interval of (−1,1). Here, we use PCA and K-means together because after applying dimensional reduction and normalization, the subspace spanning the principal direction is the same compared to the cluster centroid subspace [ 24 , 25 , 26 , 27 ]. For example, the following figure is an example of PCA analysis using VIRMOTIF on 3000 hepatitis B viruses.…”
Section: Methods
confidence: 99%
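The citing work above pairs PCA with K-means on the grounds that, after dimensionality reduction and normalisation, the top principal subspace coincides with the cluster-centroid subspace. The following is a minimal numpy sketch of that pipeline on synthetic two-cluster data (the VIRMOTIF hepatitis-B dataset mentioned in the quote is not reproduced here); the scaling, cluster count, and initialisation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a feature matrix: two well-separated
# 10-dimensional blobs of 50 points each.
X = np.vstack([
    rng.normal(0.0, 0.5, size=(50, 10)),
    rng.normal(3.0, 0.5, size=(50, 10)),
])

# Scale each feature into roughly (-1, 1), as the quote describes,
# then centre the data for PCA.
X = X / np.abs(X).max(axis=0)
Xc = X - X.mean(axis=0)

# PCA via SVD: the rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # project onto the top-2 principal subspace

# Minimal k-means (k=2) on the projected data. For a deterministic
# sketch, seed one centroid in each half of the data.
centroids = Z[[0, 50]]
for _ in range(20):
    labels = np.argmin(np.linalg.norm(Z[:, None] - centroids, axis=2), axis=1)
    centroids = np.array([Z[labels == k].mean(axis=0) for k in range(2)])
```

With well-separated blobs, the cluster assignments recovered in the 2-D principal subspace match the generating groups, which is the behaviour the cited equivalence result predicts.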
“…The explosive growth in the amount of available information has created a challenge of curating and analyzing that information in order to find valuable and useful insights. The process of curating the information and preparing it for analytics is important, as the derived knowledge can be a vital asset for organizations and governments [14]. This knowledge may refer to a set of facts, information, and insights which are extracted and curated from raw data [98].…”
Section: Domain-specific Recommenders
confidence: 99%
“…• data-driven, which enables leveraging Artificial Intelligence and Machine Learning technologies to contextualize the Big Data generated on Open, Private and Social platforms/systems to improve the accuracy of recommendations [14]. The goal is to facilitate the use of content and collaborative filtering, and to focus on the shift from statistical modeling to deep learning-based modeling (Deep Learning Recommendation Models) to improve correlations between features and attributes and generate better predictions; • knowledge-driven, which enables mimicking the knowledge of domain experts using crowdsourcing techniques [15].…”
Section: Introduction
confidence: 99%
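The quote above mentions collaborative filtering as one of the data-driven techniques. A minimal sketch of the user-based variant, on a hypothetical rating matrix (not any dataset from the paper), predicts a missing rating as a cosine-similarity-weighted average over users who rated the item:

```python
import numpy as np

# Hypothetical user-item rating matrix (rows: users, cols: items);
# 0 means "not yet rated". Purely illustrative data.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def predict(R, user, item):
    """Predict a rating as a similarity-weighted average over the
    other users who rated the item."""
    sims = np.array([
        cosine(R[user], R[other]) if other != user else 0.0
        for other in range(len(R))
    ])
    rated = R[:, item] > 0          # users who actually rated the item
    weights = sims * rated
    if weights.sum() <= 0:
        return 0.0
    return float(weights @ R[:, item] / weights.sum())

score = predict(R, user=0, item=2)
```

Because user 0's ratings closely resemble user 1's, the prediction is pulled toward user 1's low rating for item 2 rather than user 2's high one; deep-learning recommendation models, as the quote notes, replace this fixed similarity with learned feature correlations.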
“…For example, if a user retweets a tweet on Twitter, it would be helpful to understand the text of the tweet, whether it contains an image or URL, and the keywords or entities (e.g., people, organisations, locations and products) and topics mentioned. In this context, data curation [144]-[146] (i.e., the task of preparing the raw data for analytics) can help in turning raw data into contextualised data and knowledge. For example, curating a raw tweet from Twitter can tell us that the tweet contains a mention of a person named Barack Obama (using entity extraction and coreference resolution techniques [147]) who was the 44th president of the United States (using linking techniques [148] to link this entity to external knowledge sources such as Wikidata 7 ).…”
Section: Approach Time Complexity
confidence: 99%
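The curation step described in the quote above (extract an entity mention, then link it to a knowledge source) can be sketched with a toy gazetteer. Real pipelines use trained NER, coreference-resolution, and entity-linking models; the tiny hand-made knowledge base below is a hypothetical stand-in, not the paper's method, though Q76 is the actual Wikidata identifier for Barack Obama.

```python
# Toy knowledge base mapping entity mentions to linked facts.
# Purely illustrative; a real system would query Wikidata itself.
KNOWLEDGE_BASE = {
    "Barack Obama": {
        "type": "Person",
        "description": "44th president of the United States",
        "wikidata": "Q76",
    },
}

def curate(tweet: str) -> dict:
    """Annotate a raw tweet with any gazetteer entities it mentions,
    turning raw text into a contextualised record."""
    entities = [
        {"mention": name, **info}
        for name, info in KNOWLEDGE_BASE.items()
        if name in tweet
    ]
    return {"text": tweet, "entities": entities}

record = curate("Barack Obama spoke today.")
```

The returned record carries the original text plus the linked entity, which is the "contextualised data and knowledge" the quoted passage describes; extraction by substring match is the simplification here.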