2019
DOI: 10.1016/j.websem.2018.08.002
Decentralized Collaborative Knowledge Management Using Git

Abstract: The World Wide Web and the Semantic Web are designed as a network of distributed services and datasets. The distributed character of the Web brings manifold collaborative possibilities to interchange data. The commonly adopted collaborative solutions for RDF data are centralized (e.g. SPARQL endpoints and wiki systems). But to support distributed collaboration, a system is needed that supports divergence of datasets, brings the possibility to conflate diverged states, and allows distributed datasets to be sy…
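The abstract's two core requirements, divergence of datasets and conflation of diverged states, can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's actual implementation: an RDF graph is modeled simply as a Python set of (subject, predicate, object) triples, so a three-way merge of two diverged copies against their common ancestor reduces to set arithmetic.

```python
# Minimal sketch (assumption: NOT the paper's actual merge implementation).
# An RDF graph is modeled as a set of (subject, predicate, object) triples.

def three_way_merge(base, ours, theirs):
    """Conflate two diverged dataset states against a common ancestor."""
    added = (ours - base) | (theirs - base)    # triples either side added
    removed = (base - ours) | (base - theirs)  # triples either side removed
    return (base - removed) | added

base = {("ex:Alice", "foaf:knows", "ex:Bob")}
ours = base | {("ex:Alice", "foaf:age", "30")}  # we add a triple
theirs = set()                                  # they delete the original

merged = three_way_merge(base, ours, theirs)
print(merged)  # {('ex:Alice', 'foaf:age', '30')}
```

Note that this sketch merges unconditionally; a real system working at triple granularity additionally needs conflict detection for cases where both sides modify the same triple in different ways.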

Cited by 18 publications (30 citation statements)
References 23 publications
“…Representative approaches are included in Rebele et al (2016), Calbimonte et al (2017), Narula et al (2018), Salatino et al (2018), Tommasini et al (2018) and in the recent related work of Arndt et al (2019).…”
Section: Introduction
confidence: 99%
“…It is therefore essential to extend current implementations to incorporate semantically-aware conflict detection and reconciliation between datasets with different metadata classification systems. The current state of the art relies on a supervised approach to identify and handle cases where users have made changes to parallel copies of data records (e.g., Arndt et al, 2019).…”
Section: A Model For Decentralized But Globally Coordinated Data Aggr…
confidence: 99%
“…Inspired by big science efforts like the Human Genome Project, major biodiversity initiatives have set the goal of aggregating all data about where and when different biological entities (most typically "species" in our context) are located, in order to provide critical insight into global problems such as rapid biodiversity loss and climate change (Peterson et al, 2010; Devictor and Bensaude-Vincent, 2016; IPBES, 2019; Wagner, 2020). However, there is an exceptionally large and heterogeneous set of stakeholders for this emerging biodiversity knowledge commons (Adams et al, 2002; Strandburg et al, 2017), making effective governance a critical, ongoing challenge (Alphandéry and Fortier, 2010; Turnhout et al, 2014). The present moment marks a pivotal opportunity to examine how a new, decentralized approach may better provide the "flexibility both to accommodate and to benefit from this diversity [of contributors], rather than seeking to implement a prescriptive programme of planned deliverables" (Hobern et al, 2019, p. 9), as recommended by a recent report from the second Global Biodiversity Informatics Conference.…”
Section: Introduction
confidence: 99%
“…The semantic diff of Archivo, based on (OWL) axiom diffs, goes a step further. Quit [2] implements an RDF versioning and collaboration system on top of Git. It provides unified access via SPARQL 1.1 to each version of an ontology and to the versioning history.…”
Section: Related Work
confidence: 99%
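The capability described in the quote above, unified query access to each version in a dataset's history, can be sketched in miniature. This is an illustrative toy, not Quit's actual API: versions are held in a plain dict from a hypothetical commit id to the full triple set at that version, and the query is a single triple pattern with None as a wildcard rather than real SPARQL 1.1.

```python
# Toy sketch of per-version query access (assumption: NOT Quit's real API).
# Each "commit" id maps to the full triple set of the dataset at that version.

history = {
    "c1": {("ex:Alice", "foaf:knows", "ex:Bob")},
    "c2": {("ex:Alice", "foaf:knows", "ex:Bob"),
           ("ex:Bob", "foaf:knows", "ex:Carol")},
}

def match(commit, s=None, p=None, o=None):
    """Evaluate one triple pattern (None = wildcard) against a version."""
    return {t for t in history[commit]
            if all(want is None or want == got
                   for want, got in zip((s, p, o), t))}

print(len(match("c1", p="foaf:knows")))  # 1
print(len(match("c2", p="foaf:knows")))  # 2
```

The same pattern evaluated against different commit ids returns different results, which is the essence of versioned query access; a full system would evaluate arbitrary SPARQL over a named graph per version instead of a single pattern.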