2014
DOI: 10.1007/s11192-014-1503-4

Errors in DOI indexing by bibliometric databases

Abstract: The DOI (Digital Object Identifier) is a character string that univocally identifies entities that are objects of intellectual property. In bibliometrics, DOIs are used to univocally identify scientific papers. The aim of this short communication is to raise the reader's awareness of bibliometric-database errors in DOI indexing, in particular the incorrect assignment of a single DOI to multiple papers. This error is quite interesting, since the DOI is commonly regarded as an effective means to identify scient…
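The error the paper describes, one DOI shared by several distinct records, can be screened for programmatically. A minimal sketch in Python (the record layout and sample data below are hypothetical, not from the paper):

```python
from collections import defaultdict

def find_shared_dois(records):
    """Group bibliographic records by DOI and return any DOI that is
    attached to more than one distinct paper (title used as a proxy
    for paper identity)."""
    by_doi = defaultdict(set)
    for rec in records:
        doi = rec.get("doi")
        if doi:
            # DOIs are case-insensitive, so normalize before grouping.
            by_doi[doi.strip().lower()].add(rec["title"])
    return {doi: titles for doi, titles in by_doi.items() if len(titles) > 1}

# Hypothetical sample mimicking the indexing error described above:
records = [
    {"doi": "10.1000/xyz123", "title": "Paper A"},
    {"doi": "10.1000/XYZ123", "title": "Paper B"},  # distinct paper, same DOI
    {"doi": "10.1000/abc456", "title": "Paper C"},
]
print(find_shared_dois(records))  # flags 10.1000/xyz123
```

Any DOI returned by such a check warrants manual inspection, since the duplication may sit either in the database's indexing or in the DOI registration itself.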

Cited by 45 publications (29 citation statements)
References 6 publications (7 reference statements)
“…Based on samples of duplicates identified in our study and in a recent study by Franceschini, Maisano, and Mastrogiacomo (2015), who found non-duplicate records with the same DOI in Scopus, it seems possible that Scopus is failing to check DOIs (see Figs. 2 and 3).…”
Section: Discussion
Confidence: 59%
“…The final reader count was the sum of the reader counts of all correctly matching articles (for more details, see Thelwall & Wilson, 2016). DOIs are not universal in citation databases (Gorraiz, Melero-Fuentes, Gumpenberger, & Valderrama-Zurián, 2016) and are usually correct (Franceschini, Maisano, & Mastrogiacomo, 2015).…”
Section: Data
Confidence: 99%
“…Preliminary testing had found that both Scopus and Microsoft Academic records contained some errors in author names, journal names and publication years (as previously found for Scopus: Franceschini, Maisano, & Mastrogiacomo, 2015b), which accounts for the higher recall for the title-only searches despite using approximate match queries (single equals signs in the queries). Some title differences are also likely between Microsoft Academic and Scopus because titles are not always recorded consistently.…”
Section: Reasons For Queries Returning Incorrect Or No Matches
Confidence: 62%