2016
DOI: 10.1016/j.ufug.2016.08.001
Public open space desktop auditing tool—Establishing appropriateness for use in Australian regional and urban settings

Cited by 21 publications (9 citation statements)
References 14 publications
“…Another possible reason that leads to the differences between greenness (NDVI) and the percentage of green space may be partly due to differences in the quality of green space. Green space quality can be assessed subjectively [ 30 ] or via objective assessment [ 65 , 66 ]. Quality can include a variety of aspects of green space, of which vegetation is one, and it is possible that NDVI captures an aspect of quality.…”
Section: Discussion
confidence: 99%
“…The images in the present study were on average more than 5 years old, and certain images from smaller roads and more rural areas were 10 years old. Previous research has indicated that non-arterial streets are more likely to lack photos or have outdated images, compared to more urban areas [ 19 , 73 ]. However, this problem could be partially mitigated since rural environments are thought to be more consistent than urban environments [ 51 ].…”
Section: Strengths and Weaknesses
confidence: 99%
“…Few studies have reported process information necessary to calculate a summary measure of rating time. 9,16,24,[31][32][33][34][35][36][37][38][39] Previously reported rating times per item range widely from an average of 7.5 seconds 32 to 135 seconds, 31 with the majority of studies reporting ≤15 seconds per item. 16,24,34,[36][37][38][39] Assuming these reporting times are representative of all GSV studies, the observed 7.3 seconds per item measured via the drop-and-spin protocol is among the fastest virtual neighborhood audit methods.…”
Section: Discussion
confidence: 99%