2021
DOI: 10.55599/ejssm.v10i3.60

The Characteristics of United States Hail Reports: 1955-2014

Abstract: The United States hail observation dataset maintained and updated annually by the Storm Prediction Center is one of the largest currently available worldwide and spans the period 1955-present. Despite its length, developing a climatology from this dataset is nontrivial because of numerous characteristics that are nonmeteorological in origin. Here, the main features and limitations of the dataset are explored, including the implications of an increasing frequency in the time series, approaches to spatial smoothing of observat…

Cited by 49 publications (59 citation statements); references 24 publications (54 reference statements).
“…While I agree with this interpretation [re: lack of separation of the reflectivity profiles with respect to different environments], a third potential hypothesis here (and I suspect it is a combination of all three) is that the non-reliability of size observations in Storm Data (noted by Blair et al 2017), the quantization of the size report data (Allen and Tippett 2015), and the relatively small sample size in SHAVE may mean that meaningful environmental discrimination is not possible: there are only a small number of observations for large hail sizes, especially in SHAVE. The analysis by Johnson and Sugden (2014) suggests that there is potential for better discrimination given sufficient observations of the desired threshold.…”
Section: Substantive Comments
confidence: 75%
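The "quantization of the size report data" noted above refers to observed diameters piling up on a handful of familiar reference values. The sketch below is a minimal synthetic illustration of that effect; the list of reference sizes, the lognormal size distribution, and all printed numbers are assumptions made for illustration, not values from Allen and Tippett (2015).

```python
# Illustrative sketch (not from the cited paper): snapping continuous hail
# diameters to common reference sizes distorts the apparent size distribution.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" maximum hail diameters (inches); distribution is assumed.
true_sizes = rng.lognormal(mean=0.2, sigma=0.45, size=10_000)

# Commonly reported reference values (coin/ball analogues); list is assumed and partial.
reported_values = np.array(
    [0.75, 0.88, 1.00, 1.25, 1.50, 1.75, 2.00, 2.50, 2.75, 3.00, 4.00, 4.50]
)

# Each report is snapped to the nearest reference size.
idx = np.abs(true_sizes[:, None] - reported_values[None, :]).argmin(axis=1)
reported_sizes = reported_values[idx]

# The quantized record concentrates on a few values, shifting percentiles
# and threshold exceedance counts relative to the underlying sizes.
for q in (50, 90, 99):
    print(f"p{q}: true={np.percentile(true_sizes, q):.2f} in, "
          f"reported={np.percentile(reported_sizes, q):.2f} in")
print("distinct reported values:", np.unique(reported_sizes).size)
```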
“…
- Reporting sufficiency (Hales and Kelly 1985, Hales 1993, Amburn and Wolf 1997, Trapp et al 2006),
- Biases due to population and infrastructure, reporting sources, and report collection procedures (Hales 1993, Wyatt and Witt 1997, Davis and LaDue 2004, Dobur 2005, Hocker and Basara 2008, Allen and Tippett 2015),
- Hail-size accuracy (Schaefer et al 2004, Jewell and Brimelow 2009, Blair et al 2017), and
- Other, inexplicable inhomogeneities (Doswell et al 2005).
Witt et al (1998b) reported on the lack of null or nonsevere reports within Storm Data, and problems using Storm Data as an algorithm verification database due to these missing data.…”
Section: Surface Hail Databases
confidence: 99%
“…It is well known that AI algorithms tend to reinforce and solidify unintentional biases in data (O'Neil 2016; Benjamin 2019). Given that we know there are existing unintentional biases in weather data, such as the population biases shown in hail and tornado reports (Allen and Tippett 2015; Potvin et al 2019), one of the goals of AI2ES is to ensure that AI developers for weather, climate, and ocean applications have the knowledge and tools to create AI that can counteract these effects, to make the AI both ethical and responsible and to minimize bias. For example, we aim to develop a tool that would identify potential biases in data automatically to facilitate the developer counteracting these biases when training the AI model.…”
Section: Leveraging Physics-Based AI and Explainable AI
confidence: 99%
“…2, the predictions have high confidence, but there are instances of unrepresentative uncertainty [e.g., the probability of point (X1, X2) = (2, 2.5) is 50%, but should be 100%]. It is well-documented that all three severe storms hazards suffer from significant reporting biases (Trapp et al 2006; Allen and Tippett 2015; Potvin et al 2019). The resulting misclassified storms coupled with poorly sampled phase spaces in our training dataset plausibly explain why the tree-based methods produce fewer higher-confidence forecasts than do the logistic regression models.…”
Section: Performance Diagrams
confidence: 99%
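A minimal synthetic sketch of the label-noise argument in the excerpt above: if a fraction of genuinely severe cases never appears in the reports, the observed labels cap how confident any model can be, and the comparison prints how a tree ensemble and a logistic regression behave under that cap. The two-feature data, the 30% miss rate, and the scikit-learn models are assumptions for illustration, not the cited study's setup.

```python
# Illustrative sketch (assumed synthetic data and models, not the cited study):
# flipping a fraction of positive labels to negative -- a crude stand-in for
# unreported severe events -- limits the high-confidence end of the
# predicted probabilities for both model families.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
X = rng.normal(size=(n, 2))

# True event probability rises with both features (assumed logistic relation).
p_true = 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] + 1.5 * X[:, 1])))
y_true = rng.random(n) < p_true

# Simulated reporting bias: 30% of true events are never reported.
y_obs = y_true.copy()
missed = y_true & (rng.random(n) < 0.3)
y_obs[missed] = False

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=50,
                            random_state=0).fit(X, y_obs)
lr = LogisticRegression().fit(X, y_obs)

# With 30% of positives mislabeled, neither model can justifiably issue
# many forecasts near 1.0, even where the true probability is near 1.0.
X_test = rng.normal(size=(5_000, 2))
for name, model in (("random forest", rf), ("logistic regression", lr)):
    p = model.predict_proba(X_test)[:, 1]
    print(f"{name}: share of forecasts above 0.9 = {(p > 0.9).mean():.3f}, "
          f"max probability = {p.max():.2f}")
```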