“…For example, Candille and Talagrand (2008) use the quadratic divergence, $(f-g)^2$, in the case of forecasting a binary event (also Santos and Ghelli, 2012), Pappenberger et al (2009) use the Kullback–Leibler divergence (or relative entropy), $d(f,g)=\int g(x)\log\{g(x)/f(x)\}\,\mathrm{d}x$, and Friederichs and Thorarinsdottir (2012) propose the integrated quadratic distance, $d(F,G)=\int_{-\infty}^{\infty}\{F(x)-G(x)\}^2\,\mathrm{d}x$, where $F$ and $G$ are the cumulative distribution functions corresponding to $f$ and $g$. Thorarinsdottir et al (2013) list several other divergences, including the sub-class of ‘score divergences’ that are formed from proper scoring rules, $s$, in the following way: $d(f,g)=\mathrm{E}_y\{s(f,y)\}-\mathrm{E}_y\{s(g,y)\}$, where $y \sim g$. Score divergences thus differ from expected (proper) scoring rules by subtracting an amount, $\mathrm{E}_y\{s(g,y)\}$, which is independent of the forecast.…”
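As a hedged illustration of the score-divergence construction quoted above (function names are my own, not from the cited papers): for a binary event with forecast probability $f$ and outcome $y \sim \mathrm{Bernoulli}(g)$, the score divergence built from the Brier score $s(f,y)=(f-y)^2$ reduces to the quadratic divergence $(f-g)^2$, and the one built from the logarithmic (ignorance) score $s(f,y)=-\log f(y)$ reduces to the Kullback–Leibler divergence.

```python
import math

def expected_brier(f, g):
    """E_y{s(f, y)} for the Brier score s(f, y) = (f - y)^2, with y ~ Bernoulli(g)."""
    return g * (f - 1) ** 2 + (1 - g) * f ** 2

def expected_log_score(f, g):
    """E_y{s(f, y)} for the logarithmic (ignorance) score s(f, y) = -log f(y)."""
    return -(g * math.log(f) + (1 - g) * math.log(1 - f))

def score_divergence(expected_score, f, g):
    """d(f, g) = E_y{s(f, y)} - E_y{s(g, y)}, with y ~ g (the quoted definition)."""
    return expected_score(f, g) - expected_score(g, g)

f, g = 0.7, 0.4  # illustrative forecast and 'true' event probabilities

# Brier score divergence recovers the quadratic divergence (f - g)^2.
d_brier = score_divergence(expected_brier, f, g)  # equals (f - g)^2 up to rounding

# Log-score divergence recovers the Kullback-Leibler divergence KL(g || f).
kl = g * math.log(g / f) + (1 - g) * math.log((1 - g) / (1 - f))
d_log = score_divergence(expected_log_score, f, g)  # equals kl up to rounding
```

Both divergences are non-negative because the underlying scoring rules are proper: the expected score is minimised by issuing the true distribution $g$.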