2019
DOI: 10.48550/arxiv.1912.11060
Preprint

Pricing and hedging American-style options with deep learning

Sebastian Becker,
Patrick Cheridito,
Arnulf Jentzen

Abstract: This paper describes a deep learning method for pricing and hedging American-style options. It first computes a candidate optimal stopping policy. From there it derives a lower bound for the price. Then it calculates an upper bound, a point estimate and confidence intervals. Finally, it constructs an approximate dynamic hedging strategy. We test the approach on different specifications of a Bermudan max-call option. In all cases it produces highly accurate prices and dynamic hedging strategies yielding small h…
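The abstract's first two steps (a candidate stopping policy, then a lower price bound) can be illustrated with a minimal Monte Carlo sketch, not the paper's code: because any admissible stopping rule yields an expected discounted payoff at or below the true price, averaging payoffs under a candidate policy gives a lower-bound estimate with a central-limit confidence interval. The simple threshold rule below is a hypothetical stand-in for the paper's learned neural-network policy, and all parameter values are illustrative assumptions.

```python
import numpy as np

def lower_bound(n_paths=100_000, d=2, s0=100.0, K=100.0, r=0.05,
                sigma=0.2, T=3.0, n_dates=9, threshold=110.0, seed=0):
    """Monte Carlo lower bound for a Bermudan max-call under Black-Scholes,
    using a hypothetical threshold stopping rule in place of a learned policy."""
    rng = np.random.default_rng(seed)
    dt = T / n_dates
    S = np.full((n_paths, d), s0)            # current asset prices per path
    cash = np.zeros(n_paths)                 # discounted payoff per path
    alive = np.ones(n_paths, dtype=bool)     # paths not yet exercised
    df = 1.0                                 # running discount factor
    for k in range(1, n_dates + 1):
        z = rng.standard_normal((n_paths, d))
        S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        df *= np.exp(-r * dt)
        payoff = np.maximum(S.max(axis=1) - K, 0.0)
        # hypothetical policy: exercise when the max asset crosses a threshold,
        # or at maturity; the paper trains a neural network for this decision
        exercise = alive & ((S.max(axis=1) >= threshold) | (k == n_dates))
        cash[exercise] = df * payoff[exercise]
        alive &= ~exercise
    est = cash.mean()
    half = 1.96 * cash.std(ddof=1) / np.sqrt(n_paths)
    return est, (est - half, est + half)     # point estimate and 95% CI
```

Any suboptimal policy biases the estimate low, which is exactly why it serves as a lower bound; the paper pairs it with a dual upper bound to bracket the price.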


Cited by 2 publications (4 citation statements)
References 25 publications (48 reference statements)
“…This makes the phenomenon difficult to analyze, since the effect may lead to underestimated exposure, unchanged exposure (from offsetting effects), or overestimated exposure. In addition to the classical regression methods with, e.g., polynomial basis functions cited above, there are several papers in which a neural network takes the role of the basis functions; see, e.g., [21], [22], [23] and [24].…”
Section: The Distribution Of E Ber
confidence: 99%
“…The minus sign in the loss function transforms the problem from a maximization into a minimization, which is the standard formulation in the machine learning community. Note the straightforward relationship between the loss function and the average cash flows in (22). In practice, the data is often divided into mini-batches, for which the loss function is minimized consecutively.…”
Section: Training Phase
confidence: 99%
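The sign trick described in this citation can be shown with a toy example that assumes nothing from the cited paper: to maximize an average reward, one minimizes its negative with mini-batch gradient descent. The reward function and all parameter values below are illustrative assumptions.

```python
import numpy as np

# Toy illustration of the sign flip: maximize mean reward r(theta, x)
# by minimizing loss = -mean(r), processed mini-batch by mini-batch.
# Reward here is r = -(theta - x)^2, maximized at theta = sample mean.

rng = np.random.default_rng(1)
data = rng.normal(loc=3.0, scale=1.0, size=1024)

theta, lr, batch = 0.0, 0.1, 64
for epoch in range(50):
    rng.shuffle(data)
    for i in range(0, len(data), batch):
        x = data[i:i + batch]
        # loss = -mean(r) = mean((theta - x)^2); its gradient in theta:
        grad_loss = 2.0 * np.mean(theta - x)
        theta -= lr * grad_loss   # descending the loss ascends the reward
```

After training, theta sits near the sample mean, the maximizer of the average reward, which is the same mechanism by which minimizing the negated average cash flow maximizes the expected payoff.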
“…The main difference is that the optimization problem in (8), which is used to determine the evaluation points in (9), only depends on the random variable W, whereas the optimization problem (see (4)) used to determine the evaluation points in (6) in the LRV strategy depends on both the random variable W and the function φ.…”
Section: Introduction
confidence: 99%