2021
DOI: 10.1093/mnras/stab1368
Hierarchically modelling Kepler dwarfs and subgiants to improve inference of stellar properties with asteroseismology

Abstract: With recent advances in modelling stars using high-precision asteroseismology, the systematic effects associated with our assumptions of stellar helium abundance (Y) and the mixing-length theory parameter (αMLT) are becoming more important. We apply a new method to improve the inference of stellar parameters for a sample of Kepler dwarfs and subgiants across a narrow mass range ($0.8 < M < 1.2\, \rm M_\odot$). In this method, we include a statistical treatment of Y and αMLT. We develop a hier…


Cited by 14 publications (13 citation statements). References 106 publications (126 reference statements).
“…Hence, we apply a neural network mean function which is flexible enough to manage both simple and complex features to accelerate the training. We adopt an architecture based on that of Lyttle et al (2021) comprising 6 hidden layers and 128 nodes per layer. All layers are fully-connected and the output of each layer, except for the last, is transformed by the Exponential Linear Unit (ELU) activation function (Clevert et al 2015).…”
Section: A1 Mean Function
confidence: 99%
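The architecture quoted above is simple enough to sketch. Below is a minimal NumPy forward pass for a fully connected network with 6 hidden layers of 128 nodes, ELU activations on every layer except the last, and a linear output layer. The input and output dimensions are illustrative placeholders, not the exact model of Lyttle et al. (2021):

```python
import numpy as np

def elu(x, alpha=1.0):
    # Exponential Linear Unit (Clevert et al. 2015):
    # identity for x > 0, alpha * (exp(x) - 1) otherwise.
    return np.where(x > 0, x, alpha * np.expm1(x))

def init_mlp(n_in, n_out, n_hidden=128, n_layers=6, seed=0):
    # Layer widths: input -> 6 hidden layers of 128 nodes -> output.
    rng = np.random.default_rng(seed)
    sizes = [n_in] + [n_hidden] * n_layers + [n_out]
    return [(rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # ELU after every layer except the last, which stays linear.
    for W, b in params[:-1]:
        x = elu(x @ W + b)
    W, b = params[-1]
    return x @ W + b
```

For example, a network mapping 5 stellar inputs to 4 observables would be built with `init_mlp(5, 4)` and evaluated with `forward(params, x)` on a batch of inputs `x` of shape `(n_stars, 5)`.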
“…This approach offers flexibility to prior fundamental parameters in the sampling. For instance, Lyttle et al (2021) determined initial helium fraction and mixing-length parameters for a sample of Kepler dwarfs and subgiants with an artificial neural network to provide the generative model. This allowed them to prescribe prior distributions over the fundamental stellar parameters and, by extension, over population-level parameters such as a helium enrichment law.…”
Section: Introduction
confidence: 99%
“…This now resembles Bayes' theorem where plays the role of a population-informed prior (which also incorporates selection effects) for the parameters of event j. Crucially, this expression relies on the leave-one-out posterior P_pop(λ|{d_{i≠j}}), thus avoiding double-counting the event j in Eq. (11). Given the properties of the N_obs − 1 events we have observed so far, and what we know about the sensitivity of my instrument, the population-informed prior quantifies what we expect the N_obs-th event to look like.…”
Section: Population-informed Single-event Inference
confidence: 99%
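The leave-one-out construction quoted above can be sketched schematically as follows (this is an illustrative standard form of population-informed single-event inference, not necessarily the citing paper's exact Eq. (11); θ_j denotes the parameters of event j and λ the population-level parameters):

```latex
p(\theta_j \mid d_j, \{d_{i \neq j}\}) \propto
  \underbrace{p(d_j \mid \theta_j)}_{\text{single-event likelihood}}
  \int \underbrace{p(\theta_j \mid \lambda)}_{\text{population model}}
  \, \underbrace{P_{\rm pop}(\lambda \mid \{d_{i \neq j}\})}_{\text{leave-one-out posterior}}
  \, \mathrm{d}\lambda
```

The integral over λ is what the text calls the population-informed prior: it marginalizes the population model over the posterior built from all events except j, so event j is never used to inform its own prior.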
“…Hierarchical Bayesian models have been successfully applied to many astronomical data sets, including spectroscopic data for the determination of stellar ages [7], light curve [8] and radial velocity [9] data for the determination of exoplanet obliquities and eccentricities respectively, and asteroseismic data for the determination of stellar inclinations [10] and helium enrichment [11]. One benefit of hierarchical Bayesian models is that one can obtain improved measurements of the parameters of individual events by exploiting the fact that they are part of a large catalog, an approach that can be described as using a population-informed prior.…”
Section: Introduction
confidence: 99%