2022
DOI: 10.1051/0004-6361/202243135

Photometric redshift-aided classification using ensemble learning

Abstract: We present SHEEP, a new machine learning approach to the classic problem of astronomical source classification, which combines the outputs from the XGBoost, LightGBM, and CatBoost learning algorithms to create stronger classifiers. A novel step in our pipeline is that prior to performing the classification, SHEEP first estimates photometric redshifts, which are then placed into the dataset as an additional feature for classification model training; this results in significant improvement in the subsequent clas…
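The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's implementation: scikit-learn models stand in for the XGBoost/LightGBM/CatBoost learners, and all feature names and data here are hypothetical.

```python
# Toy sketch of a SHEEP-style pipeline: (1) regress a photometric redshift,
# (2) append it as an extra feature, (3) classify with a soft-voting ensemble.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, RandomForestRegressor,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 5))                               # toy photometric features
z = 0.3 * X[:, 0] + 0.5 + rng.normal(scale=0.05, size=n)  # toy redshift
y = (z > 0.5).astype(int)                                 # toy class label

X_tr, X_te, z_tr, z_te, y_tr, y_te = train_test_split(
    X, z, y, test_size=0.25, random_state=0)

# Stage 1: photometric-redshift regression.
photoz = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, z_tr)

# Stage 2: append the predicted redshift as an additional classification feature.
X_tr_aug = np.column_stack([X_tr, photoz.predict(X_tr)])
X_te_aug = np.column_stack([X_te, photoz.predict(X_te)])

# Stage 3: combine three distinct learners via soft voting.
clf = VotingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft").fit(X_tr_aug, y_tr)

acc = clf.score(X_te_aug, y_te)
print(acc)
```

The design point is that the redshift estimate carries information the raw photometry expresses only indirectly, so feeding it back in as a feature gives the classifiers an easier decision boundary.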

Cited by 19 publications (10 citation statements)
References 51 publications
“…In comparison to similar studies, our classification approach demonstrates better performance. Cunha & Humphrey (2022) obtained an average F1-score of 98.13 per cent using XGBoost, LightGBM, and CatBoost classifiers, and Chaini et al. (2022) used a combination of photometric information and images from SDSS to achieve a best averaged F1-score of 93.3 per cent through the use of artificial neural networks (ANN) and convolutional neural networks (CNN).…”
Section: Discussion
confidence: 99%
“…A majority of the literature on these two surveys in the case of photometric classification using machine learning, beginning from Suchkov et al. (2005), has focused on broad classifications or specific subtypes of astronomical objects. For instance, some studies have examined separating stars from galaxies (Ball et al. 2006; Vasconcellos et al. 2011; Kovacs & Szapudi 2015), distinguishing star/galaxy/QSO (Krakowski et al. 2016; Kurcz et al. 2016; Nakoneczny et al. 2019, 2021; Clarke et al. 2020; Cunha & Humphrey 2022; Chaini et al. 2022), examining ELGs (AGN/non-AGN, Seyfert I/Seyfert II) (Cavuoti et al. 2014), as well as classifying AGNs (X-ray AGNs, IR AGNs, radio-selected AGNs) (Chang et al. 2021). Despite significant progress in the field, a gap remains in the literature regarding the detailed classification of all astronomical objects using solely photometric data.…”
Section: Introduction
confidence: 99%
“…This workflow has potential far beyond neuropathology. Indeed, we believe it can provide utility in various pathology subfields, such as cancer and nephrology, and even in fields as far-flung as astronomy, which similarly processes exceptionally large and complex imaging data [67, 68]. Though even just within neuropathology, this work is likely only the beginning, and we hope the dataset generated and discussed herein, which has been made public, will be the foundation from which many workflows may be created.…”
Section: Discussion
confidence: 99%
“…Some typical methods are Polynomial Function Fitting (Connolly et al, 1995), Bayesian model (Benítez, 2000), Support Vector Machine (Wadadekar, 2005), K-Nearest Neighbor (Beck et al, 2016), Kernel Regression (Wang et al, 2008), Random Forest (Carrasco and Brunner, 2013), Active Learning (Han et al, 2015), Artificial Neural Network (Firth et al, 2003) and so on. The latest methods are Ensemble Learning (Cunha and Humphrey, 2022), Deep Capsule Network (Dey et al, 2022) and so on.…”
Section: Machine Learning
confidence: 99%
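Of the photometric-redshift methods listed in the statement above, k-nearest neighbours (Beck et al., 2016) is the simplest to sketch: predict a source's redshift as the mean redshift of its k nearest training sources in colour space. The data and colour-redshift relation below are synthetic and purely illustrative.

```python
# Toy k-nearest-neighbour photometric-redshift estimator in pure NumPy.
import numpy as np

rng = np.random.default_rng(1)
colours_train = rng.uniform(0.0, 2.0, size=(200, 3))  # toy colour indices
z_train = colours_train.sum(axis=1) / 6.0             # toy colour-z relation

def knn_photoz(colours, k=5):
    """Estimate redshift as the mean z of the k nearest training sources."""
    d = np.linalg.norm(colours_train - colours, axis=1)  # Euclidean distances
    nearest = np.argsort(d)[:k]                          # indices of k closest
    return z_train[nearest].mean()

z_hat = knn_photoz(np.array([1.0, 1.0, 1.0]))
```

With the synthetic relation above, a query at colours (1, 1, 1) should recover a redshift near 0.5; real pipelines weight neighbours by distance and report an uncertainty from the neighbour spread.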