2020 IEEE 23rd International Conference on Information Fusion (FUSION)
DOI: 10.23919/fusion45008.2020.9190462

A rules-based and Transfer Learning approach for deriving the Hubble type of a galaxy from the Galaxy Zoo data

Cited by 9 publications (3 citation statements)
References 4 publications
“…Variawa et al. (2020) [30] trained a ResNet50 model to predict the 37-class vectors in the Galaxy Zoo 1 dataset, achieving an RMSE of 0.0942 on the unseen test set. A decision tree, available in the original Galaxy Zoo 1 publication [3], was used to define a set of rules mapping the 37-class response vectors to Hubble types.…”
Section: A. The Galaxy Zoo Project
confidence: 99%
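The approach summarised in that statement can be illustrated with a short sketch. Below is a minimal example, assuming PyTorch/torchvision, of a ResNet50 backbone with a 37-output regression head optimised with MSE (so an RMSE can be reported on a held-out set), followed by a toy rules function standing in for the decision-tree mapping. The sigmoid output, learning rate, dummy tensors, and the thresholds in the rules function are illustrative assumptions, not details taken from Variawa et al. (2020) [30] or the published Galaxy Zoo decision tree [3].

```python
# Minimal sketch of a ResNet50 regressing the 37-dimensional response vector.
# Hyperparameters and dummy data are assumptions, not details from [30].
import torch
import torch.nn as nn
from torchvision import models

def build_vote_fraction_model(n_outputs: int = 37) -> nn.Module:
    """ResNet50 whose final layer regresses the 37-dimensional response vector."""
    backbone = models.resnet50(weights=None)           # ImageNet weights could also be used
    backbone.fc = nn.Sequential(
        nn.Linear(backbone.fc.in_features, n_outputs),
        nn.Sigmoid(),                                  # vote fractions lie in [0, 1]
    )
    return backbone

def hubble_type_from_votes(votes: torch.Tensor) -> str:
    """Illustrative rule only; the real mapping follows the Galaxy Zoo decision tree [3]."""
    return "elliptical" if votes[0] > 0.5 else "spiral"   # index 0: placeholder 'smooth' fraction

model = build_vote_fraction_model()
criterion = nn.MSELoss()                               # optimising MSE; RMSE reported at evaluation
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB cutouts.
images, targets = torch.randn(8, 3, 224, 224), torch.rand(8, 37)
loss = criterion(model(images), targets)
loss.backward()
optimiser.step()
```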
“…Transfer learning was employed, wherein the model trained to predict the 37-class response vectors served as initialisation for training a model on the RSA catalogue to predict the Hubble labels. Variawa et al. (2020) [30] reported that neither the rules-based nor the transfer-learning approach could reliably predict the Hubble types of galaxies in the RSA catalogue.…”
Section: A. The Galaxy Zoo Project
confidence: 99%
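The transfer-learning step described in that statement amounts to reusing the vote-fraction network as initialisation, swapping its head for a Hubble-type classifier, and fine-tuning on the RSA labels. The sketch below, again assuming PyTorch/torchvision, shows that recipe; the checkpoint file name, the number of Hubble classes, and the optimiser settings are hypothetical.

```python
# Hedged sketch of using the vote-fraction model as initialisation for Hubble-type
# prediction. Checkpoint name, class count, and optimiser settings are hypothetical;
# only the overall recipe follows the citation statement above.
import torch
import torch.nn as nn
from torchvision import models

# Recreate the source architecture (ResNet50 with a 37-output head) and load its weights.
net = models.resnet50(weights=None)
net.fc = nn.Linear(net.fc.in_features, 37)
# net.load_state_dict(torch.load("gz_vote_fraction_resnet50.pt"))  # hypothetical checkpoint

# Swap the head: the same features now predict a discrete Hubble type from the RSA catalogue.
n_hubble_classes = 10                                    # assumption: depends on label grouping
net.fc = nn.Linear(net.fc.in_features, n_hubble_classes)

criterion = nn.CrossEntropyLoss()                        # Hubble type as a classification target
optimiser = torch.optim.Adam(net.parameters(), lr=1e-5)  # small LR when fine-tuning all layers
```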
“…Transfer learning [13,14,15,16,17] allows knowledge derived from data-rich tasks to be applied to tasks, languages, or domains where data is limited. It consists of two steps: pretraining on one task or domain (the source), and domain adaptation, where the learned representations are reused for a different task, domain, or language (the target).…”
Section: Introduction
confidence: 99%
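As a generic illustration of the two steps named in that statement, the toy sketch below (Python/PyTorch assumed, arbitrary layer sizes) pretrains an encoder on a data-rich source task and then reuses its representations, here kept frozen, while a new head is trained for the low-resource target; whether the encoder stays frozen or is fine-tuned is a design choice not specified in the statement.

```python
# Toy illustration of the two-step recipe: (1) pretrain on the source task,
# (2) adapt to the target by reusing the learned representations.
# All layer sizes and class counts are arbitrary placeholders.
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stands in for any pretrained encoder
source_head = nn.Linear(64, 20)                         # source task: many labelled examples
# ... step 1: train encoder + source_head on the source task ...

for p in encoder.parameters():                          # step 2: domain adaptation; here the
    p.requires_grad = False                             # learned representations are kept frozen
target_head = nn.Linear(64, 3)                          # new head for the low-resource target
# ... train target_head (optionally unfreezing the encoder) on the target data ...
```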