2008
DOI: 10.1111/j.1365-2966.2008.13792.x

The Combined NVSS-FIRST Galaxies (CoNFIG) sample - I. Sample definition, classification and evolution

Abstract: The CoNFIG (Combined NVSS–FIRST Galaxies) sample is a new sample of 274 bright radio sources at 1.4 GHz. It was defined by selecting all sources with S(1.4 GHz) ≥ 1.3 Jy from the NRAO Very Large Array (VLA) Sky Survey (NVSS) in the north field of the Faint Images of the Radio Sky at Twenty centimetres (FIRST) survey. New radio observations obtained with the VLA for 31 of the sources are presented. The sample has complete Fanaroff–Riley (FRI/FRII) morphology identification; optical identifications and redshifts ar…


Cited by 28 publications (18 citation statements)
References 126 publications
“…The vast majority of radio sources brighter than ∼1 mJy at ∼GHz frequencies are powerful AGN that lie at far greater distance than the Fornax cluster (e.g. Magliocchetti et al. 2000; Gendre & Wall 2008; de Zotti et al. 2010). Nevertheless, it is desirable to confirm this for the sources used in our particular RM grid experiment, for reasons described in Section 1.…”
Section: Cluster-relative LOS Source Positions From Redshift Catalogues
confidence: 95%
“…Others have built upon the FR scheme and used more sophisticated criteria to classify the RGs, such as the presence of jets or "hot spots" toward the edge of the lobes (e.g., OL89; Leahy 1993; Gendre & Wall 2008). In the analyses presented below, we will seek the best way to distinguish various populations of RGs; in doing so we will find ourselves defining several subsets of class a (a_{0.9}, a_{<0.8}, a_{maj}, and a_{0.9,em}; see Table 2).…”
confidence: 99%
“…We construct the LRG sample by combining the CoNFIG catalog (Gendre & Wall 2008; Gendre et al. 2010), the FR0CAT catalog (Baldi et al. 2018), the FRICAT catalog (Capetti et al. 2017b), the FRIICAT catalog (Capetti et al. 2017a), and the catalogs of Cheung (2007) and Proctor (2011) (Table 1 and Table 2). To be specific, we first include all 392 compacts, 284 FRIs, 587 FRIIs, and 430 BTs labeled in the CoNFIG, FR0CAT, FRICAT, and FRIICAT catalogs in the LRG sample.…”
Section: Classification Tree and Sample Selection
confidence: 99%
“…The primary motivation of this paper is to develop an automatic open-source tool that can classify a large sample of radio galaxies based on morphological representation learning (LeCun et al. 2015) with high computational efficiency, small resource consumption, and powerful predictive capabilities, so as to enable relevant studies on portable computers. In order to do this we have designed a convolutional autoencoder (CAE) based on a deep CNN, pre-trained the CAE in an unsupervised manner with 14,245 unlabeled images of radio AGNs in the Best–Heckman sample (Best & Heckman 2012, BH12 hereafter), and fine-tuned the CAE in a supervised manner with images of 1442 radio AGNs that have already been labeled in the literature (Gendre & Wall 2008; Gendre et al. 2010; Capetti et al. 2017a,b; Baldi et al. 2018; Cheung 2007; Proctor 2011). Given that currently no large labeled radio galaxy (LRG) sample is available for neural network training, this approach (i.e., pre-training the network on a large unlabeled sample and then fine-tuning it in a supervised manner on a relatively small labeled sample) can best extract information hidden in the images and make the classification robust to morphological variability (e.g., Erhan et al. 2010; Hinton et al. 2012; Christodoulidis et al. 2016; see also LeCun et al. 2015 for a review).…”
Section: Introduction
confidence: 99%
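The two-stage scheme quoted above (unsupervised pre-training of an autoencoder, then supervised fine-tuning of a classifier on its frozen features) can be sketched in miniature. This is not the cited authors' CAE: the data are synthetic stand-ins for radio-galaxy images, the network is a toy tied-weight autoencoder rather than a deep convolutional one, and all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: unsupervised pre-training of a tiny tied-weight autoencoder ---
# (synthetic vectors standing in for unlabeled radio-AGN images)
X_unlab = rng.normal(size=(200, 16))      # 200 "images", 16 features each
W = rng.normal(scale=0.1, size=(16, 4))   # shared encoder/decoder weights

def encode(X, W):
    return np.tanh(X @ W)                 # latent representation

lr = 0.01
losses = []
for _ in range(200):
    H = encode(X_unlab, W)                # encode
    X_rec = H @ W.T                       # decode with tied weights
    err = X_rec - X_unlab                 # reconstruction error
    losses.append(np.mean(err ** 2))
    # manual gradient of the MSE w.r.t. W (two paths: decoder and encoder)
    dH = err @ W                          # backprop through the decoder
    dpre = dH * (1 - H ** 2)              # tanh derivative
    gW = (X_unlab.T @ dpre + err.T @ H) / len(X_unlab)
    W -= lr * gW

# --- Stage 2: supervised fine-tuning on the frozen pre-trained encoder ---
X_lab = rng.normal(size=(50, 16))
y = (X_lab[:, 0] > 0).astype(float)       # toy binary "FRI vs FRII" labels
Z = encode(X_lab, W)                      # frozen pre-trained features
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Z @ w + b)))    # logistic-regression head
    grad = p - y
    w -= 0.1 * (Z.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

acc = np.mean((p > 0.5) == y)             # training accuracy on the toy labels
```

The design point the quotation makes survives even at this scale: the representation is learned from plentiful unlabeled data, and only the small classifier head needs the scarce labels.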