The LOFAR Two-metre Sky Survey (LoTSS) is an ongoing sensitive, high-resolution 120–168 MHz survey of the entire northern sky for which observations are now 20% complete. We present our first full-quality public data release. For this data release 424 square degrees, or 2% of the eventual coverage, in the region of the HETDEX Spring Field (right ascension 10h45m00s to 15h30m00s and declination 45°00′00″ to 57°00′00″) were mapped using a fully automated direction-dependent calibration and imaging pipeline that we developed. A total of 325 694 sources are detected with a signal of at least five times the noise, and the source density is a factor of ∼10 higher than the most sensitive existing very wide-area radio-continuum surveys. The median sensitivity is S144 MHz = 71 μJy beam−1 and the point-source completeness is 90% at an integrated flux density of 0.45 mJy. The resolution of the images is 6″ and the positional accuracy is within 0.2″. This data release consists of a catalogue containing location, flux, and shape estimates together with 58 mosaic images that cover the catalogued area. In this paper we provide an overview of the data release with a focus on the processing of the LOFAR data and the characteristics of the resulting images. In two accompanying papers we provide the radio source associations and deblending and, where possible, the optical identifications of the radio sources together with the photometric redshifts and properties of the host galaxies. These data release papers are published together with a further ∼20 articles that highlight the scientific potential of LoTSS.
The new generation of radio interferometers is characterized by high sensitivity, wide fields of view, and large fractional bandwidths. Synthesizing the deepest images enabled by the high dynamic range of these instruments requires taking the direction-dependent Jones matrices into account while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (DDFacet) based on image-plane faceting that takes generic direction-dependent effects into account. Specifically, we present a wide-field co-planar faceting scheme and discuss the various effects that need to be taken into account to solve the deconvolution problem (image-plane normalization, position-dependent point spread function, etc.). We discuss two wideband spectral deconvolution algorithms, based on hybrid matching pursuit and sub-space optimisation respectively. A few noteworthy technical features incorporated in our imager are discussed, including baseline-dependent averaging, which improves computing efficiency. The version of DDFacet presented here can account for any externally defined Jones matrices and/or beam patterns.
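DDFacet's actual deconvolution uses hybrid matching-pursuit and sub-space schemes with direction-dependent corrections; as a toy illustration of the matching-pursuit family only, a minimal single-scale Hogbom-style CLEAN on a small simulated field (all names and parameters here are illustrative, not DDFacet's API) might look like:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=1e-3):
    """Minimal Hogbom CLEAN: greedy matching pursuit against shifted PSFs.
    `psf` is assumed peak-centred and the same size as `dirty`."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    for _ in range(niter):
        py, px = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[py, px]
        if abs(peak) < threshold:
            break
        model[py, px] += gain * peak
        # subtract the PSF shifted to the peak (np.roll matches the circular
        # convolution used to build the toy dirty image below)
        shifted = np.roll(np.roll(psf, py - cy, axis=0), px - cx, axis=1)
        residual -= gain * peak * shifted
    return model, residual

# toy example: two point sources convolved with a Gaussian PSF
n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-(((y - n // 2) ** 2 + (x - n // 2) ** 2) / (2 * 2.0 ** 2)))
sky = np.zeros((n, n))
sky[20, 20] = 1.0
sky[40, 45] = 0.5
dirty = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(np.fft.ifftshift(psf))))
model, residual = hogbom_clean(dirty, psf)
```

The greedy loop recovers the two point sources at their pixels while the residual drops below the threshold; real imagers add spectral fitting, facet-dependent PSFs, and Jones corrections on top of this basic idea.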
We present MeerKAT 1.28 GHz total-intensity, polarization, and spectral-index images covering the giant (projected length l ≈ 1.57 Mpc) X-shaped radio source PKS 2014−55 with an unprecedented combination of brightness sensitivity and angular resolution. They show the clear ‘double boomerang’ morphology of hydrodynamical backflows from the straight main jets deflected by the large and oblique hot-gas halo of the host galaxy PGC 064440. The magnetic field orientation in PKS 2014−55 follows the flow lines from the jets through the secondary wings. The radio source is embedded in faint ($T_\mathrm{b} \approx 0.5 \mathrm{\, K}$) cocoons having the uniform brightness temperature and sharp outer edges characteristic of subsonic expansion into the ambient intragroup medium. The position angle of the much smaller (l ∼ 25 kpc) restarted central source is within 5° of the main jets, ruling out models that invoke jet re-orientation or two independent jets. Compression and turbulence in the backflows probably produce the irregular and low polarization bright region behind the apex of each boomerang as well as several features in the flow with bright heads and dark tails.
In radio interferometry, observed visibilities are intrinsically sampled at some interval in time and frequency. Modern interferometers are capable of producing data at very high time and frequency resolution; practical limits on storage and computation costs require that some form of data compression be imposed. The traditional form of compression is a simple averaging of the visibilities over coarser time and frequency bins. This has an undesired side effect: the resulting averaged visibilities "decorrelate", and do so differently depending on the baseline length and averaging interval. This translates into a non-trivial signature in the image domain known as "smearing", which manifests itself as an attenuation in amplitude towards off-centre sources. With the increasing fields of view and/or longer baselines employed in modern and future instruments, the trade-off between data rate and smearing becomes increasingly unfavourable. In this work we investigate alternative approaches to low-loss data compression. We show that averaging of the visibility data can be treated as a form of convolution by a boxcar-like window function, and that by employing alternative baseline-dependent window functions a more favourable interferometer smearing response may be induced. In particular, we show improved amplitude response over a chosen field of interest, and better attenuation of sources outside the field of interest. The main cost of this technique is a reduction in nominal sensitivity; we investigate the smearing vs. sensitivity trade-off, and show that in certain regimes a favourable compromise can be achieved. We show the application of this technique to simulated data from the Karl G. Jansky Very Large Array (VLA) and the European Very-long-baseline interferometry Network (EVN).
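The decorrelation effect of window choice can be sketched in a few lines: a source away from the phase centre appears as a rotating fringe in the visibilities, and averaging attenuates it by the window's frequency response at the fringe rate. Below, a minimal sketch (the sinc-shaped window is an illustrative choice, not the specific windows proposed in the paper) compares boxcar averaging with a tapered alternative:

```python
import numpy as np

def averaged_amplitude(phase_rate, n, weights):
    """Amplitude of a unit off-centre source after weighted averaging of n
    visibility samples whose fringe phase advances by phase_rate rad/sample."""
    k = np.arange(n)
    vis = np.exp(1j * phase_rate * k)   # unit-amplitude fringe
    w = weights / weights.sum()         # a source at the phase centre keeps amplitude 1
    return abs(np.sum(w * vis))

n = 64
boxcar = np.ones(n)                     # traditional simple averaging
x = (np.arange(n) - (n - 1) / 2) / n
sinc_win = np.sinc(4 * x)               # illustrative window: flatter in-band response

for rate in (0.0, 0.05, 0.2):
    print(f"rate={rate:4.2f}  boxcar={averaged_amplitude(rate, n, boxcar):.3f}"
          f"  sinc={averaged_amplitude(rate, n, sinc_win):.3f}")
```

At moderate fringe rates (sources within the field of interest) the tapered window preserves more amplitude than the boxcar, at the cost of the sensitivity reduction the abstract describes; in practice the window would be chosen per baseline.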
Deep learning has shown promising results in plant disease detection, fruit counting, and yield estimation, and is attracting increasing interest in agriculture. Deep learning models are generally based on several million parameters, which produce exceptionally large weight matrices requiring substantial memory and computational power for training, testing, and deployment. Unfortunately, these requirements make deployment difficult on the low-cost, resource-limited devices available in the field. In addition, poor or absent connectivity on farms rules out remote computation. One approach used to save memory and speed up processing is to compress the models. In this work, we tackle the challenges of resource limitation by compressing several state-of-the-art models commonly used in image classification. For this we apply model pruning and quantization to LeNet5, VGG16, and AlexNet. The original and compressed models were applied to the plant seedling classification benchmark (V2 Plant Seedlings Dataset) and the Flavia database. The results reveal that the size of these models can be compressed by a factor of 38, and the FLOPs of VGG16 reduced by a factor of 99, without considerable loss of accuracy.
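The two techniques named above, pruning and quantization, can be illustrated on a single weight matrix; this NumPy sketch (magnitude pruning plus uniform symmetric 8-bit quantization, a simplified stand-in for the framework-level tooling the paper would use) shows the basic mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # stand-in for a dense layer's weights

def prune(W, sparsity):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    thresh = np.quantile(np.abs(W), sparsity)
    return np.where(np.abs(W) >= thresh, W, 0.0).astype(W.dtype)

def quantize_int8(W):
    """Uniform symmetric quantization: map floats to int8 with a single scale."""
    scale = float(np.abs(W).max()) / 127.0
    q = np.round(W / scale).astype(np.int8)
    return q, scale

Wp = prune(W, 0.9)                  # keep only the largest 10% of weights
q, scale = quantize_int8(Wp)        # store 1 byte per weight instead of 4
Wq = q.astype(np.float32) * scale   # dequantize for inference

print("sparsity:", np.mean(Wp == 0))
print("max abs quantization error:", np.abs(Wq - Wp).max())
```

Sparse storage of the pruned matrix and 1-byte weights together account for the kind of size reductions reported; the quantization error is bounded by half the scale step, which is why accuracy degrades only slightly.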