We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the solar system, exploring the transient optical sky, and mapping the Milky Way. LSST will be a large, wide-field ground-based system designed to obtain repeated images covering the sky visible from Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg² field of view, a 3.2-gigapixel camera, and six filters (ugrizy) covering the wavelength range 320-1050 nm. The project is in the construction phase and will begin regular survey operations by 2022. About 90% of the observing time will be devoted to a deep-wide-fast survey mode that will uniformly observe an 18,000 deg² region about 800 times (summed over all six bands) during the anticipated 10 yr of operations and will yield a co-added map to r ∼ 27.5. These data will result in databases including about 32 trillion observations of 20 billion galaxies and a similar number of stars, and they will serve the majority of the primary science programs. The remaining 10% of the observing time will be allocated to special projects such as Very Deep and Very Fast time-domain surveys, whose details are currently under discussion. We illustrate how the LSST science drivers led to these choices of system parameters, and we describe the expected data products and their characteristics.
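As a quick consistency check on the numbers quoted above, the sketch below works out the implied étendue (collecting area times field of view, the standard figure of merit for survey speed) and the total number of field-visits. This is back-of-the-envelope arithmetic only; the étendue formula is standard, and all input values come from the abstract.

```python
import math

# Survey parameters quoted in the abstract.
effective_aperture_m = 6.5     # effective primary diameter (m)
field_of_view_deg2 = 9.6       # camera field of view (deg^2)
survey_area_deg2 = 18_000      # deep-wide-fast footprint (deg^2)
visits_per_field = 800         # visits per field, summed over all six bands
survey_years = 10

# Etendue = collecting area x field of view.
collecting_area_m2 = math.pi * (effective_aperture_m / 2) ** 2
etendue = collecting_area_m2 * field_of_view_deg2
print(f"etendue ~ {etendue:.0f} m^2 deg^2")   # ~319 m^2 deg^2

# Total field-visits implied by the quoted footprint and revisit count.
fields = survey_area_deg2 / field_of_view_deg2
total_visits = fields * visits_per_field
print(f"~{total_visits / 1e6:.1f} million visits over {survey_years} yr")  # ~1.5 million
```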
The Galileo spacecraft carries a 1500 mm focal-length camera with an 800 × 800 CCD detector that will provide images with a spatial resolution of 10 μrad/pixel. The spacecraft will fly by Io at the time of Jupiter Orbit Insertion (JOI) and, subsequently, while in Jupiter's orbit, will have a total of 10 close passes by Europa, Ganymede, and Callisto. These close passes, together with more distant encounters, will be used by the imaging experiment primarily to obtain high-resolution coverage of selected targets, to fill gaps left in the Voyager coverage, to extend global color coverage of each satellite, and to follow changes in the volcanic activity of Io. The roughly 390 Mbit allocated for imaging during the tour will be distributed among several hundred frames compressed by factors that range from 1 to possibly as high as 50. After obtaining high-resolution samples during the initial Io encounter at JOI, roughly 10% of imaging resources are devoted to near-terminator mapping of Io's topography at 2- to 10-km resolution, monitoring color and albedo changes of the Ionian surface, and monitoring plume activity. Observations of Europa range in resolution from several kilometers per pixel to 10 m/pixel. The imaging objectives at Europa are (1) to determine the nature, origin, and age of the tectonic features, (2) to determine the nature, rates, and sequence of resurfacing events, (3) to assess the satellite's cratering history, and (4) to map variations in spectral and photometric properties. Europa was poorly imaged by Voyager, so the plan includes a mix of high- and low-resolution sequences to provide context. The imaging objectives at Ganymede are (1) to characterize any volcanism, (2) to determine the nature and timing of any tectonic activity, (3) to determine the history of formation and degradation of impact craters, and (4) to determine the nature of the surface materials. Because Ganymede was well imaged by Voyager, most of the resources at Ganymede are devoted to high-resolution observations. The Callisto observations will be directed mostly toward (1) filling Voyager gaps, (2) acquiring high-resolution samples of typical cratered terrain and components of the Valhalla and Asgard basins, (3) acquiring global color, and (4) determining the photometric properties of the surface. A small number of frames will be used to better characterize the small inner satellites of Jupiter: Thebe, Amalthea, Metis, and Adrastea.
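The quoted per-pixel angular resolution makes the surface-resolution and telemetry figures easy to reproduce. The sketch below assumes 8 bits per pixel for raw frames, which the abstract does not state; everything else comes from the text.

```python
# Camera-geometry and data-budget arithmetic from the figures above.
IFOV_RAD = 10e-6            # 10 microradians per pixel
DETECTOR_PIXELS = 800 * 800
BITS_PER_PIXEL = 8          # assumed quantization (not stated in the abstract)
TELEMETRY_MBIT = 390        # imaging allocation for the satellite tour

def ground_resolution_m(slant_range_km: float) -> float:
    """Pixel footprint on the surface at a given slant range."""
    return IFOV_RAD * slant_range_km * 1000.0

# The 10 m/pixel best case at Europa corresponds to a ~1000 km pass:
print(ground_resolution_m(1000.0))   # -> 10.0 (m/pixel)

raw_frame_mbit = DETECTOR_PIXELS * BITS_PER_PIXEL / 1e6   # ~5.12 Mbit/frame
for compression in (1, 10, 50):
    frames = TELEMETRY_MBIT * compression / raw_frame_mbit
    print(f"{compression:>2}x compression -> ~{frames:,.0f} frames")
```

At compression factors in the low tens, the 390 Mbit allocation indeed accommodates several hundred frames, consistent with the plan described above.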
The Tupelo semantic content management middleware implements Knowledge Spaces that enable scientists to integrate information into a comprehensive research record as they work with existing tools and domain-specific applications. Knowledge Spaces combine approaches that have demonstrated success in automating parts of this integration activity, including content management systems for domain-neutral management of data, workflow technologies for management of computation and analysis, and semantic web technologies for extensible, portable, citable management of descriptive information and other metadata. Tupelo's 'Context' facility and its associated semantic operations allow existing data representations and tools to be plugged in, and also provide a semantic 'glue' of important associative relationships that span the research record, such as provenance, social networks, and annotation. Tupelo has enabled recent work creating e-Science cyberenvironments that serve distributed, active scientific communities, allowing researchers to develop, coordinate, and share datasets, documents, and computational models, while preserving process documentation and other contextual information needed to produce an integrated research record suitable for distribution and archiving.

Knowledge Spaces include data management capabilities based on the domain-neutral approaches developed in digital libraries and content management systems (CMS), which helps address the problem of generalized data management functions being implemented in many incompatible ways in specialized scientific tools. Knowledge Spaces also include workflow provenance, so that scientists can keep track of associations between data products and the analysis process that produced them without having to manually create and manage those associations. Finally, Knowledge Spaces use semantic web technologies to provide 'schema-less' metadata management with global identification, so that Knowledge Spaces' integrative facilities (e.g. organizing, tagging, linking, annotating, following chains of derivation) can be extended to distributed systems and new domains without having to develop new database schemas or document types, along with code to interpret them.

By implementing Knowledge Spaces in the Tupelo middleware framework, we have been able to develop a suite of interoperable, context-aware tools, including the CyberIntegrator provenance-aware exploratory workflow tool, the CyberCollaboratory web-based collaboration tool, the Digital Synthesis Framework [4] for publishing interactive datasets, and the Medici multimedia environment [5]. These tools have been deployed to create Knowledge Spaces supporting environmental and other sciences, science education, and digital humanities, as well as providing provenance support for a growing collection of workflow projects in collaboration with the Provenance Challenge workshop series [6,7], which has brought together developers of workflow systems, such as Kep...
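To make the 'Context' and triple-based 'glue' idea concrete, here is a minimal illustrative sketch. Tupelo itself is Java middleware, and the namespace and property names below (derivedFrom, annotatedWith) are hypothetical placeholders rather than Tupelo's actual vocabulary; the sketch uses Python's rdflib only to show the underlying model: globally identified resources linked by schema-less RDF statements.

```python
# Illustrative sketch only: these URIs and property names are hypothetical,
# not Tupelo's actual vocabulary. The point is the model: a "context" holding
# schema-less RDF triples that link data, provenance, and annotation
# through global identifiers.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/knowledge-space/")  # hypothetical namespace
ctx = Graph()                                          # one "context" of statements

dataset = EX["dataset/42"]
workflow_run = EX["workflow-run/7"]

# Provenance: which computation produced the dataset.
ctx.add((dataset, EX.derivedFrom, workflow_run))
# Annotation: free-form descriptive metadata, no schema required up front.
ctx.add((dataset, EX.annotatedWith, Literal("QC passed; usable for synthesis")))

print(ctx.serialize(format="turtle"))
```

Because every subject and predicate is a global URI, a second context on another system can assert new statements about the same dataset with no prior schema coordination, which is what makes the integrative facilities portable across distributed systems and domains.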
The ECHO DEPository project is a digital preservation research and development project funded by the National Digital Information Infrastructure and Preservation Program (NDIIPP) and administered by the Library of Congress. A key goal of this project is to investigate both practical solutions for supporting digital preservation activities today and the more fundamental research questions underlying the development of the next generation of digital preservation systems. To support on-the-ground preservation efforts in existing technical and organizational environments, we have developed tools to help curators collect and manage Web-based digital resources, such as the Web Archives Workbench (Kaczmarek et al., 2008), and to enhance existing repositories' support for interoperability and emerging preservation standards, such as the Hub and Spoke Tool Suite (Habing et al., 2008). In the longer term, however, we recognize that successful digital preservation activities will require a more precise and complete account of the meaning of relationships within and among digital objects. This article describes project efforts to identify the core underlying semantic issues affecting long-term digital preservation, and to model how semantic inference may help next-generation archives head off long-term preservation risks.
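As a toy illustration of the kind of inference the article motivates (not the project's actual system), the sketch below propagates a format-obsolescence risk judgment along explicitly modeled migration relationships between objects; all object names, formats, and rules here are hypothetical.

```python
# Hypothetical toy model, not the ECHO DEPository's actual system: it shows
# how preservation risk can be inferred from explicitly modeled
# relationships among digital objects.

# Objects and their declared formats.
formats = {"report.doc": "MS-Word-97", "report.pdf": "PDF/A-1"}
# Relationship: migration copies preserving the same intellectual content.
migrated_to = {"report.doc": "report.pdf"}
# Knowledge base: formats no longer reliably renderable (assumed here).
obsolete_formats = {"MS-Word-97"}

def at_risk(obj: str) -> bool:
    """An object is at risk if its format is obsolete and it has no
    migrated copy that is itself risk-free."""
    if formats[obj] not in obsolete_formats:
        return False
    successor = migrated_to.get(obj)
    return successor is None or at_risk(successor)

for obj in formats:
    print(obj, "at risk:", at_risk(obj))
# report.doc is covered by its PDF/A migration, so neither object is flagged.
```

The point is that once relationships such as "migrated to" are modeled precisely, risk assessment becomes an automated query over the record rather than a manual audit.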