Rephotography is the process of recapturing a photograph of a location from the same perspective from which it was originally captured. A rephotographed image is a powerful way to visualize and study the social changes of a location over time. Traditionally, only expert artists and photographers have been able to produce a rephotograph of a specific location. Manual editing or judgment by the human eye, as typically used to generate rephotographs, requires considerable precision and effort and is not always accurate. In the era of computer science and deep learning, computer vision techniques make it easier and faster to perform precise operations on an image. Many research methodologies have been proposed for rephotography, but none of them is fully automatic. Some of these techniques require manual input from the user or need multiple images of the same location together with 3D point cloud data, while others only offer suggestions to the user for performing rephotography. In historical records and archives, we can most often find only one 2D image of a given location. Computational rephotography is challenging when only one image of a location, captured at a different timestamp, is available, because it is difficult to recover the accurate perspective of a single 2D historical image. Moreover, in the case of building rephotography, the alignment and regular shape of the structure must be preserved. The features of a building may change over time, and in most cases it is not possible to use a feature detection algorithm to detect the key features. In this research paper, we propose a methodology to rephotograph house images by combining deep learning and traditional computer vision techniques. The purpose of this research is to rephotograph an image of the past based on a single image. This research will be helpful not only for computer scientists but also for history and cultural heritage scholars studying the social changes of a location over a specific time period, and it will allow users to go back in time and see how a specific place looked in the past. We have achieved good, fully automatic rephotographed results based on façade segmentation using only a single image.
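To make the perspective-alignment step concrete, the following is a minimal sketch, not the authors' implementation: it warps a modern photograph into the viewpoint of a historical image using a planar homography in OpenCV. The four façade corner correspondences are assumed to come from a segmentation step; here they are hard-coded placeholders, and the file names are hypothetical.

```python
# Illustrative sketch: aligning a modern photograph to the perspective of a
# historical facade image with a planar homography (OpenCV).
import cv2
import numpy as np

def rephotograph(modern_img, modern_corners, historical_corners, out_size):
    """Warp the modern image so its facade matches the historical viewpoint.

    modern_corners, historical_corners: (4, 2) float32 arrays of matching
    facade corner points (e.g. derived from facade segmentation masks).
    """
    # Estimate the homography mapping modern corners onto historical corners.
    H, _ = cv2.findHomography(modern_corners, historical_corners, cv2.RANSAC)
    # Re-project the modern photograph into the historical perspective.
    return cv2.warpPerspective(modern_img, H, out_size)

if __name__ == "__main__":
    modern = cv2.imread("modern_house.jpg")  # hypothetical input file
    modern_pts = np.float32([[120, 80], [880, 95], [860, 700], [140, 690]])
    hist_pts = np.float32([[100, 60], [900, 60], [900, 720], [100, 720]])
    aligned = rephotograph(modern, modern_pts, hist_pts, (1000, 800))
    cv2.imwrite("rephotographed.jpg", aligned)
```

In practice the corner correspondences would be produced automatically, for example from façade segmentation masks, rather than entered by hand.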
Abstract: Normally, the inspection process is seen as merely finding defects in software during the software development lifecycle. Software inspection is considered one of the most cost-effective techniques, but if the detected defects are not properly corrected or handled, they can cost more than double later in the project. This paper focuses on the last phase of the inspection meeting process, showing the importance of the Follow-Up stage. It also suggests a set of activities that should be performed during the Rework and Follow-Up stages so that inspection meetings become productive and efficient. In this paper we focus on over-the-shoulder reviews to ensure software quality with less impact on the total software cost.
Newspapers contain a wealth of historical information in the form of articles and illustrations. Libraries and cultural heritage institutions have been digitizing their collections for decades to enable web-based access to and retrieval of information. A number of challenges arise when dealing with digitized collections such as those of KBR, the Royal Library of Belgium (used in this study), which contain only page-level metadata, making it difficult to extract information from specific contexts. Context-aware search relies heavily on metadata enrichment; with only page-level metadata, it is even more challenging to geolocalize lesser-known landmarks. To overcome this challenge, we have developed a pipeline for the geolocalization and visualization of historical photographs. The first step of this pipeline converts page-level metadata into article-level metadata. In the next step, all articles containing building images are identified using image classification algorithms. Moreover, to geolocalize historical photographs correctly, we propose a hybrid approach that uses both textual metadata and image features. We conclude this research paper by addressing the challenge of visualizing historical content in a way that adds value to humanities research. Notably, a number of historical urban scenes are visualized using rephotography, which is notoriously difficult to get right. This study is an important step towards enriching historical metadata and facilitating cross-collection linking, geolocalization, and the visualization of historical newspaper images. Furthermore, the proposed methodology is generic and can be used to process untagged photographs from social media platforms such as Flickr and Instagram.
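As an illustration of the textual half of such a hybrid geolocalization step, the sketch below is an assumption rather than the paper's code: it geocodes place names taken from article-level metadata with the Nominatim geocoder from the geopy library. The place names and user-agent string are hypothetical.

```python
# Illustrative sketch: geocoding place names extracted from article-level
# metadata, one half of a text + image hybrid geolocalization step.
from geopy.geocoders import Nominatim

def geolocate_article(place_names):
    """Return (latitude, longitude) for the first place name that geocodes."""
    geocoder = Nominatim(user_agent="historical-photo-geolocalization")  # hypothetical agent string
    for name in place_names:
        location = geocoder.geocode(name)
        if location is not None:
            return location.latitude, location.longitude
    return None  # fall back to image features when no textual match is found

if __name__ == "__main__":
    # Candidate place names, e.g. extracted from the article text or caption.
    print(geolocate_article(["Grand-Place, Brussels", "Brussels"]))
```

In the hybrid approach described above, such a text-based estimate would be combined with, or verified against, visual matching of the building depicted in the photograph.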
Purpose: Historical newspaper collections provide a wealth of information about the past. Although the digitization of these collections significantly improves their accessibility, a large portion of digitized historical newspaper collections, such as those of KBR, the Royal Library of Belgium, are not yet searchable at article level. However, recent developments in AI-based research methods, such as document layout analysis, have the potential to further enrich the metadata and improve the searchability of these historical newspaper collections. This paper aims to discuss the aforementioned issue.
Design/methodology/approach: In this paper, the authors explore how existing computer vision and machine learning approaches can be used to improve access to digitized historical newspapers. To do this, the authors propose a workflow, using computer vision and machine learning approaches, to (1) provide article-level access to digitized historical newspaper collections using document layout analysis, (2) extract specific types of articles (e.g. feuilletons – literary supplements from Le Peuple from 1938), (3) conduct image similarity analysis using (un)supervised classification methods and (4) perform named entity recognition (NER) to link the extracted information to open data.
Findings: The results show that the proposed workflow improves the accessibility and searchability of digitized historical newspapers and also contributes to the building of corpora for digital humanities research. The AI-based methods enable automatic extraction of feuilletons, clustering of similar images and dynamic linking of related articles.
Originality/value: The proposed workflow enables automatic extraction of articles, including detection of a specific type of article, such as a feuilleton or literary supplement. This is particularly valuable for humanities researchers, as it improves the searchability of these collections and enables corpora to be built around specific themes. Article-level access to, and improved searchability of, KBR's digitized newspapers are demonstrated through the online tool (https://tw06v072.ugent.be/kbr/).
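As a hedged illustration of the NER step in such a workflow, and not the authors' code, the sketch below runs spaCy's pretrained French model over OCRed article text and keeps the entity types that could later be linked to open data; the model name and sample sentence are assumptions.

```python
# Illustrative sketch: named entity recognition on OCRed article text with
# spaCy, keeping entities that could be linked to open data such as Wikidata.
import spacy

def extract_entities(article_text, model="fr_core_news_sm"):
    """Return (text, label) pairs for person, location and organization entities."""
    nlp = spacy.load(model)  # assumes the French model is installed
    doc = nlp(article_text)
    return [(ent.text, ent.label_) for ent in doc.ents
            if ent.label_ in {"PER", "LOC", "GPE", "ORG"}]

if __name__ == "__main__":
    sample = "Le feuilleton de Georges Simenon parut dans Le Peuple à Bruxelles en 1938."
    print(extract_entities(sample))
```

The extracted entities could then be reconciled against an open knowledge base to create the dynamic links between related articles mentioned in the findings.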