We investigate the impact of shell growth on the carrier dynamics and exciton-phonon coupling in CdSe-CdS core-shell nanoplatelets of varying shell thickness. We observe that the recombination dynamics can be prolonged by more than one order of magnitude, and analyze the results with a global rate model as well as with simulations including strain and excitonic effects. We reveal that type I band alignment in the heteroplatelets is maintained at least up to three monolayers of CdS, resulting in approximately constant radiative rates. Hence, the observed changes in decay dynamics are not the result of increasing delocalization of the electron and hole exciton wave functions, as is often assumed, but of increasingly better passivation of nonradiative surface defects by the shell. Based on a global analysis of time-resolved and time-integrated data, we recover and model the temperature-dependent quantum yield of these nanostructures and show that CdS shell growth strongly enhances the photoluminescence quantum yield. Our results explain, for example, the very high lasing gain observed in CdSe-CdS nanoplatelets due to the type I band alignment, which also makes them interesting as solar energy concentrators. Further, we reveal that the exciton-LO-phonon coupling is strongly tunable via the CdS shell thickness, enabling control of the emission line width and coherence length.
Modelling urban systems has interested planners and modellers for decades. Different models have been developed relying on mathematics, cellular automata, complexity theory, and scaling. While most of these models tend to be simplifications of reality, today, within the paradigm shifts of artificial intelligence across the different fields of science, the applications of computer vision show promising potential for understanding the realistic dynamics of cities. While cities are complex by nature, computer vision shows progress in tackling a variety of complex physical and non-physical visual tasks. In this article, we review the tasks and algorithms of computer vision and their applications in understanding cities. We subdivide computer vision algorithms into tasks, and cities into layers, to show where computer vision is intensively applied and where further research is needed. We focus on highlighting the potential role of computer vision in understanding urban systems related to the built environment, the natural environment, human interaction, transportation, and infrastructure. After showing the diversity of computer vision algorithms and applications, we discuss the challenges that remain in understanding the integration between these different layers of cities, and their interactions with one another, relying on deep learning and computer vision. We also offer recommendations for practice and policy-making towards reaching AI-generated urban policies.
In recent years, deep learning and computer vision have been applied to solve complex problems across many domains. In urban studies, these technologies have been instrumental in the development of smart cities and autonomous vehicles. However, a knowledge gap is present when it comes to informal urban regions in less developed countries. How can deep learning and artificial intelligence untangle the complexities of informality to advance urban modelling? In this paper, we introduce a framework for multipurpose realistic-dynamic urban modelling using deep convolutional neural networks. The purpose of the framework is twofold: (1) to sense and detect informality and slums in urban scenes from aerial and street-level images and (2) to detect pedestrian and transport modes. The model has been trained on images of urban scenes in cities across the globe. The framework shows strong validation performance in the identification of planned and unplanned regions, despite broad variations in the classified images. The algorithms of the URBAN-i model are coded in Python and the trained models can be applied to images of any urban setting, including informal settlements and slum regions.
Extracting information related to weather and visual conditions at a given time and place is indispensable for scene awareness, which strongly impacts our behaviour, from simply walking in a city to riding a bike, driving a car, or autonomous driving assistance. Despite the significance of this subject, it has not yet been fully addressed by machine intelligence relying on deep learning and computer vision to detect the multiple labels of weather and visual conditions with a unified method that can be easily used in practice. What has been achieved to date are rather sectorial models that address a limited number of labels and do not cover the wide spectrum of weather and visual conditions. Moreover, weather and visual conditions are often addressed individually. In this paper, we introduce a novel framework to automatically extract this information from street-level images, relying on deep learning and computer vision, using a unified method without any predefined constraints on the processed images. A pipeline of four deep convolutional neural network (CNN) models, called WeatherNet, is trained with residual learning using the ResNet50 architecture to extract various weather and visual conditions: dawn/dusk, day, and night for time detection; glare for lighting conditions; and clear, rainy, snowy, and foggy for weather conditions. WeatherNet shows strong performance in extracting this information from user-defined images or video streams, which can be used for, but is not limited to, autonomous vehicles and driving-assistance systems, tracking behaviours, safety-related research, or better understanding cities through images for policy-makers. Weather conditions describe the state of the environment due to precipitation, including clear, rainy, foggy, or snowy weather. They represent crucial factors for many urban studies, including transport, behaviour, and safety-related research [5].
For example, walking, cycling, or driving in rainy weather is associated with a higher risk of experiencing an incident than in clear weather [5,6]. Fog, snow, and glare have also been found to increase risk [6,7]. Importantly, it is not only the inherent risk that different weather and visual conditions pose to human life that is of interest to researchers. Scene awareness for autonomous navigation in cities is highly influenced by the dynamics of weather and visual conditions, and it is imperative for any vision system to cope with them simultaneously [8]. For example, object detection algorithms must perform well in fog and glare as well as in clear conditions in order to be reliable. Accordingly, an automatic approach to extracting this information from images or video streams is in high demand among computer scientists, planners, and policy-makers. While there are different methods used to understand the dynamics of weather and visual conditions, a knowledge gap appears when addressing this subject. To date, these two crucial domains, weather and visual conditions, have been studied individually, ignoring the importance of understanding the dy...
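The multi-label structure of the WeatherNet pipeline described above can be sketched as a set of independent per-task classifiers applied to the same image. This is a minimal illustration only: the function names, label groupings, and stub classifiers below are our own assumptions standing in for the trained ResNet50-based CNNs (the paper's pipeline comprises four sub-models; the grouping into three task heads here is illustrative).

```python
from typing import Callable, Dict

# Hypothetical label sets, taken from the abstract's description.
TASKS = {
    "time": ("dawn/dusk", "day", "night"),
    "lighting": ("glare", "no glare"),
    "weather": ("clear", "rainy", "snowy", "foggy"),
}

def run_pipeline(image, models: Dict[str, Callable]) -> Dict[str, str]:
    """Apply each task-specific classifier to the same image and
    collect the multi-label result (one label per task)."""
    result = {}
    for task, classify in models.items():
        label = classify(image)
        if label not in TASKS[task]:
            raise ValueError(f"{label!r} is not a valid {task} label")
        result[task] = label
    return result

# Usage with stub classifiers standing in for the trained CNNs:
stubs = {
    "time": lambda img: "day",
    "lighting": lambda img: "no glare",
    "weather": lambda img: "rainy",
}
print(run_pipeline(b"<jpeg bytes>", stubs))
# {'time': 'day', 'lighting': 'no glare', 'weather': 'rainy'}
```

Because each task is handled by its own model, new conditions can be added without retraining the others, which matches the unified-but-modular design the abstract describes.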
Modelling the spread of coronavirus globally, while learning trends at global and country levels, remains crucial for tackling the pandemic. We introduce a novel variational LSTM-Autoencoder model to predict the spread of coronavirus for each country across the globe. This deep spatio-temporal model relies not only on historical data of the virus spread but also on factors related to urban characteristics, represented in locational and demographic data (such as population density, urban population, and fertility rate), and an index representing governmental measures and responses toward mitigating the outbreak. The index comprises 13 measures: 1) school closing, 2) workplace closing, 3) cancelling public events, 4) closing public transport, 5) public information campaigns, 6) restrictions on internal movement, 7) international travel controls, 8) fiscal measures, 9) monetary measures, 10) emergency investment in health care, 11) investment in vaccines, 12) a virus testing framework, and 13) contact tracing. In addition, the introduced method learns to generate a graph that adjusts the spatial dependencies among different countries while forecasting the spread. We trained two models, for short- and long-term forecasts. The first is trained to output one step into the future given the three previous time steps of all features, whereas the second is trained to output 10 steps into the future. Overall, the trained models show high validation accuracy in forecasting the spread for each country over short and long terms, which makes the introduced method a useful tool to assist decision- and policy-making across the different corners of the globe.
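The input construction described above, combining a lookback window of case counts with static country features and a policy-response index, can be sketched as a sliding-window builder. The function name, flat feature layout, and scalar index are our own simplifying assumptions, not the paper's code; they only illustrate how the three- and ten-step horizons differ.

```python
def build_windows(cases, static_features, policy_index, lookback=3, horizon=1):
    """Assemble (input, target) pairs for one country: each input
    concatenates the previous `lookback` daily case counts with the
    country's static urban/demographic features and its policy-response
    index; the target is the case count `horizon` steps ahead."""
    samples = []
    for t in range(lookback, len(cases) - horizon + 1):
        x = list(cases[t - lookback:t]) + list(static_features) + [policy_index]
        y = cases[t + horizon - 1]
        samples.append((x, y))
    return samples

# Short-term setup (lookback=3, horizon=1), as in the first model;
# the long-term model would use horizon=10 with the same builder.
samples = build_windows([1, 2, 3, 4, 5], static_features=[0.5, 0.2],
                        policy_index=7)
print(samples[0])
# ([1, 2, 3, 0.5, 0.2, 7], 4)
```

The same pairs could then be batched per country and fed to any sequence model; the spatial graph-generation step across countries is beyond this sketch.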