Text-to-picture refers to the conversion of a textual description into a semantically similar image. The automatic synthesis of high-quality pictures from text descriptions is both exciting and useful. Current AI systems have shown significant advances in the field, but the work is still far from complete. Recent advances in deep learning have introduced generative models capable of producing realistic images when trained appropriately. In this paper, the authors review advances in architectures for synthesizing images from text descriptions. They begin with the concepts of the standard GAN and how the DCGAN has been applied to the task, followed by the StackGAN, which uses a stack of two GANs to generate an image through iterative refinement, and StackGAN++, which arranges multiple GANs in a tree-like structure to make text-to-image generation more general. Finally, they examine the AttnGAN, which uses an attention model to generate sub-regions of an image conditioned on the description.
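The adversarial game at the core of the standard GAN that these architectures build on can be illustrated with a toy sketch. The following is a minimal 1-D example, not any of the reviewed models: the generator is a linear map, the discriminator a logistic unit, and the "real" data are samples from a Gaussian (all choices here are illustrative assumptions; real text-to-image GANs use deep convolutional networks conditioned on text embeddings).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = w*z + b maps Gaussian noise to a sample.
# Discriminator D(x) = sigmoid(a*x + c) scores how "real" a sample looks.
w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr_d, lr_g = 0.1, 0.02

real_mean = 4.0   # "real" data: samples from N(4, 1)

for step in range(1000):
    real = rng.normal(real_mean, 1.0, size=256)
    z = rng.normal(size=256)
    fake = w * z + b

    # Discriminator ascends  E[log D(real)] + E[log(1 - D(fake))]
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascends the non-saturating objective  E[log D(fake)]
    d_fake = sigmoid(a * fake + c)
    w += lr_g * np.mean((1 - d_fake) * a * z)
    b += lr_g * np.mean((1 - d_fake) * a)

# Since E[z] = 0, the mean of generated samples is approximately b,
# which gradient play should push toward the real mean.
print(f"generated mean ~ {b:.2f} (target {real_mean})")
```

StackGAN's iterative refinement extends this idea by letting a second GAN take the first GAN's low-resolution output as input and sharpen it.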
Today, social networks and media have become an integral part of everyone's daily existence. The popularity of social media increased tenfold during COVID-19, when people were forced to isolate under social-distancing norms. Between July 2020 and July 2021, active social users grew to 520 million. The COVID-19 crisis has driven the use of digital platforms not only for entertainment but also for educational and corporate purposes. Hence, the flow of information on every social media platform has increased enormously, bringing with it a comparable rise in false information. The term "infodemic" gained currency during COVID-19 to describe the harmful effects of misinformation spread through social media. The chapter nevertheless argues that the advantages of social media surpass the dangers of misinformation. It discusses the role of COVID-19 in digitalization and how social media has supported various industries.
In recent years, there has been widespread improvement in communication technologies. Social media applications like Twitter have made it much easier for people to send and receive information. A direct application of this can be seen in disaster prediction and crisis response. By sharing their observations, people can help spread messages of caution. However, identifying warnings and analyzing the seriousness of text is not an easy task. Natural language processing (NLP) is one way to analyze tweets for this purpose. Over the years, various NLP models have been developed that provide high accuracy in such prediction tasks. In this chapter, the authors analyze NLP models like logistic regression, naive Bayes, XGBoost, and LSTM, word-embedding techniques like GloVe, and transformer encoders like BERT for predicting disaster warnings from scraped tweets. The authors focus on finding the best disaster prediction model to help warn people and the government.
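The simplest of the models compared, logistic regression over a bag-of-words representation, can be sketched as follows. The tweets, labels, and vocabulary here are invented toy data for illustration only (the chapter works with scraped Twitter data); the sketch shows the classification setup, not the authors' actual pipeline.

```python
import numpy as np

# Toy labeled tweets: 1 = disaster-related, 0 = not (hypothetical examples)
tweets = [
    ("massive earthquake shakes the city", 1),
    ("flood warning issued for the coast", 1),
    ("wildfire spreading fast evacuate now", 1),
    ("storm damage reported downtown", 1),
    ("enjoying a sunny day at the beach", 0),
    ("new coffee shop opened downtown", 0),
    ("great movie night with friends", 0),
    ("my cat is sleeping on the couch", 0),
]

# Bag-of-words vectorizer: one count per vocabulary word
vocab = sorted({w for text, _ in tweets for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    v = np.zeros(len(vocab))
    for word in text.split():
        if word in index:
            v[index[word]] += 1.0
    return v

X = np.array([vectorize(t) for t, _ in tweets])
y = np.array([label for _, label in tweets], dtype=float)

# Logistic regression fit by batch gradient descent on cross-entropy loss
w = np.zeros(len(vocab))
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(text):
    """Return 1 if the tweet is classified as disaster-related."""
    p = 1.0 / (1.0 + np.exp(-(vectorize(text) @ w + b)))
    return int(p > 0.5)

print(predict("earthquake warning for the city"))   # disaster-like words
print(predict("movie night at the beach"))          # everyday words
```

The stronger models in the comparison (LSTM with GloVe embeddings, BERT) replace the sparse bag-of-words vector with dense contextual representations but keep the same classification objective.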