A large fraction of the data generated by human activities such as online purchases, health records, and spatial mobility can be represented as continuous-time event sequences (CTES), i.e., sequences of discrete events over continuous time. Learning neural models over CTES is non-trivial, as it involves modeling the ever-increasing event timestamps, inter-event time gaps, event types, and the influences between events within and across sequences. Moreover, existing sequence modeling techniques assume a complete-observation scenario, i.e., the event sequence being modeled is fully observed with no missing events – an ideal setting that is rarely met in real-world applications. In this paper, we highlight our approach [8] for modeling CTES with intermittent observations. Buoyed by the recent success of neural marked temporal point processes (MTPP) for modeling the generative distribution of CTES, we propose a novel unsupervised model and inference method for learning an MTPP in the presence of event sequences with missing events. Specifically, we first model the generative processes of observed and missing events using two MTPPs, where the missing events are represented as latent random variables. We then devise an unsupervised training method that jointly learns both MTPPs via variational inference. Experiments on real-world datasets show that our framework outperforms state-of-the-art techniques for future event prediction and imputation. This work appeared in AISTATS 2021.
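To make the "missing events as latent variables, trained with variational inference" idea concrete, here is a minimal sketch – not the paper's neural architecture. It assumes a toy setting with two constant-intensity processes (rates `lam_obs` and `lam_miss` stand in for the two MTPPs) and, between two observed events, at most one latent missing event; the variational posterior `q` is a Bernoulli over whether a missing event occurred, with its time uniform on the gap. All names and the one-missing-event restriction are illustrative assumptions. The Monte Carlo ELBO, E_q[log p(x, z) − log q(z)], lower-bounds the log marginal likelihood of the observed event, which is the quantity the joint training maximizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_joint(t0, t1, t_miss, lam_obs, lam_miss):
    """log p(observed event at t1, optional missing event at t_miss).

    Homogeneous-Poisson sketch: each realized event contributes log(rate),
    and both processes pay an exposure term -rate * (t1 - t0) over the gap.
    """
    gap = t1 - t0
    lp = np.log(lam_obs) - lam_obs * gap - lam_miss * gap
    if t_miss is not None:          # the latent missing event, if present
        lp += np.log(lam_miss)
    return lp

def elbo_estimate(t0, t1, pi, lam_obs, lam_miss, n_samples=2000):
    """Monte Carlo ELBO = E_q[log p(x, z) - log q(z)].

    q(z): with probability pi a missing event occurred, its time uniform
    on (t0, t1); with probability 1 - pi no missing event occurred.
    """
    total = 0.0
    for _ in range(n_samples):
        if rng.random() < pi:       # sample z ~ q: missing event present
            t_miss = rng.uniform(t0, t1)
            log_q = np.log(pi) - np.log(t1 - t0)   # Bernoulli x uniform density
            total += log_joint(t0, t1, t_miss, lam_obs, lam_miss) - log_q
        else:                       # z ~ q: no missing event
            log_q = np.log(1.0 - pi)
            total += log_joint(t0, t1, None, lam_obs, lam_miss) - log_q
    return total / n_samples
```

In this toy model the bound is tight when `pi` equals the exact posterior probability of a missing event, `lam_miss * gap / (1 + lam_miss * gap)`; for any other `pi` the ELBO falls below the log marginal likelihood by the KL gap, which is what drives the joint learning of the two processes in the paper's full neural setting.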