Spatial-temporal trajectory data contains rich information about moving objects and phenomena, and hence has been widely used in a great number of real-world applications. However, the ubiquity and complexity of spatial-temporal trajectory data make it challenging to efficiently store, process, and query such data. Furthermore, the growing number of users challenges the ability of trajectory-based services and analytics to handle the query workload and respond to multiple requests within a satisfactory time. Over the last few years, a new class of systems, referred to as distributed in-memory database systems, has emerged to handle large amounts of data in an efficient manner. These systems were designed to overcome the difficulty of scaling the structured and unstructured data loads that some applications have to handle. Spark has become the framework of choice for large-scale, low-latency data processing using distributed in-memory computation. However, Spark-based systems still lack the ability to handle several trajectory database tasks in a memory-wise manner. Desirable features of a trajectory database system include data preparation and preprocessing, large-scale data storage and retrieval, and multiuser concurrent query processing.

Providing a full-fledged system architecture supporting these features is challenging and remains an open issue. Firstly, trajectories are unstructured data types, coupled with spatial and temporal attributes and organized in a sequential manner, which makes them hard to fit into traditional relational and spatial database systems; furthermore, trajectory data is available in a myriad of formats, each with its own data schema and attributes. Moreover, trajectory datasets are highly skewed and inaccurate, due, for instance, to hotspots, transmission errors, and the inaccuracy of collecting devices. In addition, since Spark is a distributed parallel framework, we must account for data partitioning and load balancing.
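To make the partitioning concern concrete, a common way to derive a partition key for trajectory points is to discretize space and time into a grid, so that points that are close in space and time share a key and land in the same partition. The following is a minimal illustrative sketch, not the system's actual scheme; the function name and the cell sizes (`cell_deg`, `t_bucket`) are hypothetical defaults chosen for illustration.

```python
import math

def cell_key(lat, lon, t, cell_deg=0.01, t_bucket=3600):
    """Map a trajectory point to a spatial-temporal grid cell.

    Points falling in the same cell share a partition key, so nearby
    points are co-located on the same worker. cell_deg and t_bucket
    are tunable (hypothetical defaults: ~1 km x 1 hour grid cells at
    mid latitudes).
    """
    return (math.floor(lat / cell_deg),
            math.floor(lon / cell_deg),
            t // t_bucket)

# Two points close in space and time map to the same cell key.
k1 = cell_key(39.9841, 116.3187, 7200)
k2 = cell_key(39.9843, 116.3189, 7500)
```

Note that a plain grid key does not by itself fix skew: hotspot cells still produce oversized partitions, which is why an adaptive or sample-based partitioner is usually layered on top.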
In spatial and spatial-temporal databases, balanced data partitioning structures are built dynamically as the data is consumed; Spark, however, provides a read-only data structure that does not directly support adaptive partitioning once the partitioning model is constructed. Finally, data storage and query processing on top of Spark should be memory-wise, since the datasets may be too large to comfortably fit in the cluster's memory; moreover, memory space may be wasted by storing unnecessary data partitions. Optimizing load balancing and memory usage is essential to a good Spark application. Therefore, driven by the increasing interest in scalable and efficient systems for trajectory-based analytics, we propose a distributed in-memory database system for memory-wise storage and scalable processing of spatial-temporal trajectory data, with low query latency and high throughput. We build our system on top of the Spark MapReduce framework, which provides an in-memory and fault-tolerant environment for distributed parallel processing of large-scale data.
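Because Spark's core data structures are immutable, a common workaround for the lack of adaptive partitioning (in the spirit of Spark's own `RangePartitioner`, though this sketch is not the system's actual design) is to sample the keys first and fix balanced range boundaries before loading the data. A minimal pure-Python sketch, with hypothetical function names:

```python
import random

def range_bounds(sample, num_partitions):
    """Compute range-partition boundaries from a sample of keys, so
    each partition receives roughly the same number of records even
    when the key distribution is skewed."""
    s = sorted(sample)
    # Pick num_partitions - 1 evenly spaced sample quantiles.
    return [s[(i * len(s)) // num_partitions]
            for i in range(1, num_partitions)]

def partition_of(key, bounds):
    """Assign a key to the first range whose upper bound exceeds it
    (a linear scan for clarity; a real partitioner would binary-search)."""
    for i, b in enumerate(bounds):
        if key < b:
            return i
    return len(bounds)

# Skewed sample: most keys cluster around a hotspot near 100,
# yet the quantile boundaries still split the load evenly.
random.seed(42)
sample = ([random.gauss(100, 5) for _ in range(900)]
          + [random.uniform(0, 1000) for _ in range(100)])
bounds = range_bounds(sample, 4)
```

The trade-off is that the boundaries reflect the sample taken at construction time; if the data distribution drifts, the partitioning cannot adapt without rebuilding, which is precisely the limitation discussed above.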