Marker tracking is a major bottleneck in studies involving X-ray Reconstruction of Moving Morphology (XROMM). Here, we tested whether DeepLabCut, a new deep learning package built for markerless tracking, could be applied to videoradiographic data to improve data processing throughput. Our novel workflow integrates XMALab, the existing XROMM marker tracking software, and DeepLabCut while retaining each program's utility. XMALab is used for generating training datasets, error correction, and 3D reconstruction, whereas the majority of marker tracking is transferred to DeepLabCut for automatic batch processing. In the two case studies that involved an in vivo behavior, our workflow achieved a 6- to 13-fold increase in data throughput. In the third case study, which involved an acyclic, post mortem manipulation, DeepLabCut struggled to generalize to the range of novel poses and did not surpass the throughput of XMALab alone. Deployed in the proper context, this new workflow facilitates large-scale XROMM studies that were previously precluded by software constraints.

Data processing in kinematics workflows can be a time-consuming and laborious task, especially when three-dimensional (3D) reconstruction requires the integration of data from multiple cameras. In marker-based XROMM (X-ray Reconstruction of Moving Morphology; Brainerd et al., 2010), every radiopaque marker in every frame of two X-ray videos must be accurately tracked. This step has been streamlined by the open-source program XMALab (Knörlein et al., 2016), which offers a suite of features for marker detection, visualization, and tracking. Marker tracking remains a major bottleneck in the XROMM workflow, however, limiting the feasibility of studies that require large sample sizes across multiple individuals or species (cf. Gintof et al., 2010; Granatosky et al., 2019; Iriarte-Diaz et al., 2017; Martinez et al., 2018).
Network training and analysis

Once the training dataset is generated, the standard DeepLabCut workflow is followed. The functions create_training_dataset and train_network were used to train a single neural network whose weights were optimized for both camera-1 and camera-2 videos. In all cases, we used ResNet-101. We allowed training to run until DeepLabCut's native cross-entropy loss function plateaued, typically between 200,000 and 500,000 iterations. DLCTools supports the use of separate neural networks for each camera plane, if the user chooses. This would double the amount of training but may improve performance. The DLCTools function analyze_xromm_videos calls the native analyze_videos function to predict points for new trials. It automatically detects
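The training and batch-analysis steps described above can be sketched in Python. This is a minimal illustration, not the DLCTools implementation: the find_trial_pairs helper and its '<trial>_cam1.avi'/'<trial>_cam2.avi' filename convention are assumptions standing in for the automatic trial detection that analyze_xromm_videos performs, while create_training_dataset, train_network, and analyze_videos are standard DeepLabCut functions.

```python
from pathlib import Path


def find_trial_pairs(trial_dir):
    """Pair camera-1 and camera-2 videos by shared trial name.

    Hypothetical helper illustrating automatic detection of trial
    video pairs; the '<trial>_cam1.avi' / '<trial>_cam2.avi' naming
    convention is an assumption for this sketch.
    """
    trials = {}
    for video in sorted(Path(trial_dir).glob("*.avi")):
        stem = video.stem
        if stem.endswith("_cam1") or stem.endswith("_cam2"):
            # Group by trial name, keyed by camera suffix.
            trials.setdefault(stem[:-5], {})[stem[-4:]] = video
    # Keep only trials where both camera views are present.
    return {name: (cams["cam1"], cams["cam2"])
            for name, cams in trials.items()
            if "cam1" in cams and "cam2" in cams}


def train_and_analyze(config_path, trial_dir, max_iters=500_000):
    """Train one network on both camera views, then analyze all trials."""
    import deeplabcut  # deferred import; requires a DeepLabCut install

    # Single network for both views, using ResNet-101 as in the text.
    deeplabcut.create_training_dataset(config_path, net_type="resnet_101")
    # Train until loss plateaus (200,000-500,000 iterations typical).
    deeplabcut.train_network(config_path, maxiters=max_iters)
    # Batch-predict points for every detected camera-1/camera-2 pair.
    for cam1, cam2 in find_trial_pairs(trial_dir).values():
        deeplabcut.analyze_videos(config_path, [str(cam1), str(cam2)])
```

Pairing the two camera views per trial keeps downstream 3D reconstruction in XMALab straightforward, since predicted points from both planes stay associated with the same trial name.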