Objective: To automate the identification of postural point-features from colour videos of children with neuromotor disability during clinical assessment. Automatic identification of 13 points of interest (2 on the head, 6 on the trunk, 2 on the pelvis, and 3 on the arm) is required to estimate the location and orientation of the head, trunk, and arm segments from videos of the clinical test "Segmental Assessment of Trunk Control" (SATCo), a test of seated postural control.

Methods: Three expert operators manually annotated the 13 point-features in every fourth image of 177 short (5-10 second) videos (25 Hz) showing 12 children with cerebral palsy (age: 4.52 ±2.4 years; 9 male) participating in SATCo testing. Linear interpolation over the remaining images yielded 30,825 annotated images. Mean-pooling and max-pooling convolutional neural networks were trained with cross-validation, giving held-out test results for all children.

Results: The point-features were estimated with error 4.40 ±3.75 pixels (mean-pooling) and 4.49 ±4.45 pixels (max-pooling), at approximately 100 images per second. Segment angles (head, neck, and 6 thoraco-lumbar-pelvic segments) were estimated with error 6.4° ±2.8°, allowing accurate classification (F1 > 80%) of deviation from a reference posture at thresholds up to 3°, 3°, and 2° respectively. Contact between the arm point-features (elbow, wrist) and the supporting surface was classified at F1 = 80.5%.

Conclusion and Significance: This study demonstrates, for the first time, a technical solution for automating the identification of i) a sitting segmental posture, including individual trunk segments, ii) changes away from that posture, and iii) support from the upper limb, as required for the clinical SATCo.
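The abstract does not specify the angle convention or the deviation rule; a minimal sketch of the idea, assuming each segment angle is measured from the vertical in image coordinates (y axis pointing down) and that deviation from the reference posture is a simple per-segment threshold test (function names and the threshold values used here are illustrative, not the authors' implementation):

```python
import math

def segment_angle(p_upper, p_lower):
    """Angle (degrees) of the segment from p_lower to p_upper,
    measured from vertical; image y coordinates grow downward."""
    dx = p_upper[0] - p_lower[0]
    dy = p_lower[1] - p_upper[1]  # flip sign so positive dy points up
    return math.degrees(math.atan2(dx, dy))

def deviates(angle_deg, reference_deg, threshold_deg):
    """True when the segment angle differs from the reference
    posture by more than the chosen threshold (e.g. 2-3 degrees)."""
    return abs(angle_deg - reference_deg) > threshold_deg
```

For example, two point-features stacked vertically give a segment angle of 0°, and a 5° angle against a 0° reference exceeds a 3° threshold, so it would be flagged as a posture change.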