Warfighters will distrust AI-enabled systems that perform complex, high-stakes work but do not interact effectively with humans. In many cases, this distrust of systems lacking effective Human-Machine Teaming (HMT) is warranted, as poor teaming is often hazardous. Assurance is needed throughout the development and fielding of AI-enabled systems to provide justified confidence that the relational aspects of these systems are safe and effective. In particular, developmental testing of HMT is critical because it enables HMT risks to be mitigated through AI-enabled system design. We present practical guidance for assessing HMT during developmental testing, including techniques, tools, and approaches with the potential for immediate impact across the DoD Test and Evaluation (T&E) community. To further the adoption of HMT assessment in AI-enabled system assurance, we discuss DoD T&E domain-specific considerations that may inform the development of future guidance and research.