Human-robot cooperation is vital for optimising the powered assist of lower limb exoskeletons (LLEs). A robot's capacity to adapt intelligently to human force, however, demands fusing exoskeleton data with user-state data for smooth human-robot synergy. Muscle activity, measured through electromyography (EMG) or mechanomyography (MMG), is widely acknowledged as a usable sensor input that precedes the onset of human joint torque. However, the competing and complementary information between these physiological feedback modalities is yet to be exploited, or even assessed, for predictive LLE control. We investigate the complementary and competing benefits of the EMG and MMG sensing modalities as a means of estimating human torque input for assist-as-needed (AAN) LLE control. Three biomechanically agnostic machine learning approaches, linear regression, polynomial regression, and neural networks, are implemented for joint torque prediction during human-exoskeleton interaction experiments. Results demonstrate that MMG predicts human joint torque with slightly lower accuracy than EMG for isometric human-exoskeleton interaction, while performance is comparable for dynamic exercise. Neural network models achieve the best performance for both MMG and EMG (mean ± SD: 94.8 ± 0.7% with MMG and 97.6 ± 0.8% with EMG), at the expense of training time and implementation complexity. This investigation presents the first MMG-based human joint torque models for LLEs and their first comparison with EMG. We provide our implementations for future investigations (https://github.com/cic12/ieee appx).
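To make the biomechanically agnostic regression idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a least-squares fit mapping a normalised muscle-activity envelope (EMG or MMG) to joint torque, where degree 1 corresponds to linear regression and degree 2 to polynomial regression. The synthetic activation-to-torque relationship and all numbers are assumptions for illustration only.

```python
import numpy as np

# Synthetic stand-in for a processed EMG/MMG envelope and measured torque;
# real inputs would come from sensor pre-processing, not this generator.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)                            # normalised activation
tau = 30.0 * x + 5.0 * x**2 + rng.normal(0.0, 0.5, 200)   # assumed torque (N m)

def fit_poly(x, y, degree):
    """Least-squares polynomial regression; degree 1 is plain linear regression."""
    X = np.vander(x, degree + 1)            # columns: x^degree ... x^0
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def r2(y, y_hat):
    """Coefficient of determination, a common torque-prediction accuracy metric."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

for degree in (1, 2):
    coef = fit_poly(x, tau, degree)
    pred = np.polyval(coef, x)              # coef order matches np.polyval
    print(f"degree {degree}: R^2 = {r2(tau, pred):.3f}")
```

A neural network regressor would replace `fit_poly` with a trained model at the cost of training time and implementation complexity, mirroring the trade-off reported in the abstract.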