Federated machine learning (FML) has proved a useful technique for training artificial intelligence and machine learning (AI/ML) models using data that is distributed among different constituents of a network, which may be geographically dispersed. Typically, the data privacy of individual constituents should be preserved, and it may also be desirable to protect the integrity and secrecy of the algorithms and trained models deployed within the network. We demonstrate the privacy-enhancing technology of Confidential Computing by presenting results obtained with a novel FML solution that supports model training within a distributed network of data providers. Building on recent research on the use of FML for distributed spectrum sensing in communication networks, we demonstrate the application of the proposed solution to distributed model training within a simulated sensor network of arbitrary topology. The presented solution provides graph-based network configuration and model convergence within decentralized network applications. We discuss cross-domain adaptation of the proposed solution and the characteristics of Confidential Computing that support a zero-trust architecture, along with the integrated model integrity protection provided by attestation of trusted execution environments (TEEs). We conclude by looking ahead to the application of our solution to model training within distributed communications networks and sensor arrays characterized by devices with limited electrical and computational power. We consider the use of physical unclonable functions (PUFs) to encrypt raw data before processing within a layered hierarchy secured with Confidential Computing technology.
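The decentralized, graph-based model convergence described above can be illustrated with a minimal sketch. This is not the paper's implementation: the topology, parameter values, and uniform neighbour-averaging rule below are illustrative assumptions only, standing in for whatever aggregation scheme the actual solution uses. Each node repeatedly replaces its local model parameters with the average over itself and its graph neighbours, so that all nodes converge toward a common model without any central aggregator.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# decentralized model averaging over an arbitrary graph topology.

def gossip_round(models, graph):
    """One synchronous round: each node averages its parameter vector
    with those of its graph neighbours (including itself)."""
    new_models = {}
    for node, params in models.items():
        group = [node] + list(graph[node])
        new_models[node] = [
            sum(models[m][i] for m in group) / len(group)
            for i in range(len(params))
        ]
    return new_models

# A hypothetical 4-node line topology (adjacency lists) with
# divergent initial local models (single-parameter vectors).
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
models = {0: [0.0], 1: [1.0], 2: [2.0], 3: [3.0]}

for _ in range(200):
    models = gossip_round(models, graph)

# On any connected graph, repeated averaging drives all nodes to a
# common consensus model (here every node approaches 1.5).
```

In a Confidential Computing deployment, each node's update step would execute inside an attested TEE, and exchanged parameters would be protected in transit, but the convergence logic is unchanged.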