[Figure residue: P( |T) vs. wavelength (nm), "Unknown Narrow-Band Light"]

arXiv:1912.11751v3 [physics.app-ph] 3 Jan 2020

…term reliability can be an equally or more important consideration. Although various machine learning (ML) tools are frequently used on sensor and detector networks to address these considerations and dramatically enhance their functionality, their effectiveness on nanomaterials-based sensors has not been explored. Here, we show that the best choice of ML algorithm in a cyber-nanomaterial detector is largely determined by the specific use considerations, including accuracy, computational cost, speed, and resilience against drift and long-term ageing effects. When sufficient data and computing resources are available, the highest sensing accuracy is achieved by the k-nearest neighbors (kNN) and Bayesian inference algorithms; however, these algorithms can be computationally expensive for real-time applications. In contrast, artificial neural networks (ANNs) are computationally expensive to train (off-line), but they provide the fastest results under testing conditions (on-line) while remaining reasonably accurate. When access to data is limited, support vector machines (SVMs) perform well even with small training sample sizes, whereas the other algorithms show considerable reductions in accuracy when data are scarce, thereby setting a lower limit on the size of the required training data. We also show that by tracking and modeling the long-term drift of the detector performance over a large (i.e., one-year) time frame, it is possible to dramatically improve the predictive accuracy without the need for any recalibration. Our research shows for the first time that if the ML algorithm is chosen specific to the use case, low-cost solution-processed cyber-nanomaterial detectors can be practically implemented under diverse operational requirements, despite their inherent variabilities.
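The small-training-set tradeoff described above (SVMs holding up where other algorithms degrade) can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic classification data, not the paper's detector data; the feature counts, sample sizes, and hyperparameters are illustrative assumptions only.

```python
# Hypothetical sketch: how kNN and SVM test accuracy behave as the
# training set shrinks. Synthetic data stands in for spectral responses.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic 3-class problem with 8 features (illustrative, not detector data)
X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=3, random_state=42)
X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=200, random_state=42)

results = {}
for n_train in (20, 100, 400):          # progressively larger training sets
    Xs, ys = X_train_full[:n_train], y_train_full[:n_train]
    knn = KNeighborsClassifier(n_neighbors=min(5, n_train)).fit(Xs, ys)
    svm = SVC(kernel="rbf").fit(Xs, ys)
    results[n_train] = (knn.score(X_test, y_test),
                        svm.score(X_test, y_test))
    print(n_train, results[n_train])
```

On a real detector dataset, the same loop over training-set sizes would reveal the lower limit on required training data that the abstract refers to.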
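The claim that modeling long-term drift can replace recalibration can likewise be sketched with a toy example. Here a slow linear ageing drift is fitted over a simulated year of readings and subtracted; the drift rate, noise level, and linear drift model are all assumptions for illustration, not the paper's actual drift model.

```python
# Hypothetical sketch of drift compensation: fit a linear drift model to
# a year of synthetic sensor readings, then subtract the modeled drift
# instead of recalibrating the device.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365.0)
true_signal = 1.0                       # constant quantity being sensed
drift = 0.002 * days                    # slow ageing drift (assumed linear)
reading = true_signal + drift + rng.normal(0.0, 0.01, days.size)

# Fit a first-degree polynomial (slope = drift rate, intercept = baseline)
slope, intercept = np.polyfit(days, reading, 1)
corrected = reading - slope * days      # remove the modeled drift

raw_error = np.abs(reading - true_signal).mean()
corrected_error = np.abs(corrected - true_signal).mean()
print(f"mean error raw: {raw_error:.3f}, corrected: {corrected_error:.3f}")
```

The corrected readings track the true signal far more closely than the raw ones, which is the essence of improving predictive accuracy over a one-year window without recalibration; a real detector would warrant a drift model fitted to its measured ageing behavior rather than an assumed linear form.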