Tracheal sound is an easily acquired signal, particularly popular in the development of smartphone-based systems for Sleep Apnea Syndrome diagnosis. The syndrome is characterized by partial or complete breath cessation lasting at least 10 s. Existing algorithms rely mainly on neural networks that estimate the number of apneic episodes per sleeping hour, defined as the Apnea/Hypopnea Index. Though reported to be highly accurate, neural networks may be severely affected by inter- and intra-patient variability of breathing sounds. Alternatively, breathing detection algorithms can contribute to identifying the dominant sound patterns within apnea events. In this work, we employ zero-crossing rate, signal power, Tsallis entropy, and Shannon information to discriminate breathing frames from silent frames. These features are extracted independently from tracheal sound recordings of 178 patients undergoing a hospital sleep study. Apneas correspond to silent periods detected by at least one of the four features, whereas hypopneas correspond to periods of reduced signal power. The algorithm achieves high sensitivity (80.45%) in identifying apnea/hypopnea events (32,824 out of 40,800). Despite a non-negligible number of false-positive detections, the proposed algorithm demonstrates the dominance of the described sound pattern during apnea/hypopnea episodes.
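The per-frame feature extraction and the at-least-one-feature silence rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame length, the Tsallis order q, the amplitude-based probability distribution, and all threshold values are assumptions chosen for demonstration only.

```python
import numpy as np

def frame_features(frame, q=2.0, eps=1e-12):
    """Compute the four per-frame features named in the abstract.
    The order q and the squared-amplitude distribution are illustrative
    choices, not taken from the paper."""
    # Zero-crossing rate: fraction of sample pairs with a sign change
    zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
    # Signal power: mean squared amplitude
    power = np.mean(frame ** 2)
    # Probability distribution from normalized squared amplitudes
    p = frame ** 2 / (np.sum(frame ** 2) + eps)
    # Shannon information (entropy, in nats)
    shannon = -np.sum(p * np.log(p + eps))
    # Tsallis entropy of order q: (1 - sum(p^q)) / (q - 1)
    tsallis = (1.0 - np.sum(p ** q)) / (q - 1.0)
    return zcr, power, shannon, tsallis

def is_silent(frame, zcr_thr=0.9, pow_thr=1e-4, sh_thr=1.0, ts_thr=0.1):
    """Flag a frame as silent if at least one feature crosses its
    threshold; all thresholds here are hypothetical placeholders."""
    zcr, power, shannon, tsallis = frame_features(frame)
    return (power < pow_thr or shannon < sh_thr
            or tsallis < ts_thr or zcr > zcr_thr)
```

In a full pipeline, consecutive silent frames spanning at least 10 s would then be grouped into candidate apnea events, while runs of frames with reduced (but nonzero) power would mark candidate hypopneas.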