Image segmentation is a core task in computer vision, yet it remains challenging when precise delineation of small objects is required, as in medical imaging of minute targets. Existing algorithms are designed primarily for large or medium-sized objects with particular dimensions or proportions; small targets suffer from tiny size, weak discriminative features, and consequently poor segmentation performance. To address these challenges, we propose the You Only Look One Segment network (YOSEG). We evaluate our approach on two datasets: a proprietary CT intracranial aneurysm dataset collected from multiple medical centers, and the publicly available CBIS-DDSM (Curated Breast Imaging Subset of DDSM) breast mass dataset. The results demonstrate substantial improvements in small object segmentation accuracy, marking a significant advance in this area.