Efficient and accurate segmentation of the rectum in images acquired with a low-field (58-74 mT) prostate Magnetic Resonance Imaging (MRI) scanner may be advantageous for MRI-guided prostate biopsy and focal treatment guidance. However, automated rectum segmentation on low-field MRI images is challenging due to spatial resolution and signal-to-noise ratio (SNR) constraints. This study aims to develop a deep learning model to automatically segment the rectum in low-field MRI prostate images. 132 3D images from 10 patients were assembled. A 3D U-Net model with an input matrix size of 120×120×40 voxels was trained to detect and segment the rectum. The 3D U-Net can learn and integrate relative information between adjacent MRI slices, which enforces 3D patterns such as rectal wall smoothness and thus compensates for slice-to-slice variability in SNR and rectal boundary fuzziness [0]. Contrast stretching, histogram equalization, and brightness enhancement were investigated and applied to normalize intra- and inter-image intensity heterogeneity. Data augmentation methods such as elastic deformation, flipping, rotation, and scaling were applied to reduce the risk of overfitting during model training. The model was trained and tested using 4-fold cross-validation with a 3:1:2 split for training, validation, and testing. Results show that the mean intersection-over-union (IoU) score for the rectum on the testing dataset is 0.63. Additionally, visual examination suggests that the displacement between the centroids of the ground-truth and inferred volumetric segmentations is less than 3 mm. Thus, this study demonstrates that (1) a 3D U-Net model can effectively segment the rectum on low-field MRI scans and (2) applying image processing and data augmentation can boost model performance.
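The intensity-normalization steps mentioned above (contrast stretching and histogram equalization) can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation; the percentile bounds and bin count are assumed parameters, not values reported in the study:

```python
import numpy as np

def contrast_stretch(volume, low_pct=2, high_pct=98):
    """Linearly rescale intensities so the given percentiles map to [0, 1].

    low_pct/high_pct are assumed illustrative defaults, not study values.
    """
    lo, hi = np.percentile(volume, [low_pct, high_pct])
    stretched = (volume.astype(np.float64) - lo) / max(hi - lo, 1e-8)
    return np.clip(stretched, 0.0, 1.0)

def histogram_equalize(volume, n_bins=256):
    """Map intensities through the normalized cumulative histogram (CDF),
    flattening the intensity distribution across the volume."""
    hist, bin_edges = np.histogram(volume.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize CDF to [0, 1]
    return np.interp(volume.ravel(), bin_edges[:-1], cdf).reshape(volume.shape)
```

Applying such per-volume normalization before training helps reduce the intra- and inter-image intensity heterogeneity the abstract describes.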
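The reported evaluation metrics, volumetric IoU and the centroid displacement between ground-truth and inferred segmentations, can be computed from binary masks as below. This is a generic sketch under the assumption of binary 3D mask arrays; the function names and the `voxel_size_mm` parameter are hypothetical, not taken from the study:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary 3D masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 1.0

def centroid_shift_mm(pred, truth, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Euclidean distance (mm) between the centroids of two binary masks.

    voxel_size_mm is an assumed per-axis spacing; real scans would use the
    spacing stored in the image header.
    """
    c_pred = np.mean(np.argwhere(pred), axis=0) * voxel_size_mm
    c_truth = np.mean(np.argwhere(truth), axis=0) * voxel_size_mm
    return float(np.linalg.norm(c_pred - c_truth))
```

With these definitions, the abstract's results correspond to a mean `iou` of 0.63 and a `centroid_shift_mm` under 3 mm on the test folds.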