The increasing demand for surveillance systems has led to an unprecedented volume of video data being generated daily. The volume and frequency of these video streams make it both impractical and inefficient to monitor them manually for abnormal events, as such events occur infrequently. To alleviate these difficulties through intelligent surveillance systems, several vision-based methods for detecting abnormal events or behaviors have appeared in the literature. Convolutional neural networks (CNNs) have been applied frequently in this area, owing to their prevalence in the related domain of general action recognition and classification. Although existing approaches have achieved high detection rates for specific abnormal behaviors, more inclusive methods are needed. This paper presents a CNN-based approach that efficiently detects and classifies whether a video contains the abnormal human behaviors of falling, loitering, or violence in uncrowded scenes. The approach implements a two-stream architecture in which two separate 3D CNNs accept a video stream and an optical flow stream as input to enhance prediction performance. After transfer learning was applied, the model was trained on a specialized dataset corresponding to each abnormal behavior. Experiments show that the proposed approach detects falling, loitering, and violence with accuracies of up to 99%, 97%, and 98%, respectively, achieving state-of-the-art results and outperforming existing approaches.
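The two-stream idea summarized above can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual model: the class labels, the tiny hand-rolled "3D convolution" feature extractor, and the use of frame differencing as a crude stand-in for optical flow are all hypothetical, and a real system would use deep pretrained 3D CNNs on both streams. The sketch only shows the structural pattern: one stream scores the raw frames, a second stream scores a motion signal, and the per-class probabilities are fused by averaging.

```python
import math
import random

random.seed(0)
NUM_CLASSES = 4  # assumed labels, e.g. normal, falling, loitering, violence


def make_clip(t, h, w):
    """Generate a toy (T, H, W) grayscale clip as nested lists."""
    return [[[random.random() for _ in range(w)]
             for _ in range(h)] for _ in range(t)]


def conv3d_mean_relu(clip, kernel):
    """Naive valid-mode 3D convolution, ReLU, then global mean pooling."""
    t, h, w = len(clip), len(clip[0]), len(clip[0][0])
    kt, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    total, count = 0.0, 0
    for i in range(t - kt + 1):
        for j in range(h - kh + 1):
            for k in range(w - kw + 1):
                s = sum(clip[i + a][j + b][k + c] * kernel[a][b][c]
                        for a in range(kt) for b in range(kh) for c in range(kw))
                total += max(s, 0.0)  # ReLU before pooling
                count += 1
    return total / count


def softmax(logits):
    m = max(logits)
    e = [math.exp(x - m) for x in logits]
    z = sum(e)
    return [x / z for x in e]


def stream_scores(clip, kernel, weights):
    """One stream: 3D-conv feature -> toy linear head -> softmax scores."""
    feat = conv3d_mean_relu(clip, kernel)
    return softmax([feat * w for w in weights])


clip = make_clip(8, 12, 12)
# Crude frame-difference motion signal standing in for optical flow.
motion = [[[abs(clip[i + 1][j][k] - clip[i][j][k]) for k in range(12)]
           for j in range(12)] for i in range(7)]

kernel = [[[random.random() for _ in range(3)]
           for _ in range(3)] for _ in range(3)]
w_rgb = [random.random() for _ in range(NUM_CLASSES)]
w_flow = [random.random() for _ in range(NUM_CLASSES)]

# Late fusion: average the two streams' class probabilities.
fused = [0.5 * (a + b) for a, b in
         zip(stream_scores(clip, kernel, w_rgb),
             stream_scores(motion, kernel, w_flow))]
prediction = fused.index(max(fused))
```

In this pattern each stream can be trained or fine-tuned independently (matching the transfer-learning step mentioned in the abstract), and fusion happens only at the class-score level, so the motion stream can compensate when appearance alone is ambiguous.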