Sign language is utilised by deaf and mute people to communicate through hand movements, body postures and facial expressions. The motions in sign language comprise a range of distinct hand and finger articulations that are occasionally synchronised with the head, face and body. Automatic sign language recognition is a highly challenging area and, after almost three decades of research, still remains in its infancy compared to speech recognition. Current wearable and vision-based systems for sign language recognition are intrusive, are sensitive to ambient lighting, and raise privacy concerns. To the best of our knowledge, our work proposes the first contactless British Sign Language (BSL) recognition system using radar and Deep Learning (DL) algorithms. Our proposed system extracts two-dimensional spatio-temporal features from the radar data and applies state-of-the-art DL models to classify the spatio-temporal features of BSL signs into different verbs and emotions, such as help, drink, eat, happy, hate and sad. We collected and annotated a large-scale benchmark BSL dataset covering 15 different types of BSL signs. Our proposed system achieves the highest classification performance, with a multiclass accuracy of up to 90.07% at a distance of 141 cm from the subject using the VGGNet model.
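To make the described pipeline concrete, the sketch below illustrates one plausible realisation: a 1D radar return is converted into a 2D time-frequency (micro-Doppler-style) map via a short-time Fourier transform, and a VGG-style network classifies it into the 15 BSL sign classes. This is not the authors' code; the sampling rate, STFT window, input size and the use of torchvision's VGG16 are all illustrative assumptions.

```python
# Minimal sketch of a radar-to-VGG classification pipeline (illustrative only).
import numpy as np
from scipy.signal import stft
import torch
import torchvision.models as models

NUM_CLASSES = 15  # 15 BSL sign types, per the dataset description


def radar_to_spectrogram(iq_samples: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    """Convert a 1D complex radar return into a 2D time-frequency map.

    fs, nperseg and noverlap are assumed values, not from the paper.
    """
    _, _, Z = stft(iq_samples, fs=fs, nperseg=128, noverlap=96)
    spec = 20 * np.log10(np.abs(Z) + 1e-10)  # magnitude in dB
    # Normalise to [0, 1] so the map can be treated as an image channel.
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-10)
    return spec


# VGG16 backbone with its final layer replaced for 15-way classification.
model = models.vgg16(weights=None)
model.classifier[6] = torch.nn.Linear(4096, NUM_CLASSES)

# Example forward pass on one synthetic capture.
spec = radar_to_spectrogram(np.random.randn(4096) + 1j * np.random.randn(4096))
x = torch.tensor(spec, dtype=torch.float32)
x = torch.nn.functional.interpolate(
    x[None, None], size=(224, 224), mode="bilinear"
).repeat(1, 3, 1, 1)  # resize and replicate to 3 channels for the VGG input
logits = model(x)  # shape: (1, 15), one score per BSL sign class
```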