The Sign Language Recognition System (SLRS) is a technology that aims to enhance communication accessibility for the deaf community in India by replacing the traditional approach of relying on human interpreters. However, existing SLRSs for Indian Sign Language (ISL) do not address several major problems, including occlusion, similar hand gestures, multiple viewing angles, and inefficiency caused by extracting features from long frame sequences that contain redundant and unnecessary information. Therefore, this paper proposes an occlusion-robust SLRS named Multi Featured Deep Network (MF-DNet) for recognizing ISL words. The proposed MF-DNet uses a histogram-difference-based keyframe selection technique to remove redundant frames. To resolve the occlusion, similar-hand-gesture, and multi-viewing-angle problems, the MF-DNet combines pose features with Convolutional Neural Network (CNN) features. For classification, the proposed system uses a Bidirectional Long Short-Term Memory (BiLSTM) network, which is compared with other classifiers such as LSTM, ConvLSTM, and stacked LSTM networks. The proposed SLRS achieved an average classification accuracy of 96.88% on the ISL dataset and 99.06% on the benchmark LSA64 dataset. The results obtained from MF-DNet are compared with those of several existing SLRSs, and the proposed method outperforms them.
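The histogram-difference keyframe selection mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; the bin count, threshold, and comparison against the last kept frame are assumptions made for illustration, and the sketch works on plain 2D grayscale frames rather than a video stream.

```python
def histogram(frame, bins=16):
    # frame: 2D list of grayscale pixel intensities in [0, 255];
    # count how many pixels fall into each intensity bin
    hist = [0] * bins
    for row in frame:
        for px in row:
            hist[px * bins // 256] += 1
    return hist

def select_keyframes(frames, threshold=0.2):
    # Keep the first frame, then keep any frame whose normalized
    # histogram difference from the last kept frame exceeds the
    # threshold; near-duplicate (redundant) frames are dropped.
    if not frames:
        return []
    kept = [0]
    prev = histogram(frames[0])
    total = sum(prev)  # number of pixels per frame
    for i, frame in enumerate(frames[1:], start=1):
        h = histogram(frame)
        # L1 histogram distance, normalized to [0, 1]
        diff = sum(abs(a - b) for a, b in zip(prev, h)) / (2 * total)
        if diff > threshold:
            kept.append(i)
            prev = h
    return kept
```

For example, given three 4x4 frames where the first two are identical dark frames and the third is bright, the selector keeps indices 0 and 2 and discards the redundant middle frame.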