Sign language is used by the deaf community all over the world. Internationally, various research groups are working towards the development of an electronic sign language translator to enhance the accessibility of signers. By employing intelligent models and wearable devices such as inertial measurement units (IMUs), continuous signs that together form a complete sentence can be recognized effectively. The work presented here proposes a novel one-dimensional deep capsule network (CapsNet) architecture for continuous Indian Sign Language recognition using signals obtained from a custom-designed wearable IMU system. The IMU records tri-axial acceleration and turn rate, and the orientation of the sensor is estimated using a complementary filter. All of these signals are fed to the proposed deep learning network for learning and recognition of the signed sentences. The performance of the proposed CapsNet architecture is assessed by varying the number of dynamic routing iterations between capsule layers. The model's performance is compared with that of baseline convolutional neural networks (CNNs) in terms of accuracy, loss, false predictions and learnt activations. The proposed CapsNet yields improved accuracy values of 94% (with 3 routing iterations) and 92.50% (with 5 routing iterations), compared to 87.99% for the CNNs. The improved learning of the architecture is further validated by spatial activations depicting excited units at the predictive layer. To evaluate the relative performance and competitive behaviour of the models, a novel non-cooperative pick game is constructed. The game presents a pick-and-predict competition between the CapsNet and the CNN, with each model constrained to adopting a single strategy. Both models compete with each other in order to reach their best responses. The higher Nash equilibrium value attained by the CapsNet compared to the CNN indicates the suitability of the proposed approach.
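The abstract does not state the exact form of the complementary filter used to estimate sensor orientation; a minimal single-axis sketch is given below, assuming a standard formulation in which the integrated gyroscope turn rate is blended with the tilt angle inferred from the accelerometer's gravity vector. The blending coefficient `alpha` and the roll-angle convention are illustrative assumptions, not values from the paper.

```python
import math

def complementary_filter(accel, gyro_rate, prev_angle, dt, alpha=0.98):
    """Estimate a single orientation angle (roll, radians) from IMU data.

    accel      -- (ax, ay, az) accelerometer reading (any consistent unit)
    gyro_rate  -- angular rate about the roll axis (rad/s)
    prev_angle -- previous roll estimate (rad)
    dt         -- sampling interval (s)
    alpha      -- blend factor (assumed value; not from the paper)
    """
    # Tilt angle implied by the gravity direction (low-frequency reference)
    accel_angle = math.atan2(accel[1], accel[2])
    # Integrated gyro estimate (accurate short-term, drifts long-term)
    gyro_angle = prev_angle + gyro_rate * dt
    # Complementary blend: high-pass the gyro, low-pass the accelerometer
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

In a continuous-recognition pipeline, this filter would run per sample, and the resulting orientation stream would be concatenated with the raw acceleration and turn-rate channels before being fed to the network.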