360-degree videos are consumed on diverse devices: some with immersive interfaces, such as Virtual Reality headsets, and others with non-immersive interfaces, such as computers with a pointing device or mobile devices with touchscreens. In prior work, we found significant differences in user behavior across these devices. From a dataset of head-orientation trajectories collected in 775 video playbacks, we classify which kind of video was played (two classes) and which of four possible devices was used to play it. We found that recurrent neural network models based on LSTM layers can classify both the video type and the playback device with an average accuracy of over 90% using only four seconds of trajectory. We believe this knowledge can improve the viewport-prediction techniques used in viewport-adaptive streaming when diverse devices are involved.
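The classification setup described above can be sketched as follows: an LSTM consumes a short window of head-orientation samples and emits class logits (e.g., over the four devices). This is a minimal illustration in PyTorch, not the paper's actual model; the feature layout, hidden size, and the assumed 10 Hz sampling rate (so four seconds gives 40 steps) are all hypothetical choices for the example.

```python
import torch
import torch.nn as nn

class TrajectoryClassifier(nn.Module):
    """LSTM-based classifier over head-orientation trajectories.

    Input: a sequence of (yaw, pitch) angles sampled over the first
    seconds of playback; output: class logits (here, 4 device classes).
    All sizes (hidden units, sampling rate) are illustrative assumptions,
    not the hyperparameters used in the study.
    """
    def __init__(self, n_features=2, hidden_size=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):  # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the trajectory
        return self.head(h_n[-1])    # logits over the device classes

# Four seconds of trajectory at an assumed 10 Hz -> 40 time steps.
model = TrajectoryClassifier()
batch = torch.randn(8, 40, 2)        # 8 synthetic (yaw, pitch) trajectories
logits = model(batch)                # shape: (8, 4)
```

Training such a model on labeled playback sessions would then allow early classification from only the first seconds of a session, as reported in the abstract.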