Face-to-face communication is interactive, involving continuous feedforward and feedback of information, thoughts, and feelings between the two parties. Accurately assessing the neural processing underlying these interactions requires synchronous, simultaneous recording of brain activity from both parties, a method known as hyperscanning. Here, we investigated the neural processing underlying nonverbal face-to-face communication using a magnetoencephalographic (MEG) hyperscanning system comprising two fiber-optically connected MEG units. Eight pairs of subjects participated. Each individual in each pair viewed 80 randomized 20-s trials: 40 real-time and 40 prerecorded (hereafter, real and simulated, respectively) videos of the opposite party's face. Nonverbal communication through actions such as gaze, eye blinks, and facial expression was intrinsically possible only during real videos. After each trial, each subject judged whether the viewed video was real or simulated. Overall discrimination accuracy was slightly but significantly above chance level. Statistical analysis of brain activity revealed a significant three-way interaction between theta-band rhythm amplitude, video type, and subjective discrimination response in the right frontal cortex. Moreover, when subjects responded that videos were simulated, theta activity was significantly lower for real videos than for simulated videos (p = 0.01). This result not only demonstrates the importance of right frontal theta activity during nonverbal communication, but also indicates the existence of unconscious, semi-automated neural processing during nonverbal communication that underlies one's ability to subjectively discriminate whether or not the opposite party is real.

I. INTRODUCTION

During communication between two people, each person perceives the other's words, tone, and facial expressions.
Each person then cognizes the other's emotion or intention in accordance with experience, predicts the next step of the communication [1], and ultimately produces some action or output. There is thus a continuous mutual feedback of output and input between the two people during communication [2-3]. These mutual feedback processes, in addition to higher-order processes, are thought to involve dynamic, semi-automatic processing, such as coordination of movement [4] and blink synchronization [5], that subserves conscious prediction and decision making. Thus, truly capturing the behaviors and neurophysiological correlates of communication between two people requires simultaneous and synchronous recording of the behavioral and/or neurophysiological states of both parties. However, previous research [6-7] has largely focused on simulated communication tasks using static facial stimuli and prerecorded video with a single subject, despite the fact that such tasks lack continuous and mutual feedback. Consequently, the neural correlates underlying two-way communication remain unclear.
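The theta-band amplitude measure referred to above can be illustrated with a minimal sketch. This is not the authors' actual analysis pipeline; the sampling rate, band edges (4-8 Hz for theta), filter design, and synthetic single-channel signal are all illustrative assumptions, showing only the generic band-pass-plus-Hilbert-envelope approach commonly used to quantify band-limited rhythm amplitude.

```python
# Illustrative sketch (not the authors' pipeline): mean theta-band
# amplitude of one MEG-like channel via band-pass filtering and the
# Hilbert envelope. Sampling rate, band edges, and filter order are
# assumptions for demonstration only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_amplitude(signal, fs, band=(4.0, 8.0), order=4):
    """Mean theta-band envelope amplitude of a 1-D signal."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, signal)        # zero-phase band-pass
    envelope = np.abs(hilbert(filtered))     # instantaneous amplitude
    return envelope.mean()

# Synthetic 20-s "trial" at 1000 Hz: a 6 Hz theta rhythm in noise,
# versus noise alone, to show the measure tracks theta content.
fs = 1000.0
t = np.arange(0.0, 20.0, 1.0 / fs)
rng = np.random.default_rng(0)
theta_trial = np.sin(2 * np.pi * 6.0 * t) + 0.5 * rng.standard_normal(t.size)
noise_trial = 0.5 * rng.standard_normal(t.size)
print(theta_amplitude(theta_trial, fs) > theta_amplitude(noise_trial, fs))
```

In a design like the one described above, one such amplitude would be computed per trial and sensor region, then entered into a factorial analysis with video type and subjective response as factors.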