This thesis results from research on human-robot cooperation within the context of the collaborative research center "Humanoid Robots - Learning and Cooperating Multimodal Robots". It presents a haptic interface called "tactile language" that was designed to provide an additional non-verbal interaction modality. Furthermore, a method for the proactive planning and execution of robot tasks based on the estimated human intention was devised. In order to integrate both methods into the existing, simple robot control, an adequate robot architecture was conceived and implemented that comprises the existing robot control augmented by the additionally required architectural components.

The main structure of the robot architecture consists of three layers, of which the upper two, the task planner and the execution supervisor, represent the main new contributions. The job of the task planner is to select the task that should be executed next. It was designed such that the robot can receive commands from all man-machine interfaces on the one hand and from the proactive execution module on the other. To that end, an algorithm was defined that selects the appropriate command in any situation (sketched below). Once a command has been selected, the execution supervisor ensures the correct execution of the task.

The tactile language developed in the course of this research allows a human to control the robot via its artificial skin or a touchpad. The processing system of the tactile language can be divided into three stages: first, the sensor interfaces and the tactile image processing; second, the recognition of the input symbols and characters of the language; and third, the interpretation of the input and the connection to the robot control.

The tactile language offers the user four different input modes. The first two are a direct and an indirect control mode for moving and rotating the robot's TCP and head. The third is an abstract mode in which symbolic, complex robot commands can be issued by entering appropriate multi-finger symbols; furthermore, through the integration of a freely available character recognition software, the user can enter handwritten alphabetic characters. Lastly, there is a button mode that works like a touchscreen, with active regions on the tactile surface that directly issue robot commands.

The tactile language is very expressive, since multiple parameters such as direction, distance, and speed can be decoded from a single finger stroke and used as attributes of the input symbol. Moreover, it is approximately real-time capable, which makes it possible both to control the robot directly and to recognize symbols instantly. The interpretation is performed by a deterministic finite automaton that defines the tactile language and checks whether or not a tactile input is valid. In the case of a valid input, the data is transmitted to the robot control.

A traditional service robot is controlled exclusively by explicit commands from the human. The paradigm of "proactive execution" that is newly established in this thesis goes beyond this scheme: based on the estimated human intention, the robot plans and executes suitable tasks on its own initiative.
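
To make the task planner's command arbitration concrete, the following minimal Python sketch shows one way such a selection could be organized. The priority order (explicit human commands outranking proactively generated ones) and all names such as TaskPlanner, submit, and select_next are assumptions for illustration, not the algorithm actually defined in the thesis.

    from dataclasses import dataclass, field
    from enum import IntEnum
    import heapq

    class Source(IntEnum):
        # Assumed arbitration order: explicit human commands outrank
        # proactively generated ones (lower value = higher priority).
        HUMAN_INTERFACE = 0
        PROACTIVE = 1

    @dataclass(order=True)
    class Command:
        source: Source
        seq: int                          # arrival order breaks ties
        task: str = field(compare=False)  # the payload is not compared

    class TaskPlanner:
        """Collects commands from all man-machine interfaces and from the
        proactive execution module and selects the next task to execute."""

        def __init__(self):
            self._queue = []
            self._seq = 0

        def submit(self, task, source):
            heapq.heappush(self._queue, Command(source, self._seq, task))
            self._seq += 1

        def select_next(self):
            # Highest-priority pending command, or None if nothing is queued.
            return heapq.heappop(self._queue).task if self._queue else None

    planner = TaskPlanner()
    planner.submit("fetch cup", Source.PROACTIVE)
    planner.submit("stop", Source.HUMAN_INTERFACE)
    assert planner.select_next() == "stop"  # the explicit command wins

A priority queue keeps each selection cheap and concentrates the arbitration rule in a single, easily changed ordering.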
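
The three-stage structure of the tactile language processing can likewise be pictured as a simple chain of stages. In this sketch every stage function (acquire_tactile_image, recognize_symbols, interpret) is a hypothetical placeholder; only the ordering of the stages follows the description above.

    def acquire_tactile_image(raw_samples):
        # Stage 1 placeholder: sensor interface and tactile image processing.
        return raw_samples

    def recognize_symbols(image):
        # Stage 2 placeholder: recognition of input symbols and characters.
        return ["mode_touch", "stroke"]

    def interpret(symbols):
        # Stage 3 placeholder: interpretation and hand-over to the robot control.
        return {"command": "move_tcp", "from_symbols": symbols}

    def process_tactile_input(raw_samples):
        """The three stages chained in order: image processing,
        symbol recognition, interpretation."""
        return interpret(recognize_symbols(acquire_tactile_image(raw_samples)))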
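
The decoding of stroke attributes is easy to illustrate. Assuming, purely for the sake of the example, that a stroke arrives as time-stamped (t, x, y) contact samples, direction, distance, and speed can be read off the endpoints; the function name and sample format are not taken from the thesis.

    import math

    def decode_stroke(samples):
        """Derive the direction, distance, and speed attributes of a single
        finger stroke from time-stamped (t, x, y) contact samples."""
        (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
        dx, dy = x1 - x0, y1 - y0
        distance = math.hypot(dx, dy)                 # length of the stroke
        direction = math.degrees(math.atan2(dy, dx))  # angle, 0 deg = +x axis
        duration = max(t1 - t0, 1e-6)                 # guard against t1 == t0
        return {"direction_deg": direction,
                "distance": distance,
                "speed": distance / duration}

    # A straight rightward stroke of 40 mm performed in 0.5 s:
    print(decode_stroke([(0.0, 10, 20), (0.5, 50, 20)]))
    # -> {'direction_deg': 0.0, 'distance': 40.0, 'speed': 80.0}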
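
Finally, the interpreting automaton can be sketched as a small deterministic finite automaton over recognized input symbols. The states and the alphabet below are invented toy examples; only the mechanism, rejecting any input for which no transition is defined, mirrors the validity check described above.

    class TactileDFA:
        """Deterministic finite automaton over recognized input symbols;
        inputs that leave the defined transitions are rejected instead
        of being forwarded to the robot control."""

        def __init__(self, transitions, start, accepting):
            self.transitions = transitions  # dict: (state, symbol) -> state
            self.start = start
            self.accepting = accepting

        def accepts(self, symbols):
            state = self.start
            for sym in symbols:
                if (state, sym) not in self.transitions:
                    return False            # undefined transition: invalid input
                state = self.transitions[(state, sym)]
            return state in self.accepting

    # Toy language: a mode-selecting touch followed by any number of strokes.
    dfa = TactileDFA(
        transitions={("idle", "mode_touch"): "armed",
                     ("armed", "stroke"): "armed"},
        start="idle",
        accepting={"armed"},
    )
    assert dfa.accepts(["mode_touch", "stroke", "stroke"])
    assert not dfa.accepts(["stroke"])  # a stroke without mode selection is rejected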