Individuals with disabilities and persons operating in inaccessible environments can benefit greatly from robotic manipulators when performing activities of daily living and other remote tasks. Users who rely on robotic manipulators to interact with their environment are constrained by the limited sensory information available through traditional operator interfaces, which deprive them of the somatosensory feedback that direct contact would normally provide. Multimodal sensory feedback can effectively bridge these perceptual gaps. Given a set of object properties (e.g., temperature, weight) to be conveyed and a set of available sensory modalities (e.g., visual, haptic), an effective interface design must determine which modality to assign to each property. The goal of this study was to develop an effective multisensory interface for robot-assisted pouring tasks, one that delivers nuanced sensory feedback while accommodating the high visual demand of precise teleoperation. To that end, an optimization approach was employed to generate a combination of property-to-modality assignments that maximizes effective feedback perception and minimizes cognitive load. A set of screening experiments tested the twelve possible individual assignments that could form this combination, and the resulting perceptual accuracy, cognitive load, and user preference measures were input into a cost function. The assignment was then formulated and solved as a linear assignment problem, yielding a minimum-cost combination. Experiments evaluating efficacy in practical pouring use cases indicate that the resulting design was significantly more effective than no feedback and held a considerable advantage over an arbitrary design.
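
The sketch below illustrates the assignment step described above, assuming SciPy's `linear_sum_assignment` as the solver. The property and modality names, screening scores, and cost weights are placeholders invented for illustration (three properties by four modalities, giving twelve candidate assignments); the study's actual measures and cost function may be weighted differently.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative labels only; the actual properties and modalities
# are those defined in the study.
properties = ["temperature", "weight", "liquid level"]
modalities = ["visual", "haptic", "auditory", "vibrotactile"]

# Placeholder screening results per (property, modality) pair,
# each scaled to [0, 1]. Real entries would come from the
# screening experiments.
accuracy = np.array([[0.92, 0.85, 0.70, 0.65],    # higher is better
                     [0.60, 0.95, 0.55, 0.80],
                     [0.88, 0.75, 0.90, 0.70]])
load = np.array([[0.30, 0.40, 0.55, 0.50],        # lower is better
                 [0.60, 0.25, 0.65, 0.35],
                 [0.35, 0.50, 0.30, 0.45]])
preference = np.array([[0.80, 0.70, 0.50, 0.55],  # higher is better
                       [0.45, 0.90, 0.40, 0.70],
                       [0.75, 0.60, 0.85, 0.50]])

# Hypothetical weights trading off the three measures.
w_acc, w_load, w_pref = 0.5, 0.3, 0.2

# Convert the measures into a single cost to minimize:
# penalize low accuracy, high load, and low preference.
cost = w_acc * (1 - accuracy) + w_load * load + w_pref * (1 - preference)

# Solve the (rectangular) linear assignment problem, which pairs
# each property with a distinct modality at minimum total cost.
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"{properties[r]} -> {modalities[c]} (cost {cost[r, c]:.2f})")
```

With the placeholder numbers above, the solver pairs each property with the modality that scored well for it in screening while keeping the summed cost of the whole combination minimal, which is the sense in which the resulting design is a minimum-cost combination rather than a set of independently best single assignments.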