Combined emission/absorption and reflection/transmission volume rendering is able to display poorly segmented structures from 3-d medical image sequences. Visual cues such as shading and colour let the user distinguish structures in the 3-d display that are incompletely extracted by threshold segmentation. To be truly helpful, however, the analysed information needs to be quantified and transferred back into the data. We extend our previously presented display scheme by establishing communication between visual analysis and the display process. The main tool is a selective 3-d picking device. To remain useful on a rather rough segmentation, both the device itself and the display offer facilities for object selection. Selective intersection planes let the user discard information prior to choosing a tissue of interest. Picking is then carried out on the 2-d display by casting a ray into the volume. The picking device is made pre-selective by exploiting already existing segmentation information, so objects can be picked even when they lie behind the semi-transparent surfaces of other structures. Information generated by a subsequent connected-component analysis can then be integrated into the data. Data examination continues on an improved display, letting the user actively participate in the analysis process. This display-and-interaction scheme proved to be very effective: the viewer's ability to extract relevant information from a complex scene is combined with the computer's ability to quantify that information. The approach introduces 3-d computer graphics methods into user-guided image analysis, creating an analysis-synthesis cycle for interactive 3-d segmentation.
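The pre-selective ray-cast pick described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `pick_voxel`, the label-volume representation, and the fixed step size are all assumptions introduced here. The key idea it demonstrates is that voxels carrying labels other than the chosen tissue of interest are stepped over, so a structure behind a semi-transparent surface can still be picked.

```python
import numpy as np

def pick_voxel(labels, origin, direction, target_label, step=0.5):
    """Cast a ray through a labelled volume and return the index of the
    first voxel whose segmentation label equals target_label.

    Voxels with any other label (e.g. an occluding, semi-transparently
    rendered structure in front) are skipped, making the pick
    pre-selective.  Names and parameters are illustrative only.
    """
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)          # unit step direction
    pos = np.asarray(origin, dtype=float)
    shape = np.array(labels.shape)
    while np.all(pos >= 0) and np.all(pos < shape):  # march until ray exits
        idx = tuple(pos.astype(int))
        if labels[idx] == target_label:
            return idx                               # first hit of the target tissue
        pos = pos + step * direction
    return None                                      # ray left the volume without a hit

# Tiny demo volume: label 1 is an occluding surface, label 2 the tissue of interest.
vol = np.zeros((10, 10, 10), dtype=int)
vol[3, 5, 5] = 1   # structure in front
vol[7, 5, 5] = 2   # target behind the occluder
hit = pick_voxel(vol, origin=(0, 5, 5), direction=(1, 0, 0), target_label=2)
print(hit)  # → (7, 5, 5)
```

The pick steps straight past the voxel labelled 1 and reports the voxel of label 2 behind it; a non-selective pick would have stopped at the first occupied voxel.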