Within crime scene analysis, a framework providing interactive visualization and gesture-based manipulation of virtual objects, while preserving the view of the real environment, is a useful approach both for the interpretation of cues and for instructional purposes. This paper presents a framework providing a collection of techniques to enhance the reliability, accuracy, and overall effectiveness of gesture-based interaction, applied to the interactive interpretation and evaluation of a crime scene in an augmented reality environment. The interface layout is visualized through a stereoscopic, see-through Head-Mounted Display (HMD) that projects graphics into the central region of the user's field of view, floating within a close-at-hand volume. The interaction paradigm exploits both hands concurrently to perform precise manipulation of 3D models of objects possibly present at the crime scene, as well as distance and angular measurements, allowing the user to formulate visual hypotheses with minimal interaction effort. Interaction is adapted in real time to the user's needs by monitoring hand and finger dynamics, enabling both complex actions (such as the aforementioned manipulation and measurement) and conventional keyboard-like operations.
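The switching between complex manipulation actions and keyboard-like operations described above could, for instance, be driven by simple hand/finger dynamics. The following is a minimal illustrative sketch, not the authors' implementation: the function names, thresholds, and the pinch/tap heuristics are all assumptions introduced here for clarity.

```python
import math

# Assumed thresholds, chosen for illustration only; a real system
# would calibrate these per user and per tracking sensor.
PINCH_THRESHOLD = 0.03   # thumb-index distance (m) read as a grasp
TAP_SPEED = 0.5          # index fingertip speed (m/s) read as a key tap

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def classify_mode(thumb_tip, index_tip, index_speed):
    """Infer the interaction mode from hand/finger dynamics.

    A pinch (thumb close to the index fingertip) is interpreted as a
    manipulation grasp; a fast index-finger motion without a pinch is
    interpreted as a keyboard-like tap; otherwise the hand is idle.
    """
    if distance(thumb_tip, index_tip) < PINCH_THRESHOLD:
        return "manipulate"
    if index_speed > TAP_SPEED:
        return "key_tap"
    return "idle"

# Example: thumb and index fingertips 1 cm apart, moving slowly.
print(classify_mode((0.0, 0.0, 0.4), (0.01, 0.0, 0.4), 0.1))  # manipulate
```

In such a scheme, the continuous monitoring of fingertip positions and speeds lets the same hand alternate between free-space manipulation and discrete key-like input without an explicit mode switch.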