This paper explores a different approach to evaluating the merits of specific technical components of computer-based learning (CBL) applications. A traditional double-blind experimental study was implemented in a new context: a computer-based Clinical Decision Simulator (CDS) system incorporating an intelligent agent was designed and implemented, then compared with an otherwise identical system without the agent and with a group of students not using CBL systems at all. Although no improvement in measurable learning outcomes could be conclusively demonstrated, the results offered some evidence that students using the intelligent-agent system had more positive learning experiences and a deeper conceptualisation of the issues. This suggests that a comparative, multimethod experimental evaluation strategy, although complex and not without its shortcomings, may provide a more comprehensive analysis of students' learning experiences and a useful picture of their perceptions of CBL tools. This novel approach may be of particular relevance where the justification of a specific technological aspect of an e-learning application is required. The value of developing and using such an experimental strategy to evaluate a specific technological aspect of a CBL application is discussed.