Abstract: In open, dynamic multi-agent systems, trust is widely regarded as a critical concept that must be handled and managed. Computational trust models are formal models proposed to manage trust in such settings. These models represent a new form of distributed intelligence in virtual societies and collective intelligence. However, the diversity of these models leaves users confused about which one to choose. Various testbeds have been established to evaluate trust and reputation models and to verify their robustness and efficiency; however, these testbeds lack the flexibility to handle scenarios involving multi-context trust models. In this paper, we present a framework for evaluating computational trust models that gives users greater flexibility when comparing trust models in open systems and displays the analysis results as chart diagrams. The ultimate objective is to evaluate and classify the available computational trust models.