Software architects use so-called software architecture design assistants to obtain tool-based, (semi-)automated support in engineering software systems. Compared to manual engineering, the main promise of such support is that architects can create high-quality architectural designs more efficiently. Yet, current practice in evaluating whether this promise is kept relies on case studies conducted by the original authors of the respective design assistants. The downside of such evaluations is that they neither generalize to third-party software architects nor allow quantitative efficiency comparisons between competing design assistants. To tackle this problem, we investigate how researchers can apply controlled experiments to evaluate the impact of software architecture design assistants on the efficiency of architects. For our investigation, we survey related controlled experiments. Based on this survey, we derive lessons learned in terms of best practices and challenges for such experiments.