Most voice-based personal assistants handle only simple tasks that are not conversational but amount to single-turn question answering. To address this limitation, we investigate the dialogue capabilities of commercial conversational systems and compare them to the standards users expect. We designed a set of moderately complex search tasks and used two popular personal assistants to evaluate user-system interaction. A laboratory-based user study was conducted with twenty-five users across seventy-five search sessions to collect user-system conversational dialogues (for three search tasks). Next, we show that a set of simple rules, which could be implemented in the immediate future, can improve users' interaction experience and make the system more anthropomorphic. Using a conceptual prototype in which a human (Wizard) played the role of the system (unbeknownst to the users), we demonstrate the efficacy of the guidelines and provide design recommendations for future conversational search systems.