Digital health interventions (DHIs) have enormous potential as scalable tools to improve health and healthcare delivery by improving effectiveness, efficiency, accessibility, safety and personalisation. Achieving these improvements requires a cumulative knowledge base to inform the development and deployment of DHIs. However, evaluations of DHIs present special challenges. This paper aims to examine these challenges and outline an evaluation strategy in terms of the Research Questions (RQs) needed to appraise DHIs. Because DHIs lie at the intersection of biomedical, behavioural, computing and engineering research, methods drawn from all of these disciplines are required. Relevant RQs include: defining the problem and the likely benefit of the DHI, which in turn requires establishing its likely reach and uptake; specifying the causal model describing how the intervention will achieve its intended benefit; identifying the key components and how they interact with one another; and estimating the overall benefit in terms of effectiveness, cost-effectiveness and harms. While Randomised Controlled Trials (RCTs) are important for evaluating effectiveness and cost-effectiveness, they are best undertaken only when: a) the intervention and its delivery package are stable; b) these can be implemented with high fidelity; and c) there is a reasonable likelihood that the overall benefits will be clinically meaningful (improved outcomes, or equivalent outcomes at lower cost). Broadening the portfolio of RQs and evaluation methods will help to develop the knowledge base needed to inform decisions on policy, practice and research.