Checkpoints are widely used to improve the performance of computer systems and programs in the presence of failures, and to significantly reduce the cost of restarting a program each time it fails. Application-level checkpointing has been proposed both for programs that execute on failure-prone platforms and to reduce the execution time of programs that are subject to internal failures. We therefore develop a mathematical model for estimating the average execution time of a program in the presence of failures, with and without application-level checkpointing, and use it to derive the optimum checkpoint interval, expressed as the number of instructions executed between successive checkpoints. The case of programs with loops and nested loops is also discussed. The results are illustrated with several numerical examples.
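For intuition, a classical first-order benchmark for this optimization (a sketch under simplifying assumptions, namely Young's approximation, not the model developed here) is the following: if taking a checkpoint costs $C$ instruction-times of overhead and failures occur at a constant rate of $\lambda$ failures per instruction executed, the optimum interval between successive checkpoints is approximately
\[
\tau_{\mathrm{opt}} \approx \sqrt{\frac{2C}{\lambda}} .
\]
For example, with $C = 10^{6}$ and $\lambda = 10^{-9}$, this yields $\tau_{\mathrm{opt}} \approx 4.5 \times 10^{7}$ instructions between successive checkpoints; the model developed in this paper refines this kind of estimate.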