The ODE (ordinary differential equation) method has been a workhorse for algorithm design and analysis since the introduction of the stochastic approximation technique of Robbins and Monro in the early 1950s. It is now understood that convergence theory amounts to establishing robustness of Euler approximations for ODEs, while the theory of rates of convergence requires finer probabilistic analysis. This paper sets out to extend this theory to quasi-stochastic approximation (QSA), based on algorithms in which the "noise" or "exploration" is based on deterministic signals, much like quasi-Monte Carlo. The main results are obtained under minimal assumptions: the usual Lipschitz conditions for ODE vector fields, and for rate results it is assumed that there is a well-defined linearization near the optimal parameter θ*, with Hurwitz linearization matrix A*. Algorithm design is performed in continuous time, in anticipation of discrete-time implementation based on Euler approximations, or high-fidelity alternatives.

The main contributions are summarized as follows:

(i) If the algorithm gain is chosen as a_t = g/(1 + t)^ρ with g > 0 and ρ ∈ (0, 1), then the rate of convergence of the algorithm is 1/t^ρ. There is also a well-defined "finite-t" approximation: