“…A variant of the two-mode technique was introduced in [3], where a covariance computation removed the dependence on the input distribution, at the cost of significantly increased digital circuit complexity. In [4], the HDC technique inserts several calibration sequences into the MDAC input, extracts the parameters of f(x) from the output of the backend stages, and approximates the correction function g(x) using b1 ≈ 1/a1 and b3 ≈ −a3/a1^3. This approximation limits the degree of nonlinear error the technique can handle; furthermore, the correlation used to extract a1 and a3 leads to a very long convergence time. All of the preceding works share a common idea: first linearize the combined function g(f(x)) by solving for b3, after which estimating the linear gain b1 becomes a straightforward task for which many methods exist [3,4,6]. The way b3 is solved is therefore the main determinant of a technique's performance and efficiency.…”
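The linearization idea can be illustrated with a minimal numerical sketch. Assume (as an illustration, not from the source) a third-order MDAC model f(x) = a1·x + a3·x^3 with small cubic distortion; a first-order inversion of f gives the correction coefficients b1 = 1/a1 and b3 = −a3/a1^4, which matches the −a3/a1^3 form above when the stage gain a1 is close to 1. The coefficient values below are arbitrary assumptions chosen only to show that solving for b3 cancels the dominant cubic term:

```python
import numpy as np

# Hypothetical third-order MDAC model f(x) = a1*x + a3*x^3
# (a1, a3 are illustrative values, not taken from the paper).
a1, a3 = 0.98, 0.01

def f(x):
    """Nonlinear stage transfer function."""
    return a1 * x + a3 * x**3

# First-order polynomial inverse g(y) = b1*y + b3*y^3.
b1 = 1.0 / a1
b3 = -a3 / a1**4  # reduces to ~ -a3/a1^3 when a1 is close to 1

def g(y):
    """Digital correction function approximating the inverse of f."""
    return b1 * y + b3 * y**3

x = np.linspace(-1.0, 1.0, 1001)

# Residual nonlinearity after cubic correction vs. gain-only correction.
resid_corrected = np.abs(g(f(x)) - x).max()
resid_gain_only = np.abs(f(x) / a1 - x).max()
print(resid_corrected, resid_gain_only)
```

With these values, the corrected residual is dominated by a fifth-order term of size roughly 3·a3^2/a1^2, about 30 times smaller than the cubic error left by gain correction alone, illustrating why, once b3 is solved, estimating b1 is the easy part.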