This paper focuses on the online gradient and proximal-gradient methods for optimization and learning problems with data streams. The performance of the online gradient descent method is first examined in a setting where the cost satisfies the Polyak-Łojasiewicz (PL) inequality and inexact gradient information is available. Convergence results show that the instantaneous regret converges linearly up to an error that depends on the variability of the problem and the statistics of the gradient error; in particular, we provide bounds in expectation and in high probability (that hold iteration-wise), with the latter derived by leveraging a sub-Weibull model for the errors affecting the gradient. Similar convergence results are then provided for the online proximal-gradient method, under the assumption that the composite cost satisfies the proximal-PL condition. The convergence results are applicable to a number of data processing, learning, and feedback-optimization tasks, where the cost functions may not be strongly convex but satisfy the PL inequality. In the case of static costs, the bounds provide a new characterization of the convergence of gradient and proximal-gradient methods with a sub-Weibull gradient error. Illustrative simulations are provided for a real-time demand response problem in the context of power systems.

S. Kim and L. Madden are with the Department of Applied Mathematics at the University of Colorado Boulder; E. Dall'Anese is with the Department of Electrical, Computer and Energy Engineering and an affiliate faculty with the
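The setting described above can be illustrated with a minimal numerical sketch: online gradient descent on a time-varying least-squares cost f_t(x) = ½‖Ax − b_t‖² with a rank-deficient A, so the cost satisfies the PL inequality without being strongly convex, and with additive noise modeling the inexact gradient. The matrix, drift schedule, step size, and noise level below are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

# Illustrative only: inexact online gradient descent on a time-varying
# PL (but not strongly convex) cost f_t(x) = 0.5 * ||A x - b_t||^2.
rng = np.random.default_rng(0)

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # rank-deficient: PL holds, strong convexity does not

def grad(x, b):
    return A.T @ (A @ x - b)

def f(x, b):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2

def fstar(b):
    # optimal value of the instantaneous cost (minimum over x)
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f(x_ls, b)

eta = 0.5                      # step size
x = np.array([5.0, 0.0])       # start away from the optimal set
regrets = []
for t in range(200):
    b_t = np.array([np.sin(0.01 * t), 1.0])  # slowly drifting target
    noise = 0.05 * rng.standard_normal(2)    # inexact-gradient error
    x = x - eta * (grad(x, b_t) + noise)
    regrets.append(f(x, b_t) - fstar(b_t))

# Instantaneous regret f_t(x_t) - f_t^* decays linearly at first, then
# hovers at a floor set by the drift of b_t and the gradient-error statistics.
```

The floor in the final regret values is the error ball described in the abstract: it shrinks as the drift and the noise level shrink, and vanishes for a static, noiseless problem.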