The recently developed Distributed Block Proximal Method for solving stochastic big-data convex optimization problems is studied in this paper under the assumptions of constant stepsizes and strongly convex (possibly non-smooth) local objective functions. This class of problems arises in many learning and classification applications in which, for example, strongly convex regularizers are included in the objective function, the decision variable is extremely high-dimensional, and large datasets are employed. The algorithm produces local estimates by means of block-wise updates and communication among the agents. The expected distance from the (global) optimum, in terms of cost value, is shown to decay linearly to a constant value that is proportional to the selected local stepsizes. A numerical example involving a classification problem corroborates the theoretical results.

* This result is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 638992 - OPT4SMART).

© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
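As a rough illustration of the block-wise update structure summarized above, a prototypical distributed stochastic block proximal step at agent $i$ may be sketched as follows. The notation here (block index $\ell_i^t$, constant stepsize $\alpha_i$, smooth stochastic local cost $\tilde f_i(\cdot;\xi_i)$, non-smooth term $g_i$, and mixing weights $w_{ij}$) is illustrative and is not necessarily the exact formulation adopted in the paper.
\begin{align*}
  y_i^t &= \sum_{j \in \mathcal{N}_i \cup \{i\}} w_{ij}\, x_j^t, \\
  x_{i,\ell}^{t+1} &=
  \begin{cases}
    \operatorname{prox}_{\alpha_i g_{i,\ell}}\!\big( y_{i,\ell}^t - \alpha_i\, \nabla_{\ell} \tilde f_i(y_i^t;\xi_i^t) \big), & \ell = \ell_i^t, \\
    y_{i,\ell}^t, & \ell \neq \ell_i^t,
  \end{cases}
\end{align*}
that is, agent $i$ first averages its neighbors' estimates and then updates only the randomly selected block $\ell_i^t$ through a stochastic proximal gradient step with constant stepsize $\alpha_i$, leaving all other blocks unchanged.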