We investigate a distributed optimization problem over a cooperative multi-agent time-varying network, where each agent has its own decision variables that should be set so as to minimize its individual objective subject to local constraints and global coupling constraints. Based on the push-sum protocol and dual decomposition, we design a distributed regularized dual gradient algorithm to solve this problem; the algorithm operates over time-varying directed graphs and requires only column stochasticity of the communication matrices. By augmenting the corresponding Lagrangian function with a quadratic regularization term, we first obtain a bound on the Lagrange multipliers that, unlike most primal-dual based methods, does not require constructing a compact set containing the dual optimal set. We then show that the convergence rate of the proposed method achieves the order O(ln T / T) for strongly convex objective functions, where T is the number of iterations. Moreover, an explicit bound on the constraint violations is also given. Finally, numerical results on the network utility maximization problem demonstrate the efficiency of the proposed algorithm.

problem [5], wireless and social networks [6], [7], power systems [8], [9], robotics [10], and so on. There is indeed a long history of this problem in the optimization community; see [11]. Based on consensus schemes, algorithms for distributed optimization in the literature fall mainly into three categories: primal consensus, dual consensus, and primal-dual consensus distributed algorithms; see [1,12,13,14,15,16]. In most previous works, the communication graphs are required to be balanced, i.e., the communication weight matrices are doubly stochastic. The paper [17] considered a fixed and directed graph with the requirement of a balanced graph.
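The quadratic regularization idea can be sketched as follows; the notation (local objectives f_i, coupling-constraint functions g_i, regularization weight γ > 0) is illustrative and not taken verbatim from the paper:

\[
L_\gamma(x,\lambda) \;=\; \sum_{i=1}^{N} f_i(x_i) \;+\; \lambda^\top \sum_{i=1}^{N} g_i(x_i) \;-\; \frac{\gamma}{2}\,\|\lambda\|^2, \qquad \lambda \ge 0 .
\]

The extra term \(-\tfrac{\gamma}{2}\|\lambda\|^2\) makes the regularized dual function strongly concave in \(\lambda\), so its maximizer is unique and admits an explicit norm bound of the form \(\|\lambda_\gamma^\ast\| \le B/\gamma\), where \(B\) bounds the coupling-constraint values. This is why no compact set containing the dual optimal set needs to be constructed in advance.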
The work in [18] proposed distributed subgradient-based algorithms for directed and fixed topologies, in which messages among agents are propagated by the "push-sum" protocol. However, that communication protocol requires knowledge of the number of agents or of the graph. In general, the push-sum protocol is attractive for implementations since it can easily operate over directed communication topologies, and thus avoids the deadlocks that may occur in practice when using undirected communication topologies [4]. Nedić et al. in [4] designed a subgradient-push distributed method for a class of unconstrained optimization problems, in which the requirement of a balanced graph was removed. Their proposed method has a slower convergence rate, of order O(ln T / √T). Later, Nedić et al. in [19] improved the convergence rate from O(ln T / √T) to O(ln T / T) under the condition of strong convexity. However, they only considered unconstrained optimization problems. Methods for solving distributed optimization problems subject to equality or (and) inequality constraints have received considerable attention [20,21,22]. The authors in [14] ...
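To make the push-sum mechanism concrete, here is a minimal sketch of push-sum average consensus over a directed graph, assuming only a column-stochastic weight matrix (each agent splits its outgoing mass among its out-neighbors and itself); the matrix and values below are illustrative, not taken from the paper:

```python
import numpy as np

def push_sum_average(values, A, iters=200):
    """Push-sum average consensus over a directed, strongly connected graph.

    A must be column-stochastic (each column sums to 1); it need NOT be
    doubly stochastic, which is the key advantage over standard consensus.
    Each agent i tracks a value sum x_i and a weight sum w_i; the ratio
    x_i / w_i converges to the network-wide average of the initial values.
    """
    x = np.array(values, dtype=float)  # value sums, x(0) = initial values
    w = np.ones_like(x)                # weight sums, w(0) = 1 for all agents
    for _ in range(iters):
        x = A @ x                      # agents mix received value shares
        w = A @ w                      # ... and received weight shares
    return x / w                       # ratio corrects the imbalance of A

# Directed 3-agent cycle with self-loops: columns sum to 1, rows do not,
# so this matrix is column-stochastic but not doubly stochastic.
A = np.array([[0.5, 0.0, 0.3],
              [0.5, 0.6, 0.0],
              [0.0, 0.4, 0.7]])
z = push_sum_average([3.0, 6.0, 9.0], A)  # each entry approaches 6.0
```

Because only column sums are constrained, each sender can set its own outgoing weights without coordinating with its in-neighbors, which is what makes the protocol easy to run over time-varying directed graphs.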