We study the question of testing structured properties (classes) of discrete distributions. Specifically, given sample access to an arbitrary distribution D over [n] and a property P, the goal is to distinguish between D ∈ P and ℓ1(D, P) > ε. We develop a general algorithm for this question, which applies to a large range of "shape-constrained" properties, including monotone, log-concave, t-modal, piecewise-polynomial, and Poisson Binomial distributions. Moreover, in all cases considered, our algorithm has near-optimal sample complexity with respect to the domain size and is computationally efficient. For most of these classes, we provide the first non-trivial tester in the literature. In addition, we describe a generic method to prove lower bounds for this problem, and use it to show that our upper bounds are nearly tight. Finally, we extend some of our techniques to tolerant testing, deriving nearly-tight upper and lower bounds for the corresponding questions.

The field of distribution testing in Theoretical Computer Science, originating from the papers of Batu et al. [BFR+00, BFF+01, GR00], has been tackling such questions in the setting of property testing (see [Ron08, Ron10, Rub12, Can15] for surveys on this field). This very active area has seen a spate of results and breakthroughs over the past decade, culminating in very efficient (both sample- and time-wise) algorithms for a wide range of distribution testing problems [BDKR05, GMV06, AAK+07, DDS+13, CDVV14, AD15, DKN15b].
In many cases, this led to a tight characterization of the number of samples required for these tasks, as well as to the development of new tools and techniques, drawing connections to learning and information theory [VV10, VV11a, VV14].

In this paper, we focus on the following general property testing problem: given a class (property) of distributions P and sample access to an arbitrary distribution D, one must distinguish between the case that (a) D ∈ P, versus (b) ‖D − D′‖1 > ε for every D′ ∈ P (i.e., D is either in the class, or far from it). While many previous works have focused on testing specific properties of distributions, or obtained algorithms and lower bounds on a case-by-case basis, an emerging trend in distribution testing is to design general frameworks that can be applied to several property testing problems [Val11, VV11a, DKN15b, DKN15a]. This direction, the testing analogue of a similar movement in distribution learning [CDSS13, CDSS14b, CDSS14a, ADLS15], aims at abstracting the minimal assumptions shared by a large variety of problems, and at giving algorithms that can be used for any of these problems. In this work, we make significant progress in this direction by providing a unified framework for the question of testing various properties of probability distributions. More specifically, we describe a generic technique to obtain upper bounds on the sample complexity of this question, which applies to a broad range of structured classes. Our technique yields near-sample-optimal and computationally efficient testers for a wide ran...