To evaluate the analytical similarity between a proposed biosimilar product and the US-licensed reference product, U.S. Food and Drug Administration (FDA) statisticians collaborated with Chemistry, Manufacturing, and Controls (CMC) scientists at FDA to develop a three-tier approach. The proposed tiered approach starts with a criticality determination of quality attributes (QAs) based on their potential impact on product quality and clinical outcomes. These QAs characterize the biological product in terms of structural, physicochemical, and functional properties. The QAs are then assigned to three tiers based on their criticality ranking. To evaluate analytical similarity for QAs assigned to different tiers, we recommend statistical approaches with different levels of statistical rigor: an equivalence test for the critical quality attributes (CQAs) in Tier 1, a quality range approach for QAs in Tier 2, and a side-by-side graphical comparison for QAs in Tier 3. In this article, we focus mainly on the development of the FDA's recommended equivalence test for Tier 1. We also discuss the statistical challenges of the proposed equivalence test in the context of analytical similarity assessment.
To evaluate the analytical similarity between a proposed biosimilar product and the US-licensed reference product, a working group at the Food and Drug Administration (FDA) developed a tiered approach. The proposed tiered approach starts with a criticality determination of quality attributes (QAs) based on a risk ranking of their potential impact on product quality and clinical outcomes. These QAs characterize biological products in terms of structural, physicochemical, and functional properties. Correspondingly, we propose three tiers of statistical approaches with different levels of stringency, applied to QAs according to their criticality ranking and other factors. In this article, we discuss the statistical methods applicable to the three tiers of QAs and provide further details on the proposed equivalence test as the Tier 1 approach. We also discuss the statistical challenges of the proposed equivalence test in the context of analytical similarity assessment.
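The Tier 1 equivalence test described in these abstracts is typically carried out as two one-sided tests (TOST) on the mean difference between test and reference lots. The sketch below is illustrative only: it uses a normal approximation in place of t quantiles and assumes the commonly discussed margin of 1.5 times the reference standard deviation; the articles' exact margin, lot-selection, and correlation considerations are not reproduced here.

```python
from statistics import NormalDist, mean, stdev
import math

def tost_equivalence(test, ref, margin_mult=1.5, alpha=0.05):
    """Two one-sided tests (TOST) for the mean difference, normal approximation.

    The equivalence margin is margin_mult * stdev(ref), mirroring the
    1.5 * sigma_R margin often discussed for Tier 1 CQAs (an assumption here).
    Returns True when the (1 - 2*alpha) CI for the difference lies
    strictly inside (-margin, +margin).
    """
    n_t, n_r = len(test), len(ref)
    diff = mean(test) - mean(ref)
    # pooled standard error of the mean difference (assumes equal variances)
    sp2 = ((n_t - 1) * stdev(test) ** 2
           + (n_r - 1) * stdev(ref) ** 2) / (n_t + n_r - 2)
    se = math.sqrt(sp2 * (1 / n_t + 1 / n_r))
    margin = margin_mult * stdev(ref)
    z = NormalDist().inv_cdf(1 - alpha)  # normal quantile, approximating t
    lo, hi = diff - z * se, diff + z * se  # 90% CI when alpha = 0.05
    return lo > -margin and hi < margin

# Illustrative potency-like readings for reference and biosimilar lots:
ref = [100.1, 99.8, 100.5, 99.6, 100.2, 100.0, 99.9, 100.3]
test = [100.0, 100.2, 99.7, 100.4, 99.9, 100.1]
print(tost_equivalence(test, ref))
```

Note that, unlike an ordinary difference test, the roles of the hypotheses are reversed: equivalence is concluded only when the whole confidence interval falls inside the margin, which is why small sample sizes work against the sponsor rather than for it.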
The 11th question-and-answer document (Q&A) for ICH E5 (1998) was published in 2006. This Q&A describes points to consider when evaluating the possibility of bridging among regions with a multiregional trial. The primary objective of a multiregional bridging trial is to show the overall efficacy of a drug in all participating regions while also evaluating the possibility of applying the overall trial results to each region. To apply the overall results to a specific region, the Q&A suggests that the results in that region should be consistent with the overall results. The Japanese Ministry of Health, Labour, and Welfare (MHLW) published the "Basic Principles on Global Clinical Trials" guidance document (2007), which proposed two methods to support bridging claims. Because of the limited sample sizes allocated to a region, the usual treatment-by-region interaction test is not practical. On the other hand, the sample size requirement for the Japanese region described in Uyama et al. (2005) and Uesaka (2009) is to achieve 80% or greater power for the Japanese region, conditional on the effect observed in the overall global trial. Quan et al. (2010) further extended the results to trials with various endpoints. Ko, Tsou, Liu, and Hsiao (2010) focused on a specific region and established statistical criteria for consistency between the region of interest and the overall results; their method was based on the assumption that the true effect size is uniform across regions. In this article, we propose to analyze a completed multiregional trial for any specific regional effect by controlling a type I error rate adjusted for the regional sample size and the planned power of the global trial. Accordingly, to attain approval for a specific region, we propose to determine the sample size requirement for that region using the planned overall power and a regionally acceptable type I error rate.
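The trade-off at the end of this abstract, a relaxed regional type I error rate against a smaller regional sample size, can be illustrated with the standard normal-approximation sample-size formula for a two-arm comparison of means. This is a sketch only; the article's actual calibration of the regional alpha against the planned overall power is not reproduced here, and the 0.10 regional level below is purely illustrative.

```python
from statistics import NormalDist

def per_arm_sample_size(delta, sigma, alpha=0.025, power=0.80):
    """Per-arm n for a two-arm comparison of means at one-sided level
    alpha with the given power (normal approximation):
        n = 2 * sigma^2 * (z_{1-alpha} + z_{power})^2 / delta^2
    """
    z = NormalDist().inv_cdf
    return 2 * (sigma / delta) ** 2 * (z(1 - alpha) + z(power)) ** 2

# Overall trial at the conventional one-sided 0.025 level...
overall = per_arm_sample_size(delta=1.0, sigma=1.0, alpha=0.025)
# ...versus a region allowed a relaxed (illustrative) one-sided 0.10 level:
regional = per_arm_sample_size(delta=1.0, sigma=1.0, alpha=0.10)
print(round(overall), round(regional))
```

Relaxing the regional type I error rate shrinks the sample size needed to retain the same power in that region, which is the trade-off the proposed regional analysis formalizes.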
According to ICH Q6A (1999), a specification is defined as a list of tests, references to analytical procedures, and appropriate acceptance criteria, which are numerical limits, ranges, or other criteria for the tests described. For drug products, specifications usually consist of test methods and acceptance criteria for assay, impurities, pH, dissolution, moisture, and microbial limits, depending on the dosage form. They are usually proposed by the manufacturer and subject to regulatory approval. When the acceptance criteria in product specifications cannot be predefined based on prior knowledge, the conventional approach is to use data from a limited number of clinical batches collected during the clinical development phases. Often, such an acceptance criterion is set as an interval bounded by the sample mean plus and minus two to four standard deviations. This interval may be revised as data accumulate from batches released after drug approval. In this article, we describe and discuss the statistical issues of approaches commonly used to set or revise specifications (usually by tightening the limits), including the reference interval, the (Min, Max) method, the tolerance interval, and confidence limits of percentiles. We also compare their performance in terms of interval width and the intended coverage. Based on our study results and review experience, we make recommendations on how to select appropriate statistical methods for setting product specifications to better ensure product quality.
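The width-versus-coverage tension this abstract studies can be seen by comparing the fixed "mean ± 3 SD" rule with a normal tolerance interval. The sketch below uses two standard approximations (a Howe-style two-sided tolerance factor and a Wilson-Hilferty chi-square quantile), not the article's own methods; the batch counts are illustrative.

```python
from statistics import NormalDist
import math

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square p-quantile."""
    z = NormalDist().inv_cdf(p)
    a = 2.0 / (9.0 * df)
    return df * (1.0 - a + z * math.sqrt(a)) ** 3

def tolerance_k(n, coverage=0.99, confidence=0.95):
    """Approximate two-sided normal tolerance factor k, so that
    mean +/- k*s covers `coverage` of the population with the stated
    confidence (Howe-style approximation)."""
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    df = n - 1
    chi2 = chi2_quantile(1 - confidence, df)
    return z * math.sqrt(df * (1 + 1 / n) / chi2)

# With only 10 clinical batches, the 99%/95% tolerance factor is far larger
# than the fixed k = 3, so "mean +/- 3 SD" under-covers the intended range;
# with 100 batches the factor approaches z_0.995 ~ 2.58:
print(round(tolerance_k(10), 2))
print(round(tolerance_k(100), 2))
```

This is why limits set from a handful of clinical batches and naively tightened later can reject an unacceptably large share of perfectly good production batches, one of the issues the article examines.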