2016
DOI: 10.1080/00207543.2016.1198055
Markov chain Monte Carlo in Bayesian models for testing gamma and lognormal S-type process qualities

Abstract: The process capability index Cpu is widely used to measure S-type process quality. Many researchers have presented adaptive techniques for assessing the true Cpu under the assumption of normality. However, the quality characteristic is often non-normal, and techniques derived under the normality assumption could mislead the manager into making poor decisions. Therefore, this study provides an alternative method for assessing the Cpu of non-normal processes. The Markov chain Monte Carlo, an emerging popular statisti…

Cited by 8 publications (4 citation statements)
References 43 publications
“…Because the prior information was insufficient in this case, we followed the study of Jiang et al. and Liao to adopt the non-informative uniform distribution as the prior information, that is, $\pi(\tilde{\theta}) = \pi(\theta_1, \theta_2, \ldots, \theta_k) = \pi(\theta_1) \times \pi(\theta_2) \times \cdots \times \pi(\theta_k)$ and $\pi(\theta_1) = \pi(\theta_2) = \cdots = \pi(\theta_k) = 1$. Therefore, the joint posterior distribution of $(\alpha, \beta)$ is
$$p(\alpha, \beta \mid x) \propto \left[\frac{\beta^{\alpha}}{\Gamma(\alpha)}\right]^{n} \left(\prod_{i=1}^{n} x_i\right)^{\alpha - 1} \exp\!\left(-\beta \sum_{i=1}^{n} x_i\right),$$
and the joint posterior distribution of $(\gamma, \eta)$ is
$$p(\gamma, \eta \mid x) \propto \left(\frac{\gamma}{\eta^{\gamma}}\right)^{n} \prod_{i=1}^{n} x_i^{\gamma - 1} \exp\!\left[-\sum_{i=1}^{n} \left(\frac{x_i}{\eta}\right)^{\gamma}\right].$$
…”
Section: Bayesian Models For Assessing Process Quality Loss (mentioning)
confidence: 99%
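The joint posteriors quoted above have no closed form in $(\alpha, \beta)$ or $(\gamma, \eta)$, which is why the cited work turns to MCMC. As a minimal sketch of how such a posterior can be sampled, the Python code below runs a random-walk Metropolis chain on the gamma-model posterior under the flat prior $\pi(\alpha, \beta) = 1$; the simulated data, proposal step size, and chain length are illustrative assumptions, not the authors' actual sampler or settings.

import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Illustrative S-type quality data; in practice x is the observed sample.
x = rng.gamma(shape=2.0, scale=1.0 / 1.5, size=50)   # assumed shape alpha = 2, rate beta = 1.5
n, sum_x, sum_logx = len(x), x.sum(), np.log(x).sum()

def log_post(alpha, beta):
    # Joint log-posterior of (alpha, beta) under the flat prior pi(alpha, beta) = 1,
    # i.e. the gamma log-likelihood up to an additive constant.
    if alpha <= 0 or beta <= 0:
        return -np.inf
    return n * (alpha * np.log(beta) - gammaln(alpha)) + (alpha - 1) * sum_logx - beta * sum_x

# Random-walk Metropolis on the log scale so both parameters stay positive.
draws, cur = [], np.array([1.0, 1.0])                  # starting values for (alpha, beta)
cur_lp = log_post(*cur)
for _ in range(20_000):
    prop = cur * np.exp(0.1 * rng.standard_normal(2))  # multiplicative (log-scale) proposal
    prop_lp = log_post(*prop)
    # Log-scale moves need the Jacobian correction log(prop) - log(cur) in the acceptance ratio.
    if np.log(rng.uniform()) < prop_lp - cur_lp + np.log(prop).sum() - np.log(cur).sum():
        cur, cur_lp = prop, prop_lp
    draws.append(cur)

alpha_s, beta_s = np.array(draws[5_000:]).T            # discard burn-in
print("posterior means:", alpha_s.mean(), beta_s.mean())

The retained draws approximate the joint posterior of $(\alpha, \beta)$ and can then be summarised or mapped to the capability measure under test; a Gibbs sampler or an off-the-shelf MCMC package would serve equally well.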
“…In the Bayesian approach, the prior distribution $\pi(\tilde{\theta})$ must be given initially in order to obtain the posterior distribution $p(\tilde{\theta} \mid x)$, using the following function: The determination of the prior distribution is usually based on prior information about the parameters, including historical data, previous experience, expert suggestions, subjective supposition, or simply mathematical convenience [37]. Because the prior information was insufficient in this case, we followed the study of Jiang et al. [28] and Liao [11] to adopt the non-informative uniform distribution as the prior information, that is $\pi(\tilde{\theta}) = \pi(\theta_1, \theta_2, \ldots, \theta_k) = \pi(\theta_1) \times \pi(\theta_2) \times \cdots \times \pi(\theta_k)$ and $\pi(\theta_1) = \pi(\theta_2) = \cdots = \pi(\theta_k) = 1$. Therefore, the joint posterior distribution of $(\alpha, \beta)$ is…”
Section: Posterior Distributions (mentioning)
confidence: 99%
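For the Weibull-type posterior $p(\gamma, \eta \mid x)$ quoted in the first citation statement, the same sampling idea applies. The sketch below only writes out the corresponding log-posterior under the flat prior $\pi(\gamma, \eta) = 1$, assuming it would be plugged into a generic Metropolis loop such as the one sketched earlier; the function name and parameterisation are illustrative, not taken from the cited papers.

import numpy as np

def weibull_log_post(gamma_, eta, x):
    # Joint log-posterior of (gamma, eta) under the flat prior pi(gamma, eta) = 1,
    # i.e. the Weibull log-likelihood up to an additive constant.
    if gamma_ <= 0 or eta <= 0:
        return -np.inf
    x = np.asarray(x)
    n = len(x)
    return (n * np.log(gamma_) - n * gamma_ * np.log(eta)
            + (gamma_ - 1) * np.log(x).sum() - ((x / eta) ** gamma_).sum())

Each retained posterior draw of the distribution parameters can then be converted into a draw of the capability measure of interest (for an S-type characteristic, typically a percentile-based Cpu computed against the upper specification limit), and the resulting posterior sample is what the capability test is based on.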