Semantic relation extraction is a crucial step in automatically constructing a knowledge graph from unstructured biomedical text, and many real-world applications can benefit from it. As unsupervised relation extraction approaches, the generative probabilistic models Rel-LDA and Type-LDA have received increasing attention in recent years. However, these two models inherit the bag-of-words assumption of the standard LDA model, which prevents them from exploiting more distinguishable n-gram features. To overcome this limitation, this study proposes two alternative models, Rel-TNG and Type-TNG, built on the Topical N-Grams (TNG) model, and employs a collapsed Gibbs sampling algorithm for inference. Extensive experimental results on the GENIA and EPI corpora indicate that Rel-TNG and Type-TNG perform comparably to their unigram counterparts, but outperform Rel-LDA and Type-LDA when prior knowledge is available.