Background: An ontology is a formal description and organization of domain-specific knowledge. Ontologies serve as an important means of information retrieval and biological discovery in biomedical research and are attracting growing attention in computational biomedicine and bioinformatics. When ontologies are used for description or annotation, entities are quantified through semantic similarity computed over ontology terms. More recently, machine learning approaches have enabled ontologies to describe concepts with "computable" semantics. However, the representation of ontology semantics remains under-explored by existing learning-based methods.

Results: We propose OSR2Vec, an ontology semantic representation approach that transforms structured ontology information into natural language and applies a pre-trained natural language model to vectorize the semantics. OSR2Vec converts directly linked ontology terms into sentence sequences, which are then embedded into semantic representations using the ouBioBERT model and the TSDAE method. We evaluate our method on a protein semantic similarity task using the kgsim dataset. We demonstrate the advantages of OSR2Vec by constructing sentence sequences that capture more specific semantics, and we confirm that the combination of ouBioBERT and TSDAE yields favorable performance on the protein semantic similarity task.

Conclusions: OSR2Vec represents ontology semantics by embedding sentences with a pre-trained natural language model. By improving the quality of the representation, OSR2Vec provides an effective method for ontology-based semantic tasks.
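The first step of the pipeline, turning directly linked ontology terms into natural-language sentences, can be sketched as follows. This is a minimal, hypothetical illustration: the edge format, the relation phrasing, and the `build_sentences` helper are assumptions for demonstration, not the exact OSR2Vec construction, and the Gene Ontology terms are used only as example inputs.

```python
# Hypothetical sketch: convert directly linked ontology terms
# (edges between a term and its neighbors) into short sentences
# that a pre-trained language model such as ouBioBERT could embed.
from typing import Dict, List, Tuple


def build_sentences(edges: List[Tuple[str, str, str]],
                    labels: Dict[str, str]) -> List[str]:
    """Turn (child, relation, parent) ontology edges into sentences."""
    # Illustrative mapping from relation identifiers to phrases.
    relation_phrases = {
        "is_a": "is a",
        "part_of": "is part of",
    }
    sentences = []
    for child, relation, parent in edges:
        phrase = relation_phrases.get(relation, relation.replace("_", " "))
        sentences.append(f"{labels[child]} {phrase} {labels[parent]}.")
    return sentences


# Example Gene Ontology edges (illustrative).
edges = [("GO:0006915", "is_a", "GO:0012501"),
         ("GO:0012501", "is_a", "GO:0008219")]
labels = {"GO:0006915": "apoptotic process",
          "GO:0012501": "programmed cell death",
          "GO:0008219": "cell death"}
print(build_sentences(edges, labels))
```

The resulting sentence sequences would then be passed to the pre-trained model for embedding; the vectors can be compared with cosine similarity in a downstream semantic similarity task.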