Empirical and phenomenological-based models are used to represent biological and physiological processes. Phenomenological models are derived from knowledge of the mechanisms that underlie the behaviour of the system under study, while empirical models are derived from the analysis of data to quantify relationships between variables of interest. For studying biological systems, the phenomenological modeling approach offers the great advantage of a structure whose variables and parameters have physical meaning, which enhances the interpretability of the model and its further use for decision making. The interpretability of models, however, remains a vague concept. In this study, we tackle the interpretability of the parameters of phenomenological-based models. To our knowledge, this property has not been deeply discussed, perhaps because of the implicit assumption that interpretability is inherent to phenomenological-based models. We propose a conceptual framework to address parameter interpretability and its implications for parameter identifiability. As a workhorse, we use a simple but relevant model representing the enzymatic degradation of β-casein by the bacterium Lactococcus lactis.

How can we assess the capability of a mathematical model to provide mechanistic insight into the system under study? That is, how does the mathematical structure of the model translate and capture the knowledge of the phenomena taking place in the system? To what extent can we interpret our model mechanistically? In biotechnology, biology, and the biomedical fields, two main approaches exist to model processes of interest, namely empirical and phenomenological-based modeling. Empirical-based models are derived from data, while phenomenological-based models are derived from knowledge about the process.
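To make the contrast concrete, consider a minimal phenomenological sketch of enzymatic substrate degradation. The kinetic form and all parameter values below are illustrative assumptions (a Michaelis–Menten rate with constant biomass), not the β-casein model developed in this work; the point is that each parameter carries a physical meaning, unlike the coefficients of a black-box fit.

```python
def degrade(s0, vmax=1.0, km=0.5, x=1.0, t_end=10.0, dt=0.01):
    """Forward-Euler integration of a hypothetical degradation model:

        dS/dt = -vmax * x * S / (km + S)

    Parameters have physical interpretations (units are illustrative):
      s0   [g/L]      initial substrate (e.g., beta-casein) concentration
      vmax [g/(L*h)]  maximum degradation rate
      km   [g/L]      half-saturation (affinity) constant
      x    [-]        constant, dimensionless biomass level
    Returns the simulated substrate trajectory.
    """
    s, t, traj = s0, 0.0, [s0]
    while t < t_end:
        # Michaelis-Menten consumption; clip at zero to avoid
        # a small Euler overshoot into negative concentrations.
        s = max(s - dt * vmax * x * s / (km + s), 0.0)
        t += dt
        traj.append(s)
    return traj

traj = degrade(s0=2.0)
```

Because `vmax` and `km` are tied to the saturation mechanism, an estimate of either one can be checked against biochemical knowledge of the enzyme, which is the kind of interpretability an empirical regression coefficient does not offer.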
In biomedical fields, phenomenological-based models are more relevant than empirical-based models since, in addition to prediction, their parameters and variables provide information that can be used to perform diagnosis, discriminate clinical risk groups, and guide treatment for stratifying patients by disease severity [1, 2]. In spite of this, many models in the fields mentioned above have been developed from an empirical point of view, using black-box modeling approaches such as machine learning and fuzzy models. Machine learning models, for example, are increasingly used in medicine and healthcare, yet humans are still often unable to understand how those models work and what meaning their parameters have. Some approaches have been proposed to improve the level of explanation and interpretability of such empirical models, that is, to open the black box [3]. The deployment of the above-mentioned approaches encounters its first hurdle in the difficulty of formalising the definition of central concepts such as transparency, explanation, and interpretability. In the present work, we focus on the interpretability concept but applied to phenomeno...