2010
DOI: 10.5381/jot.2010.9.3.a3

Extracting State Models for Black-Box Software Components.

Abstract: We propose a novel black-box approach to reverse engineer the state model of software components. We assume that in different states a component supports different subsets of its services, and that the state of the component changes solely due to invocation of its services. To construct the state model of a component, we track the changes (if any) to its supported services that occur after invoking various services. Our case studies show that our approach generates state models with sufficient ac…
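Read concretely, the procedure the abstract describes lends itself to a breadth-first exploration: probe which services the component currently supports, invoke each supported service, and treat every distinct supported-service set as a state. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's implementation; make_component, probe, and invoke are hypothetical stand-ins for however a concrete component is reset, queried for its supported services, and exercised.

```python
from collections import deque

def extract_state_model(make_component, probe):
    """Minimal sketch of the black-box idea: identify each state by the
    set of services the component currently supports, and discover
    transitions by invoking services and re-probing.

    Assumed interface (illustrative, not from the paper):
      make_component() -> a fresh component in its initial state
      probe(c)         -> frozenset of services c currently supports,
                          without changing c's state
      c.invoke(name)   -> invoke a service, possibly changing the state
    """
    initial = probe(make_component())

    # Remember one invocation path to each discovered state so it can be
    # replayed from a fresh component (a black box cannot be snapshotted).
    paths = {initial: ()}
    transitions = {}                      # (state, service) -> next state
    queue = deque([initial])

    while queue:
        state = queue.popleft()
        for service in sorted(state):     # only supported services can fire
            c = make_component()
            for step in paths[state]:     # replay the path to `state`
                c.invoke(step)
            c.invoke(service)             # then try the candidate service
            nxt = probe(c)
            transitions[(state, service)] = nxt
            if nxt not in paths:
                paths[nxt] = paths[state] + (service,)
                queue.append(nxt)

    return initial, transitions
```

Because states are identified purely by their supported-service sets, any two internal configurations that expose the same services collapse into one model state, which is exactly the abstraction the abstract's assumption licenses.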

Cited by 11 publications (5 citation statements: 0 supporting, 5 mentioning, 0 contrasting); references 16 publications. Citing publications span 2012–2022.

“…Machine-learning models are called black-box models because the internal algorithms used are complex, making it difficult to explain the reasoning behind the resulting predictions. However, there are thousands or millions of hyperparameters inside the model that are irrelevant to the input value but affect the predictive power [22]. So many studies have been conducted to interpret these prediction processes and validate the reliability of the results.…”
Section: Machine-Learning Model and Explainable AI
confidence: 99%
“…AI technology can process a large amount of work in real-time, but it is difficult for users to trust it because the basis of and process for the results cannot be known [14]. An interpretation of an AI model should make it possible to overcome this limitation.…”
Section: XAI
confidence: 99%
“…Artificial intelligence technology can process a large amount of work in real time, but it is difficult for users to trust it because the basis and process for the result cannot be known [12]. Interpretation of the AI model should make it possible to overcome this limitation.…”
Section: B. Explainable Artificial Intelligence (XAI)
confidence: 99%