2014
DOI: 10.1007/s13347-014-0159-6

The Problem of Justification of Empirical Hypotheses in Software Testing

Cited by 12 publications (10 citation statements)
References 27 publications
“…Model-based hypotheses concerning future computations of a represented software system are tested up to a certain time fixed by testers; they are assigned a probabilistic evaluation according to statistical estimations of future incorrect executions based on past observed failures (Littlewood and Strigini 2000). Accordingly, they assume the epistemological status of probabilistic statements that are corroborated by failed attempts of falsification (Angius 2014).…”
Section: Discovering Empirical Theories of Software Systems
Citation type: mentioning (confidence: 99%)
“…Thereby, the types of hypotheses formulated in digital forensics are similar to those formulated in software testing because both digital forensics and software testing are idiographically oriented towards what is unique (not towards what is universally general). For this reason, the meta-scientific methodological considerations about the problem of justification of empirical hypotheses in software testing [1] should also be studied carefully by digital forensic theoreticians.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…The target paper stresses that, even in this case, there still remain executions that are not analysed. However, one is here in the similar situation in which only those behaviours of an empirical system that falsify a given hypothesis are observed: both scientific experiments and software tests are theory-laden in so far as tested behaviours are only those that are likely to falsify the hypothesis or the specification, respectively (Angius 2014). The authors suggest that software tests are not exploratory (Franklin 1989), and many executions remain untested; yet, the same happens with scientific experiments in NSIS: a scientific experiment is, by definition, a set of biased observations that are not exhaustive (Bunge 1998, pp.…”
Section: The Problem of Induction in Software Intensive Science
Citation type: mentioning (confidence: 99%)
“…The probabilities involved increase or decrease as new executions are observed. The software reliability estimation process involves a Bayesian confirmation of hypotheses on software-intensive systems which characterises common statistical approaches in science (Angius 2014).…”
Section: The Problem of Induction in Software Intensive Science
Citation type: mentioning (confidence: 99%)
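The statement above describes reliability estimates that rise or fall as new executions are observed. As an illustration only (the function names and the choice of a Beta-Bernoulli model are assumptions, not taken from the cited papers), one way such a Bayesian update can be sketched is with a conjugate Beta prior on the per-execution failure probability:

```python
# Hypothetical sketch: Bayesian revision of a software-reliability estimate.
# The per-execution failure probability gets a Beta(alpha, beta) prior;
# each observed execution updates it via the conjugate Beta-Bernoulli rule.

def update_failure_estimate(alpha, beta, outcomes):
    """Return the posterior Beta parameters after observing test runs.

    outcomes: iterable of booleans, True meaning the execution failed.
    """
    for failed in outcomes:
        if failed:
            alpha += 1  # a failure shifts mass toward higher failure rates
        else:
            beta += 1   # a correct execution shifts mass toward reliability
    return alpha, beta

def posterior_mean(alpha, beta):
    """Posterior mean of the failure probability."""
    return alpha / (alpha + beta)

# Starting from a weak uniform prior Beta(1, 1): ten correct runs lower
# the estimated failure probability, one observed failure raises it again.
a, b = update_failure_estimate(1, 1, [False] * 10)
print(posterior_mean(a, b))   # 1/12, about 0.083
a, b = update_failure_estimate(a, b, [True])
print(posterior_mean(a, b))   # 2/13, about 0.154
```

This mirrors the quoted point: the hypothesis about the system's reliability is never conclusively verified, but its probabilistic evaluation is continually confirmed or weakened by further observed executions.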