2010
DOI: 10.1007/s10639-010-9126-8
Selecting software tools for IS/IT curricula

Abstract: The evaluation and selection of software tools for use in an IS or IT curriculum is difficult not only because actual industry software tools are often used but also because there is no formal approach to guide the process. How does one choose between SQL Server and MySQL, or Dreamweaver and Expression Studio? IS and IT educators must periodically go through the process of assessing the most suitable tools for their courses. Given how common such decisions are and how frequently they must be made, it is surpri…

Cited by 7 publications
(1 citation statement)
References 34 publications
“…For instance, the distributed assessment of a given software's quality may be {(Excellent, 70%), (Good, 30%), (Average, 0%), (Poor, 0%), (Worst, 0%)}, which means that the quality of the software is assessed to be Excellent with 70% of belief degree and Good with 30% of belief degree. Some of the advantages of using ER over the AHP include the following: ability to handle very large multi‐attribute decision‐making problems compared to the AHP; ability to assess newly added alternatives independently, whereas the AHP would have to repeat an assessment procedure to incorporate new alternatives; ability to produce consistent ranking after new alternatives are added into the assessment procedure, whereas the AHP would encounter problems such as rank reversal; ability to produce a ranking score as well as a distributed assessment, which provides a decision maker with a panoramic view about the diversity of the performance of an alternative. Despite the diversity of existing approaches for the evaluation and selection of OSS in the literature, these approaches are hardly ever used in practice for the following reasons: first, the lack of a situational‐based procedure to define the evaluation criteria for OSS given its varied and dynamic nature; second, the existing evaluation techniques, such as the AHP, have not coped well with uncertainty factors, thus producing misleading results that affect the quality of decisions made; and third, a significant number of existing approaches require the prototyping of alternatives being considered in order to facilitate evaluation and decision‐making. Given these challenges, there is a need for a better approach that addresses these limitations and facilitates the evaluation and decision‐making process.…”
Section: Introduction
confidence: 99%
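The distributed assessment described in the quoted passage can be made concrete with a short sketch. The following Python snippet (not from either paper; the grade utilities and function names are assumptions chosen for illustration) shows how an ER-style approach can collapse a belief distribution over assessment grades into a single ranking score by taking the expected utility:

```python
# Assumed utility value for each assessment grade (illustrative only;
# a real ER application would elicit these from the decision maker).
GRADE_UTILITY = {
    "Excellent": 1.00,
    "Good": 0.75,
    "Average": 0.50,
    "Poor": 0.25,
    "Worst": 0.00,
}

def ranking_score(assessment):
    """Expected utility of a distributed assessment.

    `assessment` maps each grade to its belief degree (fractions that
    sum to at most 1). The result is a single score usable for ranking,
    while the distribution itself is kept as the 'panoramic view'.
    """
    return sum(GRADE_UTILITY[grade] * belief
               for grade, belief in assessment.items())

# The example from the quoted passage: Excellent 70%, Good 30%.
quality = {"Excellent": 0.70, "Good": 0.30,
           "Average": 0.0, "Poor": 0.0, "Worst": 0.0}
score = ranking_score(quality)  # 0.70*1.00 + 0.30*0.75 = 0.925
```

Unlike an AHP pairwise-comparison matrix, a new alternative can be scored this way without re-running comparisons against the existing ones, which is the independence property the citing authors highlight.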