Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education 2011
DOI: 10.1145/1999747.1999791
A marking language for the Oto assignment marking tool

Abstract: Marking programming assignments involves a lot of work, and with large classes, the feedback provided to students through marking is often rather limited and late. Oto is a customizable and extensible marking tool that supports the submission and marking of assignments. Oto aims to reduce the marking workload and to provide early feedback to students. In this paper, we present Oto's new marking language and give an overview of its implementation as a Domain-Specific Language.

Cited by 4 publications (3 citation statements). References 9 publications (10 reference statements).
“…However, when a student performs a submission, the tool carries out a preliminary assessment of the solution, which usually consists of executing a small test suite against the student program. Tools that implement this strategy include BOSS [22,23,24], OTO [25,26,27] and PASS [28,29,30].…”
Section: Preliminary Validation
confidence: 99%
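The citation above describes a preliminary assessment executed at submission time: a small test suite is run against the student program and the results are reported back quickly. A minimal sketch of that pattern, using Python's `unittest` (the names `student_add`, `SmokeTests`, and `preliminary_assessment` are hypothetical and not taken from Oto, BOSS, or PASS):

```python
# Illustrative sketch of submission-time preliminary assessment:
# run a deliberately small "smoke" test suite against the student's
# code and summarize the result for early feedback.
import unittest


def student_add(a, b):
    # Stand-in for a function loaded from the student's submission.
    return a + b


class SmokeTests(unittest.TestCase):
    """A small suite used only for early feedback, not final marking."""

    def test_add_small(self):
        self.assertEqual(student_add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(student_add(-1, 1), 0)


def preliminary_assessment():
    """Run the smoke tests and return (passed, total) for quick feedback."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests)
    result = unittest.TestResult()
    suite.run(result)
    total = result.testsRun
    passed = total - len(result.failures) - len(result.errors)
    return passed, total
```

In a real tool, the smoke suite would be kept intentionally small so feedback arrives within seconds of submission, with the full marking suite reserved for the instructor's final pass.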
“…Lately there has been a move toward developing DSLs to describe assessments [17,7]. Fonte et al. [7] propose a DSL they call OSSL, which supports the semantic specification of expected program output.…”
Section: Related Work
confidence: 99%
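The "semantic specification of expected program output" mentioned above contrasts with byte-for-byte diffing. As a purely illustrative sketch of the idea (this is not OSSL's actual notation; `semantically_equal` and its tolerance parameter are assumptions), one might compare outputs token by token, ignoring whitespace differences and allowing a numeric tolerance:

```python
# Illustrative sketch of "semantic" output matching: expected and actual
# program output are compared after whitespace normalization, with
# numeric tokens matched within a tolerance rather than character-exact.
def tokens(text):
    """Split program output into whitespace-delimited tokens."""
    return text.split()


def semantically_equal(expected, actual, tol=1e-6):
    """Compare outputs token by token; numeric tokens match within tol."""
    exp, act = tokens(expected), tokens(actual)
    if len(exp) != len(act):
        return False
    for e, a in zip(exp, act):
        try:
            # Both tokens parse as numbers: compare within tolerance.
            if abs(float(e) - float(a)) > tol:
                return False
        except ValueError:
            # Non-numeric tokens must match exactly.
            if e != a:
                return False
    return True
```

Under this scheme, `"result: 3.14159"` and `"result:   3.1415901"` are accepted as equivalent, while a character-exact diff would reject them.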
“…It has been found that informally specified languages tend to become excessively complex and error prone as systems evolve so as to perform more complex assessments [17]. For this reason, researchers have been exploring the possibility of developing more robust specification languages for assessments, which can be formally verified and for which one can easily develop supporting tools.…”
Section: Introduction
confidence: 99%