This article reports on a first attempt to develop and test run an observation procedure for assessing the syntactic and morphological development of adult learners of English as a second language (ESL) as evidenced in spontaneous speech production. The procedure is based on the profile analysis approach, which was first developed by Crystal, Fletcher, and Garman (1976) for the assessment of impaired speech (English) and later adapted to the assessment of second language development (German) by Clahsen (1985). The theoretical basis of the procedure is the multidimensional model of second language acquisition (SLA) developed by Meisel, Clahsen, and Pienemann (1981) and extended to ESL acquisition by Pienemann and Johnston (1987a). According to the model, invariant developmental stages in the acquisition of certain syntactic and morphological elements in German and English can be predicted and explained in terms of hierarchically ordered speech processing constraints. In order to assess the developmental stage of ESL learners, an observation form was drawn up, incorporating a selection of morphosyntactic features whose presence or absence in a taped sample of natural speech was monitored by assessors. The ratings made by the assessors were then compared to those assigned through a detailed linguistic analysis to test the feasibility of using a “shorthand” version of a profile analysis. Analysis of the outcomes of the test run revealed significant correlations between the assessments and the linguistic analysis. However, some variation was found in the assessors' ability to apply the assessment criteria, and the extent of agreement between the assessors' observations and the linguistic analysis was less than would be acceptable within the given theoretical framework. The source of these problems was identified through the first test run, and suggestions were made for further refining the procedure to improve its accuracy.
This article reports on an exploratory study that investigated the comparability of listening assessment tasks used to assess and report learning outcomes of adult ESL learners in Australia. The study focused on the effects of task characteristics and task conditions on learners’ performance in competency-based listening assessment tasks that require learners to demonstrate specific listening behaviours. Key variables investigated included the nature of the input and the response mode. Quantitative and qualitative analyses of test scores suggest that speech rate and item format influence task and item difficulty. However, the complexity of the interaction between text, item and response makes it difficult to isolate the effects of specific variables. Implications of these findings for assessment task validity and reliability are considered and practical consequences for assessment task design in outcomes-based systems are discussed.
Over the last two decades, research has highlighted the important role that listening plays in language acquisition (Brown and Yule 1983, Ellis, et al. 1994, Faerch and Kasper 1986, Feyten 1991, Long 1985), and listening comprehension skills have begun to receive far more systematic attention in language teaching classrooms. A wide range of books, articles, and materials aimed at helping teachers develop learners’ listening skills is now available, and a variety of comprehension-based methodologies have been proposed (see, for example, Anderson and Lynch 1988, Courchene, et al. 1992, Rost 1990; 1994, Underwood 1989). However, although many of the tasks used for teaching listening are virtually identical to those which appear in tests, the assessment of listening ability has received relatively limited coverage in the language testing literature.
The implementation of outcomes-based assessment and reporting systems in educational programs has been accompanied by a range of political and technical problems, including tensions between the summative and formative purposes of assessment and doubts surrounding the validity and reliability of teacher-constructed assessment tasks. This article examines ways in which these problems have been manifested and addressed, using two recent examples from school and adult immigrant education in Australia. The first example concerns a recent controversy surrounding the use of national literacy benchmarks for primary school learners. Analysis of the issues suggests that some learner groups may be disadvantaged by the practice of reporting aggregate outcomes in terms of minimum standards, but that government policy is unlikely to change as long as the accountability function of assessment remains paramount in the public eye. The second example discusses the teacher-developed assessment tasks that are used to assess the achievement of language competencies in the Australian Adult Migrant English Program (AMEP). It is argued that problems of consistency and comparability that have been identified by research can be addressed through the development of fully-piloted task banks and the provision of appropriate forms of professional development. Greater attention needs to be given to the role of the teacher if outcomes-based assessments are to provide high quality information.