Testing of second language pragmatic competence is an underexplored but growing area of second language assessment. Existing tests have focused on assessing learners' sociopragmatic and pragmalinguistic abilities, but the speech act framework informing most current productive testing instruments in interlanguage pragmatics has been criticized for under-representing the construct. In particular, the assessment of learners' ability to produce extended monologic and dialogic discourse is a missing component in existing instruments. This paper reviews existing tests and argues for a discursive reorientation of pragmatics assessment. Suggestions for tasks and scoring approaches that assess discursive abilities while maintaining practicality are provided, and the problems of native-speaker benchmarking are discussed.
Despite increasing interest in interlanguage pragmatics, research on the assessment of this crucial area of second language competence still lags behind the assessment of other aspects of learners' developing second language (L2) competence. This study describes the development and validation of a 36-item web-based test of ESL pragmalinguistics, measuring learners' offline knowledge of implicatures and routines with multiple-choice questions, and their knowledge of speech acts with discourse completion tests. The test was delivered online to 267 ESL and EFL learners, ranging in proficiency from beginner to advanced. Evidence for construct validity was collected through correlational analyses and comparisons between groups. The effect of browser familiarity was found to be negligible, and learners generally performed as previous research would suggest: their knowledge of speech acts increased with proficiency, as did their knowledge of implicature. Their knowledge of routines, however, was strongly dependent on L2 exposure. Correlations between the sections and factor analysis confirmed that the routines, implicatures, and speech act sections are related but that each has some unique variance. The test was sufficiently reliable and practical, taking an hour to administer and little time to score. Limitations and future research directions are discussed.
In the assessment of speaking, a psycholinguistically based speaking construct has predominated. In this paper, we argue for the integration of the construct of interactional competence (IC) in speaking assessments to broaden the range of defensible inferences from speaking tests. IC emphasizes the co-constructed nature of interaction and enables the rating of L2 users' ability to deploy interactional tools that lead to shared understandings. Recent work on IC shows that levels of development can be distinguished, for example, in the sequential organization of social actions such as requests and refusals. This can in turn inform interactionally specific ratings. Furthermore, an IC perspective allows a fine-grained analysis of interactions between examiners and test takers to detect effects of examiner talk. Apparent misunderstandings or disfluencies by test takers can be examiner-induced, with the test taker's response actually demonstrating interactional ability rather than a lack of proficiency. We argue that the inclusion of IC as a construct in testing speaking opens new perspectives on oral proficiency and enhances the validity of speaking assessments.
Paediatric occupational therapists were surveyed regarding their practices in Canada and Australia. Two hundred and eighty-nine Canadian occupational therapists and 330 Australian occupational therapists participated, representing response rates of 28.9% and 55%, respectively. The majority of respondents were female (98%) and between 30 and 49 years of age (69%); most held a bachelor's degree, had worked in paediatrics for an average of 10.5 years, and spent well over 50% of their work time in direct client care. The largest client diagnostic groups in both countries were those with developmental delays, learning disabilities, and neurological disorders. Diagnostic groups were used as an organizing framework to portray theory, assessment, and intervention use. Overall, the theoretical models cited most frequently in both countries were Sensory Integration, Sensory Processing/Sensory Diet, Client-Centred Practice, and the Occupational Performance Model. Australian therapists employed the Occupational Performance Model (Australia) for all groups, while it was rarely utilized in Canada. Common assessment tools in both Australia and Canada were the Peabody Developmental Motor Scales, the Developmental Test of Visual Motor Integration, and the Bruininks-Oseretsky Test of Motor Proficiency. Intervention methods focused on: parental/care-giver education; activities of daily living/self-care skills training; client education; environmental modification; assistive devices; sensory integration techniques; sensory stimulation and sensory diet treatment methods; and neurodevelopmental techniques.