Although many college courses adopt online tools such as Q&A discussion boards, there is no easy way to measure or evaluate their effect on learning. To support instructional assessment of online discussions, we investigate a predictive relation between characteristics of discussion contributions and student performance. Inspired by existing work on dialogue acts, project-based learning, and instructional analysis of student-generated text for building predictive models, we make use of dialogue roles, linguistic features, and work patterns. In particular, we model the Q&A dialogue roles that participants play, the emotional features covered by LIWC (Linguistic Inquiry and Word Count), the cohesiveness of the dialogue, the coherence captured by Coh-Metrix, and temporal patterns of participation. We use a discussion corpus from eight semesters of a computer science course, covering conversations of 173 student groups (370 students). We first remove noise from the student discussion data and normalize it. We then apply machine learning techniques and text analysis tools to classify dialogue features efficiently. The extracted dialogue and participation features are used as predictive variables for project grades. Correlation and regression analyses indicate that the number of answers provided to others, the number of positive emotion expressions, and how early students communicate their problems before the deadline correlate with project grades. This finding confirms the argument that in assessing student online activities, we need to capture how students interact, not just how often they participate.
This paper examines the role of sentiment in information propagation. We make use of political communication on Twitter, relating the emotion expressions in a message to the degree of response the message generates. We also compare differences between user reply and retweet behavior with respect to sentiment variables. The current results indicate that the degree of emotion expression in Twitter messages can affect the number of replies generated as well as retweet rates. Because endorsement (retweets) and responses (replies or conversation) differ in nature, some of the variables play opposite roles in explaining the degree of response a message receives. We expect these results will help in generating a predictive model of message propagation.
It is often taken for granted that Paul Hirst’s switch from an emphasis on liberal education to a social practices view of education is a radical one. Whether it is depends on how we understand the relation between the two views. From the perspective of a ‘weak’ interpretation, I argue that Hirst’s later position differs little from his earlier one, in light both of the relation between the forms of knowledge and social practices and of the rationalistic character of Hirst’s conception of social practices in their connection with education.