Session 2: Optimizing Feedback to Engage Students

Day 2, Tuesday 25 June 2024

3pm-4.30pm BST / 10am-11.30am EDT


Session Chair: Cloda Jenkins (Imperial College London)

Speakers:

Impact of performance feedback on student study habits and grades

Margaret Leighton (University of St Andrews) and Lavinia Kinne (DIW Berlin)

High-quality performance feedback can increase effort and help individuals identify areas for personal improvement; if not designed and delivered appropriately, however, it can instead demotivate and discourage. In this study we experimentally vary how feedback on term-time performance is provided to students in a large first-year undergraduate economics module. Specifically, we vary whether the feedback is framed with positive messaging, and whether that positive messaging is personalised. We hypothesise that positively framed feedback will be better received and will lead to greater effort provision than feedback without such framing, and that personalised feedback will have a greater impact still. Using detailed administrative data on the date, duration and nature of student interactions with online module platforms, as well as grades on the final exam, we evaluate the impact on effort and performance.

Inclusive Classrooms in Economics: Understanding Student Engagement using Mentimeter

Arpita Ghosh (University of Exeter)

Student Response Systems (SRS), like Mentimeter, are widely used to facilitate active teaching and increase student engagement, among other purposes (Skoyles and Bloxsidge, 2017). During the pandemic in particular, educators increasingly used SRSs to let students participate in online and hybrid teaching, as the tools helped them gauge students’ participation and understanding of the core class materials (Pichardo et al., 2021). In this project, my objective is to understand how different types of non-game-based audience polls might affect students differentially in their academic performance and participation. I created two types of Mentimeter polls in tutorials for two first-year undergraduate Economics modules: one timed and the other untimed. Analysis of these data, with 501 observations, indicates that students seem to perform slightly better in untimed quizzes, and the effect is starker for female students. The underlying reason could be that students can take more time to reach the correct answer; equally, they may feel less stressed in a non-competitive environment (Walstad and Robson, 1997; Miller et al., 1994). I define tenacity as the proportion of questions attempted out of the total number of questions asked in tutorials, and find that students seem to be quite tenacious in both types of quizzes, with an overall average of 70%. However, male students tend to be less tenacious than female students in both types of quizzes, and they tend to quit participating in the polls more often. Finally, female students also tend to show more tenacity when the allocated tutors are closer to their own age group. I am currently at the final stage of data collection for one of the courses, which will increase the sample size. This project will broaden our understanding of anonymous in-class student participation and indicate ways of making classrooms more inclusive.
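For readers who prefer a formula, the tenacity measure described above can be restated as follows (a sketch only; the per-student subscript i is my notation, not the author's):

\[
  \mathrm{tenacity}_i \;=\; \frac{\text{number of poll questions attempted by student } i}{\text{total number of poll questions asked in the tutorial}}, \qquad 0 \le \mathrm{tenacity}_i \le 1.
\]

The reported cohort average of 70% thus corresponds to an average tenacity of about 0.7.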

Cross-Course Assessments to Joyfully Align Curriculum, Students, and Professors

Courtney Ward (University of Toronto)

Via a case study of three courses in the Economics Department at the University of Toronto, we illustrate an engaging, nimble, and smaller-scale alternative to curriculum mapping. In 2021/22, two professors of second- and third-year undergraduate courses in applied econometrics developed a multiple-choice bridging quiz. It explicitly links shared foundational learning objectives across courses by assessing students’ skills in connecting course learning with applied empirical economics research from journals, working paper series, and the popular press. Students take it as a capstone quiz near the end of the second-year course, where it helps reiterate key skills they must retain even as details fade, and again as a prerequisite quiz at the start of the third-year courses. At the upper level, it unambiguously signals prerequisite skill expectations to students – including those streaming in from second-year statistics courses taught outside of economics – and gives all students an opportunity for active and meaningful review. Hence, the bridging quiz yields direct pedagogical benefits for students at both levels. Moreover, a bridging quiz creates a rare opportunity for professors designing courses intended to be taken sequentially. Joint question development demands meaningful communication between professors, far beyond what could be accomplished by a passive review of syllabi and bullet lists of learning objectives, and harnesses sociality and curiosity. A quiz also provides a quantitative measure of students’ skills over time and across different second-year prerequisite course combinations. We present sample quiz questions, some empirical results, and our reflections on the impacts of this bridging quiz initiative.

Nudging Procrastination Away: The use of simplification and reminders in a dissertation project

Jana Sadeh (University of Southampton)

While the literature has established the detrimental effect that procrastination has on student performance, there is relatively sparse research on potential behavioural interventions to counter it. We apply a combined experimental and randomised controlled trial approach. We measure time and risk preferences for a cohort of students in an online experiment and then follow this with a nudge for the treatment group that combines the fragmentation of a large task into smaller chunks with a weekly reminder of the tasks to work on. We find that the intention to treat has no significant impact on either grades or submission time. However, the nudged students were significantly more likely to interact with the task list, and these students, who took up the treatment, received significantly higher grades than those who did not. More generally, we find that students who engaged with the task list performed better than those who did not, and that the more tasks completed, the higher the grade received. Looking at subgroup effects, we find that students who are risk averse tend to submit earlier, although they do not necessarily receive higher grades, and that treating this group leads to significantly earlier submission. Finally, the self-reported and experimental measures of procrastination are poor predictors of actual behaviour, while a direct measure of procrastination on a low-stakes online test is significantly related to lower grades. These findings suggest that the task-breakdown checklist can be a helpful tool for long-term assessment, but that weekly reminders on their own are insufficient; combining the two increases the use of the tool and improves student outcomes. This has important implications for higher education, where students are expected to carry out a greater degree of independent work. It suggests that providing benchmarking forward-feedback can support self-led projects and improve student outcomes, but only if students decide to engage.