Track 1: Harnessing Technology to Improve Teaching

Premiere: Tuesday 20 June 2023, starting at 3pm BST / 10am EDT


Presenters:

Gradescope: Improving Marking and Feedback in Economics Courses

Antonio Mele, London School of Economics and Political Science

Abstract

Gradescope is an online grading and feedback platform implemented in several large economics courses at the London School of Economics (LSE). The platform allows instructors and/or students to upload assignments, exams, and other assessments to be marked electronically by multiple markers, and is designed to improve the speed, consistency, and quality of marking while providing more effective feedback to students. The implementation of Gradescope at LSE has shown promising results in terms of faster grading, more effective feedback, and higher consistency across markers. However, it has also presented some challenges, including integration with LSE’s learning management system, difficulties some students face in navigating the platform, and the lack of automation for some tasks. At the time of writing, several large economics courses have started using Gradescope for their formative assessment, and the paper will document positive and negative aspects of the workflow, quantify time-saving gains, and collect feedback from markers and students. The paper will also investigate which workflow and options are best suited to quantitative problem set assignments.


Improving Student Comprehension Through Interactive Model Visualization

Simon Halliday, University of Bristol

Abstract

A large literature (e.g., Sherin, 2010; Larkin, McDermott, Simon and Simon, 1980; Eichenlaub and Redish, 2018) has shown that novices and experts across the STEM disciplines differ markedly in how they approach and solve problems. While experts rely on a deep conceptual understanding to select appropriate models, novices jump straight to equations, often abandoning their connection to reality. Novices therefore often miss conceptual insights provided by intermediate equations and fail to recognize when computations lead to conceptually impossible results such as negative prices or violations of resource constraints.

Many STEM education scholars (e.g., Dreyfus and Halevi, 1991; Hegedus and Kaput, 2004) find that giving students scaffolded exercises in which they work with a visualization tool can be highly effective in teaching novices to think more like experts. Using model visualization software developed by Chris Makler for EconGraphs.org, we have created more than 20 interactive exercises that span the breadth of most intermediate-level microeconomics courses. Students make predictions about the impact of a change to a model, and then use an interactive visualization to test their predictions. Finally, they compute the solution using algebra to induce a connection between the visual and mathematical representations.

We fielded these interactive exercises in classes at Cornell University and Stanford University in Fall 2022 and plan to continue in Spring 2023, including at the University of Bristol. Students complete most of the exercises outside of class, but in some cases students work together in the classroom to solve problems using the visualizations.

Anecdotally, students have found the new exercises very helpful. However, we are also surveying students about how the exercises helped them understand the economic models and concepts compared to purely algebraic exercises. Moreover, we are collecting measures of self-efficacy and attitudes toward economics at the beginning and end of each term.


Gamification – F2F, Online, Synchronous? A Case Study Comparison

Matthew Olczak, Aston University, co-presenting with Chris Wilson (Loughborough University)

Abstract

Short games and experiments are commonly used as teaching methods in economics education and have been shown to aid student learning (e.g. Emerson and English, 2016). Traditionally, these gamification activities were conducted with paper and pen within the classroom. However, platforms that enable them to be run online have been subsequently developed. More recently, as prompted by the pandemic and the move to more blended learning, the possibility of conducting such activities online in a remote, asynchronous delivery format has been discussed (e.g. Guest and Olczak 2021). However, there is little evidence on how this format compares to other delivery contexts.

To address this, our paper provides novel evidence on how the choice of delivery format for gamification impacts students. We use a teaching experiment in which participants are provided with a series of choice tasks to demonstrate behavioural biases in decision making (Bennato et al. 2020). An otherwise identical version of the game was delivered to successive cohorts under three different formats: i) in-person delivery using more traditional polling software (‘non-platform-synchronous-F2F’), ii) an online platform in a remote asynchronous context (‘platform-asynchronous-remote’), and iii) in-person delivery where students access the platform on their devices but in a classroom setting (‘platform-synchronous-F2F’). After each experiment, participants were asked to complete an anonymous survey.

Our findings show that in-game decision making was broadly consistent across the different methods, demonstrating that it is possible to achieve the intended learning outcomes under all three delivery formats. However, the two synchronous methods had higher participation rates, and the ‘platform-synchronous-F2F’ approach, in particular, had higher student engagement. Furthermore, the surveys also suggest that students enjoyed the activity more and learnt more under the ‘platform-synchronous-F2F’ approach. Hence, our results indicate that the additional advantages of using online platforms are best utilised with a synchronous, in-person delivery format.


Watch via our YouTube channel on Tuesday 20 June 2023, starting at 3pm BST / 10am EDT.