Improving Feedback on Programming Assignments

2002 SIGCSE Doctoral Consortium Submission
30 November 2001

Ken Yasuhara <yasuhara@cs.washington.edu>
Richard Anderson <anderson@cs.washington.edu>, Advisor
University of Washington, Dept. of Computer Science & Engineering

The overall aim of this project is to improve feedback on graded programming homework assigned in introductory computer programming courses. Although we use the University of Washington's introductory sequence (CSE 142, 143 [CSE]) as a context, we focus on aspects of our environment that apply to most introductory programming courses. By reviewing existing education research on grading and feedback and by studying how students and graders work with homework feedback in our department, we intend to develop software to support more pedagogically effective grading of introductory computer programming homework. In contrast to past work on computer-based grading of programming homework [PS99, Pre97, JU97, KSIR94, GW85], the goal of reducing staff time spent on grading is secondary to the goal of encouraging feedback that has learning value for the student.

Problems with Homework Feedback in Introductory Computer Programming

The objectives of homework are two-fold: student learning and assessment of understanding [WA98]. We believe that how much and what kind of feedback students receive are key factors in its effectiveness in promoting learning. Good feedback goes beyond the all-too-common practice of merely pointing out the existence of errors. It reciprocates student effort by commending good work, directing attention to important aspects of the work and course content, and motivating the student to improve. Most importantly, good feedback demonstrates the grader's and course staff's interest in the student's learning.

In introductory programming courses, homework is particularly important, because practice and critique are essential to programming mastery. Accordingly, students and staff spend a great deal of time completing and grading homework. However, the sheer amount of time spent grading does not guarantee that pedagogical goals are met. Extensive direct experience with our introductory programming courses and discussion with many instructors and TAs reveal the following serious problems:

Flexible Computer-Assisted Grading for Better Feedback

These are complex problems that cannot be fully addressed by any single solution, but we propose that a software tool for computer-based navigation and annotation of programming homework can substantially improve student learning through better feedback, while also making more efficient use of staff resources.

As in many courses in our department, students submit homework via the web, so all work is available in electronic form. However, graders currently print submissions on paper and mark them by hand. A software tool for reading and annotating homework on the computer could help graders provide higher quality feedback through features such as these:

The primary objective is a tool that enables and encourages better feedback, so the design process will begin with readings from education research to determine what constitutes "better feedback" and how feedback is perceived by students. To ensure that graders find the tool usable and that it presents their feedback in a clear, convenient format, we will conduct careful studies of how students regard and use homework feedback and of the process by which graders go about their work. Some of these studies will be conducted both before the tool is designed and again during prototyping.
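As one illustration of the kind of core data model such a tool might be built around, here is a minimal Python sketch of grader annotations anchored to line ranges in a student submission. All names and structure here are our own hypothetical invention for illustration, not a description of any existing system:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    # Hypothetical model: one grader comment anchored to a range of
    # source lines, with a category so praise and criticism can both
    # be represented explicitly.
    line_start: int
    line_end: int
    comment: str
    category: str  # e.g. "praise", "style", "bug"

def render_with_annotations(source_lines, annotations):
    """Interleave grader annotations with the student's code so that
    each comment appears directly below the lines it refers to."""
    by_line = {}
    for a in annotations:
        by_line.setdefault(a.line_end, []).append(a)
    out = []
    for i, line in enumerate(source_lines, start=1):
        out.append(f"{i:3}  {line}")
        for a in by_line.get(i, []):
            out.append(f"     >> [{a.category}] {a.comment}")
    return "\n".join(out)

program = ["int main() {", "  return 0;", "}"]
notes = [Annotation(1, 2, "Clear, minimal main function.", "praise")]
print(render_with_annotations(program, notes))
```

Anchoring comments to line ranges, rather than to the submission as a whole, is what would let such a tool direct the student's attention to specific aspects of the work, in the spirit of the feedback goals described above.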

Assessing Effectiveness & Possible Extensions

Recent informal trials of using the EQuill web annotation tool [EQ] (which offers only the first of the features listed above) for computer-based annotation in grading CSE 142 have drawn positive responses from students and graders alike. Preliminary user tests with a prototype yielded encouraging results. More formal methods will be needed to assess our tool's usefulness. With instructors' cooperation, we will survey students and course staff about feedback and the grading process, comparing sections that do and do not use the tool.

Looking further ahead, in addition to UW CSE, at least one other institution's computer science department has already expressed interest in usage trials when the tool is ready, an invaluable opportunity to test the tool's general usefulness. Similar trials could be conducted in conjunction with UW CSE's distance learning efforts at remote community colleges. The tool might also aid grading in other disciplines, since many of its motivations and features are not unique to computer science.

Perhaps most exciting, however, is the tool's potential impact even just at UW, where hundreds of students enroll in 142 and 143 each quarter. Most graders lack the time or inclination to study what education researchers recommend about grading, but our hope is that using a tool designed according to, and incorporating, this knowledge can improve the quality of their feedback.

Current Stage in Program of Study

I am a fourth-year graduate student in computer science. I completed a Master's project in computational biology but have switched advisors to explore my interests in computer science pedagogy. I am currently serving as instructor for the department's CS 2 course in C++, CSE 143, a large lecture course with about 230 students.

Participation in Doctoral Consortium

Although my interests in teaching computer science are longstanding, I have only recently decided to try to formally focus my Ph.D. work in this area. I hope this experience will help me answer the following questions:

References

[CSE] CSE 142, 143 (Introduction to Computer Programming), University of Washington, Department of Computer Science & Engineering, http://www.cs.washington.edu/education/courses/cse142/, http://www.cs.washington.edu/education/courses/cse143/.
[EQ] EQuill (web annotation tool), http://www.equill.com/ (discontinued service).
[GW85] Gross, J.A. and Wolfe, J.L. "Paperless submission and grading of student assignments." Proc. of the Sixteenth SIGCSE Technical Symposium on Computer Science Education, 1985.
[JU97] Jackson, D. and Usher, M. "Grading student programs using ASSYST." Proc. of the Twenty-Eighth SIGCSE Technical Symposium on Computer Science Education, 1997.
[KSIR94] Kay, D.G., Scott, T., Isaacson, P., and Reek, K.A. "Automated grading assistance for student programs." Selected papers of the Twenty-Fifth Annual SIGCSE Symposium on Computer Science Education, 1994.
[MW99] Mason, D. and Woit, D. "Providing Markup and Feedback to Students with Online Marking." Proc. of 1999 ACM Conference on Computer Science Education (SIGCSE '99), March 1999.
[Pre97] Preston, J.A. "Evaluation software: improving consistency and reliability of performance rating." Supplemental Proc. of the Conference on Integrating Technology Into Computer Science Education: Working Group Reports and Supplemental Proceedings, 1997.
[PS99] Preston, J.A. and Shackelford, R. "Improving on-line assessment: an investigation of existing marking." Proc. of the Fourth Annual SIGCSE/SIGCUE Conference on Innovation and Technology in Computer Science Education, 1999.
[WA98] Walvoord, B.E. and Anderson, V.J. Effective Grading: A Tool for Learning and Assessment. Jossey-Bass, 1998.