Steve Wolfman
University of Washington
Computer Science and Engineering
However, I believe that feedback on the students' comprehension is already available from their notes — if only instructors had the means to elicit and synthesize it while teaching. If, upon finishing a point or an exercise, an instructor could halt time and investigate each student's recent notes, she might be able to use this information to the class's advantage — to discover, based on the type, content, and placement of the students' annotations, not only whether students were able to follow her lecture, but which particular words, phrases, figures, and ideas caused them trouble (or excited them!).
My research project is to develop and analyze a notetaking and feedback system that exploits this potential. Each student uses a laptop or tablet PC to take notes directly on the lecture slides. The system projects an automatic high-level, real-time summary of these notes — nothing more than shading on the slides to show areas of student interest or concern — onto a private display for the instructor.
I believe this system has two key strengths. First, it provides feedback to the instructor that is highly contextualized: it is displayed on the actual slides used in the class and is updated in real time, so the instructor can interpret the feedback in context, knowing both when it arrived and which region of the slide it concerns. Second, the feedback process is essentially non-intrusive to the students; it simply takes advantage of the notetaking process they would engage in anyway.
Several other researchers have investigated classroom feedback devices, often for just such classroom environments. Eric Brittain's classroom communicator system employs standard cellular telephones to enable student feedback but does not contextualize that feedback (i.e., does not localize it in the slides or other shared material) [Brittain, 2001]. The Classtalk system [Dufresne, 1996] uses PDAs or graphing calculators, hooked into a network, that display problems posed by the instructor. The students work the problems in small groups and enter their solutions into the devices, either as multiple-choice responses or even as mathematical equations and short answers. Classtalk does not support student-initiated interactions as I hope to, but it does support rich, complex feedback that is then automatically summarized for the instructor. Our system will build on the ideas developed in Classtalk but will largely target student-initiated feedback.
Previous electronic notetaking systems also form a foundation for our work. Classroom 2000 [Abowd, 1999] included a student notetaking system called StuPad. Brown University has begun an ambitious, integrated hardware and software student notetaking system [Doeppner, 2000].
The student notes we are collecting are annotations from the coursepackets (slide printouts) of five volunteers from an introductory computer science course of approximately two hundred students. As such, we believe that they will strongly resemble notes from the electronic system we envision. We specifically encouraged students who take copious notes to volunteer, in order to investigate our system's best-case potential. We will analyze these notes to see what the instructor might learn from a "perfect" feedback system. We will also use the nature of the student notes to inform the design of both the student interface to the system (to support the types of notes students actually take) and the instructor interface (to support display of the information from student notes with the greatest potential benefit).
The system we are prototyping is designed to test the advantages and possibilities of contextual feedback alone, without a full student notetaking system. As shown in Figure 1, the student view shows the current slide (synchronized with the lecture and including any annotations the lecturer makes). Students can select an area of the slide using the "multi-click" metaphor: a single click selects a point, a double click selects a word, and a triple click selects a bullet. Then, they select a semantic category — a word that describes the meaning of their feedback — for their annotation from a set of categories designated by the instructor. (I imagine that instructors will generally include at least categories such as "confusing" and "important.")
Figure 1: Screenshot of the student interface to our prototype contextual feedback system. The instructor's current slide along with her annotations are displayed on the student's screen. The student is in the process of giving feedback on the instructor's annotation. He must choose between the two semantic categories provided by the instructor: "brilliant" and "boring."
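The multi-click selection metaphor described above can be sketched in a few lines of code. This is purely an illustrative sketch, not code from the actual prototype; the names (Region, select_region) and the clamping of four or more clicks to bullet-level selection are my assumptions.

```python
# Illustrative sketch of the "multi-click" selection metaphor:
# the click count determines the granularity of the selection.
# Region and select_region are hypothetical names, not from the prototype.

from dataclasses import dataclass

@dataclass
class Region:
    kind: str  # "point", "word", or "bullet"
    x: int     # slide coordinates of the click
    y: int

def select_region(click_count: int, x: int, y: int) -> Region:
    """Map a click count at slide position (x, y) to a selection."""
    granularity = {1: "point", 2: "word", 3: "bullet"}
    # Assumption: clicks beyond three are clamped to bullet granularity.
    kind = granularity.get(min(click_count, 3), "point")
    return Region(kind=kind, x=x, y=y)
```

Once a region is selected, the student would attach one of the instructor-designated semantic categories to it before the annotation is submitted.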
The instructor's interface is still in the design phase, but it will summarize student annotations by highlighting regions that students annotated; the hue of the highlighting will represent the semantic category (e.g., "confusing" versus "important") while the intensity will represent its popularity (the number of agreeing annotations).
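Since the instructor's interface is still being designed, the following is only a sketch of how the hue/intensity summary rule might work: hue encodes the semantic category, and saturation grows with the number of agreeing annotations. The specific hue assignments and the linear count-to-saturation mapping are assumptions for illustration.

```python
# Illustrative sketch (not the actual implementation) of summarizing
# student annotations on one slide region as a highlight color:
# hue encodes the semantic category, saturation encodes popularity.

import colorsys

# Hypothetical hue assignments per category, on colorsys's 0-1 scale.
CATEGORY_HUE = {"confusing": 0.0, "important": 0.33}  # red, green

def highlight_color(annotations, category, max_count):
    """Return an (r, g, b) highlight for one region and one category.

    annotations: list of category labels submitted for this region.
    max_count: largest agreement count on the slide, for normalization.
    """
    count = sum(1 for a in annotations if a == category)
    # Linear mapping from agreement count to saturation (an assumption).
    saturation = count / max_count if max_count else 0.0
    return colorsys.hsv_to_rgb(CATEGORY_HUE[category], saturation, 1.0)
```

For example, a region every sampled student marked "confusing" would render at full saturation in that category's hue, while an unannotated region stays white.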
The goal of this prototype system is to study with instructors how they might use real-time, contextual feedback during a lecture. We are interested in discovering whether instructors can use this information while in the flow of their presentations, whether they might fold it into classroom activities, and what unexpected uses they might invent. In the long run, we also imagine that this sort of feedback system could be effective on devices more limited than laptops, such as small PDAs.
More current information is available from my webpage.
I am also interested in hearing, especially from the faculty attendees, about the available points in the tradeoff between research and teaching careers in academia.
Finally, I am interested in learning more generally how others have dealt with the difficulties of the Ph.D. process.