Something new, part 2: Screencasting
Posted by Melissa on April 20, 2012
Although winter term is long gone, things have been so busy that I still haven’t had a chance to thoroughly digest and reflect on the new pedagogical approaches I tried in electronics. I already mentioned the performance assessments I employed. Screencasting was something else that was new to me.
Why did I try screencasting?
I first heard about screencasting from Andy Rundquist at MAAPT meetings. Several conversations with Andy suggested that screencasting might address some concerns that had been rolling around in the back of my mind. One particular concern was how to truly differentiate the level of student understanding of the material when students work in groups on homework. I encourage students to work in groups, and I think it is valuable, but when the group arrives at an answer to a problem, not all students come away with the same level of understanding. Some students think deeply, spend a lot of time digesting the concepts, and take ownership of the material through working in groups; other students participate in the group conversations, but once a solution to a problem has been found, these students are finished with the problem without necessarily reflecting on the depth of their understanding. The problem sets turned in by both types of students can look similar, although the level of understanding is different. Having students talk through their solutions in a screencast allows me to hear subtle (and not so subtle) differences in the level of understanding.
An added incentive to try screencasting was that our department had a number of discussions this fall about how we did not give students as many opportunities to develop their oral communication skills as we would like. Weekly screencasts would give students regular opportunities to practice communicating physics concepts orally and effectively.
How did I employ screencasting?
Often faculty members, particularly those employing a flipped classroom, use screencasting to make lecture-like experiences available to students outside of class. I did make a few screencasts to provide background information before we met for class or to follow up on an idea that we didn’t have time to finish during class. However, the bulk of the screencasting in my electronics class was done by students.
I asked students to submit screencasts instead of written problem solutions. The questions that I asked students to answer via screencast were typical of the type of questions that would appear on any electronics problem set, and I had students produce between 1 and 3 screencasts per week. For some questions, the students used scanned PDFs of their problem solutions and talked me through the solution, but for other questions, student screencasts would combine calculations in Mathematica with simulations in Multisim, which made the most of the screencasting medium.
What were the logistics?
I had students use Jing to create their screencasts and then upload them to screencast.com. Jing is freely available, and it limits screencasts to 5 minutes. This forces students to distill their responses, and I think choosing how to present the key elements of a solution to a problem in 5 minutes is a real test of how well one understands the material. Key concepts have to be identified, incorporated, and connected effectively and efficiently.
I would then evaluate the screencasts and choose one student screencast for each problem to post on Moodle in lieu of posting my own solution. Sometimes I chose the screencast that was the most in-depth and articulate, but other times I chose a screencast that was less polished but offered a unique approach to, or insight into, the problem. By sharing model screencasts with the entire class, students were able to see peer examples of effective screencasts.
How did I evaluate screencasts?
I knew going in that I wouldn’t be able to evaluate screencasts like I evaluate problem sets. Because of the varying approaches one can take with screencasting, I could see no way to assign x points for this and y points for that. Rather, I adapted the 4-point scale that Andy Rundquist uses in his SBG (standards-based grading) classroom to evaluate each screencast.
1: Doesn’t meet expectations.
- Response either lacks the detail necessary to demonstrate basic understanding or shows a lack of understanding of the concepts/skills.
- Cannot articulate the main ideas involved in the problem.
- Repeatedly uses incorrect concepts or vocabulary.
2: Approaches expectations.
- Shows a general understanding of the content/skills, but there may be some confusion about important parts.
- Response may have significant information missing in the presentation.
- Articulates key concepts well, but may not be able to articulate details or make relevant connections.
3: Meets expectations.
- Response demonstrates an in-depth understanding of the main ideas.
- Can correctly and clearly explain the “how” and “why” of the work.
- May contain a few small errors or lack confidence in the presentation of results.
4: Exceeds expectations.
- Response demonstrates in-depth understanding of main ideas and of related details.
- Can correctly and clearly explain the response (main ideas and details) in a manner that would be appropriate for “teaching” a peer.
- Demonstrates extension of work or connection of concepts beyond the minimum required for the problem.
What were the advantages and disadvantages?
The biggest disadvantage to this approach was grading the screencasts. Often, when I grade, I listen to music or have the TV on in the background. With a screencast, I wasn’t able to do that. I had to focus both on listening to what the student was saying and watching the screen. I have never had grading that required such complete, undivided attention, and I hated that if I zoned out for 20 seconds I had to go back and re-listen to the screencast. 2 screencasts per week × 5 minutes per screencast × 15 students meant that I spent at least 150 minutes per week watching screencasts. I found that taking notes on the screencasts helped keep me focused and helped me distill common misconceptions, but the whole approach made grading more tedious than usual.
I did get a much more nuanced sense of what and how much students understood than I would have if I had only collected traditional problem sets. Some of the written solutions that students showed in their screencasts appeared similar, but listening to the students revealed huge differences in ownership and comprehension of the material. After doing this, I don’t think I’ll view problem set write-ups in the way I did before. Those write-ups don’t capture the same subtleties in understanding that a screencast does.
On the course evaluations, over half of the students said that they disliked doing the screencasts. After developing a solution to a question, students then had to spend additional time figuring out how to distill their solution into a 5-minute screencast, and they found this frustrating. However, a large minority of the students said they enjoyed screencasting and felt it was worthwhile.
As for me, I found screencasting to be an immensely valuable assessment tool. I got a different sense of what students were learning than I would have gotten from problem sets. In my mind, that alone is reason enough to continue using screencasts, but in the future, I might limit screencasting to just one question per week and have students turn in standard problem set solutions for other questions. That would hopefully cut down the student frustration with preparing screencasts and reduce the tedium of grading screencasts.
As I continue to try to sort out my own thoughts about screencasting, I’d appreciate hearing thoughts from others about this approach.