Impact on Students

The University of New Mexico

In the redesign, did students learn more, less or the same compared to the traditional format?

Improved Learning

From fall 1998 through fall 2000, the average percentage of students receiving a C or better in the traditional course was 61%. In the fall 2001 redesigned pilot course, 71% of students received a C or better. Effects were not limited to the C range: the average percentage of students receiving As from fall 1998 through fall 2000 was 18%, whereas in the pilot course 26% of students received As.

To determine whether it was necessary to require students to take quizzes, two large sections of introductory psychology were offered in spring 2002. The lectures, text, online quizzes, and studios were the same for both sections, but only students in section 001 were required to complete quizzes and attend studios. Students in section 002 could take quizzes and attend studios if they wished, but their course grade was based only on their in-class exam performance; few students in section 002 took quizzes or attended studios. More students in section 001 (63%) received a course grade of C or better than in section 002 (43%). There was some evidence that students required to attend studios performed better than those who did not attend, but the best predictor of student success was whether students successfully completed the batteries of weekly mastery quizzes.

In the full redesign for academic year 2002-2003, all students were required to complete quizzes, and some students—those who received a grade of C or below on Exam 1—were required to attend weekly studios. The percentage of students who received a grade of C or higher was 77% for fall 2002 and 74% for spring 2003 (versus an average of 61% for the traditional course). In addition, there were more grades of A (fall 2002: 34%; spring 2003: 31%) than in traditionally taught sections (18%). Quiz performance was positively correlated with course grade at a statistically significant level (p < .001).

Although results are reported in traditional letter-grade terms, they are based on a consistent scoring methodology applied by the same instructor to both the traditional and redesigned courses.

Improved Retention

For the traditional course, an average of 41% of students received a C– or below (including drops, withdrawals, and incompletes). This percentage was reduced to 23% in fall 2002 and 26% in spring 2003.

Students who did poorly on Exam 1 (C– or below) and who attended studios had a lower rate of grades of C– or below (including drops, withdrawals, and incompletes) than students who were told to attend but chose not to. This difference was statistically significant (p < .001).

Other Impacts on Students

Not only did student performance improve, it improved in a more comprehensive and arguably more difficult course than students had previously encountered. By requiring quizzes and recording only the top scores, the team gave students a powerful incentive to keep taking quizzes until they attained a perfect score. Some students continued to take quizzes even after they had achieved a perfect score. This behavior required an appreciable investment of time from the students. At the beginning of each semester, all students liked the idea of multiple mastery quizzes. At the end of the semester, although some students reported they still liked the multiple-quiz format, more students reported that the course required much more time than other courses, which they did not like. Some students who complained about the work, however, also noted that their grade was probably better than it would have been otherwise.
