Improving the Quality of Student Learning

The University of New Mexico

Based on available data about learning outcomes from the course pilot, what were the impacts of redesign on learning and student development?

Fall 2001 Pilot
Our preliminary goals of reducing failure and drop rates were attained in the fall 2001 course pilot. The failure rate fell from previous levels of 30% to 9%, and the DWF rate fell from 42% to 19%. The proportion of students who received a C or higher rose from 60% to 71% (compared to fall 2000), and there were more A and B grades than recorded in previous semesters. At the same time, the course was arguably more difficult, requiring students to cover a high-level introductory text (Sternberg's In Search of the Human Mind, 3rd ed.) in its entirety, in addition to completing weekly mastery quizzes. Although students had no way of knowing that instructors in previous semesters sometimes omitted chapters from the course, by the end of the semester all seemed aware of how much more time they were spending on this course than on their other courses. Nevertheless, student feedback suggested that even when students complained about the workload, they acknowledged the value of their increased effort.

Students received credit for completing three online WebCT mastery quizzes per week for 16 weeks, which represented 25 percent of their grade. Each quiz consisted of 10-20 randomized multiple-choice questions drawn from a pool of 150-200 test-bank questions per week; the total pool consisted of approximately 3,000 questions for the semester. An additional 20 percent of their grade was determined by performance on weekly WebCT quizzes compiled from the self-paced interactive CD-ROM set that accompanied the text, representing another pool of 550 questions. For all quizzes, only the highest score counted; students were encouraged to take the quizzes as many times as needed until they attained a perfect score. Additionally, they were told that the questions on the four in-class exams, worth 50 percent of their grade, would be drawn from mastery-quiz items; the more often they repeated quizzes, the more likely they were to encounter actual exam questions. Quizzes closed on a weekly basis corresponding to that week's topic. Contingent upon their in-class exam performance, students were required to attend studios that focused on improving their learning skills and provided a structured review of multimedia CD-ROM activities. TAs encouraged students to work together on completing CD-ROM modules and promoted the use of various learning strategies. (The remaining 5 percent of a student's grade represented participation in department-run experiments or alternatives.)
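For readers who want to see how these components combine, the sketch below restates the weighting described above in code. The 25/20/50/5 percent split and the "only the highest attempt counts" rule come from the course description; the function and variable names, and the 0-1 score scale, are illustrative assumptions rather than the course's actual grading software.

```python
# Minimal sketch of the pilot's grade weighting as described above.
# Weights and the "highest attempt counts" rule come from the text;
# all names and the 0-1 score scale are illustrative assumptions.

MASTERY_WEIGHT = 0.25        # weekly WebCT mastery quizzes
CDROM_WEIGHT = 0.20          # quizzes drawn from the CD-ROM question pool
EXAM_WEIGHT = 0.50           # four in-class exams
PARTICIPATION_WEIGHT = 0.05  # department-run experiments or alternatives


def best_attempt(attempts):
    """Only the highest score on a quiz counts, however many retakes."""
    return max(attempts) if attempts else 0.0


def course_grade(mastery_attempts, cdrom_attempts, exam_scores, participation):
    """Combine component averages (each on a 0-1 scale) into a weighted grade."""
    mastery = sum(best_attempt(a) for a in mastery_attempts) / len(mastery_attempts)
    cdrom = sum(best_attempt(a) for a in cdrom_attempts) / len(cdrom_attempts)
    exams = sum(exam_scores) / len(exam_scores)
    return (MASTERY_WEIGHT * mastery
            + CDROM_WEIGHT * cdrom
            + EXAM_WEIGHT * exams
            + PARTICIPATION_WEIGHT * participation)


# Example: a student who retakes quizzes until reaching perfect scores
print(course_grade(
    mastery_attempts=[[0.6, 0.8, 1.0], [0.7, 1.0]],  # best attempt = 1.0 each
    cdrom_attempts=[[0.5, 0.9]],
    exam_scores=[0.85, 0.90, 0.80, 0.88],
    participation=1.0,
))
```

Under this scheme, repeated attempts can only raise the quiz components, which is consistent with the incentive to keep retaking quizzes noted above.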

Spring 2002
Because the redesign appeared beneficial, the quasi-experimental design planned for two spring 2002 sections was implemented so that all students would have access to all components of the pilot course. Although students in both large sections (350-450 students each) had the same instructor, text, CD-ROM, and curriculum, only students in the redesigned section were required to complete all aspects of the course; students in the comparison section sometimes chose to take quizzes, attend studios, and work with the CD-ROM, but they were not required to do so. In the spring 2002 and subsequent implementations, two weekly mastery quizzes (vs. three in the pilot) and 10 total CD-ROM quizzes were available in all sections.

Preliminary evaluation of pilot quiz-taking behavior indicated that the more time students spent taking quizzes (in terms of both elapsed time per quiz and number of quiz attempts), and the higher their scores (highest and average), the better they performed on in-class exams. We believe that the major determinant of improved performance was successful completion of mastery quizzes. During the pilot, student quiz-taking behavior suggested that some students were developing strategies to increase exam performance; e.g., students would continue taking quizzes even after they had attained perfect scores because doing so increased their chances of seeing items that might appear on the next exam. What was not apparent was whether students would invest this amount of time and effort if they were not required to complete the quizzes.

To determine whether mandatory quizzes (i.e., required for course credit) or voluntary quizzes (no course credit) would differentially affect exam and grade performance, students in one section received course points for completing weekly online mastery quizzes; students in the other section were encouraged to take the mastery quizzes but received no course points for doing so.

On in-class exams, students in the redesigned section, who were required to complete quizzes for credit, consistently outperformed students in the section where taking quizzes was voluntary. Students in the redesigned section received more As, Bs, and Cs, and fewer grades of C- or below, than students in the voluntary-quiz section. Students in the for-credit section also took more quizzes, scored higher on them, and spent more time on them than students in the section where quizzes were not linked to credit. Moreover, relatively few students successfully completed quizzes when no credit was at stake; some chose not to take quizzes at all.

The spring 2002 redesigned section, however, did not meet the standard of performance set during the course pilot. Our goal was to reduce the failure rate from 30% to 25% and the DWF rate from 42% to 37%, targets the pilot far exceeded (a failure rate of 9% and a DWF rate of 19%). Results for the spring redesigned section were more modest (a failure rate of 20% and a DWF rate of 25%), and only 63% of that section received grades of C or better (compared to 71% in the pilot).

Despite differences in grade performance between the pilot and the spring implementation, the relationship between time spent successfully working with the mastery quizzes and eventual course grade was similar for students from both redesigned sections in which quizzes were required. On average, students who performed better spent more time taking quizzes than students who fared less well in terms of in-class exams or course grades. Variations in course content did not appear to affect quiz performance.

Importantly, for the first several weeks of the courses, the quiz performance of most students, regardless of final grade outcome, was similar. Differences in quiz performance, and in eventual grade, did not appear immediately, suggesting that low performance may be more an issue of process (e.g., waning motivation, conflicting demands) than of intellectual capability. The redesign worked best for students who were able to sustain their efforts over the course of the semester; students who started out well but were unable to persevere in taking quizzes tended to end up with poor grades. Our efforts for the 2002-2003 academic year, therefore, will focus on providing motivational checks and interventions designed to focus student attention on the behaviors most likely to result in success.

December 2002 Update: In fall 2002, only one large section of 660 students was offered. Results were comparable to those obtained in the fall 2001 pilot and better than those obtained in the spring 2002 implementation: the failure rate (i.e., grade of F) was 12%; the total DWF rate was 18%; and the proportion of students receiving a C- or less (including drops, withdrawals, and incompletes) was 23.5%. The proportion of students who received a C or higher was 76.5%, and there were more As (34%) and Bs (31%) than in previous semesters. Possible reasons for these results are discussed below.

One difference between the pilot and the spring 2002 implementation was the relative lack of make-up quizzes available to students in the spring; i.e., if students missed a quiz deadline in the spring, it was very difficult to make it up. In the fall pilot, make-ups were much easier to complete. By restricting make-up quizzes in the spring, we may have reduced student course-point totals and impaired exam performance. For the fall 2002 implementation, make-up quizzes, which were identical to the original quizzes, were always available online; however, to still encourage students to take quizzes in a timely manner, make-up quizzes counted for only 75% of the credit of the original quizzes. During the last three weeks of the semester, students were allowed to take online "amnesty quizzes," which were identical to the original quizzes and for which they received full credit. Thus, at the end of the semester there were three identical sets of quizzes online: 1) three sets per week of the original weekly-deadline mastery quizzes (two sets of questions from the text and one set from the CD-ROM); 2) three sets per week of 75%-credit make-up quizzes; and 3) three sets per week of 100%-credit amnesty quizzes, available only during the last three weeks of the course.
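The fall 2002 credit rules for original, make-up, and amnesty quizzes can be summarized in a short sketch. The 100%, 75%, and 100% credit weights are taken from the description above; the assumption that only a student's single best-credited attempt counts for a given quiz, along with all names here, is illustrative rather than documented.

```python
# Illustrative sketch of the fall 2002 quiz-credit rules described above.
# The 100% / 75% / 100% weights come from the text; the "best attempt counts"
# rule and all names are assumptions made for this example.

def quiz_credit(original=None, makeup=None, amnesty=None):
    """Return the credit earned for one quiz (scores on a 0-1 scale).

    - original: score on the quiz taken by its weekly deadline (full credit)
    - makeup:   score on the identical make-up quiz (worth 75% of full credit)
    - amnesty:  score on the identical amnesty quiz offered during the last
                three weeks of the semester (full credit)
    """
    candidates = []
    if original is not None:
        candidates.append(original)
    if makeup is not None:
        candidates.append(0.75 * makeup)
    if amnesty is not None:
        candidates.append(amnesty)
    return max(candidates, default=0.0)


# A student who missed the deadline but aced the make-up earns 0.75 credit;
# completing the amnesty quiz later restores full credit.
print(quiz_credit(makeup=1.0))               # 0.75
print(quiz_credit(makeup=1.0, amnesty=1.0))  # 1.0
```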

Another difference between the pilot and the spring 2002 implementation was the number of questions on in-class exams. In the pilot, each exam consisted of 40 questions, but in the spring, exams consisted of 50 questions. For the fall 2002 implementation, exams were returned to 40 questions, which allowed students more time during the exam periods. Also, in the fall, the number of exams was increased from four to five. The first exam, consisting of 20 questions, was given at the end of the third week of class and was used to determine which students would be required to attend weekly studios.

Experience with the pilot and the spring 2002 sections suggested that some variance in student performance might be attributed to differences in motivation. In the prior two semesters of redesign implementation, we found that students who performed below C could often be characterized by one or more of the following: 1) lacking learning skills, 2) lacking motivation, or 3) having other priorities. All students in fall 2002 studios received peer-led coaching designed to help them better memorize key terms and concepts. To improve motivation and perhaps affect prioritization decisions, students in half of the studios received motivational interviewing (MI), a non-confrontational style of interacting that has been used successfully in a number of behavioral interventions, including the treatment of addictions. Performance at 75% or lower on the first of the five in-class exams, a criterion met by 38% of the class, was used to determine which students would be required to attend the mandatory weekly studios (any student, however, could attend). These students were divided into two groups, a motivational-interviewing group and a standard prescriptive-advice group; a third group consisted of students who were required to attend but chose not to. Compared to the third group, the two studio groups performed better on exams and quizzes, although preliminary analyses revealed no differences between the MI and prescriptive-advice groups.
