HOW TO REDESIGN A COLLEGE COURSE USING NCAT'S METHODOLOGY

VIII. How to Compare Completion Rates

Completion rates refer to the percentage of students who began the course and finished with a grade of C or better. This measure, sometimes called the pass rate, is generally accepted in higher education as an indicator of student “success” in a course.

Completion rates are not the same as measures of student learning. Assessment of learning refers to direct and comparable measures of student learning outcomes; completion rates refer to final grades.

Q: How do we compare completion rates?

A: During both the pilot and full-implementation terms (and in subsequent terms as well), the team should collect final grades for students in both the traditional and redesigned versions of the course, using NCAT’s Completion Form (see Appendix B). All students enrolled in the course as of the official “census” date should be counted, including drops, withdrawals, incompletes, and failures. Calculate the percentage of students earning a grade of C or better in each format and compare the results, as in the sketch below.
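To make the counting rule concrete, here is a minimal sketch in Python. The letter-grade codes ("W" for withdrawal, "I" for incomplete) are illustrative assumptions, not values prescribed by NCAT’s Completion Form; adapt them to your institution’s transcript codes.

    PASSING = {"A", "B", "C"}  # grades of C or better count as completion

    def completion_rate(final_grades):
        """Percentage of census-date enrollees earning a grade of C or better.

        final_grades must include every student enrolled as of the official
        census date: drops, withdrawals ("W"), incompletes ("I"), and
        failures ("F") all remain in the denominator.
        """
        if not final_grades:
            raise ValueError("no students enrolled as of the census date")
        passed = sum(1 for grade in final_grades if grade in PASSING)
        return 100.0 * passed / len(final_grades)

    # Hypothetical rosters for the two formats of the same course.
    traditional = ["A", "B", "C", "C", "D", "F", "W", "W", "I", "B"]
    redesigned = ["A", "A", "B", "C", "C", "C", "D", "W", "B", "B"]

    print(f"Traditional: {completion_rate(traditional):.1f}%")  # 50.0%
    print(f"Redesigned:  {completion_rate(redesigned):.1f}%")   # 80.0%

The essential point is the denominator: every census-date enrollee counts, so withdrawals and incompletes lower the rate rather than disappearing from it.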

Q: Why are grades not comparative measures of student learning?

A: Pass rates (grades of C or better) in traditional courses are not reliable indicators of student learning and almost universally suffer from inconsistent grading practices. Students in traditional courses are assessed in a variety of ways that lead to overall grading differences. Inconsistencies include (1) curving grades, (2) failing to establish common standards for topic coverage (in some sections, entire topics go uncovered, yet students pass), (3) having no clear guidelines for awarding partial credit, (4) allowing students to fail a required final exam yet still pass the course, and (5) failing to train and oversee instructors, especially part-time ones.

NCAT has frequently observed improved student learning outcomes, supported by clear assessment data, coupled with decreased completion rates. This pattern is typically due to prior grade inflation.

Examples

  • At Florida Gulf Coast University (FGCU), redesign students in fine arts succeeded at a much higher level than traditional students on module exam objective questions, which tested content knowledge (85 percent versus 72 percent), yet in a comparison of final grades, 22 percent of students in the traditional course received Ds or Fs or withdrew; in the redesigned course, 29 percent received Ds or Fs or withdrew. Upon further investigation, the FGCU team discovered that different standards for passing the course were being applied. The adjuncts who taught the traditional course curved their module exam grades—often by as much as 15 to 20 points.
  • At the State University of New York at Potsdam, average scores on comparable questions graded by the same rubric improved from 2.22 in the traditional history course to 2.58 in the redesigned course, and correct responses to common multiple-choice questions increased from 55 percent to 76 percent; yet student success rates (grades of C or better) declined from 73 percent of traditional students to 61 percent of redesign students. Because the generally less-demanding adjunct faculty had been removed from teaching the course and grading had become more uniform, the team believes past grades were higher because grading was easier.
  • At Alcorn State University, the average of midterm exam scores and final exam scores in College Algebra from fall 2008 traditional sections were compared with those from fall 2009 redesigned sections. Students in the redesigned course performed significantly better: the average score of the fall 2008 traditional sections was 55.89, and that of the fall 2009 redesigned sections was 66.16. Even though students received better scores on the common exams, the drop-failure-withdrawal rate of fall 2009 was higher (47 percent) than that of fall 2008 (22 percent). The reason for the conflict between improved test scores and lower completion rates was most likely that the redesigned course used uniform grading methods across sections, whereas instructors in the past had had more grading flexibility, possibly leading to grade inflation.

Q: Why would one want to look at comparative completion rates as well as comparative measures of student learning?

A: It is important that students both master the content of the course and complete the course. A redesign can demonstrate increased student learning (e.g., a final-exam mean that rises from 50 percent to 70 percent), but if only 20 percent of students take the final exam, there is a problem despite the demonstrated gain in learning outcomes.
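A hypothetical calculation (all numbers invented to match the example above) shows why the two measures must be reported together:

    def exam_summary(enrolled, took_final, exam_mean):
        """Pair an exam mean with the share of enrollees who reached the exam."""
        take_rate = 100.0 * took_final / enrolled
        return f"final-exam mean {exam_mean:.0f}%; {take_rate:.0f}% of enrollees took it"

    # Learning appears to rise (50% -> 70%), but the redesigned course's
    # mean describes only the 20 students out of 100 who reached the final.
    print("Traditional:", exam_summary(enrolled=100, took_final=90, exam_mean=50))
    print("Redesigned: ", exam_summary(enrolled=100, took_final=20, exam_mean=70))

Neither number alone tells the story: the completion rate guards against learning gains that apply only to a shrinking pool of survivors, and the learning measure guards against completion gains produced by easier grading.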

