Improving Learning and Reducing Costs: Outcomes from
Changing the Equation

by

Carol A. Twigg


A version of this article appeared in the July/August 2013 issue of Change: The Magazine of Higher Learning.

In September 2009, the National Center for Academic Transformation (NCAT) launched a 3-year program funded by the Bill & Melinda Gates Foundation, Changing the Equation (CTE). Its purpose was to engage the nation’s community colleges in a redesign of their remedial/developmental-math sequences to improve student learning and to reduce instructional costs.

Each institution participating in CTE redesigned its entire developmental-math sequence—all sections of all developmental courses offered at the college—using NCAT's Emporium Model and commercially available instructional software. Each redesign modularized the curriculum, allowing students to progress through the developmental course sequence at a faster pace (if possible) or at a slower one (if necessary)—that is, however long it took them to master the course content.

Following a national competition, NCAT accepted 38 institutions to participate in the program. CTE matched NCAT Redesign Scholars (faculty who had led successful math redesigns) with new institutions for mentoring purposes. The projects piloted their redesigns in spring 2011 and fully implemented them in fall 2011. Collectively, the redesigns affected more than 100,000 students.

At the start, we said that if institutions followed our advice—derived from the successes achieved in past course-redesign programs in developmental and college-level mathematics—we could guarantee that they would improve student learning, increase completion of the developmental-math sequence, prepare students to succeed in college-level math, and reduce instructional costs.

And that is exactly what happened.

Not all institutions followed our advice, however. Despite repeated urging from both NCAT staff and the Redesign Scholars, a number of projects failed to take basic steps such as requiring lab participation, awarding participation points as an incentive for student engagement, establishing deadlines and clear expectations, and monitoring students’ progress and intervening when they were not meeting deadlines. Six of the original 38 institutions withdrew due to an inability to meet the program’s requirements.

Findings

What follows are the outcomes for the 32 institutions that fully implemented their redesigns.

Student Learning

Thirty-two institutions redesigned a total of 86 developmental-math courses. Using common final examination scores, common exam items, and/or gains on pre- and post-tests in the traditional and redesigned formats of the courses, they compared how much students learned in the two formats. The results:

  • 71 of the redesigned courses (83 percent) showed significant improvements over the traditional format.
  • Five courses (6 percent) showed improvements, but the differences were not significant.
  • Seven courses (8 percent) showed no significant differences.
  • One course (1 percent) showed decreased learning, but the difference was not significant.
  • Two courses (2 percent) had insufficient data to make a comparison.

Completion Rates

Course by course.  NCAT asked each institution to compare course-by-course completion rates (grades of C or better or grades of P in a P/F system) in the traditional and redesigned formats, with the following results:

  • 20 of the redesigned courses (23 percent) had higher completion rates than their traditional counterparts; six were significantly higher.
  • Five courses (6 percent) showed no significant difference in completion rates.
  • 36 of the redesigned courses (42 percent) had lower completion rates, 21 of which were significantly lower.
  • For 23 of the courses (27 percent), there was no basis to calculate comparative completion rates due to the combining of multiple courses into one. (Four institutions collapsed what had been 12 different courses into four modularized courses. Students enrolled in the redesigned courses could begin anywhere from Module 1 to Module 15, picking up where they left off in a subsequent semester.)
  • Two of the courses (2 percent) collected insufficient data to make a comparison.

NCAT conducted an extended analysis of the discrepancy between increased learning outcomes and decreased course-completion rates in CTE. We discovered that course-by-course completion comparisons are not a true measure of the program’s success, for a variety of reasons:

  • Grade inflation. The majority of CTE teams discovered that in the traditional format grades, and hence pass rates, were inflated. Contributors to that grade inflation included 1) having no clear guidelines regarding the award of partial credit, 2) allowing students to fail the final exam yet pass the course, 3) failing to establish common standards for topic coverage (in some sections, entire topics were not covered, yet students passed), and 4) failing to provide training for and oversight of part-time instructors. Thus, the “C-or-better” rates for the traditional courses were almost universally inflated.
  • Mastery learning requirement. In the redesigns, students were required to master all of the content of all of the courses in homework assignments, practice tests and module exams. Redesign students had to pass each module independently at levels ranging from 75 to 90 percent before being able to progress to the next module. 

In the traditional format, students exited the course simply by attaining an average score of at least 70 or 75 percent on measures that differed from course to course. They could thus earn a C or better by passing enough tests to hit that mark while mastering some, but not necessarily all, of the competencies; in traditional sections, students often moved to the next topic without having demonstrated mastery of the previous one. (The sketch following this analysis makes the contrast concrete.)

Increasing the mastery level raised the cut score for a C in the redesigned courses. Students were doing more work and learning more, which often took longer. Consequently, many students did not complete a course by the end of the term. But they were able to start where they left off in the subsequent term.

Mastery learning, while sometimes taking longer to accomplish, ensures that students are well prepared to take on college-level work.

  • “Making-progress” grades. A grade of “making progress” (MP) or the equivalent was awarded in 50 of the 86 developmental-math courses that were redesigned. To receive an MP grade, students had to be making substantial progress at a high mastery level. Definitions varied from school to school (they ranged from “must have completed 86 percent of modules at 80 percent mastery” to “75 percent of modules at 80 percent mastery”). These definitions are equivalent to a grade of C or better in the traditional courses.

When the MP grades are added to the C-or-better grades, the completion picture improves markedly:

  • 37 courses (43 percent) had higher completion rates, 21 of which were significantly higher.
  • Four courses (5 percent) showed no significant difference in completion rates.
  • Nine courses (10 percent) had lower completion rates—six were significantly lower.
  • 12 courses (14 percent) did not award an MP grade and did not do a hypothetical calculation.
  • One course (1 percent) collected insufficient data to make a comparison.
  • In 23 courses (27 percent), completion rates could not be calculated due to the collapse of multiple courses into one.

Thus, the 50 courses that awarded an MP grade break down as 37 higher, four with no significant difference, and nine lower: 37 of 50, or 74 percent, had higher completion rates than the traditional format.
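
To make the contrast between average-based and mastery-based passing concrete, here is a minimal sketch in Python. The scores are hypothetical, not program data; the 70-percent average and 75-percent mastery thresholds come from the ranges reported above.

    # Illustrative only: why a student can pass a traditional course without
    # mastering every topic, yet not complete a redesigned course in one term.
    # Thresholds come from the article: a 70-75 percent course average in the
    # traditional format versus 75-90 percent mastery on every module.

    def passes_traditional(scores, cutoff=70):
        # Traditional rule: a C or better requires only an average score.
        return sum(scores) / len(scores) >= cutoff

    def modules_mastered(scores, mastery=75):
        # Redesign rule: each module must be passed independently.
        return [score >= mastery for score in scores]

    scores = [95, 80, 40, 85]  # hypothetical module scores for one student

    print(passes_traditional(scores))  # True: the average is exactly 75
    print(modules_mastered(scores))    # [True, True, False, True]
    # In the redesign, this student must remaster the third module before
    # moving on, and might finish next term with an MP grade instead of a C.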

The conclusion is that one cannot evaluate the success of CTE by simply comparing course-completion rates. Completion of the developmental-math sequence and success in subsequent college-level math courses are the two most important data points to use in comparing student-success rates between the traditional and redesigned formats.

Unfortunately, the time period of the program was not long enough to gather information on subsequent course taking. But some of the participating institutions have since collected preliminary data on how well students who emerged from the redesigned sequence performed in college-level courses compared with those who exited from the traditional format.

For example, at Northwest Shoals Community College (AL), the percentage of developmental-math students successfully completing a college-level math course increased from 42 percent before the 2011 redesign to 76 percent after it. At Somerset Community College (KY), the percentage of developmental-math students successfully completing college-level courses increased: In applied mathematics it went from 56 percent to 67 percent and in intermediate algebra from 37 percent to 43 percent. We believe that the other projects will replicate these results.

Cost Savings

All but one of the 32 completed CTE projects reduced their costs, some by more than their projected savings and others by less.

The average projected percentage reduction in the cost per student for the 31 institutions that reduced costs was 29 percent. Their actual percentage reduction was about 20 percent.

  • Six institutions (19 percent) reduced the cost per student by between 30 percent and 55 percent.
  • 13 institutions (42 percent) reduced the cost per student by between 15 percent and 30 percent.
  • 12 institutions (39 percent) reduced the cost per student by 15 percent or less.

There were two primary ways that the programs reduced costs: 1) by increasing section size and 2) by increasing the number of sections that full-time and adjunct faculty counted toward their loads. Neither strategy increased faculty workload, because the redesigns eliminated repetitive tasks such as hand-grading homework, quizzes, and exams and preparing lectures and assessments.

Seven of the 32 institutions achieved their cost-savings projections, and seven additional institutions exceeded their original goals. But 18 of the 32 institutions failed to fully carry out their cost-reduction plans. What decisions did they make and what did they do that led to this shortfall?

  • Four institutions significantly increased the cost of lab tutors over their planned expenditures.
  • Three institutions increased the cost of course coordination over their original plans.
  • Five institutions increased the percentage of sections taught by full-time faculty in excess of their planned percentages.
  • Twelve institutions did not carry out their plans to increase section size and offered too many sections. (To arrive at the correct number of sections, the anticipated enrollment should be divided by the planned redesign section size, and only that number of sections should be offered. NCAT found that many projects did not schedule sections for maximum efficiency.)
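
The scheduling rule in the final bullet’s parenthetical is simple arithmetic. A sketch with hypothetical enrollment figures (the article reports no specific numbers):

    from math import ceil

    # Hypothetical figures for illustration; the rule itself is NCAT's:
    # divide anticipated enrollment by the planned redesign section size
    # and offer only that many sections.
    anticipated_enrollment = 610
    planned_section_size = 40

    sections_to_offer = ceil(anticipated_enrollment / planned_section_size)
    print(sections_to_offer)  # 16

    # Offering, say, 20 sections instead spreads the same 610 students across
    # more instructors, raising the cost per student and eroding the savings.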

The One-Room Schoolhouse

Many projects used the “one-room-schoolhouse” approach to dealing with low-enrollment sections, producing both institutional cost savings and clear benefits to students. Previously, when small sections did not fill (particularly at smaller campuses and sites or during certain class times), they had to be either cancelled (thereby interrupting student progression through the sequence and costing the college revenue) or offered at a relatively high cost.

Using the one-room schoolhouse meant that these colleges offered multiple developmental-math courses in the same computer classroom or lab at the same time. Even though students were at different points in the developmental sequence, they could be in the same classroom, working with instructional software while instructors provided help when needed.

This strategy enabled the institutions to increase course offerings and avoid cancelling classes, which reduced scheduling roadblocks for students and enabled them to complete their degree requirements sooner. Since fewer sections were needed to accommodate the same number of students, the overall cost per student was lowered.

Student Savings

Although CTE’s goal was to reduce the institutional cost of offering developmental math, the program also produced substantial savings for students. Among them were:

Saving tuition dollars. The modularization of the developmental-math sequence allowed students to move from one course to the next within the same semester. Students saved on tuition because they were allowed to complete as many courses as possible in one semester while only paying tuition for the one in which they had registered. Those who worked through all the modules could finish the entire program in one semester and pay for one course instead of two or three, as they would have done in the traditional format.
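
With hypothetical tuition figures (the article reports no dollar amounts), the arithmetic looks like this:

    # Hypothetical figures for illustration only.
    tuition_per_course = 400   # assumed tuition for one developmental course
    courses_in_sequence = 3    # a typical three-course developmental sequence

    traditional_cost = tuition_per_course * courses_in_sequence  # $1,200 over three terms
    redesign_cost = tuition_per_course  # register once, complete all the modules
    print(traditional_cost - redesign_cost)  # $800 saved by finishing in one term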

Reducing the required number of credits. Several of the participating institutions redesigned multiple courses in the developmental-math sequence to eliminate duplication and topics that were beyond the scope of developmental math. This reduced the total number of credit hours in the sequence, which saved students money by reducing the number of credit hours for which they needed to pay tuition.

Lowering the cost of course materials. Several of the projects were able to lower the cost of materials significantly, creating additional savings for students. To complete their developmental work, students purchased one textbook and one software access code rather than three different textbooks. Several institutions developed customized textbooks that included the material for all courses in the sequence. Other projects eliminated textbooks entirely, requiring only the purchase of an access code (which included an electronic textbook at no additional cost to the student).

Accommodating life events. Community college students juggle many responsibilities, such as jobs, families, and caring for parents. As a result, they are often unable to complete courses in a single term. Many work diligently but have a “life event” occur that prevents them from reaching their educational goals.

When life interferes in the traditional model, students must withdraw—thereby losing tuition and any progress they have made—and start over the following term. In the CTE redesigns, they could adjust their schedules instead of having to withdraw from the course. Later, they could return to the class and pick up where they left off.

Lessons Learned

Course redesign is a proven, data-driven innovation in institutional practice that makes it possible to improve student-learning outcomes while reducing costs. CTE’s basic objective was to demonstrate the feasibility of redesigning remedial/developmental-math courses on a wider scale using a set of course-redesign tools and methods developed over the previous ten years in NCAT’s prior mathematics-course redesigns.

CTE reaffirmed the lessons learned in prior NCAT work. The pedagogical techniques (active learning, online tutorials, continuous assessment and feedback, and on-demand support) that had increased student learning in the prior mathematics redesigns also did so in CTE.

Similarly, the cost-reduction techniques (online tutorials, automated assessment, course management systems, shared resources, and staffing substitutions) that reduced instructional costs in prior redesigns also did so in CTE. This validated what we had learned in prior NCAT projects—if you followed the “rules,” you were successful. If you did not, you were not.

Those rules derived from the Emporium Model. They included:

Holding class in a computer lab/computer classroom. Having students work on math during class in computer classrooms and labs proved fundamental to the success of the redesign projects. The computer-supported classroom made it impossible for students to adopt a passive strategy in the course, as they often do with lecture-discussion approaches to teaching mathematics. The mantra “students learn math by doing math” was the redesign standard.

Basing the course on instructional software. The use of effective online instructional software served as a key component in each redesign project. Each software package offered consistent, high-quality, customizable content and created a student-friendly introduction to the math courses.

Lecture videos, animated examples, an electronic textbook, study plans, homework assignments, quizzes, practice tests, and post-tests were all in the same online location and could be accessed anywhere, anytime (although proctored post-tests had to be taken in the classroom or lab).

A major advantage of using interactive software was the timely feedback it gave students. When working on a homework assignment, they immediately knew whether an answer was correct or incorrect. The software gave them multiple resources (hints on how to solve problems, videos, animations, worked problems similar to the one missed, links to the e-textbook, etc.) to correct their misunderstandings.

Providing one-on-one, personalized, on-demand assistance for students. The availability of on-demand individual assistance in the lab/computer classroom ensured that students received immediate help when they needed it. Various resources were available to accommodate students’ differing levels of preparation, anxiety, and learning styles.

Students could ask for help online or from an instructor or tutor, watch a video, or attend a mini break-out session. Tutors and instructors in the math lab offered individual attention to address specific student problems. In many projects, instructors met with students individually each week to assess their progress and to help them develop a course of action for the next week. Face-to-face interaction provided opportunities to offer encouragement, celebrate successes, and/or exhort students to make more progress.

Establishing greater course consistency. In the traditional format, not just grading but course objectives, learning goals, instructional strategies, and course materials varied among different instructors or different campuses within the same institution. In the Emporium Model, faculty teams developed the module content and course-delivery methods to ensure that all students had the same learning experience, regardless of instructor or campus location. Course redesign also aligned the learning goals for all sections of developmental-math courses. The result was that students moved to credit-bearing math courses only after they had mastered defined learning outcomes for the developmental sequence.

Establishing clear expectations for progress, with deadlines.  Each project divided the content of its developmental-math courses into learning modules, with weekly expectations for completion. Weekly schedules told students how fast they needed to work to complete the course on time. These schedules helped students see what they had left to accomplish in the course and ensured that each course could be finished within one semester.
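
Such a schedule reduces to a simple pace calculation. A minimal sketch, assuming a hypothetical 12-module course in a 15-week term (the article does not specify module counts):

    # Hypothetical course shape: 12 modules in a 15-week term.
    TOTAL_MODULES = 12
    WEEKS_IN_TERM = 15

    def required_pace(modules_done, weeks_elapsed):
        # Modules per week a student must now average to finish on time.
        # Assumes at least one week remains in the term.
        return (TOTAL_MODULES - modules_done) / (WEEKS_IN_TERM - weeks_elapsed)

    # A student who has finished 4 modules by the end of week 8 must average
    # (12 - 4) / (15 - 8), or about 1.14, modules per week to finish on time.
    print(round(required_pace(4, 8), 2))  # 1.14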

Requiring attendance. It was absolutely necessary to have an incentive for attending class and/or a penalty for not attending. Math faculty and tutorial staff quickly realized that, as NCAT had found previously, “Students don’t do optional.” Whenever lab time was optional, the vast majority of students failed to take advantage of it.

At participating colleges, attendance counted for between 5 and 10 percent of the final grade. And some institutions penalized students for lack of attendance (e.g., students who missed 12 hours of class were administratively withdrawn from the course).

Monitoring student progress via logs, guidebooks, workbooks, and score sheets. All software packages contained tracking and communication tools that gave instructors feedback on each student’s progress. Some projects used a weekly score sheet that included points for staying up to date on assignments and attending class and/or lab. In others, students were required to maintain a paper notebook that contained class notes, notes from the software’s learning tools, and solutions to exercises. Whatever the method, instructors monitored each student's progress and time on task and took appropriate action when needed.

In addition to the techniques described above, which are characteristic of the Emporium Model in both developmental and college-level math, institutions participating in CTE were required to add a number of other practices based on prior NCAT experience.

Modularizing course materials and course structure. Each project divided its developmental-math sequence into a series of modules that contained online quizzes, homework problems, and notebook assignments corresponding to learning objectives or competencies within the course sequence. Some institutions retained course titles; others eliminated the old course structure and simply offered modules in the context of a single developmental-math course.

Students could progress more quickly or more slowly as needed. They could complete one course early and move into the next course in the same semester while paying for only one course. Because all sections used the same structure and procedures, students who did not finish the required modules in one semester could change sections during the semester or begin work the next semester exactly where they left off.

Requiring mastery learning. Within each module, students were required to complete assignments and could not move to the next element within the module until they had mastered each component at levels ranging from 75 to 90 percent. Students typically began by taking a preview quiz on which they could demonstrate mastery and thus bypass the module. Most of them, unfamiliar with the material, did not test out and moved directly to the homework.

After all homework for a module was completed, students took a practice quiz without online learning aids. If they did not demonstrate mastery, they had to review missed concepts before taking it again.

Students were typically allowed multiple attempts on the practice quiz. To move to the next module, they had to demonstrate mastery on a proctored post-test. Those who were not able to do so met with their instructors, who reviewed their work and recommended remediation techniques before they retook the test.
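
The flow just described can be summarized as a short, runnable sketch. The simulated scores, function names, and flat 80 percent threshold are illustrative assumptions; actual CTE thresholds ranged from 75 to 90 percent and varied by institution.

    import random

    MASTERY = 80  # illustrative; actual thresholds ranged from 75 to 90 percent

    def attempt(mean, spread=15):
        # Simulate one quiz or test score.
        return max(0, min(100, random.gauss(mean, spread)))

    def work_module(module):
        if attempt(mean=50) >= MASTERY:    # preview quiz: a few students test out
            return "bypassed"
        # Homework happens here: guided practice with hints, videos, and e-text.
        while attempt(mean=75) < MASTERY:  # practice quiz, no online learning aids
            pass                           # review missed concepts, then retry
        while attempt(mean=80) < MASTERY:  # proctored post-test gates progression
            pass                           # instructor review and remediation, then retake
        return "mastered"

    print([work_module(m) for m in range(1, 4)])
    # e.g., ['mastered', 'mastered', 'bypassed']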

Building on What We’ve Learned

Based on what NCAT has learned in CTE, we offer the following observations to inform ongoing national, state and local efforts in developmental-mathematics reform.

  • The Emporium Model will increase learning outcomes, improve completion rates and reduce instructional costs in developmental math as long as institutions follow the underlying rules.

CTE scaled up a proven innovation. Success required course teams to adopt proven redesign approaches and readily available tools and resources. Those that followed the rules achieved success; those that did not struggled and needed to make corrections to bring their projects into line with those rules.

  • The Emporium Model has reached a tipping point.

There are now more than 50 ongoing, large-scale mathematics redesigns that use the Emporium Model in developmental and college-level mathematics at both two- and four-year institutions. CTE projects are also serving as models for statewide developmental-math reform in Connecticut, Kentucky, North Carolina, Tennessee, and Virginia.

In the past year, we have seen the implementation of the Emporium Model at many institutions that were not participating in a formal NCAT program. There are now sufficient examples, resources, and experienced faculty to enable other institutions to move to the Emporium Model without the support of a formal NCAT program.

  • Strong local leadership is critical to staying the course.

Success in course redesign requires strong local project leadership at either the department or a higher administrative level, but there was no one model of successful leadership. Some redesigns were managed collegially, others depended upon a core group of tenacious faculty, and still others were implemented in a top-down fashion by administrators.

An important function of top leadership is a willingness to talk about the project and its benefits frequently and publicly, as well as to back up the project team when it runs into trouble by providing resources or fixing administrative problems. Above all, campus leaders have to ensure that the team sticks to the basic redesign plan.

  • Final course grades are not a good comparative measure of success.

Comparing final grades in traditional and redesigned courses is not the way to measure differences in student-learning outcomes unless the content, assignments and assessments of the courses being compared are the same. Anyone engaged in developmental math reform using final course grades to "prove" success is on very shaky ground. Completion of the developmental-math sequence and success in subsequent college-level math courses are the two most important data points to use in comparing student success rates in the traditional and redesigned formats.

  • It may not be possible both to prepare students well for college-level work and to ensure that they complete courses rapidly.

Mastery learning means that students do more work and learn more, which often takes longer than passing traditional courses. The payoff is that they emerge well prepared to take on college-level work.

  • Only a relatively small number of developmental-math students are actually able to accelerate.

Many involved in developmental-math reform want to create circumstances where students can accelerate their progress through the required course sequence. This was certainly true of the CTE institutions, and the redesigns were established to allow students to do so.

But only a small number were able to accelerate. While this provided an excellent opportunity for those students, the great majority needed the full term to complete a one-course equivalent, and many needed to slow down in order to be successful.

Just about all projects believed that many students would be able to test out of a given module and accelerate their progress through the developmental-math sequence. As most discovered, however, very few students—frequently only one or two—were able to do so.

  • The participating community colleges, in general, did not seem to care about cost reduction except as a by-product of an action that solved another problem.

Only a minority of projects bought into the cost-reduction aspect of the program. This is reflected in the fact that 18 of the 32 projects did not fully carry out their cost plans, and of the 14 that did, three reduced their costs by less than 9 percent. Yet all projects demonstrated the possibility of reducing cost while improving quality, and a large minority (15 projects) showed significant cost reduction.

  • The participating community colleges were surprisingly unprepared to deal with data.

NCAT required CTE participants to collect data on comparative student-learning outcomes, completion rates, and instructional costs using relatively simple, straightforward forms developed over the past 13 years in working with hundreds of colleges and universities. Most participants had a great deal of trouble completing these forms and found it difficult to deal conceptually with the data on them. Given that we were dealing with math faculty, NCAT found this phenomenon to be somewhat astonishing.

  • Successful redesign efforts took advantage of the collaborative aspects of the program.

Most CTE participants valued some kinds of collaborative work throughout the project. Those attending the four project workshops overwhelmingly felt that the opportunity to see real examples and interact with Emporium Model “veterans” and program staff was very valuable. Such interaction not only disseminated ideas and techniques but also built solidarity and mutual support through the knowledge that others were encountering (and overcoming) similar obstacles on their own campuses.

Yet the NCAT Redesign Scholar mentoring program—the most formal collaborative feature of the CTE program design—did not work as expected. Rather than taking advantage of their Scholar’s mentoring and of funds that were available to support campus visits, many course teams preferred to figure things out for themselves, even though this sometimes meant reinventing the wheel. The strongest projects took the most advantage of the Redesign Scholars.

The “not-invented-here” syndrome is not limited to two-year institutions—it has shown up repeatedly in prior NCAT programs.

Conclusion

CTE required teams to radically redesign multiple courses in a relatively short period of time. Most projects had significant implementation issues that they needed to deal with (faculty training, faculty buy-in, technology problems, registrarial and financial-aid issues, facilities issues, and so on). In addition, most of the community colleges were unaccustomed to collecting assessment and cost data to evaluate the efficacy of their redesigns. Yet despite these challenges, all 32 projects intend to continue and improve on the initial implementations of their redesigns.

In retrospect, the timeline for CTE was probably too aggressive, given the extensive changes that were required. The fall 2011 full-implementation period was, in many respects, similar to a typical NCAT pilot period. Teams made mistakes that they are now in the process of correcting, and many have acknowledged that they had multiple difficulties during the transition. It is not coincidental that the institutions that began their redesigns early, before the formal grant period started, have shown the strongest results.

We congratulate the CTE participants on their accomplishments thus far and look forward to their continuing achievements in addressing one of higher education’s most vexing problems: increasing student success in developmental mathematics.