Five Principles of Successful Course Redesign

Table of Contents

Principle #1: Redesign the whole course.
Principle #2: Encourage active learning.
Principle #3: Provide students with individualized assistance.
Principle #4: Build in ongoing assessment and prompt (automated) feedback.
Principle #5: Ensure sufficient time on task and monitor student progress.
Conclusion

From the 30 projects involved in the Program in Course Redesign, we have identified five course redesign models. Each of these models embodies five principles that lead to successful course redesign, and each of these principles has both a quality dimension that contributes to improved student learning and a cost dimension that contributes to reduced instructional costs. The following principles are essential to achieving success in course redesign.

Principle #1: Redesign the whole course.

In each model, the whole course--rather than a single class or section--is the target of redesign. The course is treated as a set of products and services that can be continuously worked on and improved by all faculty rather than as a "one-off" that gets re-invented by individual faculty members each term. The collective commitment of all faculty teaching the course coupled with the capabilities provided by information technology leads to success. Information technology enables best practices to be captured in the form of interactive Web-based materials supported by sophisticated course-management software. Faculty can systematically incorporate feedback from all involved in the teaching and learning process, adding to, replacing, correcting and improving an ever-growing body of learning materials and best practices.

Improving Quality

Any large introductory course taught by multiple instructors faces the problem of "course drift," especially when the instructors are adjunct faculty members. The phrase "course drift" refers to what happens when individual instructors teach the course to suit their individual interests rather than to meet agreed-upon learning goals for students, resulting in inconsistent learning experiences and inconsistent learning outcomes. Redesign that ensures consistent content coverage means that all students have the same kinds of learning experiences, resulting in significant improvements in course coherence and quality control.

Reducing Cost

Redesigning the whole course eliminates duplication of effort on the part of instructors and creates opportunities for using alternate staffing patterns. Faculty begin the design process by analyzing the amount of time that each person involved in the course spends on each kind of activity, which often reveals duplication of effort among multiple faculty members. Faculty members teaching the course divide their tasks among themselves and target their efforts to particular aspects of course delivery. By replacing individual development of each course section with shared responsibility for both course development and course delivery, faculty can save substantial amounts of their time while achieving greater course consistency.

Examples

Florida Gulf Coast University's traditional course comprised a growing number of 30-student sections. Because the course utilized a large number of adjuncts (approximately two-thirds of the sections were taught by adjuncts), there was significant course drift, yielding uneven coverage of the course topics and uneven student learning. Teaching was uncoordinated; some adjuncts did not adhere to the course learning goals and objectives, and some did not use the selected text. The redesign moved all students into a single section using a common syllabus, textbook, set of assignments and course Web site, organized in six modules, each designed by faculty experts. Students were placed into cohort groups of 60 and, within them, peer-learning teams of six students each. Preceptors, a newly created position, were responsible for interacting with students and grading critical analysis essays. A single full-time faculty member, responsible for both academic matters and preceptor supervision, taught the course, working closely with a full-time course coordinator responsible for administrative aspects. The model allows FGCU to scale by adding preceptors while maintaining important faculty oversight via ongoing curricular review and course coordination.

The University of Southern Mississippi's redesign moved 16 to 20 face-to-face lecture sections (approximately 60 students each) per term into a single 800-student online section organized around four four-week modules. A course coordinator, responsible for overall course administration, managed the team-teaching of four faculty members who each taught one four-week module in their area of expertise and were responsible for content, complementary materials, quizzes, and exams. Writing assignments were administered by WebCT and were graded by graduate assistants. The coordinator and the four faculty members each received credit for teaching a single course. Before the redesign, USM needed to staff 16 to 20 sections; after the redesign, the university needed the equivalent of only five staffed sections to serve all students.

Additional examples of projects that dealt explicitly with course drift and/or moved to shared course development and delivery among faculty include Brigham Young University, Penn State University, Tallahassee Community College, The University of Alabama, the University of Idaho and Virginia Tech.

Principle #2: Encourage active learning.

Each redesign model makes significant shifts in the teaching-learning enterprise, making it more active and learner-centered. Lectures and other face-to-face classroom presentations are replaced with an array of interactive materials and activities that move students from a passive, note-taking role to an active-learning orientation. As one math professor puts it, "Students learn math by doing math, not by listening to someone talk about doing math." Instructional software and other Web-based learning resources assume an important role in engaging students with course content. Resources include tutorials, exercises and low-stakes quizzes that provide frequent practice, feedback and reinforcement of course concepts. In some instances, classroom meetings are partially or entirely supplanted by online learning activities; in others, active learning environments are created within lecture hall settings supplemented by out-of-class activities. In the move from an entirely lecture-based approach to one centered on student engagement, learning depends less on words uttered by instructors and more on reading, exploring, and problem-solving undertaken actively by students.

Improving Quality

Encouraging active learning is a well-accepted pedagogical principle that leads to improved student learning. As Arthur W. Chickering and Zelda F. Gamson note in their 1987 Seven Principles for Good Practice in Undergraduate Education, "Learning is not a spectator sport. Students do not learn much just sitting in classes listening to teachers, memorizing prepackaged assignments, and spitting out answers. They must talk about what they are learning, write reflectively about it, relate it to past experiences, and apply it to their daily lives. They must make what they learn part of themselves. Working with others often increases involvement in learning. Sharing one's own ideas and responding to others' reactions sharpens thinking and deepens understanding."

Reducing Cost

When redesigns replace lectures and other classroom presentations that faculty members must prepare and deliver with interactive learning resources and team-based learning strategies, faculty time can be reallocated to other tasks, either within the same course or in other courses. Moving away from viewing instructors as the sole source of content knowledge and assistance and toward a greater reliance on interactive learning materials and student-to-student interaction offers many opportunities for reducing instructional costs.

Examples

The redesigns at the Universities of Alabama and Idaho and at Virginia Tech depended heavily on instructional software, including interactive tutorials, computational exercises, electronic hyper-textbooks, practice exercises, solutions to frequently asked questions, and online quizzes. Modularized online tutorials presented course content with links to a variety of additional learning tools: streaming-video lectures, lecture notes, and exercises. Navigation was interactive; students could choose to see additional explanation and examples along the way. Online weekly practice quizzes replaced weekly homework grading. With the development of a server-based testing system, large databases of questions were easily generated, and grading and record-keeping were automated.

Additional examples of projects that made heavy use of interactive learning materials include Rio Salado College, Tallahassee Community College, the University of Iowa, the University of Tennessee, Knoxville, and the University of Wisconsin-Madison.

Redesign at the University of Colorado at Boulder encouraged active learning through the use of peer-learning teams. The entire class (~200 students) met twice a week. At the first meeting, the instructor provided an overview of the week's activities. About a dozen discussion questions were then posted on the Web, ranging from factual questions to complex questions requiring students to draw conclusions. Midweek, students met for one hour in small learning teams of 10 to 15 students (supervised by undergraduate learning assistants) to prepare answers collaboratively and to carry out inquiry-based team projects. Teams were supported by software that allowed them to collaborate synchronously or asynchronously. Teams posted written answers to all questions. At the next class meeting, the instructor led a discussion session in which he directed questions to the learning teams. Rather than emphasizing students' mastery of facts, the redesign taught students to develop their understanding through written and verbal communication and to draw conclusions from collaborative inquiry-based activities.

Additional examples of projects that made team-based learning an important part of their redesigns include Fairfield University, Florida Gulf Coast University, the University of Dayton, the University of Illinois at Urbana-Champaign, the University of Iowa, the University of Massachusetts Amherst, and the University of Tennessee, Knoxville.

Principle #3: Provide students with individualized assistance.

In traditional lecture or classroom formats, students are often unlikely or unable to ask questions. Office hours attempt to mitigate this problem, but students notoriously do not take advantage of them. Students need help when they are "stuck" rather than during fixed times or by appointment. Each model either replaces or supplements lecture time with individual and small-group activities that take place in computer labs--staffed by faculty, graduate teaching assistants (GTAs) and/or peer tutors--and/or online, enabling students to have more one-on-one assistance. Students cannot live by software alone, however. When students get stuck, the tutorials built into most software programs are not enough to get them moving again. Students need human contact as well as encouragement and praise to assure them that they are on the right learning path. An expanded support system enables students to receive help from a variety of different people. Helping students feel that they are a part of a learning community is critical to persistence, learning, and satisfaction.

Improving Quality

Offering students help when they need it rather than according to a schedule not only addresses the particular problems they encounter but also helps keep them on task. Students who are unable to receive help at the time they need it too often give up and do not complete the task that they have been assigned. In addition to providing individualized assistance to students, faculty and others responsible for the course can learn what areas are most difficult for students and can continuously improve the learning activities included in the course.

Reducing Cost

By constructing support systems of various kinds of instructional personnel, the projects apply the right level of human intervention to particular student problems. Highly trained, expert faculty members are not required for all tasks associated with a course. By replacing expensive labor (full-time faculty members and graduate teaching assistants) with less expensive, less expert labor (adjunct faculty members, undergraduate peer mentors and course assistants) where appropriate, it is possible to increase the person-hours devoted to the course and the amount of assistance provided to students.

Examples

The Universities of Alabama and Idaho and Virginia Tech staffed their learning centers with a combination of faculty, GTAs, and peer tutors, who responded directly to each student's specific, immediate needs. Emporium helpers did not answer students' questions but rather directed students to resources from which they could learn. By creating a kind of triage response team, the universities increased the number of contact hours for students while greatly decreasing the cost per hour for that contact. Staffing adjustments could be made based on real use. For example, Alabama's initial plan was to staff primarily with instructors and to use graduate students and upper-level undergraduate students for tutorial support. It soon became apparent that the undergraduates were as effective as the graduate students in providing tutorial support, thus eliminating the need for graduate students. Based on data collected during the first semester of operation, Alabama also reduced the number of instructors and undergraduate tutors by matching staffing levels to student-use trends.

Another redesign strategy is to contract with companies that specialize in providing on-demand, individualized assistance. Tallahassee Community College outsourced the evaluation of student essay drafts to SMARTHINKING, a company that provides high-quality, real-time, online academic support for core courses in higher education through chat technology, virtual whiteboards and personalized feedback. Institutions can contract with SMARTHINKING to provide tutorial services for their students, either supplementing existing campus services or outsourcing them entirely. Both students and faculty reported consistent 24-hour turnaround, valuable suggestions for improved writing, and supportive commentary; both felt that the responders were highly capable and professional. TCC's use of SMARTHINKING both saved faculty time and increased quality.

A third strategy is to create new kinds of cost-effective positions to provide individualized assistance. Rio Salado College created a new position called a course assistant to address non-content-related questions (which constituted 90 percent of all interactions with students!) and to monitor students' progress. The addition of a course assistant freed the instructor to concentrate on academic rather than logistical interactions with students. As a result, one instructor was able to teach 100 students concurrently enrolled in any of four courses. Before the redesign, the instructor typically taught 35 students in one section. Students, in turn, received more help in a more timely manner.

Additional examples of projects that increased the amount of individualized assistance available to students include Florida Gulf Coast University, The Ohio State University, Penn State University, and Riverside Community College.

Principle #4: Build in ongoing assessment and prompt (automated) feedback.

Increasing the amount and frequency of feedback to students is a well-documented pedagogical technique that leads to increased learning. Rather than relying on individual faculty members in small sections to provide feedback for students--a technique known to increase faculty workload significantly--each model utilizes computer-based assessment strategies. In many cases, a large bank of problems for each course topic is built into instructional software, and assignments are graded on the spot. In other cases, publishers provide test banks that accompany textbooks, enabling faculty to create low-stakes mastery quizzes. Both techniques enable students to work as long as needed on any particular topic, moving quickly or slowly through the material depending on their comprehension and past experience or education. When the feedback process is automated, every problem or question is graded, and students receive specific information about their performance. This, in turn, leads to more efficient and focused time on task and higher levels of learning. Building in ongoing assessment and automated feedback also lets faculty see how well students are (or are not) doing and take timely corrective action.
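
To make the mechanics concrete, here is a minimal sketch (in Python) of the pattern just described: items are drawn at random from a bank so that repeat attempts differ, each attempt is graded on the spot with per-item feedback, and only a student's best score is recorded. The item bank, topic name, and gradebook structure are hypothetical illustrations, not details of any project's actual software.

```python
import random

# Hypothetical item bank: each topic maps to (question, correct answer) pairs.
ITEM_BANK = {
    "module_3": [
        ("Solve 5x = 20 for x.", "4"),
        ("What is the derivative of x**2 at x = 3?", "6"),
        ("Simplify 2/4.", "1/2"),
    ],
}

def draw_quiz(topic, n_items=2):
    """Draw a randomized quiz from the bank so that repeat attempts differ."""
    return random.sample(ITEM_BANK[topic], k=n_items)

def grade(quiz, responses):
    """Grade on the spot: return a fractional score and per-item feedback."""
    feedback, correct = [], 0
    for (question, answer), response in zip(quiz, responses):
        right = response.strip() == answer
        correct += right
        feedback.append((question, "correct" if right else f"review this; expected {answer}"))
    return correct / len(quiz), feedback

def record_attempt(gradebook, student, topic, score):
    """Mastery policy: only the highest score across attempts counts."""
    key = (student, topic)
    gradebook[key] = max(gradebook.get(key, 0.0), score)

# A student may retake the quiz as often as needed; only the best attempt is kept.
gradebook = {}
quiz = draw_quiz("module_3")
responses = [answer for _, answer in quiz]  # simulate a perfect attempt
score, feedback = grade(quiz, responses)
record_attempt(gradebook, "student_01", "module_3", score)
```

Drawing items at random keeps repeat attempts meaningful, and recording only the best score is what turns a graded quiz into the kind of low-stakes mastery exercise described above.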

Improving Quality

Shifting the traditional assessment approach in large introductory courses, which typically employ only midterm and final examinations, toward continuous assessment is an essential pedagogical strategy. Students can be regularly tested on assigned readings and homework using short quizzes that probe their preparedness and conceptual understanding. These low-stakes quizzes motivate students to keep on top of the course material, structure how they study and encourage them to spend more time on task. Online quizzing encourages a "do it till you get it right" approach: Students can be allowed to take quizzes as many times as they want to until they master the material. Students need detailed diagnostic feedback that points out why an incorrect response is inappropriate and directs them to material that needs review. Automating assessment and feedback enables repeated practice as well as prompt and frequent feedback--pedagogical techniques that research has consistently shown to enhance learning.

Reducing Cost

Giving students prompt feedback is a well-known pedagogical technique that leads to improved learning. Pedagogy in itself has nothing to do with technology. What is significant about using technology is that doing so allows faculty to incorporate good pedagogical practice into courses with very large numbers of students--a task that would have been impossible without technology. When instructors and/or teaching assistants are responsible for grading, typically they must make compromises such as spot-grading or returning composite scores to students. By replacing hand-grading with automated grading of homework, quizzes and exams, it is possible to reduce the cost of providing feedback while improving its quality. In addition, by assessing and aggregating what students do and do not understand, both individually and collectively, faculty are able to spend class time on what students do not know rather than wasting time on what they already understand, a great improvement over the one-size-fits-all lecture method.

Examples

Automated low-stakes quizzes enable both students and faculty to determine what individual students have and haven't learned. At The University of New Mexico, students received credit for completing three online mastery quizzes each week. Students were encouraged to take the quizzes as many times as needed until they attained a perfect score. For all quizzes, only the highest scores counted. The more time students spent taking quizzes and the higher their scores, the better they performed on in-class exams.

To determine whether mandatory quizzes (i.e., required for course credit) or voluntary quizzes (no course credit) would differentially affect exam and grade performance, UNM faculty conducted an experiment. Students in one section received course points for completing weekly online mastery quizzes; students in the other section were encouraged to take the mastery quizzes but received no course points for doing so. On in-class exams, students who were required to complete quizzes for credit consistently outperformed students in the voluntary section, receiving more As, Bs, and Cs and fewer grades of C- or below. Students in the for-credit section also took more quizzes, scored higher on them, and spent more time on them than students in the section where quizzes were not linked to credit. Moreover, relatively few students successfully completed quizzes when no credit was attached; some students chose not to take quizzes at all.

A second strategy is to build in assessments for students to use both individually and in groups. As part of Penn State's statistics redesign, students were regularly tested on assigned readings and homework using Readiness Assessment Tests (RATs), short quizzes that probed students' conceptual understanding. Constituting 30 percent of the students' grades, RATs were given five to seven times during the course. Students prepared to take the RATs outside of class by reading the textbook, completing homework assignments, and using Web-based resources. Students then took the tests individually. Immediately following the individual effort, the students took the same test in groups of four. In addition to motivating students to keep on top of the course material, RATs proved to be very effective in detecting areas in which students were not grasping the concepts, enabling faculty to take corrective actions in a timely manner.

A third strategy is to take advantage of "smart" feedback systems. Carnegie Mellon's redesign used StatTutor, an automated, intelligent tutoring system that monitored students' work as they went through lab exercises. StatTutor provided them with feedback when they pursued an unproductive path and closely tracked and assessed each student's acquisition of skills in statistical inference--in effect, providing a personal tutor for each student. After using StatTutor, students were able to achieve a level of statistical literacy not deemed possible in the course before its redesign. Florida Gulf Coast University used a software program called the Intelligent Essay Assessor (IEA) to grade short, well-structured student essays. Once programmed, the Intelligent Essay Assessor assessed student essays of 100 to 500 words based on their content as well as their grammar and mechanics. The software required careful preparation for use, but once fine-tuned, it reliably scored short paragraphs and saved faculty a great deal of grading time.

By assessing what students do and do not understand and aggregating the results, instructors are able to use class time more productively. At the University of Massachusetts Amherst, students reviewed learning objectives, key concepts, and supplemental materials posted on the class Web site before class. To assess their preparation for class, students then completed online quizzes worth points toward the final grade, which provided immediate feedback to students and data for instructors to assess students' knowledge levels. Instructors were able to reduce class time spent on topics that the students clearly understood, increase time spent on problem areas, and target individual students for remedial help. During class, UMass used ClassTalk, a commercially available, interactive technology that compiles and displays students' responses to problem-solving activities. Class time was divided into ten- to fifteen-minute lecture segments followed by sessions in which students worked in small groups applying concepts to solve problems posed by the instructor. Group responses were reported through ClassTalk. The instructor moderated the discussions and drew out key issues to reinforce specific ideas or reveal misconceptions. Similarly, at the University of Colorado at Boulder, peer-learning teams posted answers online to sets of inquiry-based project questions posed by the instructor. The instructor used software to review all the posted written answers to a given question. If all the teams correctly answered a given question, the instructor skipped that question and instead devoted the discussion time to questions with dissonant answers among teams.

Additional examples of projects that made heavy use of automated feedback include Florida Gulf Coast University, the University of Iowa and the University of Southern Maine.

Principle #5: Ensure sufficient time on task and monitor student progress.

Each redesign model adds greater flexibility in the times and places of student engagement with the course. This does not mean, however, that the redesign projects are "self-paced." Rather than depending on class meetings, the redesigns ensure student pacing and progress by requiring students to master specific learning objectives, frequently in modular format, according to scheduled milestones for completion. Although some projects initially thought of their designs as self-paced, open-entry/open-exit, they quickly discovered that students need structure (especially first-year students and especially in disciplines that may be required rather than chosen) and that most students simply will not make it in a totally self-paced environment. Students need a concrete learning plan with specific mastery components and milestones of achievement, especially in more flexible learning environments.

Most software packages have excellent tracking features, allowing faculty to monitor students' time on task. All projects have seen a fairly strong, direct correlation between student success and time on task. A frequently encountered problem was getting students to spend enough time working with the software. Some students were slow to log in, falling too far behind to catch up. Worse yet, some students never logged in at all. Most projects found it necessary to require students to log in at specific intervals and to spend a minimum amount of time working with course materials. Others established some form of early-alert intervention system--a kind of "class management by exception" process whereby baseline performance standards were set and students who fell too far behind were contacted. Email can then be used to reach these students and encourage them to "come to class."
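
As an illustration of this "class management by exception" process, here is a minimal sketch assuming hypothetical baseline standards (two logins and 150 minutes on task per week) and activity data exported from a course-management system; the thresholds, field names, and message wording are illustrative, and real projects would tune them to their own courses and platforms.

```python
from dataclasses import dataclass

# Baseline standards for "management by exception"; the thresholds are illustrative.
MIN_LOGINS_PER_WEEK = 2
MIN_MINUTES_PER_WEEK = 150

@dataclass
class WeeklyActivity:
    student_email: str
    logins: int
    minutes_on_task: int

def flag_at_risk(activity_log):
    """Return only the students who fall below either baseline standard."""
    return [a for a in activity_log
            if a.logins < MIN_LOGINS_PER_WEEK or a.minutes_on_task < MIN_MINUTES_PER_WEEK]

def draft_alert(a):
    """Draft an encouraging nudge; actually sending it is left to the mail system."""
    return (f"To: {a.student_email}\n"
            f"You logged in {a.logins} time(s) and spent {a.minutes_on_task} minutes "
            f"on the course materials this week, which is below the expected pace. "
            f"Please log in and catch up before the next milestone.")

# Weekly sweep over tracking data exported from the course-management system.
log = [WeeklyActivity("student@example.edu", logins=1, minutes_on_task=40),
       WeeklyActivity("ontrack@example.edu", logins=3, minutes_on_task=200)]
for student in flag_at_risk(log):
    print(draft_alert(student))
```

The point of the pattern is that staff attention goes only to the exceptions: students who meet the baseline generate no messages at all.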

Improving Quality

As Arthur W. Chickering and Zelda F. Gamson note in their 1987 Seven Principles for Good Practice in Undergraduate Education, "Time plus energy equals learning. There is no substitute for time on task. Learning to use one's time well is critical for students and professionals alike. Students need help in learning effective time management. Allocating realistic amounts of time means effective learning for students and effective teaching for faculty." Even though we know that time on task is essential to effective learning, it is difficult for faculty members in traditional formats unaided by technology to ascertain how much time on task each student is actually spending and to take corrective action.

Reducing Cost

By replacing time-consuming human monitoring of student performance with course-management software, it is possible to reduce costs while increasing the level and frequency of oversight of student progress. Sophisticated course-management software packages enable faculty members to monitor student progress and performance, track their time on task, and intervene on an individualized basis when necessary. Course-management systems can automatically generate many different kinds of tailored messages that provide needed information to students. They can also communicate automatically with students to suggest additional activities based on homework and quiz performance, or to encourage greater participation in online discussions. Using course-management systems radically reduces the amount of time that faculty members typically spend on non-academic tasks like calculating and recording grades, photocopying course materials, posting changes in schedules and course syllabi, and sending out special announcements to students--as well as documenting course materials like syllabi, assignments, and examinations so that they can be reused in multiple terms.
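
As a sketch of how such tailored messages might be generated automatically, the following assumes a hypothetical rule table keyed to quiz-score bands; the thresholds, message wording, and score-export format are illustrative, not features of any particular course-management system.

```python
# Hypothetical rule table mapping quiz-score bands to suggested next steps.
MESSAGE_RULES = [
    (0.90, "Strong work. Try the optional enrichment problems for this module."),
    (0.70, "Solid progress. Review the worked examples for the items you missed."),
    (0.00, "This topic needs attention: redo the tutorial, then retake the practice quiz."),
]

def tailored_message(score):
    """Return the first message whose threshold the score meets; bands are illustrative."""
    for threshold, message in MESSAGE_RULES:
        if score >= threshold:
            return message

def suggestions_for_class(scores):
    """Generate one tailored suggestion per student from a {name: score} export."""
    return {name: tailored_message(score) for name, score in scores.items()}

print(suggestions_for_class({"student_01": 0.95, "student_02": 0.62}))
```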

Examples

At The University of New Mexico, students who scored 75% or less on the first exam, which was administered at the end of the third week, were told that they should attend a weekly 50-minute studio for the remainder of the semester. During studios, students had the opportunity to work on multimedia course material, take quizzes, learn a memorization strategy, and discuss their course performance with undergraduate TAs (who were recruited from students who received A's in the course the previous semester). Those students who were advised to attend but who failed to attend any studio typically failed the course. In contrast, the more studios a student attended, the better their course performance.

Whereas Virginia Tech followed an open-attendance model in its redesign, the Universities of Alabama and Idaho added mandatory attendance and required group meetings to ensure that students spent sufficient time on task. Alabama required students to spend a minimum of 3.5 hours per week in its learning center and to attend a thirty-minute group session each week. This session focused on students' problems and allowed instructors to follow up in areas where testing revealed weaknesses. Idaho students were assigned to focus groups of 40 to 50 students each, grouped according to their majors so that particular applications could be emphasized. Groups met once a week to coordinate activities and discuss experiences and expectations. Both universities believe that the group activities helped build community among students and between students and instructors.

Additional examples of projects that focused on providing structure to ensure sufficient time on task include Rio Salado College, Riverside Community College, and The Ohio State University.

Conclusion

One of the strongest reasons for using information technology in teaching and learning is that it can radically increase the array of learning possibilities presented to each individual student. Thus, the "right way" to design a high-quality course depends entirely on the type of students involved. Students need to be treated as individuals rather than as homogeneous groups and should be offered many more learning options within each course. By customizing the learning environment for each student, institutions are likely to achieve greater learning successes.

Rather than maintaining a fixed view of what all students want or what all students need, institutions must be flexible and create environments that enable greater choice for students. Students differ in the backgrounds they bring to a course. While some students have strong prior experiences in a particular discipline, either through good high school preparation or other work experience, other students have weaker backgrounds. Offering students greater choice so that they can identify and spend time on the areas where they lack knowledge rather than spending equal time on all areas can accommodate such variation in backgrounds.

Students also differ in the amount of interaction that they require with faculty, staff, and one another. At the British Open University, for example, approximately one-third of the students never interact with other people but pursue their studies independently. New York's Excelsior College reports that 20 percent of its students take up to 80 percent of staff time, indicating a strong need for human interaction, in contrast to the 80 percent of students who require very little interaction. Rather than assuming that all or most course activities need to be conducted face-to-face, successful course redesigns begin by considering what aspects of the course require face-to-face time and what aspects of the course can be better conducted online.

Currently in higher education, both on campus and online, we individualize faculty practice (that is, we allow individual faculty members great latitude in course development and delivery) and standardize the student learning experience (that is, we treat all students in a course as if their learning needs, interests, and abilities were the same). Instead, we need to do just the opposite: individualize student learning and standardize faculty practice. By thinking more creatively about how to develop course designs that respond to a variety of learning styles and preferences, we can include structures and activities that work well with diverse types of students and lead to better, more cost-effective learning for all.