Assessing the Impact of Course Re-Design: A "Three-Course Menu"

By Peter Ewell, Senior Associate, NCHEMS

The basic assessment question associated with the Pew Project on Learning and Technology’s course re-design initiative is the degree to which improved learning has been achieved at lowered cost. Answering this question requires comparisons between the learning outcomes associated with a given course delivered in its traditional form and in its "re-designed" form. Normally, this comparison is accomplished by running parallel sections of the course in traditional and re-designed formats, and looking at whether there are any differences in costs and outcomes—a classic "quasi-experiment" (see Campbell, D.T. and Stanley, J.C. (1963). Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally). Occasionally, the experiment takes the form of "before and after," where a traditional section of the course provides baseline information, which is then compared to a later offering of the course in re-designed form. The key to validity in both cases is a) to use the same measures and procedures to collect data in both sections and, b) to ensure, as fully as possible, that any differences in the student populations taking each section are minimized (or at least documented so that they can be taken into account).
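To make the parallel-section comparison concrete, the minimal sketch below (in Python, using invented data) first documents any baseline difference between the two student populations and then compares the common outcome measure. The variable names, scores, and the use of a simple t-test are illustrative assumptions only, not a prescribed method of the project.

    # Hypothetical sketch: checking baseline equivalence and comparing outcomes
    # across a traditional and a re-designed section of the same course.
    # All data and variable names below are invented for illustration.
    from statistics import mean
    from scipy import stats

    # Entering GPA (baseline) and final exam score (outcome) for each section.
    traditional = {"gpa": [2.9, 3.1, 3.4, 2.7, 3.0, 3.2], "exam": [71, 78, 84, 65, 74, 80]}
    redesigned  = {"gpa": [3.0, 2.8, 3.3, 3.1, 2.9, 3.5], "exam": [75, 70, 88, 82, 73, 90]}

    # Step 1: document any baseline differences between the two populations.
    t_base, p_base = stats.ttest_ind(traditional["gpa"], redesigned["gpa"], equal_var=False)
    print(f"Baseline GPA: {mean(traditional['gpa']):.2f} vs {mean(redesigned['gpa']):.2f} (p = {p_base:.3f})")

    # Step 2: compare the learning outcome using the same measure in both sections.
    t_out, p_out = stats.ttest_ind(traditional["exam"], redesigned["exam"], equal_var=False)
    print(f"Final exam:  {mean(traditional['exam']):.1f} vs {mean(redesigned['exam']):.1f} (p = {p_out:.3f})")

If the baseline comparison reveals meaningful differences between the two populations, those differences should at least be documented alongside the outcome results so they can be taken into account.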

That said, three basic assessment data-collection approaches are recommended for use in combination, much like a prix fixe restaurant menu. On this menu, many methodological choices are available within each "course," but the "full meal" will always provide the best value.

1. Learning Outcomes Assessment.

The degree to which students have actually and appropriately mastered course content is, of course, the bottom line. Therefore, some kind of credible assessment of student learning is critical to any project. Some projects use common final examinations, the results of which can be directly compared across traditional and re-designed sections. This approach is most useful if sub-scores or similar indicators of performance in particular content areas can be reported in addition to an overall "final grade." If a common exam cannot be given—or is deemed to be inappropriate—an equally good approach is to embed some common questions or items in the examinations or assignments administered in each section. Alternatively, naturally occurring samples of student work (e.g., papers, lab assignments, problems, etc.) can be collected and their outcomes compared—a valid and useful approach if the assignments producing the work to be examined really are quite similar. Most such approaches require faculty to agree on standards for scoring or grading—a topic treated particularly well in the second reference provided below.
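As a small illustration of reporting sub-scores by content area rather than a single overall grade, the sketch below tabulates hypothetical common-exam sub-scores for each section; the content areas and scores are invented for illustration.

    # Hypothetical sketch: comparing sub-scores on a common final exam by
    # content area, rather than relying only on an overall grade.
    from statistics import mean

    sub_scores = {
        "traditional": {"limits": [7, 6, 8, 5], "derivatives": [6, 7, 5, 6], "integrals": [5, 4, 6, 5]},
        "re-designed": {"limits": [7, 8, 6, 7], "derivatives": [8, 7, 7, 6], "integrals": [6, 7, 5, 7]},
    }

    for area in sub_scores["traditional"]:
        trad = mean(sub_scores["traditional"][area])
        redes = mean(sub_scores["re-designed"][area])
        print(f"{area:12s}  traditional: {trad:.1f}   re-designed: {redes:.1f}")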

References

  • Palomba, C.A. and Banta, T.W. (1999). Assessment Essentials. San Francisco: Jossey-Bass.
  • Walvoord, B.E. and Anderson, V.J. (1998). Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass.

2. Tracking Student Behavior.

Because large-volume introductory courses are often prerequisites for other courses or for admission to a major, much of their effectiveness is revealed in whether students go on and how well they perform when they do. As a result, tracking student records after they complete re-designed courses, and comparing results with those of students completing the course in a traditional format, can be a powerful and revealing assessment technique. Particular measures to look at can include a) proportions completing the course with a satisfactory grade, b) proportions going on to a second course in the discipline, c) grade performance in "post-requisite" courses (and, if available, how students actually performed in different skill and content areas on examinations in such subsequent courses) and, d) overall retention and graduation rates. At many institutions, the Institutional Research or Assessment Office will already have established protocols for accomplishing such tracking studies, so consulting these offices first is a good idea. In a similar vein, some institutions have convened focus groups and/or conducted interviews with faculty teaching courses "downstream" to ask them specifically about the comparative strengths and weaknesses of student preparation among completers of traditional and re-designed prerequisite course sections.
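A minimal sketch of how the tracking measures in (a) through (c) might be computed from student records appears below. The record fields, grade scale, and data are hypothetical; in practice such records would come from the registrar or the Institutional Research Office.

    # Hypothetical sketch: computing tracking measures (a)-(c) from a small
    # set of invented student records. Field names, the grade scale, and the
    # records themselves are assumptions for illustration only.
    PASSING = {"A", "B", "C"}
    GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

    records = [
        # course format, grade earned, enrolled in next course?, grade in next course
        {"format": "traditional", "grade": "B", "took_next": True,  "next_grade": "C"},
        {"format": "traditional", "grade": "D", "took_next": False, "next_grade": None},
        {"format": "re-designed", "grade": "A", "took_next": True,  "next_grade": "B"},
        {"format": "re-designed", "grade": "C", "took_next": True,  "next_grade": "B"},
    ]

    for fmt in ("traditional", "re-designed"):
        group = [r for r in records if r["format"] == fmt]
        pass_rate = sum(r["grade"] in PASSING for r in group) / len(group)
        cont_rate = sum(r["took_next"] for r in group) / len(group)
        next_points = [GRADE_POINTS[r["next_grade"]] for r in group if r["next_grade"]]
        next_gpa = sum(next_points) / len(next_points) if next_points else float("nan")
        print(f"{fmt}: pass rate {pass_rate:.0%}, continuation {cont_rate:.0%}, "
              f"post-requisite GPA {next_gpa:.2f}")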

References

  • Ewell, P.T., ed. (1995). Student Tracking: New Techniques, New Demands. New Directions for Institutional Research #87, Fall 1995. San Francisco: Jossey-Bass.

3. Student Self-Reports.

Asking students about their reactions to a course and about how much they think they have learned can provide valuable contextual information for any assessment. Emphatically, however, two things must be constantly remembered when using student testimony. First, though useful and easy to obtain, student reports about their own achievements should be treated with care and are certainly not a substitute for direct assessments of learning. Second, student satisfaction per se should never be the central focus; rather, the emphasis should be placed primarily on how students are experiencing the course. Consequently, self-reports are most useful when they concentrate on such topics as a) changes in motivation or attitude about the subject (which may result in a greater willingness to enroll in further coursework or to persist in college), b) how often students engage in "good learning behaviors" known to be associated with high student achievement, such as time on task or collaborative learning and, c) exactly why particular types of students believe that a particular course-delivery format works or does not work for them. Similarly, such questioning can be very useful in guiding "mid-course correction," especially if it addresses whether or not students are "getting" particular concepts or understanding specific areas of content—a variant of the familiar "classroom research" approach. Finally, while the most common method of collecting student self-reports is a questionnaire survey, many campuses have obtained extremely good results from student interviews or focus groups where topics of how students actually experience learning in re-designed formats can be explored in greater depth.
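As one illustration, the sketch below summarizes hypothetical self-report items on "good learning behaviors" by section; the item names and responses are invented and stand in for whatever survey items a project actually uses.

    # Hypothetical sketch: summarizing self-reported frequency of "good
    # learning behaviors" (e.g., hours on task per week, collaborative study
    # sessions) by section. Items and responses are invented for illustration.
    from statistics import mean

    responses = {
        "traditional": {"hours_on_task": [4, 5, 3, 6], "group_sessions": [0, 1, 1, 0]},
        "re-designed": {"hours_on_task": [6, 7, 5, 6], "group_sessions": [2, 1, 3, 2]},
    }

    for fmt, items in responses.items():
        summary = ", ".join(f"{item}: {mean(vals):.1f}" for item, vals in items.items())
        print(f"{fmt:12s} {summary}")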

References

  • Flashlight Project (1998). Current Student Inventory. Washington, DC: TLT Group, American Association for Higher Education (AAHE). (http://www.tltgroup.org)
  • Angelo, T.A. and Cross, K.P. (1993). Classroom Assessment Techniques. San Francisco: Jossey-Bass.
  • Ewell, P.T. and Jones, D.P. (1996). Indicators of "Good Practice" in Undergraduate Education: A Handbook for Development and Implementation. Boulder, CO: National Center for Higher Education Management Systems (NCHEMS).

In all three "menu courses," care should be taken to look for differences among different kinds of students, not just overall averages. As both experience and considerable research suggest, different kinds of students may experience quite different results from their encounters with technology-enhanced course formats and unfamiliar pedagogies, and these differences are important in informing further re-design. At the same time, it is critical to remember that "impact" is only one part of the evaluation story. Just as important is the need to look carefully at the process of learning underlying the innovation and at the process of implementing the innovation itself. Both have a profound effect on ultimate outcomes and, after all, represent things that further re-design work can address if problems are detected.