Improving the Quality of Student Learning

University at Buffalo (SUNY)

Based on available data about learning outcomes from the course pilot, what were the impacts of re-design on learning and student development?

Four measures were used to compare learning in the traditional and in the redesigned course: the amount of material covered, final course grades, pre-test and post-test scores, and attitudinal surveys given at the beginning and at the end of the semester. We collected baseline data in the traditional course during the semester before the redesign began (spring 2000). The second set of data came from the first semester with the fully redesigned course (spring 2001).

By the first measure, learning increased in the redesigned course compared to the traditional course: the professor reported being able to cover more material in the redesigned course.

An analysis of final grades is another way to compare learning in the traditional and redesigned courses. Table 1 shows that the completion rate (the percentage of students completing the course rather than withdrawing or resigning) was slightly higher in the redesigned course. There was also a slight increase in retention (the percentage of students earning a grade of C or higher), and a larger increase in the percentage of students earning a grade of A- or higher. Finally, the mean grade earned in the course increased by a third of a letter grade, from a C+ to a B-. One potential weakness of a grade analysis is that the professor assigning the grades is usually one of the researchers involved in the redesign project, so the professor’s predisposition to believe that the redesign is an improvement might influence the grading. In this project, the professor was part of the redesign team but was not aware that the distribution of grades had changed.

Table 1. Analysis of final grades in the course


              Completion      Retention       Very Good       Mean Grade
              (F or higher)   (C or higher)   (A- or higher)
Traditional   91%             74%             37%             2.59
Redesign      94%             78%             56%             2.90
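As a sketch of how the statistics in Table 1 could be computed from a list of final grades, consider the following. The grade data, the function name, and the assumption that each rate is taken over total enrollment are illustrative, not from the original study:

```python
# Illustrative only: the grade mapping and sample data are invented,
# not the actual course records.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
                "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0, "F": 0.0}

def grade_stats(completed_grades, enrolled):
    """completed_grades: letter grades of students who finished (F or higher).
    enrolled: total enrollment, including withdrawals and resignations.
    All three rates are assumed to be taken over total enrollment."""
    completion = len(completed_grades) / enrolled                                  # F or higher
    retention = sum(GRADE_POINTS[g] >= 2.0 for g in completed_grades) / enrolled   # C or higher
    very_good = sum(GRADE_POINTS[g] >= 3.7 for g in completed_grades) / enrolled   # A- or higher
    mean_grade = sum(GRADE_POINTS[g] for g in completed_grades) / len(completed_grades)
    return completion, retention, very_good, mean_grade
```

On a 4.0 scale, the reported rise from 2.59 to 2.90 is roughly one plus/minus step (each step is 0.3 to 0.4 points), which matches the "third of a letter grade" described above.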

The third measure of learning was more direct, using pre- and post-tests. A set of questions from the final exam of the traditional course was selected to serve as both the pre-test and the post-test; the questions were asked of the students in the redesigned course at the beginning of the semester and again at the end. Table 2 shows the results: learning did occur in the redesigned course, as the percentage of correct answers rose from 30% to 66% over the semester. While at the beginning of the semester some students could not correctly answer any of the questions, by the end of the course no student answered fewer than 32% of the questions correctly. Similarly, the maximum percentage correct increased.

Did learning increase in the redesigned course relative to the traditional course? Unfortunately, we did not give the pre-test during the traditional semester. If we were to assume that students entered the course each semester with the same backgrounds, then we could compare the post-test results of the two semesters. Under this assumption, slightly less learning occurred in the redesigned course. However, the next measure of learning calls this assumption into question.

Table 2. Pre-test and post-test scores


                   Percent Correct   Min. % Correct   Max. % Correct
Traditional Post   69%               21%              88%
Redesigned Pre     30%               0%               79%
Redesigned Post    66%               32%              96%
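Using the values in Table 2, the improvement in the redesigned course can be summarized either as a raw gain in percentage points or as a gain relative to the pre-test score (a quick check on the table, not part of the original analysis):

```python
# Values copied from Table 2 (redesigned course).
pre_correct, post_correct = 30, 66   # mean percent correct

gain_points = post_correct - pre_correct    # gain in percentage points (36)
relative_gain = gain_points / pre_correct   # gain relative to the pre-test score
```

By either reading, students in the redesigned course more than doubled their mean score between the pre-test and the post-test.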

The final measure of learning was a survey of student attitudes toward computing and their confidence in their computing skills. Since one of the goals of the course is to empower students in their use of technology, positive changes in student attitudes toward computing would indicate that the course is achieving this goal. The survey questions were answered on a four-point scale, with 1 indicating the most negative response and 4 the most positive. Table 3 shows the survey results and indicates that the two groups differed at the beginning of the course: students entering the redesigned course were less confident and less empowered than the students from the previous year entering the traditional course. This may be because there were fewer students in the redesigned course, and thus a higher percentage of students with little background in computing.

The final positive change in the redesigned course appears in the attitudinal data. In the traditional course, students’ attitudes toward computing appear to have become more negative over the semester; in the redesigned course there was instead a small increase in positive attitude.

Table 3. Attitudinal survey results


                       Survey Average   % Change in Course
Traditional - Before   2.45
Traditional - After    2.04             -14%
Redesigned - Before    1.97
Redesigned - After     1.99             1%
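For the redesigned course, the "% Change in Course" column is consistent with a simple relative change in the survey average. This formula is an assumption on our part, shown only for the redesigned row:

```python
# Assumed formula: relative change in the mean survey response.
# Values copied from Table 3 (redesigned course).
before, after = 1.97, 1.99
pct_change = (after - before) / before * 100  # rounds to 1, matching the 1% reported
```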
