Improving the Quality of Student Learning
Iowa State University
Based on available data about learning outcomes from the course pilot, what were the impacts of re-design on learning and student development?
The Discrete Mathematics pilot course began in spring 2002 with 80 students. The three traditional large lecture sections (about 600 students in total), taught by other instructors and several TAs, served as a control group. The exams were coordinated so that four of the tests and the final were identical in the pilot course and the comparison sections.
The average scores on these exams are shown in the following table. (A more detailed statistical analysis, including student performance in subsequent courses, has not yet been completed.)
Test 1 covered mainly linear algebra and matrices. Test 2 covered linear inequalities and linear programming. Test 3 covered set theory and counting (permutations and combinations). Test 4 covered probability. Material from tests 3 and 4 made up 75% of the final.
The online students did slightly better than the traditional classroom students on most exams; this may be because they were able to take each exam several times and improve their scores. They did substantially better on test 3 and on the final. As noted above, at least a third of the final covered the same material as test 3, so those two scores may be related.
The main problem with the pilot redesign was student retention and motivation: 26 students dropped the online course, and another 15 gave up (they remained enrolled but did not keep up with most of the assignments). The students who kept up with the online assignments outperformed those in the traditional classroom sections, but we need to work on motivating all students to keep up in future terms.
December 2002 Update: The percentage of students who dropped the course fell from 32.5% (26 of 80) to 23.8%. The percentage who remained enrolled but did no work beyond the first few weeks fell from 18.8% (15 of 80) to 13%.
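The spring-to-fall comparison rests on simple proportions. A minimal check of the spring 2002 figures (fall 2002 enrollment is not stated in the report, so only the spring rates can be verified):

```python
# Spring 2002 pilot: 80 students enrolled (figures taken from the report).
pilot_enrollment = 80
dropped = 26   # students who dropped the online course
gave_up = 15   # remained enrolled but stopped doing assignments

drop_rate = dropped / pilot_enrollment * 100     # 32.5%
give_up_rate = gave_up / pilot_enrollment * 100  # 18.75%, reported as 18.8%

print(f"Drop rate: {drop_rate:.1f}%")
print(f"Give-up rate: {give_up_rate:.1f}%")
```

This matches the reported spring 2002 baselines of 32.5% and 18.8%.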
We did not compare scores with a classroom section this time. The fall 2002 scores were as follows:
These scores are a little lower than in spring 2002, but we are nevertheless quite pleased with the results: retention was better, so more of the mediocre students remained in the class, and the tests themselves were somewhat more challenging than last time.