University of Southern Maine
Looking back on the course pilot itself, what worked best?
The thing that has worked best is quizzing. Students complete the quizzes, and doing so seems to help them learn the course material. Quizzes are easy to implement, self-grading, and provide immediate feedback to students about what they do not know.
Quizzing in addition to testing has repeatedly been shown to increase student understanding of course material and performance on exams. The more times students are asked to demonstrate what they know, the more time they spend reviewing the material. Quizzing helps students learn by forcing them to spread their learning over the semester rather than cramming just prior to exams.
Another potential advantage of computer-based questions is that, unlike some other forms of direct assessment of student knowledge, quizzes and tests can be set up to provide immediate feedback to the student. Instead of having to wait until the instructor or scanner gets around to grading their work, computer-based testing makes it possible to show students which questions they got wrong within seconds of completing them. This is important because immediacy of feedback has been repeatedly shown to enhance learning.
Furthermore, because grading is automated, computer-based quizzing and testing encourages professors to allow students to retake exams and quizzes. Enabling students to take multiple evaluations may reduce evaluation-related anxiety and encourages them to manipulate and see the course-related information in context multiple times rather than just on test day. This makes exams and quizzes less of an evaluation tool and more of a way to learn what each student needs to focus on. In effect, the opportunity to retake quizzes and/or exams may focus the student on mastery of course-related material rather than worry and concern about failure.
Another feature that we like, but may not continue in large sections of the course, is the asynchronous linked discussion. Reading and grading these discussions is very time-consuming. Though students and faculty like them, they, like the essay exams we used to administer, are not likely to be continued simply because they take too much time to grade.
We like that students e-mail us with questions and problems. We can handle these on a case-by-case basis, quickly and efficiently. The feedback to the student is helpful, as is the opportunity for one-on-one contact. In fact, the faculty seem to like this feature more than the students.
The opportunity for students to work independently or in small groups at their own pace and time also works well. We think that this helps to sustain interest in the course material. The success of students in the asynchronous course tells us that, left to their own devices, students are fully capable of doing well in this type of environment. We are looking into the personality traits that predict who will succeed in this type of course.
What worked least well?
We continue to find dealing with textbook publishers difficult. They want to sell a product and see technology as a way to sell. We believe that they should be more focused on technology than they are at present; they talk a good talk, but follow-through has not been consistent. Getting them to see the light in terms of making courses easy to customize has proven a challenge, but both Worth and Prentice Hall are slowly coming around.