Course Readiness Criteria - Example 5
Have the course’s expected learning outcomes and a system for measuring their achievement been identified?
Successful large-scale redesign efforts begin by identifying the intended learning outcomes and developing methods other than lecture/presentation for achieving them. Have those responsible for the course identified the course’s expected/intended learning outcomes in detail? Has the curriculum been built backward from the intended outcomes?
Many campuses have established an "assessment culture," which makes it easier for them to assess learning outcomes for innovative projects as well as for traditional courses and programs. Does your campus have assessment processes in place (e.g., the ability to collect data, the availability of baseline data, the establishment of long-term measures)? Is there a system for measuring the achievement of these outcomes at both the individual student level and the class level? Does the department or program take advantage of nationally normed assessment instruments in its particular discipline?
Here are some examples of the ways various institutions responded to this criterion.
This course redesign will benefit extensively from work performed using more than $500,000 of NSF and CSU funding for ISGE that resulted in a basic matrix of nationwide standards in biology for non-biology majors. It will also benefit from an internal study of introductory biology teaching and learning standards conducted by a different biology department faculty team for the current core curriculum revision project.
The multimedia learning modules that will be adopted and adapted from the national Integrated Science project have from two to four universal learning objectives, seven to ten specific learning objectives, and several applied learning objectives per module. We expect to have at least 30 "hybrid" off-the-shelf and custom multimedia modules for the Bio 110/111L courseware by the end of this project. Given at least seven learning objectives per module, this would result in at least 210 detailed learning objectives for the quarter of study.
The project will designate control (regular classroom) and experimental versions of the course. Data will be obtained on the demographic characteristics of individual students (e.g., age, class level, gender, ethnicity, prior GPA) and on whether the student was in the control or experimental section. Using these data, exam scores, and other objective criteria of learning outcomes, it will be possible to estimate whether there is a statistical difference in the outcome measures, controlling for the influence of the other variables at the individual student level. The evaluation plan will be specified and in place before instruction begins.
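The individual-level analysis described above, estimating the experimental-section effect on exam scores while controlling for other student variables, amounts to a regression with a section dummy. The sketch below is only an illustration of that idea, using invented data and a hand-rolled ordinary least squares solver; none of the numbers or variable names come from the project itself.

```python
def ols(X, y):
    """Fit ordinary least squares by solving the normal equations
    (X'X) b = X'y with Gaussian elimination and back substitution."""
    n, k = len(X), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):
        # partial pivoting for numerical stability
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (xty[r] - sum(xtx[r][c] * b[c]
                             for c in range(r + 1, k))) / xtx[r][r]
    return b

# Invented data: each row is (intercept, experimental-section dummy, prior GPA)
rows = [
    (1, 0, 2.5), (1, 0, 2.8), (1, 0, 3.1), (1, 0, 3.5),  # control
    (1, 1, 2.6), (1, 1, 2.9), (1, 1, 3.2), (1, 1, 3.4),  # experimental
]
scores = [68, 72, 78, 85, 74, 79, 84, 90]  # exam scores

intercept, section_effect, gpa_effect = ols(rows, scores)
# section_effect is the difference attributable to the experimental
# section after adjusting for prior GPA (about 5.1 points for this data)
print(round(section_effect, 1))
```

In practice a project like this would use a statistical package rather than a hand-written solver, and would add the remaining covariates (age, class level, gender, ethnicity) as further columns of the design matrix; the structure of the estimate is the same.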
Learning outcomes have been identified for both R100, Introduction to Sociology, and W131, Elementary Composition. The student learning outcomes for R100 are:
Students who successfully complete English W131 will be able to:
In general, there is an assessment culture at IUPUI. The Office of Institutional Management and Institutional Research (IMIR) routinely collects information at both the class level and the individual student level. At the class level, it provides information on DWF rates, retention rates, etc. At the student level, IMIR distributes the "New Student Survey" and a host of other questionnaires on a regular basis. IMIR information is provided quickly, upon request.
The R100 Committee has developed its own assessment instrument. Since 1997 the Committee has been systematically examining the DWF rate for the course. As part of this examination, the Committee developed a survey for all students in R100. This survey was distributed on the first day of class, in August 1998. In December 1998, a follow-up survey, with questions that assessed student learning, was distributed to the students who remained in the course. With these data, and with data provided by the IUPUI Office of Institutional Management and Institutional Research (e.g., on students’ high school rank, SAT scores, grades in the course, etc.), we hope to gain an understanding of which students succeed and which students fail in R100. Both questionnaires included questions that allow assessment of student learning. For Spring 1999, additional student learning assessment questions were added to a questionnaire that was distributed to one large section (200 students) of R100. This section will receive a follow-up questionnaire in May.
We have identified a set of five major goals for the redesigned Stat 200 course. At the conclusion of the course, students will:
We have already collaborated with Penn State's Schreyer Institute for Innovation in Learning in developing an assessment plan for our current courses. Our future assessment program will utilize existing and new assessment methods to focus on four major areas: student learning, student attitudes, cost effectiveness, and program implementation.
The methods that will be used to measure student learning are:
The methods that will be used to measure student attitudes are:
The methods that will be used to measure cost effectiveness are:
The methods that will be used to measure the project implementation are:
The Student Quality Teams and the measures of student learning and student attitudes have already been designed and incorporated into the assessment of Stat 200 and will continue to be part of the assessment plan. The additional assessment methods (surveys, tracking, comparisons, and classroom observations) will help us determine how well we have met our intended goals in both the short and the long run.
The redesign will be based on the learning outcomes for computer literacy courses found in the NRC and NSF sponsored report of April 1999 "Be FIT: Fluency in Information Technology." The CSE 101 team has closely followed the development of the report over the past year, agrees with its conclusions, and will use them as a guideline for this project. The report states that students need to learn three kinds of knowledge that are interdependent and co-equal:
Within these knowledge components there are ten particular learning outcomes specified by the report.
The team has not yet identified a system for measuring the achievement of the learning goals. As part of a prior project, however, we have been collecting baseline data for two semesters in CSE 101 and are working to interpret them in collaboration with an assessment specialist from UB’s School of Education.
An additional outcomes challenge will occur when CSE 101 becomes a required general education course: how can we devise an appropriate screening test that demonstrates degrees of proficiency and defines a threshold that allows students to place out of the course? This challenge gets to the issues of learning outside of the context of a "course" and establishing proficiency certification. Working on the course proper should produce new ideas for solving the proficiency demonstration problem.
We know what we want the students to learn. We want them to:
We can assess some of these skills by traditional tests, others by self-directed exploratory projects (term projects, etc.), and others by direct interaction, such as small discussion sessions and interviews. As part of our proposal, we will request resources to support experts who can assess the effectiveness of the tools we develop in more depth than usual. We have already begun assessing the baseline performance of our large introductory lecture courses against these goals. As part of the implementation of this proposal, we will concurrently offer courses taught by traditional methods and will compare the performance of students taught in the traditional format with that of students taught in the computer-intensive format we propose to develop.
The learning outcomes have been identified in detail and are consistent with those established by the National Communication Association (NCA). Information relevant to assessing learning outcomes is available from two separate sources. First, NCA has a very well developed assessment procedure for the basic public speaking course. The Speech Communication Department will contract with the developers of the NCA assessment to consult on implementing the project and evaluating learning outcomes. Second, a study designed to assess learning outcomes in the public speaking course and collection of baseline data are currently underway in the Speech Communication Department. The study involves all of the students enrolled in the public speaking course in the Spring of 1999. This study will be used as a pilot project to design an experimental study for comparing learning outcomes in the redesigned course and the existing course. In addition, the Office of Evaluation Services now functions under the Center for Undergraduate Excellence. The Director of CUE has considerable experience in assessment for communication courses, demonstrated in both her publications and her presentations at NCA.
As a result of the NSF-sponsored systemic curriculum project in the UW-Madison chemistry department, we have developed both learning outcomes and means for assessing them. Outcomes were based partly on surveys of faculty in other disciplines that asked what content and process skills they expected students to gain from chemistry courses. Another basis is the general consensus on content in introductory chemistry across most of the chemistry departments in the U.S., which is congruent with the feedback we received from other departments.
The NSF systemic project has collaborated with the Learning through Evaluation, Adaptation, and Dissemination (LEAD) center at UW-Madison to obtain baseline data on our chemistry courses and to develop and analyze attitude surveys regarding those courses and the changes we have been making in them. We have also collaborated with Dr. Diane Bunce of Catholic University, a specialist on evaluation of chemistry courses, and the American Chemical Society Examinations Institute at Clemson University, which provides nationally normed exams. These latter experts have developed and tested a new content exam for each semester of Chemistry 103-104 that involves traditional questions and related questions designed to determine whether students have developed conceptual understanding. We have used these exams in the past and therefore can obtain comparative data to evaluate the success of our project. We have also developed systems for spot testing students' understanding by interviewing a small number of students selected to represent a cross section of abilities.