Ordered rankings slake the public’s thirst for information about colleges and universities, but at the expense of annoying many institutions. For example, over sixty private nonprofit institutions collectively committed to “disengage” from the U.S. News & World Report ranking system. Widespread disappointment is, a priori, a foreseeable outcome when ten “winner” placeholders constitute a small percentage of all institutions in any ranking category. Nonprofit higher education’s paradoxical pursuit of, and aversion to, peer prestige rankings have combined with persistent extra-inflationary net-tuition increases to provoke reasonable policy demands for improvements in academic productivity – measurable improvements in learning outcomes and related socio-economic development outcomes at reduced or sub-inflationary per-student annual operating costs. Presidents and their faculty colleagues should respond by embracing an available set of non-governmentally managed options for directly but independently assessing, rating (not ranking), and peer-comparing student learning outcomes – and improving academic productivity in the process.
A first rung of learning accountability can be reached by partially decoupling testing from teaching in a handful of courses that almost all nonprofit colleges deliver to most of their students. Decoupling is clearly possible when limited to “common courses” – the courses offered in common at almost all U.S. colleges and universities from syllabi evidencing nearly identical content coverage and learning objectives. Anyone familiar with higher education can readily cite ten or more common courses, and can then proceed systematically by starting with any institution’s undergraduate program and listing courses in descending order of their enrollments (aggregated across all course sections). Stop when the list’s cumulative enrollment count first reaches at least 40% of all undergraduate enrollments at a four-year institution or at least 50% at a two-year institution. The resulting list will feature 20–35 courses that are indeed taught in common at almost all institutions having undergraduate general education requirements and/or popular undergraduate professional programs with required courses, such as introductory accounting and marketing in business.
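The listing procedure just described can be sketched in a few lines of code. The course codes and enrollment figures are hypothetical, and the helper name is my own; only the aggregate-sort-accumulate logic and the 40%/50% stopping rule come from the text:

```python
from collections import Counter

def common_course_candidates(section_enrollments, threshold=0.40):
    """List courses, largest first, until their cumulative enrollment
    first reaches `threshold` of total undergraduate enrollment.

    section_enrollments: iterable of (course_id, enrollment) pairs,
    one pair per course section.
    """
    totals = Counter()
    for course, n in section_enrollments:
        totals[course] += n  # aggregate across all sections of a course

    grand_total = sum(totals.values())
    selected, running = [], 0
    for course, n in totals.most_common():  # descending enrollment order
        selected.append(course)
        running += n
        if running >= threshold * grand_total:  # stopping rule
            break
    return selected
```

A two-year institution would simply call the same function with `threshold=0.50`. Run against a real registrar’s section-level enrollment file, the returned list is the institution’s candidate set of common courses.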
Many common courses have counterparts in the college preparatory curricula of the nation’s high schools, and student learning outcomes in common courses often can be assessed at both secondary and post-secondary levels using one of several independent assessment collections available from ACT, College Board, ETS, IB, and other assessment organizations. A group of these assessments, such as a representative sample of the AP and/or CLEP exams from the College Board, could be chosen by an institution’s (or system’s) faculty to represent key general education objectives and/or admissions requirements and also to match common courses addressing the basic quantitative and communication fluencies and introductory perspectives in the humanities, social sciences, natural sciences, and so on. Each assessment could be required in the course it was selected to match – as the final exam, or not. Instructors would still assign letter grades to their students and also would be free to use (or not) the results of the independent assessments as they deem appropriate to their grading methodologies. This practice would have several advantages:
- Institutions and systems would have a manageable set of institutional mean and median scores and percentile ratings to compare to counterpart average metrics based on peer groupings of their choosing. They could include (or not) a student’s score or percentile rating (and the counterpart internal and peer average metrics) alongside the corresponding common course in the student’s transcript.
- Students could include these nationally recognized independent learning assessment scores or percentile ratings (to represent a substantive slice of their general education) in their private life-long e-portfolios (as portfolio technology and its provisions for security, verifiable authenticity, and access controls improve). They could share (or not) their private data with employers, family, and others as they see fit.
- Institutions and systems could specify, for each assessed course, the minimum score or percentile for which credit will be awarded to an entering or transfer student. As an additional courtesy to students, that minimum could be translated into transfer-credit minimums for other national instruments assessing approximately the same content. This limited transfer-of-credit transparency would alleviate much of today’s clamor for more comprehensive national transfer-of-credit protocols. Common courses, after all, are the courses for which credit transfer should arguably be transparent, though at a level determined by the individual institution.
- Secondary and postsecondary partnerships could use the same common assessments and even share common courses, sometimes taught by college instructors to secondary students, and vice versa. The resulting bridge between the two sectors, while not universally traversable, would have some broad and useful scaffolding for correcting the current misalignment between secondary and postsecondary education.
- Independent assessments, even if utilized only by a single institution across all sections of a common course, would mirror and amplify the powerful learning improvement strategies of the National Center for Academic Transformation’s “course redesign program,” and vice versa. The Center has compellingly demonstrated that technology can be used to redesign high-enrollment introductory (and developmental) courses to improve simultaneously both learning outcomes and per-enrollment direct instructional expenses. By adopting the Center’s course redesign strategies and linking them with independent assessment strategies (as outlined above), higher education could account for learning on a voluntary peer basis in a way that has national significance from a policy perspective while also improving academic productivity. (The Center reports not only measurable improvements in learning outcomes, but also average per-enrollment expense offsets of 38 percent across its first 30 course redesign projects, which can translate into potential annual per-student operating expense reductions of up to 10 percent.)
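As a back-of-the-envelope check on the parenthetical figures above: a 38 percent per-enrollment offset can scale to roughly a 10 percent reduction in per-student operating expense, but only under assumptions about cost shares that the text does not supply. The two share values below are illustrative guesses, not figures reported by the Center:

```python
# Illustrative arithmetic only; the two share values are assumptions.
offset = 0.38              # Center-reported avg per-enrollment expense offset
instruction_share = 0.50   # assumed: direct instruction's share of operating cost
redesigned_share = 0.50    # assumed: share of instruction in redesigned courses

operating_reduction = offset * instruction_share * redesigned_share
print(f"{operating_reduction:.1%}")  # roughly 9.5%, i.e. "up to 10 percent"
```

Institutions with different cost structures would land elsewhere; the point is only that the Center’s course-level savings plausibly compound to the operating-budget magnitude cited.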
An alternative and/or complement to the above common course assessment strategy is a requirement for students to complete the Collegiate Learning Assessment or the MAPP assessment from ETS before a degree is granted – the GRE would also be appropriate at four-year schools. Instead of assessing learning objectives specific to a particular course or body of knowledge, these instruments assess the basic fluencies and critical thinking skills typically cited as one core goal of most undergraduate programs. Many of the advantages cited immediately above would accrue to this broader approach to assessing general education competencies independently. Requiring the CLA would strengthen its current use to test for value-added learning by sampling students early in their studies and again before graduation. An institution, at its discretion, could even require an institutionally determined and published minimum performance level for graduation.
The above ideas in gentler, more culturally correct variations have been discussed in various circles, such as in the “Voluntary System of Accountability” discussions seeded by AASCU and NASULGC and in the Council for Higher Education Accreditation’s Tenth Anniversary Commission discussions. My stronger suggestions may be culturally incorrect and judged simple-minded or heretical, and me naive – or worse. My perspective, nevertheless, derives from a 30-year faculty career (including a stint overseeing a general education program and experience in other academic administrative positions) and from subsequent executive consulting work with hundreds of institutions. Allow me to cite my practical professional philosophy for assessing learning.
As a mathematics faculty member, I gave students a final letter grade in each of my courses. Like many teachers, I assigned numerical scores to student work, computed a weighted final average of each student’s scores over the semester, and then grouped the final averages into the A – F rating (not ranking) system. Believing that my primary purpose as an instructor is to help students learn, I decided early in my faculty career to replace any lower full-class-period test score with the two-hour final exam score when computing a student’s final numerical average. (The final exam had a weight of 2 in the final averaging process.)
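The substitution-and-weighting rule just described might be sketched as follows. The function name and the weight of 1 for ordinary tests are assumptions; only the replace-lower-scores rule and the final-exam weight of 2 come from the text:

```python
def final_average(test_scores, final_exam, test_weight=1, final_weight=2):
    """Weighted final average under the substitution rule: any test
    score lower than the final-exam score is replaced by it."""
    adjusted = [max(score, final_exam) for score in test_scores]
    total = (sum(test_weight * s for s in adjusted)
             + final_weight * final_exam)
    weights = test_weight * len(adjusted) + final_weight
    return total / weights
```

Because every adjusted score is at least the final-exam score, the resulting average can never fall below that score: a student who scores 50 on every test but 80 on the final receives a final average of 80.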
I would have preferred a flexible teaching/learning opportunity based on mastery learning, but the time-fixed semester system precluded a more flexible mastery system. Still, awarding a final numerical average at least as high as the score on a student’s comprehensive final exam motivated the student to strive for improvement and mastery whenever and however possible, from the beginning to the end of the course. This grading practice typically resulted in a percentage of C course grades exceeding the average among my colleagues who favored more traditional grading practices, while yielding percentages of A and B grades indistinguishable from their averages.
We group-graded internally developed common final exams across all sections in a few common courses – pre-calculus and calculus, for example. I would have been pleased had we instead selected independent subject-matter assessments as the mandatory common final exams in those courses, for that would have conferred potential benefits to students and national benchmarking opportunities to me and the department. In any case, nothing in the common-final model dictated how each of us assigned final letter grades, though most of us presumably factored the common-final-exam score into our usual final-grade methodologies. I stuck to my methodology and, presumably, my colleagues to theirs.
Institutions and their faculties and student bodies are all different, but they all deliver approximately the same small group of courses representing nationally shared general education competencies. These courses and the “courseless” competencies they address can be independently assessed without abrogating faculty teaching and grading prerogatives. Nonprofit higher education and its accreditation partners could take this minimally invasive step up to a first rung of transparent learning accountability and, in the process, improve productivity and affordability. Doing so would require some fine-tuning of assessments, along with technology-enabled adjustments in the assessment process and its pricing and peer reporting. At the very least, the questions I have raised should be broadly debated rather than narrowly preempted by fundamentalist beliefs about leadership and change in the academy, beliefs which are no longer mission-defensible in many institutions, systems, and districts.
With the rising cost of college and the lowering of standards across the board, the CLEP is the best thing to come along for students who worked hard in high school and can pass undergrad classes by testing out of them.
More people should know about the CLEP exams.
http://www.cleptestingguide.com/index.shtml
Posted by: CLEP Grad | December 28, 2007 at 08:31 PM