Assessment That Works
We at Plymouth State University have been talking about appropriate assessment of our General Education program for most of the 20 years that I have been here. When we instituted our “new” Gen Ed program in 2005, we developed a General Education Handbook that identified some administrative goals for the new program, including: “A General Education program should have built into it a mechanism of assessment and change to keep it on track and up to date” and “All courses and components of the program, and the program as a whole, should be regularly assessed and reconsidered.” We developed a plan for assessing individual courses (in the Directions and Connections areas) but have struggled to assess the various components of the program. And we have not been able to figure out how to assess the program as a whole. The time has come for us to figure that out.
Last February, a group of faculty from the Gen Ed Committee, the cluster guides, and one member of the Curriculum Committee attended the Association of American Colleges and Universities’ (AAC&U) conference on General Education and Assessment. As I wrote at the time, the conference reinforced much of what we had been talking about concerning the role that design thinking could play in our General Education program. At the conference, we also learned about several AAC&U nationwide initiatives, including the development of the VALUE (Valid Assessment of Learning in Undergraduate Education) rubrics. Around this time, I had also read Beyond the Skills Gap as part of a reading group, which prompted us to start thinking about “habits of mind.” Based on this work, two different groups developed PSU’s habits of mind (which are described in the First Year Seminar’s open educational resource) as the learning outcomes of the General Education program. We also developed a set of benchmarks to assess the habits of mind. These steps were based on the work done by AAC&U and were a great start in figuring out how to assess the Gen Ed program as a whole. The next step is to develop an assessment plan. So I decided to go back to the source of our work to see what steps AAC&U recommends next.
To get me thinking in that direction, I read Assessment That Works: A National Call, A Twenty-First Century Response. Published by the AAC&U, this is a comprehensive guide to the work that the organization has done to create a new kind of assessment framework for liberal education programs like our Gen Ed program. The author, Peggy Maki, writes, “A liberal education helps students develop a sense of social responsibility, as well as strong and transferable intellectual and practical skills such as communication, analytical and problem-solving skills, and a demonstrated ability to apply knowledge and skill in real-world settings (p. 1).”
The underlying goals of the work described in this short book (only 52 pages including appendices) are to “articulate what contemporary higher education institutions commonly expect students to demonstrate as a result of a liberal education (p. 1).” The book describes the process used to develop the national Liberal Education and America’s Promise (LEAP) twenty-first century Essential Learning Outcomes. One of the early steps was to survey employers about their needs. One finding of these surveys is that “Today’s students must be able to solve the kinds of complex, unstructured problems that technology cannot solve (p. 5-6).” AAC&U researchers also articulated the kinds of realities faced by today’s colleges and universities that challenge our traditional notions of what a college education looks like. The main finding here is: “The full-time, traditional-aged, white student with ‘acceptable’ standardized test scores who begins and completes a degree at a single institution, following a two- or four-year trajectory, is no longer typical in American higher education (p. 9).” Given the many challenges facing American higher education, the author asks, “Is there an equitable way to certify or credit learning across all higher education institutions and providers that does not compromise the integrity of an individual institution’s curriculum, that helps facilitate transfer and access, and that enables students to develop a coherent understanding of education as something more than checking off courses or credits on the road to a degree? (p. 11)”
In response to this work, AAC&U launched the Greater Expectations Initiative. The goal of the initiative was to determine what all graduates should know and be able to do as a result of a college education. In addition, the initiative sought to clarify how all students could be prepared “to achieve the high levels of expected performance needed to address the emerging demands and challenges arising from the workplace, American society, and globalization (p. 13).” From this work, the LEAP Essential Learning Outcomes and the VALUE rubrics used to assess the outcomes were developed. These have become “a guiding compass for student accomplishment in the twenty-first century. They also have become a guiding compass for students themselves as they seek to connect general education to their majors (p. 15).” This “shared framework for undergraduate liberal learning” is designed to help students understand the value of general education. To further support students’ view that general education is important, the author suggests that students should not be required to complete gen ed “before they begin study in their majors” since that may cause them “to view that body of courses and experiences as an unnecessary roadblock (p. 15).” “Furthermore, holding a general education course hostage as the sole opportunity for students to learn a specific liberal arts outcome, such as quantitative reasoning in a required general education mathematics course, may limit students’ understanding of how that outcome is relevant to and can be applied in a range of other contexts (p. 15).” In other words, the LEAP Essential Learning Outcomes should be seen as outcomes of an entire college education, not just of the general education portion of the experience.
What are the LEAP Essential Learning Outcomes? They can be found on the AAC&U web site. They include things like excellent written and oral communication, information literacy, teamwork and problem solving, foundations and skills for lifelong learning, and integrative and applied learning. These are the kinds of outcomes that we at PSU based our habits of mind on. The LEAP report has suggestions for how to support student achievement of the Essential Learning Outcomes. The Principles of Excellence, for example, “identify ways to engage students in their learning (p. 16)” and include things like “Immerse all students in analysis, discovery, problem solving, and communication, beginning in school and advancing in college” and “Prepare students for citizenship and work through engaged and guided learning on ‘real-world’ problems.” The report also suggests that we engage students in high-impact practices such as first year seminars, collaborative assignments and projects, and capstone courses and projects. I particularly like the report’s recommendation that we focus on students’ signature work, which means that we “prepare all students to complete a substantial cross-disciplinary project in a topic significant to the student and society, as part of the expected pathway to a degree.”
In addition, the report suggests that we engage in “authentic assessment” via the VALUE rubrics or adaptations of them (p. 16). The idea is “to value the results of student assignments that are organically related to the pedagogies, educational practices, and expected outcomes that shape the educational experience to be assessed (p. 17).” This means that we should assess work students complete in the course of their studies rather than build an assessment plan around separate, stand-alone assessment events.
The VALUE rubrics were designed for “institutional-level assessment” so that we can identify and “address patterns of underperformance in student work and to create a program-wide commitment to continuous improvement (p. 18).” When such patterns are identified, faculty should engage in discussions that are “focused, first, on why these patterns occur and, second, on how they can be improved through changes (often systemic) in pedagogy, instruction, course sequencing, or assignment design (p. 18).” In other words, our assessment activities should be focused on improving the environment in which student learning occurs.
The book goes on to provide some examples of how the VALUE rubric approach to assessment can be implemented. This section of the book is in direct opposition to the movement towards standardized testing to determine student learning. The author describes the program at the University of Delaware where they used adapted versions of the VALUE rubrics to score examples of student work. The author says, “In contrast with the direct and indirect costs associated with using standardized tests–purchasing test booklets and Scantron sheets, for example, and covering processing fees–the cost of using VALUE rubrics to score student work was ‘far more modest,’ even after the compensation provided to faculty scorers is taken into account (p. 19).” These costs, she says, “while expensive in terms of faculty time, involve reallocations of internal spending rather than a net outflow of resources (p. 19).” In addition, scoring student work using the rubrics “yielded results that were more useful and actionable than test scores (p. 19).” “Collecting student artifacts was less disruptive than addressing the scheduling and oversight issues related to administering a standardized test (p. 19).” And finally, “The sampling of student work can be expanded or designed purposefully to capture data of specific interest to the faculty and the institution, such as data from first-year students enrolled in multiple sections of a general education course for comparison with data from seniors enrolled in capstone experiences across all majors (p. 19-20).”
Focusing on Essential Learning Outcomes and VALUE rubrics to assess those outcomes allows American higher education to “publicly identify what liberal learning looks like across” our institutions (p. 41). Increasingly, the evidence of student learning is presented in ePortfolios, the medium that “is emerging as the best way to demonstrate a student’s ownership of his or her learning and to document levels of achievement (p. 41).” This approach is valuable to students as well because the rubrics “serve collectively as a compass that can be used to guide them on their educational journey (p. 41).” The use of VALUE rubrics can ease transfers (especially when an articulation agreement is in place) by allowing the focus to be on shared learning outcomes rather than “common or comparable course titles and textbooks (p. 38).”
There is much more of interest in this short book. For example, there are numerous case studies from institutions and groups of institutions about the implementation of the Essential Learning Outcomes and VALUE rubrics. As we move forward with our own assessment plans, I highly recommend that more people read the book. Having done so myself, I feel like we are on a good, proven path for implementing assessment of our General Education program.
Through reading this book, I also learned about AAC&U’s newest initiative, General Education Maps and Markers (GEMS). The initiative’s guiding principles for designing General Education programs seem promising to me. I look forward to learning more about the initiative as we move toward rethinking our program in response to our upcoming assessment of it.