Media, Technology, and Education
Assessment, General Education, Habits of Mind

Grading, Assessment, and Habits of Mind

In my last post, I wrote about the Habits of Mind that the Gen Ed Outcomes Task Force created last year as Gen Ed program learning outcomes. Creating them was a necessary step in developing our plan to assess the Gen Ed program. We also created a set of benchmarks, which look somewhat like rubrics, to be used in our assessment tasks. In our description of each of the benchmarks, we made the following statement:

These benchmarks are not intended to be used like traditional rubrics. They are meant to gauge the current level of achievement of students as they progress through the General Education program, not as mechanisms to assess the quality of individual assignments.

To understand why we made this statement, it helps to distinguish the different activities that get labeled as “assessment.” One of the things the Task Force talked a lot about is the difference between assessing the program as a whole, assessing individual Gen Ed classes, and assessing student performance within either a particular Gen Ed class or the program as a whole.

Perhaps the best way to understand these different activities is to think about the design of the benchmarks. Below is the Self-Regulated Learning benchmark. Note that it starts with a description of what the Habit of Mind is all about. Then there is a grid that looks like many rubrics used in education. The leftmost column contains signposts, which are indicators of student achievement in the particular Habit of Mind. In other words, if we want to determine how developed a student’s habit of self-regulated learning is, we would look at the level at which they take responsibility for their own learning, how engaged they are in the learning process, and the level of metacognitive awareness they have. Across the top are three levels of achievement: for each signpost, a student is at Base Camp, Climbing, or the Summit, with each successive level representing greater sophistication. The box at the intersection of each row and column describes what that column’s level of achievement looks like for that row’s signpost. For example, a student who has reached the Base Camp achievement level in taking responsibility for their own learning “strives to meet learning goals and evaluation criteria embedded in assignments and courses” while a student who has reached the Summit “sets high expectations for oneself and develops a plan to meet those expectations.” Note that both of these achievement levels articulate what the student can do. We avoided a deficit model by not describing what the student CANNOT do at a particular level of achievement.
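To make the shape of the grid concrete, here is a minimal sketch of how such a benchmark could be laid out, with signposts as rows and achievement levels as columns. Only the two quoted cell descriptions come from the actual benchmark; the signpost names are paraphrased from the discussion above, and the remaining cells are placeholders.

```python
# A sketch of the benchmark grid: each signpost (row) maps to a description
# of what achievement looks like at each level (column). Only the two quoted
# descriptions come from the actual Self-Regulated Learning benchmark; the
# other entries are placeholders.
LEVELS = ["Base Camp", "Climbing", "Summit"]

self_regulated_learning = {
    "Takes responsibility for own learning": {
        "Base Camp": "Strives to meet learning goals and evaluation criteria "
                     "embedded in assignments and courses",
        "Climbing": "(intermediate-level description)",
        "Summit": "Sets high expectations for oneself and develops a plan "
                  "to meet those expectations",
    },
    "Engagement in the learning process": {level: "(description)" for level in LEVELS},
    "Metacognitive awareness": {level: "(description)" for level in LEVELS},
}
```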

These benchmarks were designed for evaluation of the Gen Ed program as a whole. This means that we would expect a first-year student to have a lower level of achievement than a senior. That is, we would expect a fairly high percentage of first-year students to be at the Base Camp level of achievement for this Habit of Mind and a fairly low percentage of seniors to be at the Base Camp level. A measure of effectiveness for our Gen Ed program would be that a high percentage of students move from Base Camp to Summit over the course of their involvement in the program. Such movement doesn’t prove that our Gen Ed program is THE thing that is helping students develop this Habit of Mind, but it gives us some evidence that we are doing something to support this development, and Gen Ed is a likely candidate. It would be very difficult to isolate Gen Ed among all the variables involved in student learning to know for sure that Gen Ed is THE thing.

So why couldn’t we use these benchmarks as rubrics to grade individual student assignments? Why wouldn’t the benchmarks allow us to assess the quality of individual assignments? Understanding the answers to these questions will help us understand how program assessment differs from assessing student learning.

Let’s assume we’re going to use the Self-Regulated Learning benchmark as a rubric for grading an assignment. To do that, we will need to assign points or a letter grade to each of the levels of achievement for each of the signposts. So we might say that the assignment we’re grading is worth a maximum of 30 points. If a student demonstrates the Summit level of achievement for any of the signposts, they receive 10 points (and 10 points times 3 signposts equals 30 points). If they achieve the Climbing level, they receive 5 points, and if they achieve the Base Camp level, they receive 1 point. We expect that most first-year students will be at the Base Camp level of achievement for the signposts. This means that the typical first-year student will receive 3 out of a possible 30 points on the assignment, even though the student is developmentally exactly where we think they should be. One might quibble with how I’ve assigned points (or letter grades) to the various levels of achievement. But no matter how we do the assigning, we are using the benchmark in a way that is contrary to its design. The Summit level of achievement is the goal for the entire program, so unless we’re talking about something being done in the Gen Ed Capstone course, we would not expect a student to achieve that level on an individual assignment.
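To see the arithmetic plainly, here is a minimal sketch of that hypothetical point scheme (10, 5, and 1 points per signpost). The names and function are mine, not part of the benchmark; the point is simply that a first-year student at Base Camp on all three signposts earns 3 of 30 possible points.

```python
# Hypothetical point values for each achievement level, as described above.
POINTS = {"Summit": 10, "Climbing": 5, "Base Camp": 1}
MAX_POINTS = POINTS["Summit"] * 3  # three signposts at 10 points each = 30

def score(levels_achieved):
    """Sum the points earned for the level reached on each signpost."""
    return sum(POINTS[level] for level in levels_achieved)

# A first-year student who is developmentally right where we expect them to be:
first_year = ["Base Camp", "Base Camp", "Base Camp"]
print(score(first_year), "out of", MAX_POINTS)  # 3 out of 30, i.e. 10%
```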

We are starting to use these benchmarks to assess the Gen Ed program this Fall. We have a pilot program in which faculty teaching Gen Ed courses will submit student work for evaluation by the Gen Ed Assessment Plan Task Force. We expect the work done by First Year Seminar students to fall primarily at the Base Camp level and the work done in Directions courses to fall primarily at the Climbing level. If we find something different, that tells us something is wrong with either the design of our assessment or the design of our Gen Ed program; which one depends on how what we find differs from our expectations. We are also designing a set of Gen Ed Capstone classes to be taught in the Spring. When we evaluate the final projects in those classes, we expect to see that most students have achieved the Summit level for the Habits of Mind. Again, if we find something other than that, we’ll have to figure out what the problem is.

Let me know if you want more information about the early design of this assessment plan. I’m also working on some other assessment activities that I’ll write about as they solidify.

Article written by:

I am currently Professor of Digital Media at Plymouth State University in Plymouth, NH. I am also the current Coordinator of General Education at the University. I am interested in astrophotography, game studies, digital literacies, open pedagogies, and generally how technology impacts our culture.


Licensed by Cathie LeBlanc under a Creative Commons Attribution 4.0 International License