MISSOURI ONLINE RECOMMENDS
Missouri Online recommends clearly communicating expectations for assessments (learning activities and assignments) via a rubric or suitable alternative. See item #30 in our 5 Pillars Quality Review checklist.
Moreover, because rubric criteria and assessments are most effective when aligned with learning outcomes, Missouri Online asks that online courses have measurable course- and module-level outcomes (checklist items #22, 23, and 25).
Assessments close the loop of course design, allowing you to determine whether students have met the stated learning outcomes. Selected-response assessments, such as multiple-choice quizzes, are straightforward to grade: The answer is either correct or incorrect. Constructed-response and authentic assessments, however, can be more challenging to grade.
A well-constructed rubric is one way to articulate the expectations and requirements for such an assignment. It can streamline grading, reduce bias, and promote fairness. Using the rubric tool within Canvas makes grading with rubrics even more convenient. Moreover, students can use the rubric as a guide in completing the assessment and, potentially, in self and peer assessment, enhancing reflection and metacognition.
What is a rubric?
A rubric is “a scoring tool that lists the criteria for a piece of work, or ‘what counts’” and “articulates gradations of quality for each criterion, from excellent to poor” (Goodrich, 1996). It is not the same as a checklist; however, a checklist is a suitable alternative to a rubric when a simple yes or no rating for each criterion is appropriate. A rubric also differs from a Likert scale in that it describes the performance for each criterion (Brookhart, 2018; Ragupathi & Lee, 2020).
Rubrics are typically used for assessments that require a task or performance (Popham, 1997), not those in which students are expected to provide an agreed-upon correct response, such as a multiple-choice or true/false question.
Popham (1997) defines three dimensions of a rubric:
- Evaluative criteria: What is being measured?
- Quality definitions: How does a student demonstrate complete mastery of the criteria, no mastery, or some degree in between?
- Scoring strategy: Will performance be assessed holistically, or will components of the performance be analyzed? Rogers (2018) presents a concise analogy for these approaches to scoring strategy: “How well the engine runs overall can be judged. Such a judgment is holistic. The engine consists of parts that work together to make the whole engine run well or poorly; they can be examined separately for quality. This would be analytic.” (A short sketch contrasting the two strategies follows this list.)
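To make the distinction concrete, here is a minimal Python sketch; the criterion names and point values are hypothetical, chosen to echo the history example later in this article. An analytic score sums per-criterion ratings, while a holistic score is a single judgment of the whole performance.

```python
# Analytic scoring: each criterion is rated separately,
# and the parts sum to the total score.
analytic_scores = {
    "understanding of the selected event": 18,  # out of 20
    "persuasiveness of the argument": 15,       # out of 20
    "use of evidence": 17,                      # out of 20
}
print(f"Analytic total: {sum(analytic_scores.values())}/60")  # 50/60

# Holistic scoring: a single overall judgment of the whole performance.
holistic_rating = "Proficient"
print(f"Holistic rating: {holistic_rating}")
```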
Andrade (2005) and Ragupathi and Lee (2020) also differentiate between scoring rubrics and instructional rubrics. Scoring rubrics are used in summative assessment: The instructor uses them to rate student work, and students might not even be given the rubric in advance. In contrast, instructional rubrics are used for formative assessment: Students are expected to use the rubric to guide their process, to assess their work along the way, and to conduct peer review.
What are the benefits of rubrics?
Rubrics can help students and instructors in many ways, including the following:
Articulating outcomes and expectations
A well-designed rubric closes the loop of backward design: The instructor begins with measurable learning outcomes and then uses the rubric to determine whether the student has achieved those outcomes. This alignment ensures that the assessment addresses the desired outcomes rather than irrelevant or trivial criteria, making it more effective and efficient.
Streamlining the grading process
Once created, a rubric can save time in grading student work. Andrade (2005) notes, “Giving focused feedback is wildly time consuming. A good rubric allows me to provide individualized, constructive critique in a manageable time frame.” Panadero and Jonsson (2020) identified additional studies finding that the time invested in creating rubrics was recouped once the rubrics were implemented.
Increasing fairness and transparency of assessment
When the assessment criteria and expectations are stated explicitly, there is less room for vagueness (“I know excellent work when I see it”). This is especially helpful when multiple graders will be looking at the work; for example, in a class with one or more teaching assistants.
Providing transparent assessment criteria also promotes equity in grading. Wolf and Stevens (2007) point out, “In academic environments, we often operate on unstated cultural assumptions about the expectations for student performance and behavior and presume that all students share those same understandings.” Those unstated assumptions particularly penalize first-generation students, who may not have had prior exposure to those expectations, and neurodivergent students, who benefit from explicit, literal explanations of expectations.
As Feldman and Reed Marshall (2020) observe, “By explicitly describing what it means for students to succeed, educators create a safeguard that can prevent us from inadvertently bringing biased assumptions or hidden expectations to our evaluations. Everyone—whether students or the teacher—uses the same criteria to judge performance, and everyone in the class is held equally accountable to those criteria.”
Reducing student anxiety around assessment
Andrade and Du (2005) found that using rubrics allowed students to “focus their efforts, produce work of higher quality, earn a better grade, and feel less anxious about an assignment.”
Reducing student anxiety can also help promote academic integrity. Students are more likely to cheat when they lack self-efficacy; that is, when they don’t feel confident they can complete the assignment successfully (Lang, 2013). Providing students with clear expectations can give them the guidance they need to succeed through their own effort.
Supporting student self and peer assessment
Andrade and Du (2007) found that when students used rubrics for reflection and revision while working on an assessment, they discovered that “careful self-assessment could help them do better work and get better grades.” However, those self-assessment practices did not carry over to courses where instructors did not encourage them.
Peer review is an authentic strategy for fostering collaboration and community, but students often struggle with assessing their classmates fairly. They might be unclear on the criteria, or they might be so concerned about hurting a classmate’s feelings that they provide bland, unhelpful feedback. Asking students to use a rubric addresses both of these issues. Indeed, Canvas peer review assignments are structured to allow students to use the same rubric the instructor uses, though the peer ratings are not factored into the assignment grade.
What are the challenges of rubrics?
Not everyone believes rubrics are helpful for students or faculty. Some common arguments against using rubrics include the following:
Time investment to create rubrics
Wolf and Stevens (2007) acknowledge that constructing a thoughtful rubric takes time. This cost must be weighed against the time saved once the rubric is in use. In addition, generative AI tools may now help streamline the drafting of rubrics.
Fear of "criteria compliance"
Panadero and Jonsson (2020) analyzed 27 publications critical of the use of rubrics, synthesized the most common arguments against their use, and presented evidence to refute those arguments. One of the most common criticisms leveled against rubrics is that they promote “criteria compliance”: that students will follow the rubric in order to meet a minimum standard and will not strive for excellence beyond those criteria (Panadero & Jonsson, 2020).
Bearman and Ajjawi (2018) echo this concern: “The students themselves can use the written criteria to control their own experiences. Many students seek to use the written criteria to pass the assessment rather than learn.” Ito (2015) quotes a professor who summarizes this criticism: “Rubrics may set limitations for students; they may just make enough efforts to meet the criteria of rubrics and do no more than that.”
Other studies find that while some students might use a rubric as a “cookbook,” others use it to guide deeper learning. Moreover, prioritizing grades over learning can be an issue whether rubrics are provided or not. As Panadero and Jonsson (2020) point out, “It is also important to remember that there is almost always a need for students to comply with the teacher’s expectations, but with access to rubrics some students seem to feel more confident, experiencing less anxiety, which means that they may become more motivated to focus on learning, without the fear of failure.” Even alternative grading structures that avoid points or letter grades often provide rubrics to establish standards for competence or mastery.
Consider, too, that for every student who might not strive beyond the standards described in the rubric, many others will gain the structure and guidance they need to meet standards they would otherwise have fallen short of.
Finally, consider that a rubric can set standards of excellence as well as those for satisfactory performance. As Wolf and Stevens (2007) note, “The challenge then is to create a rubric that makes clear what is valued in the performance or product—without constraining or diminishing them.”
Capturing assignment complexity
Other critics argue that it is impossible to capture an assignment’s nuances or complexity in a rubric. Sadler (2009) recommends a holistic assessment approach instead, stating, “By limiting itself to preset criteria, [a rubric] cannot take into account all the necessary nuances of expert judgments. Neither can analytic appraisal … represent the complex ways in which criteria are actually used.”
Guiding students in using rubrics
As noted, one common criticism of rubrics is that they encourage students to take a “cookbook” approach to completing their assessments, following the rubric as if it were a recipe, with the goal of earning a top grade rather than learning the material. To counter this tendency, the instructor must be explicit about why the rubric criteria were established and how they align with the course outcomes, which adds to the time required to implement the rubric.
What are recommendations for creating rubrics?
To provide the most benefit for you and your students, rubrics must be created and used with intention and care (Andrade, 2005; Turley & Gallagher, 2008; Ito, 2015).
Determining the evaluative criteria
“Because we get what we assess, we must assess what matters” (Andrade, 2006). Your guiding questions in defining the evaluative criteria for your assessment are
- What matters?
- Why does it matter?
Rubric criteria should be construct-relevant; that is, they should emphasize the knowledge, skills, or abilities being measured by an assessment (CAST, n.d.). Start by identifying which module or course outcomes the assignment assesses. What knowledge or skills will students demonstrate by completing the assignment? If your stated learning outcomes are measurable, these can transfer easily to your rubric.
For example, if one of your course outcomes is “Formulate a persuasive argument on a significant event in nineteenth century U.S. history,” appropriate rubric criteria could include “demonstrates an understanding of the selected event” and “communicates persuasively about that event.”
How much weight you assign to each criterion should also be determined by your learning outcomes. If you teach a research writing class where students are expected to demonstrate mastery of your discipline’s citation format expectations, it would be appropriate to give more weight to those criteria. However, for the learning outcome “Formulate a persuasive argument on a significant event in nineteenth century U.S. history,” criteria such as formatting, spelling, and mechanics should be given less weight, with points deducted only when errors interfere with the student’s ability to communicate persuasively.
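As a worked illustration (the point allocation below is hypothetical, not a recommendation), a 100-point analytic rubric aligned with that outcome might weight the criteria like this, with the total computed as the weighted sum of the ratings:

```python
# Hypothetical 100-point rubric for the outcome "Formulate a persuasive
# argument on a significant event in nineteenth century U.S. history."
weights = {
    "understanding of the selected event": 40,  # core outcome
    "persuasiveness of the argument": 40,       # core outcome
    "organization and clarity": 15,             # supports communication
    "formatting, spelling, and mechanics": 5,   # deduct only when errors impede meaning
}
assert sum(weights.values()) == 100

# One student's ratings, expressed as the fraction of each criterion met.
ratings = {
    "understanding of the selected event": 0.9,
    "persuasiveness of the argument": 0.75,
    "organization and clarity": 1.0,
    "formatting, spelling, and mechanics": 0.8,
}

score = sum(weights[c] * ratings[c] for c in weights)
print(f"Total: {score:.0f}/100")  # Total: 85/100
```

Under this allocation, a citation-format slip that does not impede the argument costs at most a point or two, while a weak grasp of the event itself costs far more.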
Establishing the quality definitions
How will you differentiate excellent performance from merely satisfactory performance? When determining how many performance levels to include in your rubric, make sure there is a measurable difference between them. If you have five levels but can barely distinguish level 3 from level 4, reduce the number of levels until each represents a meaningful difference. Having fewer performance levels also improves the reliability and efficiency of the rubric; as ratings become more granular, distinctions between them become more subjective (Wolf & Stevens, 2007; Ito, 2015).
In labeling your performance levels, try to avoid negative language. For example, instead of describing performance as “poor” or “unsatisfactory,” consider “novice” or “developing.” As Wolf and Stevens (2007) note, such language is more respectful of the learner and emphasizes the potential for growth.
When writing the evaluative criteria and quality definitions, Goldberg (2014) recommends using parallel structure in word choices and syntax. For example, ensure that each quality definition is a phrase or a complete sentence, not a mix of phrases and sentences, and use the same verb tense for each definition. Consistency in language reduces the cognitive load of reading and interpreting the rubric for both the instructor and the student (Goldberg, 2014).
Using rubrics as an instructional tool
Research suggests that rubrics are most effective when used as instructional tools: when students receive the rubric along with the assignment, when the criteria are discussed with them, and, especially, when students are asked to use the rubric for self and peer assessment.
Andrade (2001, 2005) adds that effective instructional rubrics are written in language that the students understand and address common pitfalls seen in student work, offering students a path toward avoiding those pitfalls in their own work.
When introducing the assignment and its rubric, consider providing specific examples that meet various levels of performance, and point out how assignments that received less than full scores missed the mark. Then you could share additional examples and ask students to provide their own rubric scoring (see Wolf & Stevens, 2007, and Goodrich, 1996). This can help prepare students for assessing their own work and participating in peer reviews. Moreover, presenting both strong and weak examples can strengthen students’ critical thinking skills (Wolf & Stevens, 2007).
In the best-case scenario, when instructional rubrics are used in teaching, “students come to see assessment as a source of insight and help instead of its being the occasion for meting out rewards and punishments and they tend to use the feedback to improve on their assignments” (Andrade, 2005).
Creating rubrics within Canvas
Canvas allows you to attach a rubric to a discussion or an assignment, which further streamlines the grading process: When grading a submission, you select the rating for each criterion, and the points add up automatically to produce the student’s score.
Visit Rubrics Overview (Instructors) to watch a video, and see the Canvas Instructor Guide for specific guidance.
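For instructors and instructional designers who manage rubrics across many course sections, rubrics can also be created programmatically. The sketch below is illustrative only: it assumes the Canvas REST API’s create-rubric endpoint (POST /api/v1/courses/:course_id/rubrics) and its form-style parameter names as described in the public Canvas API documentation, and the URL, token, IDs, criterion text, and point values are all placeholders. Verify the details against your institution’s Canvas instance before relying on it.

```python
import requests

# Placeholder values: substitute your institution's Canvas URL, a valid
# API token, and real course and assignment IDs.
BASE = "https://canvas.example.edu/api/v1"
TOKEN = "YOUR_API_TOKEN"
COURSE_ID = 12345
ASSIGNMENT_ID = 67890

payload = {
    "rubric[title]": "Persuasive Argument Essay",
    # One criterion with two rating levels; add more criteria and
    # ratings by incrementing the bracketed indexes.
    "rubric[criteria][0][description]": "Demonstrates understanding of the selected event",
    "rubric[criteria][0][points]": 40,
    "rubric[criteria][0][ratings][0][description]": "Thorough, accurate understanding",
    "rubric[criteria][0][ratings][0][points]": 40,
    "rubric[criteria][0][ratings][1][description]": "Developing understanding",
    "rubric[criteria][0][ratings][1][points]": 20,
    # Associate the rubric with an assignment and use it for grading.
    "rubric_association[association_type]": "Assignment",
    "rubric_association[association_id]": ASSIGNMENT_ID,
    "rubric_association[use_for_grading]": 1,
    "rubric_association[purpose]": "grading",
}

resp = requests.post(
    f"{BASE}/courses/{COURSE_ID}/rubrics",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data=payload,
)
resp.raise_for_status()
print(resp.json())  # The newly created rubric and its association.
```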
References
Andrade, H., & Du, Y. (2005). Student perspectives on rubric-referenced assessment. Practical Assessment, Research & Evaluation, 10(3).
Andrade, H. G. (2001). The Effects of Instructional Rubrics on Learning to Write. Current Issues in Education, 4(4).
Andrade, H. G. (2005). Teaching with Rubrics: The Good, the Bad, and the Ugly. College Teaching, 53(1), 27–30.
Andrade, H. L. (2006). The Trouble with a Narrow View of Rubrics. The English Journal, 95(6), 9.
Bargainnier, S. (2003). Fundamentals of Rubrics. Pacific Crest. University of Idaho.
Bearman, M., & Ajjawi, R. (2018). From “Seeing Through” to “Seeing With”: Assessment Criteria and the Myths of Transparency. Frontiers in Education, 3.
Bearman, M., & Ajjawi, R. (2021). Can a rubric do more than be transparent? Invitation as a new metaphor for assessment criteria. Studies in Higher Education, 46(2), 359–368. https://doi.org/10.1080/03075079.2019.1637842
Beckett, G. H., Amaro-Jiménez, C., & Beckett, K. S. (2010). Students’ use of asynchronous discussions for academic discourse socialization. Distance Education, 31(3), 315–335.
CAST. (n.d.). UDL On Campus: UDL and Assessment.
Dawson, P. (2017). Assessment rubrics: Towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360.
Feldman, J., & Reed Marshall, T. (2020). Empowering Students By Demystifying Grading: Giving students more insight into performance expectations increases their learning agency. Educational Leadership, 77(6), 49–53.
Gallardo, K. (2020). Competency-Based Assessment and the Use of Performance-Based Evaluation Rubrics in Higher Education: Challenges towards the Next Decade. Problems of Education in the 21st Century, 78(1), 61–79.
Goldberg, G. L. (2014). Revising an Engineering Design Rubric: A Case Study Illustrating Principles and Practices to Ensure Technical Quality of Rubrics. Practical Assessment, Research, and Evaluation, 19(1), Article 1.
Goodrich, H. (1996). Understanding rubrics. Educational Leadership, 54, 14–17.
Ito, H. (2015). Is a Rubric Worth the Time and Effort? Conditions for Success. International Journal of Learning, Teaching and Educational Research, 10(2), 32–45.
Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840–852.
Leader, D. C., & Clinton, M. S. (2018). Student Perceptions of the Effectiveness of Rubrics. Journal of Business & Educational Leadership, 8(1), 86–99.
Lipnevich, A. A., Panadero, E., & Calistro, T. (2023). Unraveling the effects of rubrics and exemplars on student writing performance. Journal of Experimental Psychology: Applied, 29(1), 136–148.
Panadero, E., Alonso-Tapia, J., & Reche, E. (2013). Rubrics vs. Self-assessment scripts effect on self-regulation, performance and self-efficacy in pre-service teachers. Studies in Educational Evaluation, 39(3), 125–132.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144.
Panadero, E., & Jonsson, A. (2020). A critical review of the arguments against the use of rubrics. Educational Research Review, 30, 100329.
Pérez-Guillén, S., et al. (2022). Students’ perceptions, engagement and satisfaction with the use of an e-rubric for the assessment of manual skills in physiotherapy. BMC Medical Education, 22, 623.
Ragupathi, K., & Lee, A. (2020). Beyond Fairness and Consistency in Grading: The Role of Rubrics in Higher Education. In C. S. Sanger & N. W. Gleason (Eds.), Diversity and Inclusion in Global Higher Education: Lessons from Across Asia (pp. 73–95). Springer.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435–448.
Sadler, D. R. (2009). Indeterminacy in the Use of Preset Criteria for Assessment and Grading. Assessment & Evaluation in Higher Education, 34(2), 159–179.
Turley, E. D., & Gallagher, C. W. (2008). On the “Uses” of Rubrics: Reframing the Great Rubric Debate. The English Journal, 97(4), 87–92.
Wolf, K., & Stevens, E. (2007). The Role of Rubrics in Advancing and Assessing Student Learning. Journal of Effective Teaching, 7(1), 3–14.
Wu, X. V., Heng, M. A., & Wang, W. (2015). Nursing students’ experiences with the use of authentic assessment rubric and case approach in the clinical laboratories. Nurse Education Today, 35(4), 549–555.
Wyss, V. L., Freedman, D., & Siebert, C. J. (2014). The Development of a Discussion Rubric for Online Courses: Standardizing Expectations of Graduate Students in Online Scholarly Discussions. TechTrends, 58(2), 99–107.