CCSSO expectations
CCSSO 2008: Expectation 2, Element B: Rigorous Curriculum and Instruction
B-1 Develops shared understanding of rigor
B-2 Guides alignment process
B-3 Monitors curriculum and instructional system effects
B-4 Uses research- and data-based strategies and practices

When criteria are explicit and clear, inter-rater reliability is likely to be higher. As discussed in Chapter 3, clear criteria also inform instruction and guide student learning. However, clarity is usually associated with specificity, and writing detailed criteria is labor intensive. For example, consider what would be involved in establishing specific criteria for evaluating the degree to which a student developed “a shared understanding of rigorous curriculum and instruction” in a reflective journal. Most departments will likely defer developing criteria statements, and in many cases the standards descriptions will become the de facto set (at least initially). The question each department must address is how specific the criteria must be to guide evaluation and contribute to program development. Criteria can be refined as departments study performance results and examine student work in later steps of the process. Once criteria have been selected, the next task is to incorporate them as statements in an assessment rubric.

Step 5: develop the assessment rubric

An assessment rubric identifies the characteristics of possible responses along a continuum from exceptional to unacceptable. The rubric can be designed to address each assessment criterion individually or to combine a number of criteria for a holistic evaluation. Decisions about individual versus holistic scoring will affect the complexity of the evaluation task. Although a case can be made for scoring each criterion individually, our experience showed that the behaviors associated with the assessment products we received from students were complex and variable, so it made more sense to evaluate how well multiple expectations were integrated in a student’s response using a holistic approach. A related decision in rubric design involves how many performance levels to use for each criterion and what to name them. Too many categories can make it difficult to describe how one level of performance differs from another, so we recommend starting with three: Exceeds Standards, Meets Standards, and Does Not Meet Standards.

One of the acknowledged purposes for developing performance assessments is to provide evidence to accreditation agencies that students have met the criteria for professional licensure or certification. Among other things, accreditation reviewers will be looking for alignment between the standards expectations and the performance assessments used to judge competence. Incorporating the language of the standards in the rubrics used to evaluate student performance makes the logic of alignment transparent and simplifies accreditation review. However, using the language of the standards often results in generalized descriptions of the evaluation criteria, which is not particularly helpful for students trying to write to the rubric. Providing students with templates and sample responses for each assignment is one way to overcome vague rubrics written to ensure that alignment is clear.

Source:  OpenStax, Performance assessment in educational leadership programs; james berry and ronald williamson, editors. OpenStax CNX. Sep 26, 2009 Download for free at http://cnx.org/content/col11122/1.1