Rubric Design

Colleagues' insightful comments really helped me to crystallise my ideas on how to use rubrics in my future teaching.

Hitomi
'Hi all, For me, one 'important design element for effective assessment' is 'rubric design'.
I must admit that I haven't really considered rubrics in my experience so far. I think part of the reason is that in Japan, where I studied and started my career, it is not common practice to use rubrics, and teachers don't really show a rubric when scoring assessments. The other reason is that I expect students to be creative in designing assessment. I don't want their 'thinking' to be restricted by the rubric.

But it is true that 'graduate and undergraduate students value rubrics because they clarify the targets for their work' (Reddy and Andrade, 2009), and it is also more transparent on what basis the assessment is marked. On the other hand, Reddy and Andrade (2009) argue that 'there is evidence of both positive responses and resistance to rubric use', and 'more research is needed on validity and reliability of rubrics'.

I personally feel (also as a student of this course) that rubrics are still helpful, but I understand that this is tricky. Therefore I think 'rubric design' is an important element to be examined well for the expected outcomes (assisting learning, transparent scoring, etc.).

Best wishes,
Hitomi'


Coralie
' Hi Hitomi, Like you I feel that rubrics have many advantages. One of the limitations, not related to the concept of a rubric itself, is what the rubric contains. For example, a criterion in a rubric that says "includes ten articles in the reference list" seems not particularly useful: how does the teacher know whether the articles were read and how they contributed to the student's learning? I think it is more useful to have a criterion that relates to understanding readings and applying that knowledge in the assignment.
What if the students designed a rubric for an assessment item? Has anyone tried this or read about it?
cheers, Coralie'

Shane
'Hi Everyone, Thanks for raising the topic of rubrics, Hitomi. Coralie and I would like to pose the following question to everyone to facilitate conversation around this topic.
The question is: in theory, rubrics are important and useful; however, in your experience and in your context, what is it about rubrics that has not worked? Did you manage to fix the problem in subsequent iterations of the rubric?
Cheers,'

Dalma
' I am glad to see this topic in the forum, as Shane and I discussed it just earlier today.
In my perception, rubrics can be useful for both students and academics, as they can provide guidance for study and can help a lot in marking, but there are several downsides as well, which might make them less worthwhile, depending on the subject matter and the type of work. As lawyers say: it depends. And it sure does, on several factors.
One downside is that they risk transforming the assessment into a mechanical exercise of satisfying the rubric: ticking the boxes and working for the assessment itself, instead of using the assessment as a check of actual knowledge and skills. I am interested to see how a student understands or knows something, but the more assessment criteria and the more detailed rubrics I provide, the more the students' focus will shift from showing their knowledge and skills to satisfying my listed expectations and trying to fit into the square box I create, instead of thinking freely. Ultimately, students can learn to play the system, which can easily push them towards surface learning. In the extreme, any assessment can then become similar to an IELTS test, where you can get a high score by knowing the test mechanism instead of actually knowing the language at that level.
The second downside is that rubrics can be perfect if you can quantify expectations, but they are not easy to use (if at all) if you want to assess quality, creativity or critical thinking. Levels of thinking may be defined with adjectives in the rubric, but these may not say much to the students working towards the assessment. For example, I may define a criterion for an HD as "a solution to the client's case found through an innovative approach" or "finding a creative alternative solution to the client's case", compared to a DI as "a solution to the client's case that would stand in court", but this would not help the students any more than what the assessment criteria and the assessment instructions offer anyway. The problem is that these rubrics require the highest level of knowledge and understanding in order to see the difference between the levels and in order to know how to satisfy any of them. If there is any other way of defining rubrics for this type of assessment, I would be very interested.
Finally, yet another problem arises in problem-solving assessments, where I find it hard to see how to transform the expected content into rubrics that don't actually give away the answers. I use such rubrics as a marking guide for myself, but with room for flexibility. Unless there is only one right way of answering something, rubrics may take away the flexibility of achieving an excellent result by different means. Having ten authorities listed as a condition for an HD seems completely unsuitable when a student might use only one, but in a way that serves its purpose much better than the other nine altogether, or when the student's original ideas are worth a million compared to any of the authorities available out there. Similarly, expecting students to find and use one particular case (in law) is too rigid, if another student may find and use a different case that does not initially seem as relevant, but uses it in a way that just makes it perfect.
Assessing thinking instead of regurgitated knowledge does not seem to fit easily into rubrics. For this reason, and because in law everything is debatable, creating a rigid system of rubric-based assessment can become very counter-productive, inhibiting the exact way of thinking we try to develop in our students.
In conclusion, I am not saying that rubrics are bad per se; I am only saying that they are not suitable for every assessment. I am, of course, open to and interested in evidence to the contrary.
Dalma'

Gemma
'Hi Dalma,
I agree about the limitations of rubrics in properly assessing creative and imaginative projects. As the creators of a rubric we are limited by our own experiences, but exceptional students can draw from a different and often extremely broad range of their own experiences and so come up with something you hadn't thought of.
Also, at the other extreme, in introductory courses, it might be difficult to use rubrics for all assessments. I teach Intro Physics, and at least some of the assessment has to be around numbers, traditional 'tests' if you will, hence a rubric might not be useful. I would, however, like to mention that I have certainly broadened my own thoughts on assessment for Intro Physics after reading about constructive alignment and Bloom's taxonomy. One thing that has always struck me about quantitative marking is that students can pass even if they just don't get it. This was well put by the example (I forget where) of the surgery student who could tick all the boxes about neat stitching, precise cutting, timeliness, etc., but removed the wrong organ. Would that be picked up by a rubric? High ticks for all but one criterion - that would make an HD! But the student really should fail.
I think, however, that it is in designing the course learning outcomes, the assessment, and any applicable rubrics all together that we allow scope for rich learning. We need to follow our own teaching and think creatively and 'out-of-the-box' for the whole course design, not just try to fit a rubric to what we already assess, or keep the same learning activities and assessments that have always been used. I dream of an Intro Physics course without 'tests'. If only I was given the opportunity.
Cheers,
Gemma.'

Nell
'I have found the Business Assessment Grid provided by Price et al (2004) to be a wonderful resource and something that I will definitely store for future reference. There are certainly areas within the Grid that I would be open to sharing with students to increase their overall understanding of both assessment expectations in grading and the meaning of certain assessment terminology, which further builds on Oliver et al's (2005) concepts of formulating clear learning outcomes and graduate attributes. The detailed discussion and use of rubrics (Reddy et al 2010) has increased my personal insight into this as a method for use in future teaching practices. While, as Gemma has discussed, the rubrics used do have the potential to be limited by our own experiences, the ability of rubrics generally, within assessment items, to increase overall student clarity about their learning and quality targets is invaluable. However, I do believe that giving students access to rubrics before/with the assessment item is crucial for transparency in assessment expectations.'

Hitomi
'Hi Everyone
Thank you for sharing your insightful thoughts on 'rubrics'.

I have a similar concern to Dalma's - similar to Law, 'Urban and Regional Planning' is also debatable, and 'rubrics' can be tricky.
But I still think that rubrics are helpful for students and teachers. I usually receive questions from students about the expectations for an assessment, and I've found that students have varied understandings of the 'assessment criteria'. My impression is that just describing the assessment criteria still leaves things unclear - of course it depends on the discipline and unit.

There is still a lot to think about in using and designing rubrics!
Hitomi'

Shane
'Hi Hitomi, I am aware of cases where, given the time and opportunity, staff have developed rubrics with their students: what the criteria for an assessment item/performance look like, along with what would constitute an HD, D, C, etc. This helps develop a shared understanding. For assignment 2b, add valued resources like the one you mentioned about rubrics, or link to them, to your portfolio with some detail about your context and why you added it. Cheers, Shane.'

