In the forum discussion, one of my colleagues suggested a great assessment design. Gemma designed it especially for introductory courses, but I think it can be applied to many units generally, and I will refer to it next semester.
Gemma
'With my area being an introductory course, I think one very important
part of assessment is teaching the students how to do the assessment.
This isn't necessarily more important than other aspects, but I feel it
is often overlooked.
Teaching 'how' will make assessment a lot less stressful for the
students, and you will then be able to assess how well they have learnt
the content, as opposed to how well they can deal with technical
trouble, or how cluey they are at finding help. This includes:
1) Examples where appropriate - give the student an example of a
good essay on the same or a different topic, and point out why it was good
in terms of content and structure.
2) Practice in the means of assessment - such as practice quizzes to
allow the students to know what it is like and what to expect,
especially in online assessment and where technology is involved.
3) Adequate feedback, not just about the content but about how they
went about it - tell them if they demonstrate good content knowledge
but should check the formatting guide.
4) Ask for and listen to suggestions - students will tell you what
they need help with, and as a teacher you can say that it is OK to ask
not only about the content but also about the means, how the technology works,
etc. We are embedded in this every day and don't always know what a
particular cohort of students doesn't know until we ask.
Moderation Strategy
In the forum discussion, some colleagues referred to ALTC's moderation strategy and assessment moderation toolkit.
http://resource.unisa.edu.au/course/view.php?id=285
The toolkit is based on key principles identified by research under an ALTC project on moderation for fair assessment in transnational learning and teaching.
I don't need to consider moderation so much now, as my unit is small and I am the only marker. But I think this is a useful toolkit that I will utilise in future when I need it.
Feedback approach
I enjoyed reading 'The mythology of feedback' (Adcroft 2011). This paper asks: do academics and students share the same feedback mythology? And if there is only a limited sharing of feedback mythology, how does the dissonance this creates manifest itself?
For me, the most interesting argument in this paper is: 'effort put into feedback that is not focused on assessment, despite the investment of time and resources, is simply not seen as feedback by students'; 'students cannot learn from feedback if they do not recognise that they are receiving feedback or if they are only interested in the marks they receive on assessed work'.
I also agree that 'accurate measurement of feedback effectiveness is difficult and perhaps impossible' (Price et al. 2010).
My feedback approach is:
- Make an effort to assist student learning and help students achieve their learning goals.
- To do this, it is important to understand students' expectations (I found the one-minute paper useful).
- Check whether the intended learning outcomes and assessment criteria (and rubric) really address what they claim to for the unit.
Feedback
This is the 'feedback' section of the UC Assessment Policy:
SECTION 4 FEEDBACK
4.1 Key principle: Students will be provided with timely and constructive feedback on assessment items that is explicitly related to the learning outcomes of the unit. Feedback will support student learning and include advice on how performance can be improved.
4.2 In the context of assessment, feedback is information returned to students on their
progress in their course or unit. The purpose of feedback is to provide students with information on:
(a) what they have learnt and how effectively they are learning;
(b) what standard of performance they have achieved; and
(c) what they need to do to improve that standard of performance.
4.3 Students will be provided with feedback on all assessment items whether they count towards a grade or not.
4.4 Students will be given feedback on assessment items at an early stage after a unit commences, particularly in the first semester or year of a course.
4.5 Both qualitative and quantitative feedback are necessary for student learning.
4.6 Students will be given the opportunity to discuss their performance and the feedback they have received with an appropriate member of the academic staff.
4.7 Assessment by academic staff should be accompanied by opportunities for students to assess both their own performance (self-assessment) and the performance of others (peer assessment).
Can student feedback questionnaires improve students' learning?
In Module 1, we discussed 'Can student feedback questionnaires improve students' learning?'
We agreed 'yes', but that it really depends on the circumstances, such as the questions, purpose, and context.
Janet's comment describes the 'circumstances' very well.
'I say "yes" with an "if" and a "but" to this forum question. Yes, student feedback can improve student learning primarily by instigating critical reflection. With a cursory nod to Prosser, change for change sake is indeed counter productive. Change as a result of evaluation, reflection, contemplation and comparison can very much be worth while. One can always improve... right?
Hence, the "if". Student feedback can improve learning if the instructor is open to feedback, if the right questions are asked, if the student's opinion is actually valued and if meaningful changes are thoughtfully applied. The but: student feedback should be correlated with other methods of evaluation of teaching effectiveness. Viewed in isolation, student feedback has limitations.'
My comment was:
'I agree with Janet. I think student feedback would improve students' learning, but this may require some specific conditions to be met, such as:
- the student lens is consistent with the expected learning outcomes, so their feedback actually reflects this;
- teachers really take student feedback into account in their teaching.
In my experience, student feedback is helpful in giving me opportunities to realise/understand what students' expectations are. These may not always be meaningful or consistent with the learning outcomes from the teacher's point of view, but I think this is significant information for understanding the gaps between students' and the teacher's perceptions, which then helps in considering what should be done to improve teaching.
Best wishes,
Hitomi'
I distributed a 'one-minute paper' to students in my class, following Coralie's advice, and asked about their expectations of this unit. Interestingly, their expectations were similar to the intended learning outcomes, but varied.
Below are some comments from students, which I refer to and utilise in designing the unit.
'I attempt to understand what factors affect the infrastructure planning'.
'be able to understand the various stakeholders in infrastructure planning'.
'My goal is to understand the infrastructure planning process, especially in Australia'.
'I am expecting to understand the process, the issues, and how to solve the issues by using professional skills'.
'I would like to know about the difficulties & challenges of infrastructure planning in Australia. Moreover, I want to know the solutions to these challenges and how to apply them to practice.'
'To learn about types of infrastructure, and the implications of choosing one option over another'.
'To be able to present educated arguments for best practice in infrastructure development and delivery'.
'To understand public/private partnership arrangements and how they are negotiated on what basis.'
'To understand the relationship between infrastructure planning and delivery and human behaviour'.
Norm-referencing VS Criterion-referencing methods
In my discipline (urban and regional planning), criterion-referencing is the 'norm'. But we certainly cross-check our marking to make sure that our assessment is consistent, which means we do, in a sense, 'compare' assignments and marks. We discussed norm-referencing and criterion-referencing methods in the forum. It was interesting to learn that some colleagues have worked in environments where norm-referencing is the standard. We need to decide which referencing approach is appropriate for the field and, most importantly, for the intended learning outcomes.
Hitomi
'Hi all,
I read this article with interest (suggested by Shane).
http://www.cshe.unimelb.edu.au/assessinglearning/06/normvcrit6.html
'There is a strong culture of norm-referencing in higher education.'
'The goal of criterion-referencing is to report student achievement against objective reference points that are independent of the cohort being assessed.'
'Norm-referencing, on its own — and if strictly and narrowly implemented — is undoubtedly unfair. With norm-referencing, a student’s grade depends – to some extent at least – not only on his or her level of achievement, but also on the achievement of other students. For example, a student who fails in one year may well have passed in other years! '
'Criterion-referencing requires giving thought to expected learning outcomes: it is transparent for students, and the grades derived should be defensible in reasonably objective terms – students should be able to trace their grades to the specifics of their performance on set tasks.'
In my case, I apply criterion-referencing methods as a) my student cohort is very small (around 10), and b) my discipline (urban and regional planning) requires cross-cutting knowledge and skills. Some students might be very good with particular skills but not with others. With criterion-referencing, students can identify the particular areas in which they are weak and need improvement. This may guide their future learning.
It is sometimes difficult for me to explain the results well at the faculty assessment board meeting. At the meeting, we discuss results based on a 'norm-referenced distribution'. I understand that there are cases where norm-referencing works well, but I don't feel comfortable applying it, for the reasons above.
Which is appropriate in your case?
Best wishes,
Hitomi'
Dalma
'Hi Hitomi!
I am familiar with both systems, as at my former university there was a bell curve used for norm-referencing, while in my current discipline we all work with criterion-referencing. We include (more or less) clearly defined assessment criteria in the unit outlines, and define them in further detail for marking. (The criteria listed in the unit outline do not contain the expected answers to an assessment question, of course.) The criteria are aligned with the learning outcomes, including both knowledge and skills, but are weighted differently - e.g. getting an answer 'right' is obviously more important, even if presented in an unedited form, than having a perfectly edited submission with absolutely wrong content.
In spite of the expectation to use criterion-referencing, the reporting to the Assessment Board has to contain the percentage of each grade, and I've heard about colleagues having difficulties justifying grades that were not spread across the entire range. I never had this problem, as all my units where results were close to each other had small cohorts, either consisting of a small elite or making very effective teaching possible, both of these being accepted as justification for grades generally above the discipline average.
Finally, I also perform a cross-reference as a final review of the marking I've completed, not to compare the results, but to make sure that the criterion-based evaluations are consistent across the entire cohort and not influenced by external factors, like me getting tired while marking or frustrated by bad submissions.
Dalma'
Carlos
'Hi Hitomi, Dalma, very interesting discussion. Personally I have only become aware of the characteristics of norm referencing and criterion referencing methods through this module and the readings.
Like Dalma, I come from having taught in a very strong norm-referencing system. The use of bell curves was very strict in its percentages, across all disciplines in a big, state-funded university. Furthermore, within an Asian context, study was highly valued culturally, and the university constantly promoted its ranking and the high quality of its "selected students", who were considered an "elite". The whole society in my previous teaching context values the idea of "meritocracy".
Although I quite like criterion-referencing methods, I think there is no black and white. For the sake of discussion and critical thinking, at this stage I would like to highlight a few aspects of norm-referencing. Although in this context it appears as an "unfair way of assessing, mainly by comparing and ranking the students", I would like to highlight that competition and comparison are a reality in our world. People compete for jobs, in sports, and for pure entertainment. Industries offer their products in the market, competing with similar products, and we as buyers and consumers make daily choices based on comparison of products. Competition motivates improvement. Without going into the exact definitions of the words or semantics, the origins of the words "competency" (based assessment) and being "competent" seem to me to be related to "competition". Furthermore, we live in a society with norms, codes of conduct, and professional protocols. Completely avoiding norms, or avoiding comparison and competition in assessment, could be, from this point of view, disconnected from reality.
Possibly a balance between criterion and norm referencing might give good results?'
Shane
'Hi Everyone,
This is a very lively and interesting discussion.
I would like to build on the thoughts around competition, in particular how criterion-referenced assessment approaches don't exclude the possibility of incorporating competition or competitive behaviour.
My contribution stems from a seminar I attended at UTS, where a software system had been created that allowed academic staff to load their assignment criteria and then grant access to students. Students would use the system to self-assess their own assignments (a useful learning experience in itself when compared against the teacher's judgements) and then the teacher would assess the work. The competitive element is contained in point 3 below.
The demonstrated impact of what the system allowed students and staff to do was multifaceted:
- students could self-assess using the system and then compare against the teacher's assessment - this resulted in students (over the course of multiple assessments) developing a better sense of the quality of their own performances;
- teachers could save time by focusing detailed feedback on areas of disparity between the student's self-judgement and the judgement of the teacher;
- once marking and moderation had been completed, the students could see their criterion scores on a scale which contained the average of the cohort.
Hitomi
'Hi Dalma and Carlos,
Thank you for sharing your experiences. It is interesting to know that some disciplines have a 'strong norm-referencing system'.
I agree that in Asian cultures the idea of 'meritocracy' is really strong. I was one of the students in this system and, as a student, I didn't like it.
I agree that balancing criterion and norm referencing would be a good way forward, but we may need to consider what an effective 'balance' is and the rationale for it. I wonder if there is any research on this.
Best wishes,
Hitomi'
Gemma
'At CQUniversity, there is a strong trend towards professional degrees, such as Nursing, Accounting, Engineering, Education, etc., where there is a high need for more of these in the workforce. I think this is why we have quite a strong focus on criterion-referenced assessment: so long as someone is capable of doing the job and meets all the criteria, they should pass and be able to go and do that job. It is not a particularly competitive marketplace. That may, of course, change at any time.'
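To make the contrast concrete for myself, I sketched both methods in a few lines of Python. This is purely my own illustration: the grade bands, z-score cut-offs, and cohort scores are all assumed for the example and are not any institution's actual grading rules.

```python
# A minimal sketch of criterion- vs norm-referencing. All grade bands,
# cut-offs, and scores are assumed for illustration only.
import statistics

def criterion_grade(score):
    """Criterion-referencing: fixed cut-offs, independent of the cohort."""
    if score >= 85: return "HD"
    if score >= 75: return "DI"
    if score >= 65: return "CR"
    if score >= 50: return "P"
    return "F"

def norm_grade(score, cohort):
    """Norm-referencing: the grade depends on where the score sits
    relative to the cohort (here, simple z-score bands)."""
    z = (score - statistics.mean(cohort)) / statistics.stdev(cohort)
    if z >= 1.5: return "HD"
    if z >= 0.5: return "DI"
    if z >= -0.5: return "CR"
    if z >= -1.5: return "P"
    return "F"

weak_year = [42, 48, 50, 55, 58, 60, 62]
strong_year = [70, 75, 78, 80, 84, 88, 92]

# The same performance earns the same criterion-referenced grade in any
# year, but its norm-referenced grade swings with the cohort -- the
# unfairness noted in the CSHE article quoted above.
print(criterion_grade(62))          # P (either year)
print(norm_grade(62, weak_year))    # DI (near the top of a weak cohort)
print(norm_grade(62, strong_year))  # F (well below a strong cohort)
```

Even this toy version shows the point quoted above: the criterion-referenced grade is stable across years, while the norm-referenced grade for the same work swings from a distinction to a fail depending on the cohort.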
Rubric Design
Colleagues' insightful comments really helped me to crystallise my ideas on how to use rubrics in my future teaching.
Hitomi
'Hi all, For me, one 'important design element for effective assessment' is rubric design.
I must admit that I haven't really considered rubrics in my experience so far. I think part of the reason is that in Japan, where I studied and started my career, it is not common practice to use rubrics, and teachers don't really show rubrics when scoring assessments. The other reason is that I expect students to be creative in their assessment work; I don't want their 'thinking' to be restricted by a rubric.
But it is true that 'graduates and students value rubrics because they clarify the targets for their work' (Reddy and Andrade 2009), and a rubric also makes it more transparent on what basis the assessment is marked. On the other hand, Reddy and Andrade (2009) argue that 'there is evidence of both positive responses and resistance to rubric use' and that 'more research is needed on the validity and reliability of rubrics'.
I personally feel (also as a student of this course) that a rubric is still helpful, but I understand that this is tricky. Therefore I think rubric design is an important element to be examined carefully against the expected outcomes (assisting learning, transparent scoring, etc.).
Best wishes,
Hitomi'
Coralie
'Hi Hitomi, Like you I feel that rubrics have many advantages. One limitation, not related to the concept of a rubric itself, is what the rubric contains. For example, a criterion in a rubric that said "includes ten articles in the reference list" seems not particularly useful: how does the teacher know whether the articles were read and how they contributed to the student's learning? I think it is more useful to have a criterion that relates to understanding the readings and applying that knowledge in the assignment.
What if the students designed a rubric for an assessment item? Has anyone tried this or read about it?
cheers, Coralie'
Shane
'Hi Everyone, Thanks for raising the topic of rubrics, Hitomi. Coralie and I would like to pose the following question to everyone to facilitate conversation around this topic.
The question is: in theory rubrics are important and useful, but in your experience and in your context, what is it about rubrics that has not worked? Did you manage to fix the problem in subsequent iterations of the rubric?
Cheers,'
Dalma
'I am glad to see this topic in the forum, as Shane and I discussed it just earlier today.
In my perception, rubrics can be useful for both students and academics, as they can provide guidance for study and can help a lot in marking, but there are several downsides as well, which might make them less worthwhile, depending on the subject matter and the type of work. As lawyers say: it depends. And it sure does, on several factors.
One downside is that they risk transforming the assessment into a mechanical exercise of satisfying the rubrics; of ticking the box and working for the assessment itself, instead of using the assessment as a check of actual knowledge and skills. I am interested to see how a student understands or knows something, but the more assessment criteria and the more detailed rubrics I provide, the more the students' focus will shift from showing their knowledge and skills to satisfying my listed expectations and trying to fit in the square box I create, instead of thinking freely. Ultimately, students can learn to play the system, which can easily turn them into surface learners. At the extreme, any assessment can then become similar to an IELTS test, where you can get a high score by knowing the test mechanism instead of actually knowing the language at that level.
The second downside is that rubrics can be perfect if you can quantify expectations, but are not easy to use (if at all) if you want to assess quality, creative or critical thinking. Levels of thinking may be defined with adjectives for the rubrics, but they may not say too much to the students working towards the assessment - e.g. I may define a criterion for an HD as "a solution to the client's case found through an innovative approach" or "finding a creative alternative solution to the client's case", compared to a DI as "a solution to the client's case that would stand in court", but this would not help the students more than what the assessment criteria and the assessment instructions offer anyway. The problem is that these rubrics require the highest level of knowledge and understanding in order to see the difference between the levels and in order to know how to satisfy any of them. If there is any other way of defining rubrics for this type of assessment, I would be very interested.
Finally, yet another problem arises in problem-solving assessments, where I find it hard to see how to transform the expected content into rubrics that don't actually give out the answers. I use such rubrics as a marking guide for myself, but with room for flexibility. Unless there is only one right way of answering something, rubrics may take away the flexibility of achieving an excellent result by different means. Having ten authorities listed as a condition for an HD seems completely unsuitable when a student might use only one, but in a way that serves its purpose much better than all the other nine altogether, or when the student's original ideas are worth a million compared to any of the authorities available out there. Similarly, expecting students to find and use one particular case (in law) is too rigid, if another student may find and use a different case that does not initially seem as relevant, but uses it in a way that just makes it perfect.
Assessing thinking instead of regurgitated knowledge does not seem to fit easily into rubrics. For this reason, and because in law everything is debatable, creating a rigid system of rubric-based assessment can become very counterproductive, inhibiting the exact way of thinking we try to develop in our students.
In conclusion, I am not saying that rubrics are bad per se; I am only saying that they are not suitable for every assessment. I am, of course, open to and interested in evidence to the contrary.
Dalma'
Gemma
'Hi Dalma,
I agree about the limitations of rubrics for properly assessing creative and imaginative projects. As the creator of a rubric, we are limited by our experiences, but exceptional students can draw from a different and often extremely broad range of their own experiences and so come up with something you hadn't thought of.
Also, at the other extreme, in introductory courses it might be difficult to use rubrics for all assessments. I teach Intro Physics, and at least some of the assessment has to be around numbers, traditional 'tests' if you will, hence a rubric might not be useful. I would, however, like to mention that I have certainly broadened my own thoughts on assessment for Intro Physics after reading about constructive alignment and Bloom's taxonomy. One thing that has always struck me about quantitative marking is that students can pass even if they just don't get it. This was well put by the example (I forget where) of the surgery student who could tick all the boxes about neat stitching, precise cutting, timeliness, etc., but removed the wrong organ. Would that be picked up by a rubric? High ticks on all but one criterion would make an HD! But the student really should fail.
I think, however, that it is in designing the course learning outcomes, the assessment, and any applicable rubrics all together that there is scope for rich learning. We need to follow our own teaching and think creatively and 'out-of-the-box' for the whole course design, not just try to fit a rubric to what we already assess, or keep the same learning activities and assessment that have always been there. I dream of an Intro Physics course without 'tests'. If only I were given the opportunity.
Cheers,
Gemma.'
Nell
'I have found the Business Assessment Grid provided by Price et al. (2004) to be a wonderful resource and something that I will definitely keep for future reference. There are certainly areas within the Grid that I would be open to sharing with students to increase their overall understanding of assessment expectations in grading, along with explanations of certain assessment terminology; this further builds on Oliver et al.'s (2005) concepts of formulating clear learning outcomes and graduate attributes. The detailed discussion and use of rubrics (Reddy et al. 2010) has increased my personal insight into rubrics as a method for use in future teaching practice. While, as Gemma has discussed, the rubric used has the potential to be limited by our own experiences, the ability of rubrics generally to increase students' clarity about their learning and quality targets is invaluable. However, I do believe that giving students access to rubrics before/with the assessment item is crucial for transparency in assessment expectations.'
Hitomi
'Hi Everyone
Thank you for sharing your insightful thoughts on 'rubric'.
I have a similar concern to Dalma's - like Law, Urban and Regional Planning is also debatable, and rubrics can be tricky.
But I still think that a rubric is helpful for students and teachers. I usually receive questions from students about the expectations for assessment, and I've found that students have varied understandings of the 'assessment criteria'. My impression is that just describing the assessment criteria is still unclear - of course, it depends on the discipline and unit.
There is still 'a lot' to think about in using and designing rubrics!
Hitomi'
Shane
'Hi Hitomi, I am aware of cases where, given the time and opportunity, staff have developed rubrics with their students: what the criteria for an assessment item/performance look like, along with what would constitute an HD, D, C, etc. This helps develop a shared understanding. For assignment 2b, add valued resources like the one you mentioned about rubrics, or link to them, to your portfolio with some detail about your context and why you added them. Cheers, Shane.'
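This thread also suggested to me how a rubric might be structured so that criteria are weighted differently (Dalma's point) while a failed critical criterion still caps the result (Gemma's 'wrong organ' example). The sketch below is my own illustration; the criteria, weights, levels, and the 45% cap are all assumptions, not an actual marking scheme.

```python
# A minimal sketch of a weighted rubric with a 'critical' criterion.
# The criteria, weights, 0-4 levels, and the fail cap are assumptions
# for illustration, not an actual marking scheme.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float           # share of the total mark
    critical: bool = False  # failing this caps the overall result

RUBRIC = [
    Criterion("Argument is logical and defensible", 0.4, critical=True),
    Criterion("Evidence from the planning literature", 0.3),
    Criterion("Structure and presentation", 0.2),
    Criterion("Referencing and formatting", 0.1),
]

def mark(levels):
    """levels: criterion name -> achieved level on a 0-4 scale."""
    total = sum(c.weight * levels[c.name] / 4 * 100 for c in RUBRIC)
    # Gemma's surgery example: high ticks everywhere cannot rescue a
    # failed critical criterion, so the mark is capped below a pass.
    if any(c.critical and levels[c.name] <= 1 for c in RUBRIC):
        return min(total, 45.0)
    return round(total, 1)

print(mark({"Argument is logical and defensible": 1,
            "Evidence from the planning literature": 4,
            "Structure and presentation": 4,
            "Referencing and formatting": 4}))  # 45.0, not the raw 70.0
```

Even as a toy, writing this down clarified for me that part of rubric design is deciding which criteria can be compensated by strength elsewhere and which cannot.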
Rubric example - urban development project
In the Urban and Regional Planning course (both UG and PG), a common assignment is an essay on urban issues and development. Students identify problems/urban issues based on evidence (statistics and reports from national/state/local governments) and develop ideas on how urban planning could cope with issues in areas such as society (demographic change), the environment, and the economy. The example below comes from the Ohio Department of Education's 'Simulated Urban Planning Course: Urban Planning Project Assignment'. I think this assignment is especially relevant to my unit 'Infrastructure Planning and Delivery', in which students analyse the UC campus and develop ideas for campus development as a planner. Sometimes students are not sure what is expected for an assignment, and I think this example describes clearly what should be included and considered. Including 'culture' is an especially good point, as this is a very important aspect of a city related to social sustainability, yet developers often focus on the economic aspect only. The scoring guideline (rubric) seems broad but is still a good guide for students to understand what is assessed in the assignment.
(Source: Ohio Department of Education)
Rubric example - Planning Report
In the Master of Urban and Regional Planning at the University of Canberra, the last unit that students take to finish the course is 'Planning Report', a mini research project. There is no class for this unit; students work with their supervisors to plan, develop, and finish the project. Usually the course convener (myself) supervises each student in this unit.
This unit is the capstone of the studies that students undertake throughout the course. To develop the planning report, students draw on planning theory, analytical methods, and tools such as statistics software and geographical information systems. The evaluation of the report is thus comprehensive. I refer to the assessment criteria and rubric used at San José State University, Department of Urban and Regional Planning (below). The assessment criteria below are systematic and transparent. I think that, for a comprehensive report, these criteria work very well to evaluate how the assignment addresses the criteria (learning outcomes). The sheet can be used as a checklist for how the assignment should be structured and what should be included; students can use it to understand the strong and weak points of their assignment and utilise this for their future learning.
(Source: San José State University, Department of Urban and Regional Planning, URBP 298B: Special Study: Planning Report Completion, Spring 2010)
Rubric example - written assignment grading rubric
I've found some examples of rubrics used in planning schools.
Below is an example of a written assignment grading rubric from the Cornell University College of Architecture, Art, and Planning, Department of City and Regional Planning.
In my units, students write essays and reports on planning theory and practice. I think this rubric is useful for assessing student writing from five angles, although marking still depends considerably on the assessor's subjective evaluation of each point. It can also be used for checking research paper manuscripts; 'argument' is especially important. There is still a need to check the consistency of marking by cross-checking the marked assignments.
(Source: Cornell University College of Architecture, Art, and Planning, Department of City and Regional Planning, 'Promise and Pitfalls of Contemporary Urban Planning' syllabus, Fall Semester 2012)
Studio-Specific Learning Outcomes Assessment Model (Németh and Grant Long)
Németh and Grant Long recently published the article 'Assessing Learning Outcomes in U.S. Planning Studio Courses' (Jeremy Németh and Judith Grant Long, Journal of Planning Education and Research, December 2012, 32: 476-490).
This paper proposes a model for assessing learning outcomes specific to planning studio courses. I don't teach in a 'studio' style, but the units I teach include project-based assignments (e.g., developing an infrastructure design for the university campus). The model proposed by Németh and Grant Long is therefore very useful for my units.
Examples of individual outcomes
Németh and Grant Long group the dozens of individual outcomes into six categories derived from the planning literature on the subject: communication, professional experience, learning-by-doing, problem-solving, teamwork, and ethics/values, as below. I think this is very useful in designing learning outcomes and the associated assessment. These outcomes are applicable to any planning unit if you customise them to fit the unit context and objectives.
Professional experience
Examples of individual outcome
- Provide “real world” work environment
- Gain project management skills (budgeting, workflow)
- Understanding quality standards expected in practice
- Learn the planning and plan-making process
- Understand various roles of planner
Communication
Examples of individual outcome
- Graphical/visual skills
- Written skills
- Oral presentation
- Understanding relationship between plans and physical reality
Learning by doing
Examples of individual outcome
- Understanding of how theory informs practice
- Understanding of how practice informs theory
- Application of general planning concepts to specific context
- Learning how to synthesize skills, knowledge, values
- Acknowledge uncertainty/complexity in planning practice
- Recognition of planning as iterative, long-term process
Ethics/values
Examples of individual outcome
- Recognition of broader “public interest”
- Assess planning outcomes on set of values (e.g., justice, sustainability)
- Sublimation of personal opinion
- Creation of ethical foundation for future practice
- Recognize accountability/responsibility to served group
- Acknowledge and challenge systemic power imbalances
Problem solving
Examples of individual outcome
- Ability to formulate logical, defensible planning decisions
- Learn how to evaluate several possible scenarios
- Negotiate oppositional viewpoints
- Recognize importance of flexibility in decision-making process
- Seek appropriate assistance and expertise
- Being creative designing solutions and processes
- Develop critical thinking ability
Teamwork
Examples of individual outcome
- Role recognition in collaborative work
- Understanding basic group dynamics
- Development of leadership qualities
- Gain vital listening abilities
- Development of interpersonal cooperation skills
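As a minimal sketch of how I might use these categories when designing assessment, the snippet below (the task names and outcome tags are hypothetical, not from Németh and Grant Long) checks whether a unit's assessment tasks between them cover all six categories:

```python
# The six outcome categories from Németh and Grant Long (2012).
CATEGORIES = {
    "professional experience", "communication", "learning by doing",
    "ethics/values", "problem solving", "teamwork",
}

# Hypothetical assessment tasks for a unit, each tagged with the
# categories its marking criteria address.
tasks = {
    "campus infrastructure design project":
        {"professional experience", "problem solving", "teamwork"},
    "planning theory essay": {"communication", "learning by doing"},
    "design presentation with peer review": {"communication", "ethics/values"},
}

# Union of all tags tells us which categories the unit covers overall.
covered = set().union(*tasks.values())
missing = CATEGORIES - covered
print("covered:", sorted(covered))
print("missing:", sorted(missing) or "none - all six categories addressed")
```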
Good learning outcome
In the forum of the 'Assessment and Evaluation in Tertiary Teaching' unit, we discussed that assessment is fundamental to learning, and thus assessment should be well connected with learning outcomes.
'Using eVALUate to improve student learning' (Curtin University of Technology, Oliver et al.) shows some examples of how good unit learning outcomes might be expressed. I think this is very useful (a small sketch after the list shows how the bracketed slots might be filled in).
Successful students in this unit can:
- Construct and justify logical arguments about [a significant issue in the field].
- Critically analyse [a significant issue in the field].
- Conduct a critical review of [a significant issue in the field].
- Produce a [significant document] at a level acceptable to [stakeholders in the field].
- Collaborate with [stakeholders or peers] in the successful production of a [significant document].
- Plan and carry out a [significant event] for [stakeholders in the field].
- Use [discipline resources] to effectively [manage a discipline-related problem].
- Evaluate [a scenario in the field] and produce [a significant report].
- Work effectively as a team member to solve [a significant problem in the field].
- Use [a discipline-related] theory to develop a solution to [a significant problem] within the context of the discipline.
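Each statement is really a template with bracketed slots, so adapting one to a unit is mechanical. A minimal sketch, with hypothetical slot values drawn from my own planning units:

```python
# eVALUate-style outcome templates; the slot values are hypothetical
# substitutions for a planning unit, not from the eVALUate guide.
templates = [
    "Critically analyse [a significant issue in the field].",
    "Produce a [significant document] at a level acceptable to "
    "[stakeholders in the field].",
]
slots = {
    "[a significant issue in the field]": "urban densification policy",
    "[significant document]": "local structure plan",
    "[stakeholders in the field]": "a local planning authority",
}

for template in templates:
    outcome = template
    for slot, value in slots.items():
        outcome = outcome.replace(slot, value)  # fill each bracketed slot
    print("-", outcome)
```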
Assessment Criteria Grid
Margaret Price and Chris Rust created a very useful criteria grid for assessment. I selected some examples that are relevant to the urban and regional planning field (after the grid, a small sketch shows one hypothetical way band judgements could be combined into an overall mark).
I think the original grid by Price and Rust is also useful for checking the quality of research papers.
Presentation & style

Communication and presentation (appropriate to discipline)
- A: Can engage effectively in debate in a professional manner and produce detailed and coherent project reports
- B+: Can communicate effectively in a format appropriate to the discipline and report practical procedures in a clear and concise manner with all relevant information in a variety of formats
- B: Can communicate effectively in a format appropriate to the discipline and report procedures in a clear and concise manner with all relevant information
- C: Some communication is effective and in a format appropriate to the discipline. Can report practical procedures in a structured way
- Refer/Fail: Communication is unstructured and unfocused and/or in a format inappropriate to the discipline

Presentation (oral)
- A: Imaginative presentation of material resulting in clarity of message and information
- B+: Well structured and signposted presentation. Audible and pace appropriate to audience. Visual aids used to support the presentation
- B: Clearly structured and addressed to audience. Pace and audibility satisfactory. Visual aids used
- C: Shows some attempt to structure material for presentation; pace and audibility are satisfactory most of the time
- Refer/Fail: Material is difficult to understand due to poor structure and/or pace and audibility

Conforming to instructions/clarity of objectives

Attention to purpose
- A: Has addressed the purpose of the assignment comprehensively and imaginatively
- B+: Has addressed the purpose of the assignment coherently and with some attempt to demonstrate imagination
- B: Has addressed the main purpose of the assignment
- C: Some of the work is focused on the aims and themes of the assignment
- Refer/Fail: Fails to address the task set

Referencing (the source grid shows only four bands for this criterion)
- A: Referencing is consistently accurate
- B: Referencing is mainly accurate
- C: Some attempt at referencing
- Refer/Fail: Referencing is absent/unsystematic

Clarity of objectives and focus of work
- A: Has defined objectives in detail and addressed them comprehensively and imaginatively
- B+: Has defined objectives and addressed them through the work
- B: Has outlined objectives and addressed them at the end of the work
- C: Has provided generalised objectives and focused the work on the topic area
- Refer/Fail: No information provided

Content and knowledge

Use of literature/evidence of reading
- A: Has developed and justified using own ideas based on a wide range of sources which have been thoroughly analysed, applied and discussed
- B+: Able to critically appraise the literature and theory gained from a variety of sources, developing own ideas in the process
- B: Clear evidence and application of readings relevant to the subject; uses indicative texts identified
- C: Literature is presented uncritically, in a purely descriptive way, and indicates limitations of understanding
- Refer/Fail: Either no evidence of literature being consulted or irrelevant to the assignment set

Context in which subject is used
- A: Takes account of complex context and selects appropriate technique
- B+: Takes some account of context and selects some appropriate techniques
- B: Recognises defined context and uses standard techniques for that context
- C: Context acknowledged but not really taken into account
- Refer/Fail: Context not recognised as relevant

Thinking/analysis/conclusions

Conclusions
- A: Analytical and clear conclusions well grounded in theory and literature, showing development of new concepts
- B+: Good development shown in summary of arguments based in theory/literature
- B: Evidence of findings and conclusions grounded in theory/literature
- C: Limited evidence of findings and conclusions supported by theory/literature
- Refer/Fail: Unsubstantiated/invalid conclusions based on anecdote and generalisation only, or no conclusions at all

Analysis
- A: Can analyse new and/or abstract data and situations without guidance, using a wide range of techniques appropriate to the topic
- B+: Can analyse a range of information with minimum guidance, can apply major theories and compare alternative methods/techniques for obtaining data
- B: Can analyse with guidance using given classification/principles
- C: Can analyse a limited range of information with guidance using classification/principles
- Refer/Fail: Fails to analyse information

Critical reasoning
- A: Consistently demonstrates application of critical analysis well integrated in the text
- B+: Clear application of theory through critical analysis/critical thought of the topic area
- B: Demonstrates application of theory through critical analysis of the topic area
- C: Some evidence of critical thought/critical analysis and rationale for work
- Refer/Fail: Lacks critical thought/analysis/reference to theory

Flexibility
- A: Independently takes and understands multiple perspectives and through these can develop/adjust personal point of view
- B+: Recognises multiple perspectives which may affect personal viewpoint
- B: Can recognise alternative perspectives
- C: Limited ability to see alternative perspectives
- Refer/Fail: Fails to recognise alternative perspectives

Practical/interpersonal skills

Performance skills
- A: Can perform complex skills consistently with confidence. Able to choose an appropriate response from a repertoire of actions, and can evaluate own and others’ performance
- B+: When given a complex task, can choose and perform an appropriate set of actions to complete it adequately. Can evaluate own performance
- B: Able to perform basic skills with awareness of the necessary techniques and their potential uses and hazards. Needs external evaluation
- C: Able to perform basic skills with guidance on the necessary technique. Needs external evaluation
- Refer/Fail: Fails to perform even basic skills

Data/information gathering/processing
- A: Selects and processes data with confidence and imagination
- B+: Selects appropriate data and processes using relevant tools
- B: Makes a selection from data and applies processing tools
- C: Collects some information and makes some use of processing tools
- Refer/Fail: Random information gathering. Inappropriate use of processing tools

Self-criticism (including reflection on practice)
- A: Is confident in application of own criteria of judgement and in challenge of received opinion in action, and can reflect on action
- B+: Is able to evaluate own strengths and weaknesses; can challenge received opinion and begins to develop own criteria and judgement
- B: Is largely dependent on criteria set by others but begins to recognise own strengths and weaknesses
- C: Dependent on criteria set by others. Begins to recognise own strengths and weaknesses
- Refer/Fail: Fails to meaningfully undertake the process of self-criticism

Self presentation
- A: Adopts a style of self presentation and selects from a range of appropriate interpersonal skills consistent with the individual’s aims and the needs of the situation
- B+: Can be flexible in the style of presentation adopted and interpersonal skills used
- B: Can adopt both a formal and informal style, and uses basic interpersonal skills appropriately
- C: Can adopt both a formal and informal style, and uses basic interpersonal skills but not always matching the needs of the situation
- Refer/Fail: No obvious sense of self and/or interpersonal skills, and/or skills used inappropriately

Rationale
- A: Uses all available data to evaluate the options. Clear criteria are applied to demonstrate reasons for final decision/choice/outcome
- B+: Uses data to evaluate options, and selection of final outcome clearly follows from evaluation
- B: Uses data to evaluate some options, and selection of final outcome is linked to the evaluation
- C: Presents benefits and disadvantages of some potential outcomes but without providing clarity on reason for final outcome/choice
- Refer/Fail: Little explanation of how the final outcome/choice was made, or no indication of final outcome/choice
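One hypothetical way to turn band judgements from such a grid into an overall mark is a weighted mapping. The band points and criterion weights below are my own illustrative assumptions; Price and Rust's grid itself prescribes neither:

```python
# Hypothetical band points and criterion weights for turning grid
# judgements into an overall mark; these numbers are illustrative only.
BAND_POINTS = {"A": 85, "B+": 75, "B": 65, "C": 55, "REFER/FAIL": 40}

WEIGHTS = {
    "attention to purpose": 0.3,
    "use of literature":    0.3,
    "critical reasoning":   0.3,
    "referencing":          0.1,
}

# One band judgement per criterion for a single marked assignment.
judgements = {
    "attention to purpose": "A",
    "use of literature":    "B+",
    "critical reasoning":   "B",
    "referencing":          "B",
}

overall = sum(BAND_POINTS[band] * WEIGHTS[criterion]
              for criterion, band in judgements.items())
print(f"overall mark: {overall:.1f}")  # 74.0 with the numbers above
```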