Assessment design

In the forum discussion, one of my colleagues suggested a great assessment design. Gemma designed it especially for an introductory course, but I think it can be applied to many units generally, and I will refer to it next semester.

Gemma
'With my area of an introductory course, I think one very important part of assessment is teaching the students how to do the assessment. This isn't necessarily more important than other aspects, but I feel it is often overlooked.
Teaching 'how' will make assessment a lot less stressful for the students, and you will then be able to assess how well they have learnt the content, as opposed to how well they can deal with technical trouble or how cluey they are at finding help. This includes:
1)  Examples where appropriate - give the students an example of a good essay on the same or a different topic, and point out why it was good in terms of content and structure.
2)  Practice in the means of assessment - such as practice quizzes to allow the students to know what it is like and what to expect, especially in online assessment and where technology is involved.
3)  Adequate feedback not just about the content, but about how they went about it - tell them if they demonstrate good content knowledge but should check the formatting guide.
4)  Ask for and listen to suggestions - students will tell you what they need help with, and as a teacher you can say that it is OK to ask not only about the content, but about the means, how the technology works, etc. We are embedded in this every day and don't always know what a particular cohort of students doesn't know until we ask.

Moderation Strategy

In the forum discussion, some colleagues referred to the ALTC's moderation strategy and assessment moderation toolkit:

http://resource.unisa.edu.au/course/view.php?id=285

The toolkit is based on key principles identified by research under an ALTC project on moderation for fair assessment in transnational learning and teaching.

I don't need to consider moderation so much now, as my unit is small and I am the only marker. But I think this is a useful toolkit that I will utilise in future when I need it.

Feedback approach

I enjoyed reading 'The mythology of feedback' (Adcroft 2011). This paper asks: do academics and students share the same feedback mythology? And if there is only a limited sharing of feedback mythology, how does the dissonance this creates manifest itself?
For me, the most interesting argument in this paper is that 'effort put into feedback that is not focused on assessment, despite the investment of time and resources, is simply not seen as feedback by students', and that 'students cannot learn from feedback if they do not recognise that they are receiving feedback or if they are only interested in the marks they receive on assessed work'.

I agree that 'accurate measurement of feedback effectiveness is difficult and perhaps impossible' (Price et al. 2010).

My feedback approach is:
- Make an effort to assist student learning and to help students achieve their learning goals.
- To do this, understand students' expectations (I found the one-minute paper useful for this).
- Check whether the intended learning outcomes and assessment criteria (and rubric) really address what they claim to for the unit.

Feedback

This is the 'feedback' section of the UC Assessment Policy:

SECTION 4 FEEDBACK
4.1 Key principle: Students will be provided with timely and constructive feedback on assessment items that is explicitly related to the learning outcomes of the unit. Feedback will support student learning and include advice on how performance can be improved.
4.2 In the context of assessment, feedback is information returned to students on their progress in their course or unit. The purpose of feedback is to provide students with information on:
(a) what they have learnt and how effectively they are learning;
(b) what standard of performance they have achieved; and
(c) what they need to do to improve that standard of performance.
4.3 Students will be provided with feedback on all assessment items whether they count towards a grade or not.
4.4 Students will be given feedback on assessment items at an early stage after a unit commences, particularly in the first semester or year of a course.
4.5 Both qualitative and quantitative feedback are necessary for student learning.
4.6 Students will be given the opportunity to discuss their performance and the feedback they have received with an appropriate member of the academic staff.
4.7 Assessment by academic staff should be accompanied by opportunities for students to assess both their own performance (self-assessment) and the performance of others (peer assessment).

Can student feedback questionnaires improve students' learning?

In Module 1, we discussed the question 'Can student feedback questionnaires improve students' learning?'
We agreed 'yes', but that it really depends on the circumstances: the questions asked, the purpose, and the context.

Janet's comment describes the 'circumstances' very well.
'I say "yes" with an "if" and a "but" to this forum question. Yes, student feedback can improve student learning, primarily by instigating critical reflection. With a cursory nod to Prosser, change for change's sake is indeed counterproductive. Change as a result of evaluation, reflection, contemplation and comparison can very much be worthwhile. One can always improve... right?
Hence, the "if". Student feedback can improve learning if the instructor is open to feedback, if the right questions are asked, if the student's opinion is actually valued and if meaningful changes are thoughtfully applied. The "but": student feedback should be correlated with other methods of evaluating teaching effectiveness. Viewed in isolation, student feedback has limitations.'

My comment was:
'I agree with Janet. I think student feedback can improve students' learning, but this may require meeting some specific conditions, such as:
- The students' lens is consistent with the expected learning outcomes, and thus their feedback actually reflects them
- Teachers really take student feedback into account in their teaching

In my experience, student feedback is helpful in giving me the opportunity to realise and understand what students' expectations are. These may not always be meaningful or consistent with the learning outcomes from the teacher's point of view, but I think they are significant information for understanding the gaps between students' and the teacher's perceptions, which then helps in considering what should be done to improve teaching.

Best wishes,
Hitomi'


I distributed a 'one-minute paper' to students in my class, following Coralie's advice, and asked about their expectations of this unit. Interestingly, their expectations were similar to the intended learning outcomes, but they varied.
Below are some comments from students, which I refer to and utilise in designing the unit.

'I attempt to understand what factors affect infrastructure planning'.

'be able to understand the various stakeholders in infrastructure planning'.

'My goal is to understand the infrastructure planning process, especially in Australia'.

'I am expecting to understand the process, the issues, and how to solve the issues by using professional skills'.

'I would like to know about the difficulties and challenges of infrastructure planning in Australia. Moreover, I want to know the solutions to these challenges and apply them to practice.'

'To learn about types of infrastructure, implications of choosing one option over another'.

'To be able to present educated arguments for best practice in infrastructure development and delivery'.

'To understand public/private partnership arrangements, how they are negotiated and on what basis.'

'To understand the relationship between infrastructure planning and delivery and human behaviour'.

Norm-referencing vs criterion-referencing methods

In my discipline (urban and regional planning), criterion-referencing is the 'norm'. But we certainly cross-check our marking to make sure that our assessment is consistent, which means we do, in some sense, 'compare' assignments and marking. We discussed norm-referencing and criterion-referencing methods in the forum. It was interesting to learn that some colleagues have worked in environments where norm-referencing is the standard. We need to decide which referencing method is appropriate for the field and, most importantly, for the intended learning outcomes.

Hitomi
'Hi all,

I read this article with interest (suggested by Shane).
http://www.cshe.unimelb.edu.au/assessinglearning/06/normvcrit6.html

'There is a strong culture of norm-referencing in higher education.'

'The goal of criterion-referencing is to report student achievement against objective reference points that are independent of the cohort being assessed.'

'Norm-referencing, on its own — and if strictly and narrowly implemented — is undoubtedly unfair. With norm-referencing, a student’s grade depends – to some extent at least – not only on his or her level of achievement, but also on the achievement of other students. For example, a student who fails in one year may well have passed in other years!'

'Criterion-referencing requires giving thought to expected learning outcomes: it is transparent for students, and the grades derived should be defensible in reasonably objective terms – students should be able to trace their grades to the specifics of their performance on set tasks.'

In my case, I apply 'criterion-referencing' methods as 1) my student cohort is very small (around 10), and 2) my discipline (urban and regional planning) requires cross-cutting knowledge and skills. Some students might be very good with particular skills but not with others. With criterion-referencing, students can identify the particular areas in which they are weak and need improvement. This may guide their future learning.

It is sometimes difficult for me to explain the results well at the faculty assessment board meeting. At the meeting, we discuss results based on a 'norm-referencing distribution'. I understand that there are some cases in which 'norm-referencing' works well, but for the reasons above I don't feel comfortable applying it.

Which is appropriate in your case?
Best wishes,
Hitomi'


Dalma
'Hi Hitomi!
I am familiar with both systems, as at my former university there was a bell curve used for norm-referencing, while in my current discipline we all work with criterion-referencing. We include (more or less) clearly defined assessment criteria in the unit outlines, and define them in further detail for marking. (The criteria listed in the unit outline do not contain the expected answers to an assessment question, of course.) The criteria are aligned with the learning outcomes, including both knowledge and skills, but are weighted differently - e.g. getting an answer 'right' is obviously more important, even if presented in an unedited form, than having a perfectly edited submission with absolutely wrong content.
In spite of the expectation to use criterion-referencing, the reporting to the assessment board has to contain the percentage of each grade, and I've heard about colleagues having difficulties justifying grades that were not spread across the entire range. I never had this problem, as all my units where results were close to each other had small cohorts, either consisting of a small elite or making very effective teaching possible, both of these being accepted as justification for grades generally above the discipline average.
Finally, I also perform a cross-reference as a final review of the marking I've completed, not to compare the results, but to make sure that the criterion-based evaluations are consistent across the entire cohort and not influenced by external factors, like me getting tired while marking or frustrated by bad submissions.
Dalma'

Carlos
'Hi Hitomi, Dalma - very interesting discussion. Personally, I have only become aware of the characteristics of norm-referencing and criterion-referencing methods through this module and the readings.
Like Dalma, I come from having taught in a very strong norm-referencing system. The use of bell curves was very strict in its percentages, across all disciplines in a big, state-funded university. Furthermore, within an Asian context, study was highly valued culturally, and the university constantly promoted its ranking and the high quality of its "selected students", who were considered an "elite". The whole society in my previous teaching context values the idea of "meritocracy".
Although I quite like criterion-referencing methods, I think there is no black and white. For the sake of discussion and critical thinking, at this stage I would like to highlight a few aspects of norm-referencing. Although in this context it appears as an "unfair way of assessing, mainly by comparing and ranking the students", I would like to highlight that competition and comparison are a reality in our world. People compete for jobs, in sports and for pure entertainment. Industries offer their products in the market, competing with similar products, and we as buyers and consumers make daily choices based on comparing products. Competition motivates improvement. Without going into the details of exact definitions or semantics, the origins of the words "competency" (based assessment) and being "competent" seem to me to be related to "competition". Furthermore, we live in a society with norms, codes of conduct and professional protocols. Completely avoiding norms, or avoiding comparison and competition in assessment, could be, from this point of view, disconnected from reality.
Possibly a balance between criterion and norm referencing might give good results?'

Shane
'Hi Everyone,
This is a very lively and interesting discussion.
I would like to build on the thoughts around competition, in particular how criterion-referenced assessment approaches don't exclude the possibility of incorporating competition or competitive behaviour.
My contribution stems from a seminar I attended at UTS, where a software system had been created that allowed academic staff to load their assignment criteria and then grant access to students. Students would use the system to self-assess their own assignments (a useful learning experience in itself when compared against the teacher's judgements), and then the teacher would assess the work. The competitive element is contained in point 3 below:
The demonstrated impact of what the system allowed students and staff to do was multifaceted:
  1. students could self-assess using the system and then compare against the teacher's assessment - this resulted in students (over the course of multiple assessments) developing a better sense of the quality of their own performances;
  2. teachers could save time by focusing detailed feedback on areas of disparity between the self-judgement of the student and the judgement of the teacher;
  3. once marking and moderation had been completed, the students could see their criterion scores on a scale which contained the average of the cohort.
See more here: http://www.review-edu.com/'

Hitomi
'Hi Dalma and Carlos,

Thank you for sharing your experiences. It is interesting to know that some disciplines have a 'strong norm-referencing system'.

I agree that in Asian culture the idea of 'meritocracy' is really strong. I was one of the students in this system and, as a student, didn't like it :)

I agree that balancing criterion- and norm-referencing would be a good way forward, but we may need to consider what the effective 'balance' is and the rationale for it. I wonder if there is any research on this.

Best wishes,
Hitomi'


Gemma
'At CQUniversity, there is a strong trend towards professional degrees, such as Nursing, Accounting, Engineering, Education etc., where there is a higher need for more of these in the workforce. I think this is why we have quite a strong focus on criterion-referenced assessment: so long as someone is capable of doing the job and meets all the criteria, then they should pass and be able to go and do that job. It is not a particularly competitive marketplace. That may, of course, change at any time.'

Rubric Design

My colleagues' insightful comments really helped me to crystallise my ideas on how to use rubrics in my future teaching.

Hitomi
'Hi all, For me, one 'important design element for effective assessment' is 'rubric design'.
I must admit that I haven't really considered rubrics in my experience so far. I think part of the reason is that in Japan, where I studied and started my career, it is not common practice to use rubrics, and teachers don't really show a rubric when scoring assessments. The other reason is that I expect students to be creative in their assessment work. I don't want their 'thinking' to be restricted by a rubric.

But it is true that 'graduates and students value rubrics because they clarify the targets for their work' (Reddy and Andrade 2009), and rubrics also make it more transparent on what basis the assessment is marked. On the other hand, Reddy and Andrade (2009) note that 'there is evidence of both positive responses and resistance to rubric use' and that 'more research is needed on the validity and reliability of rubrics'.

I personally feel (also as a student of this course) that rubrics are still helpful, but I understand that this is tricky. Therefore I think 'rubric design' is an important element to examine carefully against the expected outcomes (assisting learning, transparent scoring, etc.).

Best wishes,
Hitomi'


Coralie
'Hi Hitomi, Like you, I feel that rubrics have many advantages. One limitation, not related to the concept of a rubric itself, is what the rubric contains. For example, a criterion in a rubric that said "includes ten articles in the reference list" seems not particularly useful: how does the teacher know whether the articles were read and how they contributed to the student's learning? I think it is more useful to have a criterion that relates to understanding readings and applying that knowledge in the assignment.
What if the students designed a rubric for an assessment item? Has anyone tried this or read about it?
cheers, Coralie'

Shane
'Hi Everyone, Thanks for raising the topic of rubrics, Hitomi and Coralie. I would like to pose the following question to everyone to facilitate conversation around this topic.
The question is: in theory rubrics are important and useful, but in your experience and in your context, what is it about rubrics that has not worked? Did you manage to fix the problem in subsequent iterations of the rubric?
Cheers,'

Dalma
'I am glad to see this topic in the forum, as we discussed this with Shane just earlier today.
In my perception, rubrics can be useful for both students and academics, as they can provide guidance for study and can help a lot in marking, but there are several downsides as well, which might make them less valuable, depending on the subject matter and the type of work. As lawyers say: it depends. And it sure does, on several factors.
One downside is that they risk transforming the assessment into a mechanical exercise of satisfying the rubrics - of ticking the box and working for the assessment itself, instead of using the assessment as a check of actual knowledge and skills. I am interested to see how a student understands or knows something, but the more assessment criteria and the more detailed rubrics I provide, the more the students' focus will shift from showing their knowledge and skills to satisfying my listed expectations and trying to fit into the square box I create, instead of thinking freely. Ultimately, students can learn to play the system, which can easily lead them to become surface learners. In the extreme, any assessment can then become similar to an IELTS test, where you can get a high score by knowing the test mechanism instead of actually knowing the language at that level.
The second downside is that rubrics can be perfect if you can quantify expectations, but are not easy to use (if usable at all) if you want to assess quality, or creative or critical thinking. Levels of thinking may be defined with adjectives in the rubrics, but these may not say much to the students working towards the assessment - e.g. I may define a criterion for an HD as "a solution to the client's case found through an innovative approach" or "finding a creative alternative solution to the client's case", compared to a DI as "a solution to the client's case that would stand in court", but this would not help the students more than what the assessment criteria and the assessment instructions offer anyway. The problem is that these rubrics require the highest level of knowledge and understanding in order to see the difference between the levels and to know how to satisfy any of them. If there is any other way of defining rubrics for this type of assessment, I would be very interested.
Finally, yet another problem arises in problem-solving assessments, where I find it hard to see how to transform the expected content into rubrics that don't actually give away the answers. I use such rubrics as a marking guide for myself, but with room for flexibility. Unless there is only one right way of answering something, rubrics may take away the flexibility of achieving an excellent result by different means. Having 10 authorities listed as a condition for an HD seems completely unsuitable when a student might use only one, but in a way that serves its purpose much better than all the other nine altogether, or when the student's original ideas are worth a million compared to any of the authorities available out there. Similarly, expecting students to find and use one particular case (in law) is too rigid, if another student may find and use a different case that does not initially seem as relevant, but uses it in a way that just makes it perfect.
Assessing thinking instead of regurgitated knowledge does not seem to fit easily into rubrics. For this reason, and because in law everything is debatable, creating a rigid system of rubric-based assessment can become very counter-productive, inhibiting the exact way of thinking we try to develop in our students.
In conclusion, I am not saying that rubrics are bad per se; I am only saying that they are not suitable for every assessment. I am, of course, open to and interested in evidence to the contrary.
Dalma'

Gemma
'Hi Dalma,
I agree about the limitations of rubrics in properly assessing creative and imaginative projects. As the creator of a rubric we are limited by our own experiences, but exceptional students can draw from a different and often extremely broad range of their own experiences and so come up with something you hadn't thought of.
Also, at the other extreme, in introductory courses it might be difficult to use rubrics for all assessments. I teach Intro Physics, and at least some of the assessment has to be around numbers - traditional 'tests' if you will - hence a rubric might not be useful. I would, however, like to mention that I have certainly broadened my own thoughts on assessment for Intro Physics after reading about constructive alignment and Bloom's taxonomy. One thing that has always struck me about quantitative marking is that students can pass even if they just don't get it. This was well put by the example (I forget where) of the surgery student who could tick all the boxes for neat stitching, precise cutting, timeliness, etc., but removed the wrong organ. Would that be picked up by a rubric? High ticks on all but one criterion - that would make an HD! But the student really should fail.
I think, however, that it is in designing the course learning outcomes, the assessment, and any applicable rubrics all together that allows scope for rich learning. We need to follow our own teaching and think creatively and 'out of the box' for the whole course design, not just try to fit a rubric to what we already assess, or keep the same learning activities and assessment that have always been used. I dream of an Intro Physics course without 'tests'. If only I were given the opportunity.
Cheers,
Gemma.'

Nell
'I have found the Business Assessment Grid provided by Price et al. (2004) to be a wonderful resource and something that I will definitely keep for future reference. There are certainly areas within the Grid that I would be open to sharing with students to increase their overall understanding of assessment expectations in grading, along with explanations of certain assessment terminology, which further builds on Oliver et al.'s (2005) concepts of formulating clear learning outcomes and graduate attributes. The detailed discussion and use of rubrics (Reddy et al. 2010) has increased my personal insight into this as a method for use in future teaching practices. While, as Gemma has discussed, the rubrics we use do have the potential to be limited by our own experiences, the ability of rubrics generally to increase students' overall clarity about their learning and quality targets is invaluable. However, I do believe that giving students access to rubrics before/with the assessment item is crucial for transparency in assessment expectations.'

Hitomi
'Hi Everyone
Thank you for sharing your insightful thoughts on rubrics.

I have a similar concern to Dalma's - like Law, 'Urban and Regional Planning' is also debatable, and 'rubrics' can be tricky.
But I still think that rubrics are helpful for students and teachers. I usually receive questions from students about the expectations for assessment, and I've found that students have varied understandings of the 'assessment criteria'. I have the impression that just describing the assessment criteria is still unclear - though of course it depends on the discipline and unit.

There is still 'a lot' to think about in using and designing rubrics!
Hitomi'

Shane
'Hi Hitomi, I am aware of cases where, given the time and opportunity, staff have developed rubrics with their students: what the criteria for an assessment item/performance look like, along with what would constitute an HD, D, C, etc. This helps develop a shared understanding. For Assignment 2b, add valued resources like the one you mentioned about rubrics, or link to them, to your portfolio, with some detail about your context and why you added them. Cheers, Shane.'