Biggs, J. B., & Tang, C. (2007). Aligning assessment tasks with intended learning outcomes: principles (Ch. 9). In Teaching for quality learning at university (3rd ed., pp. 163-194). Maidenhead, UK: McGraw-Hill/Society for Research into Higher Education & Open University Press.
Assessment tasks should comprise an authentic representation of the course ILOs (intended learning outcomes).
Self- and peer-assessment are particularly helpful TLAs (teaching/learning activities) for training students to reflect on the quality of their own work.
A valid or authentic assessment must be of the total performance, not just aspects of it.
In making holistic assessments the details are not ignored. The question is whether, like the bricks of a building or the characters in a novel, the specifics are tuned to create an overall structure or impact. (p. 184)
In order to assess learning outcomes holistically, it is necessary to have a conceptual framework that enables us to see the relationship between the parts and the whole. Teachers, like journal editors, need to develop their own framework. (p. 185)
Self-assessment: What points do Biggs & Tang make about self-assessment on p. 187?
James, R., McInnis, C., & Devlin, M. (2002). Assessing learning in Australian universities. Retrieved February 13, 2007, from the University of Melbourne website.
It is a matter of urgency that assessment is addressed in a way that acknowledges the multiplicity of inter-related issues and concerns.
Assessment needs to motivate and challenge the learner, to stimulate learning, and to provide feedback.
Assessment strategies must accommodate all the purposes of assessment, including accreditation and quality assurance, but without a strong commitment to assessment’s role in learning, there is a danger that this role will largely be lost.
If assessment suffers from being an afterthought in the course design process, feedback is distanced even further, rarely being considered in a strategic way.
It must be acknowledged that the practice of assessment is multifaceted, requiring a range of skills such as design, student support, communication, clarification and the application of standards, stimulating and enhancing student engagement with the task, and feedback. Although each facet warrants consideration in its own right, managing a complex issue such as assessment demands an integrated approach, as well as one that takes account of the context in which assessment takes place.
There is a false (but widely held) view that marks are a systematic and consistent reflection of the quality of students’ work (indeed, the system depends on it!). In reality, many studies document that marking systems are an unreliable means for grading student work (see, e.g., Hartog and Rhodes 1935; Laming 1990; Newstead and Dennis 1994; QAA 2006).
It takes considerable thought and reflection to design assessment for complex learning, in particular assessments which are challenging and fit for purpose. Further challenges are then posed by the need for staff to be highly skilled and trained in assessment in order to ensure effectiveness.
Learning in a structured environment is underpinned by students’ understanding of the assessment standards and processes used by the disciplinary community into which they are being inducted.
Summative assessment generates marks and regulates whether students can pass through a specific boundary when moving towards accreditation. Formative assessment, on the other hand, gives students information about how their learning is progressing.
Summative assessments are often considered to be more resource-intensive than formative ones because of the administrative workload required to verify and record the results.
Many formative assessment tasks are allocated a proportion of the summative marks to ensure students undertake the work.
It may be that summative assessment based on programme outcomes can provide a more accurate reflection of student achievement, allowing formative assessment to support learning.
Developing a consistent assessment strategy, and coherence between strategy and implementation, depends on a clear and widely agreed view about the role and position of assessment in relation to continuums such as the formative-summative one.
A comparison of norm-referencing and criterion-referencing methods for determining student grades in higher education
For large student cohorts (such as in senior secondary education), statistical moderation processes are used to adjust or standardise student scores to fit a normal distribution.
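As a rough sketch of what such standardisation can involve, the snippet below applies plain linear (z-score) rescaling to a cohort’s raw scores. The function name and the target mean and spread are invented for illustration; real moderation schemes are considerably more elaborate.

```python
import statistics

def moderate_scores(raw_scores, target_mean=60.0, target_sd=12.0):
    # Linear (z-score) rescaling: shift and stretch the raw scores so the
    # cohort ends up with the chosen mean and standard deviation.
    # target_mean and target_sd are illustrative values, not from the source.
    mean = statistics.mean(raw_scores)
    sd = statistics.stdev(raw_scores)
    return [target_mean + target_sd * (x - mean) / sd for x in raw_scores]

raw = [42, 55, 58, 61, 67, 73, 88]
print([round(s, 1) for s in moderate_scores(raw)])
```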
There is a strong culture of norm-referencing in higher education.
In contrast, criterion-referencing, as the name implies, involves determining a student’s grade by comparing his or her achievements with clearly stated criteria for learning outcomes and clearly stated standards for particular levels of performance. The goal of criterion-referencing is to report student achievement against objective reference points that are independent of the cohort being assessed.
Which of these methods is preferable? Mostly, students’ grades in universities are decided on a mix of both methods, even though there may not be an explicit policy to do so.
Norm-referencing on its own, if strictly and narrowly implemented, is undoubtedly unfair. With norm-referencing, a student’s grade depends, to some extent at least, not only on his or her level of achievement, but also on the achievement of other students.
Criterion-referencing requires giving thought to expected learning outcomes: it is transparent for students, and the grades derived should be defensible in reasonably objective terms; students should be able to trace their grades to the specifics of their performance on set tasks.
Best practice in grading in higher education involves striking a balance between criterion-referencing and norm-referencing.
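To make the contrast concrete, here is a minimal, hypothetical sketch of how each method assigns grades; the cutoffs and quantile bands are invented for illustration and are not drawn from the source.

```python
def criterion_grade(score, cutoffs=((85, "A"), (70, "B"), (55, "C"), (40, "D"))):
    # Criterion-referencing: compare the score with fixed, cohort-independent
    # standards. The cutoffs here are illustrative, not from the source.
    for cutoff, grade in cutoffs:
        if score >= cutoff:
            return grade
    return "F"

def norm_grades(scores, bands=((0.10, "A"), (0.35, "B"), (0.75, "C"), (0.95, "D"))):
    # Norm-referencing: grade by rank within the cohort (e.g. the top 10%
    # receive an A). The quantile bands are likewise illustrative.
    ranked = sorted(scores, reverse=True)
    grade_of = {}
    for rank, score in enumerate(ranked, start=1):
        quantile = rank / len(ranked)
        grade_of[score] = next((g for q, g in bands if quantile <= q), "F")
    return [grade_of[s] for s in scores]

cohort = [92, 81, 77, 64, 58, 43]
print([criterion_grade(s) for s in cohort])  # fixed cutoffs: same grades in any cohort
print(norm_grades(cohort))                   # rank-based: depends on the rest of the cohort
```

Note that the norm-referenced grades change if the same student sits with a stronger or weaker cohort, which is exactly the unfairness noted above; the criterion-referenced grades stay fixed.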