M. Enerson, Kathryn M. Plank, and R. Neill Johnson
Unhappy indeed are the moments when we discover--often while grading the
final exam--that what our students have learned is not at all what we
thought we were teaching. Faculty, and for that matter students, need
effective ways of monitoring learning throughout the semester. Although
individual instructors often do invent, discover, or simply stumble upon
a strategy that works, these informal and often serendipitous
discoveries rarely become a matter of public record. Thus, a few years
ago, Thomas A. Angelo and K. Patricia Cross compiled a volume, Classroom
Assessment Techniques: A
Handbook for College Teachers,
describing strategies that college teachers had found useful and
that could be used as models for asking the fundamental but often
elusive question "What are your students learning?" and its
corollary, "How effectively are you teaching?"
For several years now, the Center for Excellence in Learning and
Teaching (CELT) has been promoting the use of Angelo and Cross's
strategies, with generally good results. Along the way, it has become
quite clear to us that the data faculty obtain from classroom assessment
activities can be immensely useful for improving teaching and learning
in a variety of ways. For example, such activities can help students
learn how to study, encourage teachers to analyze objectively what
transpires in the classroom, and guide students in a self-analysis of
their own learning processes.
But we have also discovered that some of the classroom assessment
strategies Angelo and Cross describe have considerably greater
applicability for Penn State teachers than do others. Fortunately, those
with the most widespread appeal and utility are also among the simplest
to use. In the sections that follow, we have briefly outlined those
techniques that are easy to use, interpret, respond to, and modify. We
have also included examples of how some of these techniques have been
adapted and implemented by Penn State teachers. We hope you will be
encouraged to try one or two of them in your own classroom. Or better
yet, perhaps reading about the techniques others have designed will
inspire you to invent one or two of your own. If you do, please let us
know so that we can pass them on to others. As always, we welcome your
feedback and comments about your experiences--successes or
failures--with classroom assessment techniques. If in experimenting with
these ideas you have questions or would like to discuss your results,
please feel free to stop by or call CELT (401 Grange Building).
Recommendations for Use of Assessment Techniques
As you read through the classroom assessment
techniques described below, consider which one has the most potential
for a course you are teaching. Why? What do you think you might learn by
using this technique? How do you think the feedback you receive will be
useful to you? As with most other decisions about teaching, responsible
and effective use of classroom assessment techniques begins with a clear
understanding of purpose and expected outcomes.
Will you need to modify the basic procedure(s) to suit your
particular situation? For example, as an alternative to having students
respond individually to a task, some Penn State teachers have had good
success with asking students to work in pairs or small groups. This
variation will give you a less accurate gauge of any individual's
performance, but it does provide a reasonable estimate of class
performance and can help to consolidate feedback in large class
sections. In addition, group activities of this sort can be an extremely
effective way to introduce students to one another and to encourage
group learning. Many other variations are possible. The descriptions
that follow are meant to provide a general outline of possibilities, not
a rigid prescription.
Next, should student responses be anonymous, and should they be graded?
Although Angelo and Cross recommend that classroom assessment techniques
be ungraded, some faculty at Penn State have found it more successful,
especially early on, to "count" them. Even 1% of the final grade (perhaps with
a grading scheme of check, plus, and minus) will encourage students to
take them seriously. These techniques seem to work best when they are
viewed as a source of feedback and not as a system for evaluating
student performance. Achieving the right balance may take a bit of experimentation.
As you design the activity, keep it simple. What do you want students
to do? What kind of response do you think you will get? Don't ask for
more data than you need; don't ask for more data than you are willing to
use. Read through the completed questions or tasks you have written--or
better yet have someone else read them--and check to see that they will
in fact elicit the kinds of information you are looking for.
Introduce the activity by letting students know why you are using
these techniques. How will the information you collect help them? Even a
simple statement about how such information makes it easier for you to
plan subsequent class sessions may be enough. Genuine but judicious
explanations of the technique and its purpose seem to work the best.
Once you have collected the students' responses, sort and analyze the
data. Look for any patterns. What is the most common response? How
common is it? Any bimodal distributions? What did you not already know?
What suspicions were confirmed? What do you plan to do about it? For the
most part, student responses will probably sort easily into a few
general categories. If you have a large number of responses (75 or
more), begin your analysis with a sample of the total set of responses
(but be sure it is a random sample).
When you have analyzed the data, share at least some part of that
analysis with your students. What did you expect? What did you not
expect? How will this activity affect their experience as learners in
the classroom? Students seem to benefit greatly from knowing how you
will be using the information they provided you and, perhaps more
importantly, how they can use their responses as a guide for improving
their skills as learners.
Finally, don't feel you have to rush into using these techniques in
every class or during every session. Begin slowly. And if you cannot see
how a technique will work in a particular class, don't force it.
Premature or frivolous use of these techniques can actually be
counterproductive. These basic descriptions have already served as a
source of fruitful ideas for many Penn State faculty. There is much
richness in them. But do avoid the only real danger in classroom
assessment--too much data and not enough time or experience to know what
to do with those data.
Brief Overview of Techniques
Background Knowledge Probe
Asking students for general information about their background and
preparation for a course is a fairly common practice among college
teachers. Background knowledge probes are simple
questionnaires that extend this activity to include a few focused
questions about concepts that students will need to know to succeed in
the course. Asking questions of this sort can help to highlight
important concepts for the students as well as to inform the instructor
about the students' knowledge and abilities.
Background knowledge probes can be used at the beginning of a course,
at the start of a new unit or lesson, or prior to introducing an
important new topic. Once collected and analyzed, the data can be
extremely useful when planning subsequent sessions or units of the
course. Although many classroom assessment activities can be done for
credit, it is usually best to make these probes an ungraded activity.
Discovering that your students' background and preparation are at
odds with your expectations can throw even the best-planned lesson or
syllabus off-track. However, knowing is certainly better than not
knowing. At the very least, such data help you guide students to the
appropriate resources for any supplementary assistance they may need.
Misconception/Preconception Check
The misconception/preconception check is a variant
of the background knowledge probe, but it focuses directly on those
kinds of prior knowledge (or beliefs) that may actually hinder learning.
This technique can be particularly useful in courses dealing with
controversial or sensitive issues, or those in which students may have
developed intuitive but inaccurate theories. Assessing this ahead of
time will save considerable frustration later on.
When preparing questions of this sort, begin by asking yourself the
following questions: What misconceptions or preconceptions might be
commonplace among students who take this course? Which of these are most
likely to interfere directly with learning for the course? How can I
deal with these misconceptions once they are identified?
Generally speaking, misconceptions are not dislodged simply by
admonishing students to stop thinking that way. Rather, students
typically need to deal directly with their preconceived notions before
they can be successfully led to an understanding of why those beliefs
are untrue. For example, many college students believe that it's cold in
winter because the earth is farther from the sun, even though they have
almost certainly been taught otherwise. In this case, they could be
asked to develop an explanation that accounts for other facts, such as
the difference in seasons in the northern and southern hemispheres. Such
an activity may involve nothing more than sorting the responses to a set
of questions, discussing the general types of misunderstanding with the
students, giving them a chance to explore the limitations of those
misunderstandings, and then letting them respond to a new set of
problems with an opportunity for additional feedback and discussion.
Minute Paper
A quick and extremely easy way to collect written feedback on what
students have learned, with only minimal investment of time and energy,
is an activity often referred to as the minute paper. The
instructor simply stops class two or three minutes early and asks
students to respond briefly to the following two questions: "What
was the most important thing you learned during this class?" and
"What important question remains unanswered for you?" Despite
their simplicity, minute papers assess more than mere recall. To select
the most important or significant information, learners must evaluate
what they recall. Repeated use of minute papers helps students learn to
focus more effectively during lectures.
Obviously, this technique lends itself to countless variations.
For example, you can encourage students to give more substantive
responses by asking them to write their minute papers as if they were
communicating the most important point from that class period to someone
who had been absent. Another variation is sometimes called the muddiest
point, which directs students to describe what was most
confusing about a particular lesson or topic.
Documented Problems
Documented problems are a natural extension of the not
uncommon request to "show your work." By asking students to
show both their work and the reasoning behind it, teachers
can get extremely valuable and detailed information about any conceptual
difficulties or lingering misconceptions students may have, as well as
an overview of the basic strategies they are using to solve problems. In
addition, documented problems are an extremely effective means of
helping students clarify their thinking and gain more deliberate control
over their approach to problem solving. Documentation of a problem can
be something as simple as a brief paragraph or two of what was done (and
why) or as extensive as a line-by-line report of each step in a
mathematical proof. Having students write out the reasoning behind each
step of a problem gives the teacher very detailed feedback about the
students' skills and understanding. It also gives students the
opportunity to assess how well they understand a particular type of problem.
Although relatively easy to imagine using, this assessment technique
can actually be difficult to implement. Students are often not
accustomed to writing simply and clearly and need to be told how to
explain what they have done. Thus, they may need to be given
instructions about what constitutes a good written explanation or how to
get their points across. Also, because many students may continue to
focus on just getting the right answer, they may need frequent and very
direct encouragement to focus on the processes--not products--of problem solving.
Categorizing Grid
In many disciplines, especially at an introductory level, a first
step to real problem solving is learning how a variety of conceptual
taxonomies work. In other words, students need to learn the rules for
what goes with what. Categorizing grids can be a useful
diagnostic aid in these situations. Courses in the biological and life
sciences, for example, lend themselves easily to the use of this
technique. To begin, you will need to identify a key taxonomy and then
design a grid that represents those interrelationships.
Keep it simple at first. Avoid trivial or ambiguous
relationships, which tend to backfire by focusing students on
superficial kinds of learning. Although probably most useful in
introductory courses, this technique can also be used to help develop
basic study skills for students who plan to continue in the field.
Example of Misconception/Preconception Check
Sample provided courtesy
of Wayne H. Bylsma (Psychology)
On the first or second day of a social psychology class, I set up the
issue of common sense vs. the scientific method. We then play a round of
"Socialpsychobabble." This consists of randomly picking
students to come up and answer a question. They introduce themselves and
then I ask a question, such as "Do you think that most people would
over- or underestimate how attractive they think they are as compared to
how other people view them? Would this differ between men and women? If
so, how?" The answers to these questions are based on research
data.* Usually students answer about half correctly.
In addition to giving them a chance to get to know a few other
students in a large class, and to preview some of the issues the course
would address, this activity helps point out that even though
"common sense" will sometimes generate the same answer, we are
more willing to trust claims about human nature that are derived via the
scientific method.
*[Research suggests that everyone overestimates, but among men there
is no relationship between self-ratings and ratings by others whereas
among women there is a small relationship.]
Example of a Variation of the Minute Paper
Sample provided courtesy
of Katherine R. Chandler (English)
I have students answer a couple of questions for each paper they turn
in because I've found that their answers reveal a lot about what they've
learned and need/want to learn. I think they think more carefully about
their writing if I ask them to do this; otherwise, it tends to be just
another task completed.
Just before they hand in their papers, they answer questions or
complete sentences like the following.
I'm most satisfied with . . . I'm least
satisfied with . . . I'm having problems with . . .
In writing this essay,
what did you learn that surprised you? When editing your paper, what
were you unsure about?
Point out specific places
in your argument at which you were aware of accommodating your audience
(their knowledge or attitudes). Point out places in which you used
sentences for rhetorical effect.
Why did you choose this
particular arrangement? What would you do differently if you had more time?
What particularly pleases
you about this argument? What in your writing process has changed since
the beginning of the course?
The questions change according to the nature of the assignments, but
I learned that it is better to limit yourself to just a couple of
questions. Students are able to ponder them more and provide more
revealing answers if they aren't given too many.
I like to interact with their self-assessments in my comments on
their papers, as if we're holding an ongoing conversation about their
own particular writing through the semester. I also learn what I need to
be addressing from their comments.
Example of Categorizing Grid
Sample provided courtesy
of Robert Mitchell (Biology).
[Grid excerpt: students matched each artery, such as the arch of the
aorta, with the organ or region it supplies; the full grid is not
reproduced here.]