## Part I: Setting a Standard for Rigor

*have* them.

In K-12 education, we’ve had state-level academic standards
since at least the mid-1990s, and now we have national standards for
mathematics and literacy. We’re in the midst of implementing those standards
right now, and feelings about them are…what shall we say? Mixed? Contradictory?
Occasionally passionate and occasionally ambivalent?

Passionate ambivalence shouldn’t be all that surprising. As
I said, we all have our own sets of standards governing different aspects of
our lives, and sometimes we find that our standards aren’t the same as other
people’s. Whether it’s the state department of education or the authors of the
Common Core, we don’t like it when someone comes into our classroom and says,
“You don’t tell me what’s good; I tell you.”

One problem with implementing new learning standards is that
so many of the things calling themselves “standards” aren’t standards at all;
they’re just to-do lists. Teach *this* to your 3rd graders. Make sure 4th
graders do *that*. This is curriculum mapping, not a set of standards against
which we can measure and understand student learning.

We are told that the Common Core is “upping the rigor” on
our teachers and on our students, but the standards alone can’t get us to that
Promised Land, because the standards alone don’t tell us what rigor looks like.

*We* need to decide that. It’s teachers and principals, working at the school
level, who have to *use* the standards to define what constitutes rigorous
work, and then create a culture that advocates and enables that kind of work
across all subject areas.

## What do we talk about when we talk about standards?

When we talk about standards, we are talking about two
different things: what a person is doing, and how the person is doing it. We talk
about content (the What) and we talk about form. But as it turns out, form has
two components to it. Bound up in form are process (the How) and performance,
or extent (the How Much).

As Grant Wiggins has pointed out in multiple blog posts, and as Paul
Bambrick-Santoyo has demonstrated in his excellent book, *Driven by Data*, our
learning standards do a good job of telling us *what* students are supposed to
do (the content), but they rarely define either piece of the “form” puzzle:
*how* they’re supposed to do it (the process) or the *extent* to which they
need to do it to prove proficiency (the performance).

Wiggins, as usual, uses sports to create an interesting and
insightful analogy. If I were judging an Olympic trial for high jumpers, I
would have in my mind a description of what a proficient high-jumper does. I
would compare each athlete against this mental standard. Did he execute the
right steps? Did he do them correctly, and in the right order? That’s the content.
Did he execute these steps with proper, accepted form? With beauty and grace? That
may feel a little qualitative and mushy to you, but we’ve seen a lot of
athletes in our time. We know what a high jump looks like when it’s done well.
We know what a beautiful dive looks like. Our TV announcers freeze the video
frame and show it to us again and again. Look—she sliced through the water
without creating so much as a splash. Perfect form. Nobody disagrees. So there
is clearly a standard in place.

Next, we have to deal with performance level or extent. If
you can execute a perfect and beautiful high jump when the bar is set at nine
feet, and I can execute an equally beautiful jump, but only at three feet, then
you are clearly and indisputably the better jumper. At some point, we set a
standard for what height a professional, top-rated high-jumper should be able
to clear.

Do our learning standards set this kind of standard for process
and performance? You decide. Here’s a sixth grade mathematics standard from the
Common Core:

**Find a percent of a quantity as a rate per 100 (e.g., 30% of a quantity means 30/100 times the quantity); solve problems involving finding the whole, given a part and the percent.**

**CCSS Math Content Standard 6.RP.A.3c**

Now, borrowing an example straight from Bambrick-Santoyo’s
book, let’s look at some questions “aligned” to this standard. Which
question(s) would help us determine whether a student was able to “meet the standard”?

**1. Identify 50% of 20**

**2. Identify 67% of 81**

**3. Shawn got 7 correct answers out of 10 possible answers on his science test. What percent of questions did he get correct?**

**4. J.J. Redick was on pace to set an NCAA record in career free-throw percentage. Leading into the NCAA tournament in 2004, he made 97 of 104 free-throw attempts. In the first tournament game, Redick missed his first five free throws. How far did his percentage drop from before the tournament game to right after missing those free throws?**
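To see how much more Question 4 demands than a one-step computation, here is its arithmetic worked out (an illustrative sketch; the numbers come from the problem itself):

```python
# Question 4, worked out step by step (illustrative sketch).
# Before the tournament: 97 makes in 104 attempts.
makes = 97
attempts_before = 104
pct_before = 100 * makes / attempts_before   # about 93.3%

# Five straight misses add five attempts but no makes.
attempts_after = attempts_before + 5
pct_after = 100 * makes / attempts_after     # about 89.0%

drop = pct_before - pct_after
print(round(drop, 1))  # prints 4.3 (percentage points)
```

The point is not the answer itself but the modeling step: a student has to see that misses change the denominator without changing the numerator before any percent calculation applies, which is exactly the kind of reasoning Questions 1 and 2 never ask for.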

The answer is: we don’t know! The content standard doesn’t make clear to what
*extent* a student needs to be able to apply his knowledge, as a sixth grader,
or even what form (simple equation vs. complex word problem) we’re looking for.
This is because, as it clearly states, it is just a *content* standard.
Aligning textbooks and curriculum maps to these standards alone is no guarantee
that anyone in your school is working at the same level of rigor, much less the
desired or required level.

## How can we set performance standards?

If we want to define more clearly what performance standards the Common Core
is looking for, or what the world of college and careers requires, we need to
work together. If every teacher in the school sets her own
standard for excellence, for rigor, for “CCSS-ness,” there is no standard at
all. One teacher may be aiming for
Question 1 from the example above, while another teacher aims for Question 4. We
need commonly understood and accepted exemplars or anchors that tie performance
to content.

Fortunately, we are not working in a vacuum. There is a lot of useful material
*surrounding* the Common Core State Standards. We just need to make sure we’re
aware of it, and know how to use it.

**Literacy**

The authors of the literacy standards have provided a lot of
exemplar material to help us understand proficiency at different grade levels. Appendix B to the
standards provides text exemplars by grade band, listing appropriate literary
and informational texts, and also providing excerpts that teachers can use in
class to practice close, analytical reading. The exemplars even distinguish
texts that are appropriate at the independent reading level from texts that
should be read aloud. Personally, I find these exemplar texts far more useful
than the three-part equation for calculating text complexity. When I look at
the excerpts, I get a real, visceral feeling for what rigorous means—and I can
go out to hunt for other texts that measure up.

Appendix B also includes sample “performance tasks” at
different grade levels. We know that the complexity of a particular text
depends, in part, on what you ask students to do with that text. Here, the
authors give us some rich examples of what textual analysis should look like at
different grade levels.

Finally, Appendix C
provides us with exemplars of student writing at different grade levels, with
explanatory annotations to show us what makes the sample a true exemplar of
good work.

The exemplars, tasks, and writing samples give us a real standard of
performance against which we can measure not only our students’ work, but also
our *own* work, as educators. Are we assigning similarly rich and rigorous
texts? Are we asking students to work at high levels of complexity with these
texts, performing tasks of analysis, evaluation, and synthesis? When we ask
students to write, are we seeing work similar to what we see in Appendix C? Are
we asking for, and looking for, the same kinds of things that the annotations
point out? If we are teachers, these are questions we should be asking in our
PLC or department meetings. If we are school leaders, these are things we
should be looking for during observations and walkthroughs. Not to punish each
other, but to help each other as we define a new standard of work and find
ways, as a team, to reach the standard we have set.

**Mathematics**

The Common Core mathematics standards do not include tasks,
exemplars, or other performance-related anchors, which is a problem, as we saw
above. However, there are two places where we can learn about form—not just the
steps of the high-jump, but what it should look like, and how high it should
reach.

First, there are the Standards for Mathematical Practice: the eight standards
that define *how* math should be used, regardless of grade level or specific
content. These standards speak to things like abstract reasoning, constructing
and defending arguments, working with precision, and making use of structure.
In broad terms, the eight statements set a standard against which we can
measure current practice and current materials. For example, it’s pretty clear
that Question 1 from the Bambrick-Santoyo example above does not ask students
to do any abstract reasoning or complex problem-solving. Even Question 2 is
fairly basic, asking for nothing beyond computation. But which of the other two
questions, the two word problems, is set at the right level of rigor for sixth
grade? That, we can’t really determine from the practice standards.

However, there is another place we can look for examples. The
two major testing consortia have worked hard to define grade-level performance and
show us what it looks like. You can find sample test items from PARCC (the
Partnership for Assessment of Readiness for College and Careers) and from the
SMARTER Balanced consortium on their respective websites.
Both sets of sample items make issues of form (process and performance) pretty
concrete, and both sets do a nice job of depicting, in a visceral and
inescapable way, what the combination of math content and math practice looks
like. They can be enormously helpful when having discussions at department or
PLC meetings about rigor and the Common Core. Are we asking students to *use*
their math in the ways these test items do? If not, what’s the difference? What
aren’t we doing? Is the difference simply in content difficulty, or is there
something in the application that we’re missing? Perhaps we’re providing too
much explanation and scaffolding. Perhaps we’re relying too much on classic
equations and aren’t asking students to *find* problems themselves.

Whatever your subject area, and wherever you look for
examples and exemplars, it’s clear that we cannot be efficient and effective at
moving students towards higher levels of rigor unless we work together to set clear
standards for what rigorous work looks like—standards that everyone can see,
understand, and use to measure student performance. If anyone watching TV can
know enough to judge a high jump or a high dive, why can’t anyone in our school
know enough to judge the quality of student work hanging on a bulletin board?