Week X3: Numeracy, Assessing Process

  • Comment on our discussion on numeracy in general and our discussion on the relationship between numeracy and mathematics in particular (and so on regarding numeracy tasks).

Okay, so Prof. Liljedahl has a very particular idea of what numeracy is and what it isn’t. I’m convinced, but the challenge will be whether the rest of the world chooses to understand the term this way.

What numeracy isn’t: being able to add, subtract, multiply and divide, or knowing your multiplication tables.

What numeracy is: stepping up with whatever mathematical tools you’ve got and getting the job done.

Pithy, but requires unpacking.

The ‘job’ in this case would be any kind of messy situation where math may be useful. Like, are you willing and able to take this weird scheduling situation and apply some mathematical reasoning to it? Juggle the numbers around and make sense of it, even when there isn’t an easy “right” answer?

Peter has a set of “numeracy tasks” available on his website, and they all tend to have a few things in common:

  1. Low floor, high ceiling.
  2. HUGE degrees of freedom.
  3. Fixed point – an “obstacle”.
  4. Intentional ambiguity.

The obstacle helps rein in the huge degrees of freedom by giving a creative constraint. The ambiguity and the degrees of freedom will, in some problems, draw out questions of personal choice and value judgments into the solutions. For example, we worked on one problem to do with how fundraising dollars were split among students who had done uneven amounts of fundraising, and whose ski trips would cost varying amounts. Questions of individual responsibility, offsetting the expenses for the underprivileged or those new to the sport, and generally trying to make sure no one would be left feeling taken advantage of were all themes that came up in our solutions. Just by asking us to sort out how to split some numbers fairly!
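
To make that concrete, here’s a tiny sketch (in Python, with completely made-up names and dollar amounts – I don’t have the actual figures from the task in front of me) of three defensible ways to split the same fundraising pot. The point isn’t the code; it’s that each scheme encodes a different answer to “what’s fair?”

```python
# Toy version of the fundraising-split task. Names and numbers are invented;
# each "fair" scheme below encodes a different value judgment.

students = {
    # name: (dollars personally fundraised, cost of their ski trip)
    "Avery": (120.0, 300.0),
    "Blake": (60.0, 300.0),
    "Casey": (20.0, 450.0),  # new to the sport, so a pricier trip
}

pot = sum(raised for raised, _ in students.values())  # total dollars to divide


def keep_what_you_raised():
    """Individual responsibility: your share is exactly what you brought in."""
    return {name: raised for name, (raised, _) in students.items()}


def split_equally():
    """Everyone gets the same slice of the pot, regardless of effort or cost."""
    share = pot / len(students)
    return {name: share for name in students}


def split_by_trip_cost():
    """Need-based: shares are proportional to how much each trip costs."""
    total_cost = sum(cost for _, cost in students.values())
    return {name: pot * cost / total_cost for name, (_, cost) in students.items()}


if __name__ == "__main__":
    for scheme in (keep_what_you_raised, split_equally, split_by_trip_cost):
        shares = scheme()
        print(scheme.__name__)
        for name, (raised, cost) in students.items():
            share = shares[name]
            print(f"  {name}: ${share:6.2f} toward a ${cost:.0f} trip "
                  f"({100 * share / cost:4.1f}% covered)")
```

None of these is the “right” answer, which is exactly the point – arguing for one of them (or some hybrid) is where the questions of responsibility and fairness come from.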

So, okay. Back to the definition of “numeracy”. This isn’t how everyone uses the word. Often ‘innumeracy’ is associated with poor estimation skills, inability to mentally process differences in large numbers, or simply being unable to multiply. Peter’s take on those cases is that they’re not about numeracy, they’re about “number-acy”. Which, okay, also probably not a word that everyone else will buy into, but it gets the point across. Being numerate is more than arithmetic in the same way that being literate is about more than spelling or grammar quizzes. You have to be willing to engage with the world through the medium of language to be literate. Likewise, you have to be willing to engage with the world through the medium of, and with the toolset of, mathematics to be numerate. If you aren’t willing and able to grapple with life using the mathematical tools you’ve got, that’s akin to being unable to handle situations in life that require reading.

The tools we’re talking about are important here as well. This isn’t supposed to be about grappling with applied problems involving the latest mathematical idea you’ve just recently learned. This is about pulling out the tried-and-true skills you probably learned a couple of years ago.

My own experience with numeracy tasks in my classes has been pretty good, although I feel like everyone should beware of the design-a-school task just a bit. (Don’t start with that one! Just don’t.) It’s important to know what you’re looking for with these tasks, and they tend to butt up against your assessment framework if you don’t already have a space for assessing process (or mathematical competencies and values in general), since you’re not meant to find freshly learned content in students’ work here!

We also talked that same day about a 2×2 grid of assessment possibilities. The axes were (in my words):

  • Process / Product
  • Non-invasive / Disrupts flow

So for example, a traditional test would assess the Product of learning, and totally disrupts flow. I mean, okay, some of us freaks manage to achieve flow mid-exam and get a kick out of it in the end, but for the most part tests are isolating, jarring experiences that do not tend to be part of a smooth, mathematical, flow-experiencing, group-kicking-butt problem-solving session.

What else can we do? Well, we looked at a number of examples: having students submit separate rough and good copies of a solution; using variations on the placemat idea; having students tell the narrative of how they solved a problem; or creating a comic (with panels dedicated to the math) on how they solved the problem, showing the beginning, the “Aha!” moment, and the solution.

All of these manage to live in some other area of the graph by either including (or focusing on) the process of how a problem was solved, and/or by being created after the fact of a problem-solving session. In some cases they still do an effective job of assessing whether someone has individual understanding of a problem, so they can be used to assess understanding in a similar way to a quiz or test. In other cases, what you’re assessing is either more than or orthogonal to the content – you want to know whether they are willing and able to engage with a problem they don’t have a clear solution to immediately, and how they do so.

Whew, okay, this is long.

Here are some more course-requirement bits tacked in at the end!

  • Comment on the debate last week. How did that activity help you to engage with the content of the book?

Okay, so the book in question was *Experiencing School Mathematics* by Jo Boaler, in which she describes her PhD research: a multi-year study tracking two groups of math students with similar backgrounds who ended up in very different school environments.

For this course, we had an essay response due on the same day as the in-class debate. The essay was centered on how we read ourselves “into” the book as we studied it – where did we see reflections of our past, present or future teaching practice? The personal spin on the essay worked well in my case. The debate in class was on “Phoenix Park vs. Amber Hill”, the two pseudonymous schools from the study.

Planning for the in-class debate was challenging; some of our group members were gleefully competitive, but there was a lot to talk about and we had to learn how to organize ourselves for something of this nature. (We were given the debate structure only about 45 minutes before the debate began.) My biggest takeaway, other than learning a little more about what gets my classmates riled up about protecting students (ahem, Louise), was seeing other people’s perspectives on the book and how they compared to my own. The narratives we took away about the teachers highlighted in the research, for example, were drastically different at times (e.g. seeing Phoenix Park’s math teachers as ‘winging it’ vs. understanding them as having made very thoughtful plans designed to be flexible).

  • What are your thoughts on my comments from class:
    • “You have to evaluate what you value.”
    • “Evaluation is a double-edged sword. When we evaluate our students, they evaluate us. What we choose to evaluate tells them what we value.”

My thoughts are … yeeeeeeah I agree with this and it cuts me deep. My values have been hard for students to read this year, and that’s a problem.

It’s also encouraging, though. When evaluation and assessment are just Things You Have To Do, it’s easy to lose motivation. But when it’s viewed as part of the communication process, when it’s a way to tell students what really matters to you about their math education, suddenly it’s a lot easier to step up to bat and get it done.

  • Experiment with co-constructing and using a rubric to help improve a behaviour.

I gave this a try once before the break, and it’s in my plans for next week for sure. Our co-constructed rubrics were a good way to get students to reflect on what’s important, to make sure they’re being evaluated using student-centered language, and to let them know what matters to me as their teacher.

I tried this out with a ‘Curricular Competency’ rather than a behaviour; it worked okay for that but I’m going to push it a little more into classroom values and have us build a rubric on Collaboration.

Truth be told, I put off reading this one because it felt like preaching to the choir. I’d love to speculate a little further than the article went, though: *why* are we so disposed to thinking that our assessments are more objective than, say, an essay grade? Because when assessments only value computation, “full marks” is easy to measure – you got the correct answer. As the article says, the question of what the assessment is actually measuring, and what that measure means, is well worth considering.

Aaaaaaand now my mind is spinning on big assessment questions and I’m just going to move on.

 

