“Let me tell you about what I think she’s really learning…”
As a young teacher of adolescents in Montessori environments, I found student assessment, summary reporting, and parent conferences discouraging. I couldn’t find a way to describe and legitimize the kinds of growth I was most excited about seeing in my students.
I believed that absolutely any academic content could offer the potential for personal growth, but that the quality of the student’s interaction with the content was decisive, and that the resulting growth might not be academic. I felt certain that, by reducing our reports to quantifiable data points (content covered, follow-ups completed, and concepts apparently mastered), we were describing the vehicle rather than the students’ journey.
As a teacher, I always used schools’ standard report cards as expected, but I also always found myself apologizing during parent conferences:
> “What’s not clear from this report card is that we’re really teaching the whole child.”
> “I want you to understand that there’s much more important growth happening than what this report suggests.”
> “Actually, what you see on this report card belies a few significant concerns I want to address with you.”
Later in my career, when I was hired to lead a team of adolescent educators as an embedded Program Director, I worked with other faculty to enhance the quality of our reporting. In particular, I encouraged an amplified commitment to the narrative sections of our report cards, deliberately endowing qualitative data with greater institutional value.
It did not take long for us to feel that our reporting was gaining integrity.
Over time, though, the lack of consistency was consistently disappointing – not only across the faculty, but also across successive reports written by any given teacher. Despite rich observations, thoughtful individual instruction, and purposeful, well-designed, and highly personalized support processes, there was little rigor or coherence (and therefore, little credibility) in our reporting narratives. One teacher’s effusive commentaries would make another teacher’s less verbose reports appear to be lacking in effort. There were few threads of consistency to be found across the student population in the adolescent program, and none across the school’s Pre-K – 12th grade age span.
In fact, there was no real clarity about what we were observing for, or why. The entire qualitative reporting structure was built on the intuition of a handful of talented and well-meaning adults. There was little explicit agreement among us about the attributes we were cultivating, and nobody on the outside could tell how we spelled success. We had no idea whether we were trying to nurture the same traits in all the young people, or focus entirely on the individual. We had no explicit understanding of whether we were focused on filling gaps, “correcting weaknesses,” or fostering growth in areas of strength.
Though I had been away from the classroom for over a decade to adopt and raise two children (between my first teaching experiences and this leadership role), I had never stopped thinking about how parents and teachers work together to recognize and encourage growth. When my own kids were old enough, I toured and observed far and wide for a school whose values I could fully embrace. In addition to observing physical environments and classroom interactions, I found myself increasingly drawn to schools’ reporting tools.
Far beyond what I’d expected, school report cards offered insights into the fundamental values of the learning communities I was comparing.
I felt like I’d found the place where the rubber meets the road: the school’s explicit assertions, for parents, about what had been accomplished while students were at school. Studying school report cards became a hobby for me.
I had my first “Assessment Aha!” moment in 2009, when I got my hands on a multi-page rubric developed by Pat Baker, one of the founders of The Bixby School – a private, progressive Pre-K through 5th grade learning community in Boulder, Colorado. I was so excited by The Bixby School’s assessment rubric (along with other aspects of the program) that I immediately registered Kofi to attend 1st grade.
(Ironically, by the time I stumbled on the rubric, two or three years into its use, the Bixby faculty had come to find it cumbersome, and parents were pushing for quantitative reporting consistent with broader trends in education. The school discontinued use of the rubric after the following school year.)
I, on the other hand, was hooked on key elements of the format. Specifically, I was enamored with the idea of developing a collection of non-academic attributes, grouped into several domains, each characterized by a narrative continuum of behavioral descriptors. I set about adapting and modifying the instrument right away, and had been tinkering with it for several years before I found myself back in an institutional setting where I had the authority to pilot its use.
While I immediately envisioned possibilities for creating rigor and consistency in observation practices across entire school communities, at no point did I hope to arrive at the tool to end all tools. I wanted always to protect the flexibility to customize, individualize, and create local, contextual relevance.
I began to consult Maria Montessori’s writing for guidance in defining dimensions of observable growth, and a set of attributes representative of each dimension, whose acquisition could be behaviorally “mapped.”
My intention, from the outset, was to develop an accessible and flexible rubric through a recursive cycle of deliberate implementation, reflection, and modification.