Lessons Learned on the Challenge of Diagnosing What Students Know

Can We Bring the Magic of Pandora to Personalize Learning for Students?

When I log on to Pandora, the Internet radio service knows to build a playlist of songs with basic rock structures, subtle vocal harmonies, major key tonality, vocal-centric aesthetics and mixed acoustic and electric instrumentation. Over the years, the playlists Pandora recommends for me have gotten extremely good, offering me a happy mix of my lifelong favorite artists and new bands that have stretched my musical interests. The reason I rarely skip a song Pandora suggests today is that the hundreds of inputs I’ve made (thumbs up, thumbs down, add artist, add song) enable an accurate diagnosis of the music I love.

Similarly, personalized learning is more effective with an accurate diagnostic assessment of the skills and concepts students grasp. Yet developing assessment tools that give educators an exact picture of a student’s level of understanding across hundreds of skills is difficult. Educators must first specify exactly what each student is expected to know. (To do this, New Classrooms developed a skill map, which you can learn more about here and here.) Once that universe of student knowledge is established, they still need accurate and informative ways of measuring it. At New Classrooms, we’ve spent thousands of work hours developing a dynamic, continuous system for assessing what students understand so we can tailor what, when, where, and how they learn.

The Quest for a Great Diagnostic Tool

In the traditional classroom, the tools available to teachers to understand their students’ starting points are crude and often reflect outdated information. Teachers might look at students’ performance on previous standardized tests, but those results are at least four months old and don’t account for summer learning loss. They might turn to a student’s grades in a particular subject from previous years, but grades say nothing about knowledge at the level of individual skills. Alternatively, some teachers will prepare their own assessment tools at the beginning of the school year. But short of having students spend weeks of class time on testing, these pre-assessments still provide only limited information about students’ needs on specific skills and concepts.

Over the last five years, we’ve discussed and designed many approaches to diagnostic assessments. For the School of One summer pilot, we made decisions about what skills a student should study first by poring over available student data from the NYC Department of Education. Most of the available data were outdated, and we realized we needed to assess students on the skills in our own scope and sequence rather than rely on information from external sources.

In 2010, we began using static paper-based tests at the start of the program year. Our team spent hours manually aligning questions to skills on our skill map, and once students had taken the tests, we had thousands of answers to analyze in order to determine each student’s personalized learning plan. This experience confirmed the importance of diagnostics that are specific and aligned to our skill map.

Later still, we introduced a two-tiered assessment tool that assigned every student two of three tests. Students started with the middle test and, based on their results, chose the next test they were supposed to take, either an easier or a more challenging one. The results gave us better information, but the process was logistically challenging to coordinate, and many students ended up taking the wrong test. We knew we needed to continue our search for a better diagnostic tool.
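For readers who like to see the mechanics, here is a minimal sketch, in Python, of the routing rule the two-tiered tool implied. The function name, the 0–1 score scale, and the cutoffs are my own illustrative assumptions, not our actual implementation; in practice, students performed this routing themselves, which is where the wrong-test problem crept in.

```python
# Illustrative sketch only: the routing the two-tiered assessment implied.
# Cutoffs and the 0-1 score scale are assumptions, not actual values.

def next_test(middle_test_score: float,
              easier_cutoff: float = 0.4,
              harder_cutoff: float = 0.7) -> str:
    """Pick the second test a student takes after the middle test."""
    if middle_test_score < easier_cutoff:
        return "easier"
    if middle_test_score >= harder_cutoff:
        return "harder"
    # Middling scores could reasonably go either way; a real system needs
    # an explicit rule here, whereas we left the choice to students.
    return "harder"
```

Automating a branch like this, rather than leaving it to students, is one obvious way to avoid the coordination problem we ran into.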

Building on these early experiments, we’ve developed a dynamic, ongoing system to refine our understanding of what students in Teach to One know. At the beginning of each year, we now administer two diagnostic assessments. First, students take the Measures of Academic Progress (MAP), developed by NWEA, which serves both as an assessment of the growth a student has made throughout the school year and as a broad-stroke diagnostic tool. The MAP is online and adaptive, so we get results quickly, along with a high-level analysis of each student’s strength in particular mathematical strands, such as Geometry, Algebra, or Numeracy. To complement this tool, students in our model also take an internally developed assessment that targets key pre-grade-level skills students are expected to have learned, so we can identify gaps in their knowledge.

An Iterative Approach to Diagnostic Assessments

Using these two assessment tools at the beginning of the year, we build a unique skill library for each student, which contains all the skills we expect them to be able to learn in the school year.
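As a concrete illustration of what that library-building step might look like, here is a short Python sketch. The data structures, status labels, and the threshold-style inference rule are hypothetical stand-ins, not our actual logic.

```python
# Illustrative sketch only: combining two diagnostic signals into a
# per-student skill library. Field names and rules are assumptions.

from dataclasses import dataclass, field

@dataclass
class Skill:
    id: str
    strand: str        # e.g., "Geometry", "Algebra", "Numeracy"
    difficulty: float  # assumed difficulty on the same scale as strand scores

@dataclass
class SkillLibrary:
    student_id: str
    skills: dict[str, str] = field(default_factory=dict)  # skill_id -> status

def build_skill_library(student_id: str,
                        map_strand_scores: dict[str, float],
                        internal_results: dict[str, bool],
                        skill_map: list[Skill]) -> SkillLibrary:
    """Seed a per-student skill library from the two diagnostics."""
    library = SkillLibrary(student_id)
    for skill in skill_map:
        if skill.id in internal_results:
            # Direct evidence: the internal diagnostic tested this skill.
            status = "passed" if internal_results[skill.id] else "to_learn"
        else:
            # Inference: no direct evidence, so fall back on the MAP strand
            # score. These are the inferences that may later need refining.
            strand_score = map_strand_scores.get(skill.strand, 0.0)
            status = ("inferred_known" if strand_score >= skill.difficulty
                      else "to_learn")
        library.skills[skill.id] = status
    return library
```

The key design point the sketch captures is the split between skills we have tested directly and skills whose status we have only inferred; the latter are the ones the rest of the model works to confirm or correct.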

But, as any educator will tell you, tests, no matter how adaptive, are static snapshots of student understanding. A single assessment at the beginning of the school year might suggest a student doesn’t know a skill she actually knows, or vice versa. And because it’s impossible (not to mention undesirable) to test students on every skill they are expected to know, we must make inferences from the data the assessments collect. If those inferences are incorrect, they limit the effectiveness of personalized learning plans. While our skill library builder has grown increasingly sophisticated, some of the inferences we make about a student’s knowledge may still need to be refined. So we look for opportunities every day to update our understanding of which skills belong in a student’s skill library, and we adjust each student’s library periodically throughout the school year (e.g., at the end of marking periods).

Based on a student’s performance on daily Exit Slips, the 4-6 question assessments on the skills they study each day, we may question our initial inference that they lack knowledge of a particular skill and recommend a “Try It,” a short quiz on that skill. If they pass, the skill is marked as passed and their skill library is updated accordingly. Additionally, like Pandora’s thumbs up, we give students the opportunity to weigh in on what they know: if a student sees a skill in their library that they believe they already know, they can request a “Prove It,” a short quiz on that particular skill. A sketch of this refinement loop in code appears below.
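Continuing the hypothetical sketch above, with the skill library reduced to a plain dictionary of statuses, the daily loop might look something like this. The 80% Exit Slip threshold is an invented example, not our real rule.

```python
# Illustrative sketch only: Exit Slip review can trigger a "Try It", and
# students can request a "Prove It". The 0.8 threshold is an assumption.

def review_exit_slip(skill_library: dict[str, str], skill_id: str,
                     correct: int, total: int) -> None:
    """Flag a Try It when an Exit Slip contradicts a 'to_learn' inference."""
    if skill_library.get(skill_id) == "to_learn" and correct / total >= 0.8:
        skill_library[skill_id] = "try_it_recommended"

def record_quiz_result(skill_library: dict[str, str], skill_id: str,
                       passed: bool) -> None:
    """Apply the outcome of a Try It or a student-requested Prove It."""
    skill_library[skill_id] = "passed" if passed else "to_learn"
```

Both the system-initiated Try It and the student-initiated Prove It funnel through the same update, which is what keeps the skill library an evolving record rather than a one-time snapshot.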

While I can leave my Pandora running for hours at a time without touching it, personalized learning can’t run on autopilot. On our quest for the best diagnostic, we’ve learned that there is no single perfect assessment; learning is a dynamic and complex process that requires educators to constantly update and refresh our understanding of student progress and knowledge. That’s why we’ve built diagnostics into our model as a key foundation of personalized learning. And it’s why we continue our quest to get as close to a perfect process as we can.

Susan Fine

Susan Fine, Ph.D. is the Chief Academic Officer at New Classrooms.