This is the second of a two-part blog post (Part I is here) about ABA Accreditation Standard 314, which requires that law schools use both summative and formative assessment methods. The two posts, taken together, attempt to answer the question of whether multiple-choice tests can be formative assessment. In short, in my view there is no bright-line test, but the answer is generally no – at least not in the way multiple-choice tests are most commonly used. However, a multiple-choice test can do these sorts of "formative" things for a student:
A score on a multiple-choice test can give the student a sense of how they are doing in retaining substantive knowledge, and depending on the question and its form, perhaps even how they are doing in developing a skill (such as legal analysis and application).
If, when the score is received, a list of the correct answers is also provided, together with an explanation of why a certain answer is correct and why the others are wrong, this too can be formative of the student's learning. But at the course level, to be formative such tests have to be conducted during the course (such as a midterm) so the student has an opportunity to improve their understanding before the summative (final) exam.
If all that is being taught in the typical law school classroom is content, then perhaps several low-stakes multiple-choice tests, with correct-answer explanations provided afterwards, can fit that definition. And of course, if faculty are using such tests to identify where students are struggling – in content and skills – so they can address problems immediately in the course, well, that's further evidence that the purpose of the test was formative, for the student and the teacher.
But, of course, that's not all that is taught in the typical law school classroom. The much harder thing to reveal in a multiple-choice test is the places where the "thinking and linking" is missing or off. (How many times have we heard students share the lament after a final exam on which they did poorly: "But I knew the material so well!") In fact, what I am suggesting is that multiple-choice questions are not good at this (particularly ones developed generically by vendors rather than by individual teachers for their own students); the great temptation is to think they are, and thereby skip over the intended benefits of formative assessment for learning and continuous improvement.
Here is an illustration of a multiple-choice test that is not formative: take a hypothetical law professor who teaches a certain standard 1L course. That professor gives a multiple-choice midterm in the course, and provides only scores to the students. Further, the content in the midterm is no longer ever addressed or used the rest of the semester, including on the final exam. So this "midterm" is really a summative exam for a portion of the course, and it has very little formative value for the student.
The term "formative assessment" refers to something deeper and more individualized – in its best form – than the sorts of things that multiple-choice questions can do for students. Its most important and effective use is in providing qualitative feedback – as opposed to just scores – that is focused on the details of the performance, with tailored guidance for each student on how to improve. It seems to me that all of the automated tools provided by vendors – "formative assessment tools" seem to be the current hot item in the law school publishing space – will inevitably fall short of the true meaning of the term "formative assessment."
Finally, as noted in Part I of this post, an important purpose of formative assessment is for the teacher, who uses evidence of student achievement to make adjustments to the course and methods of instruction, thus creating and contributing to a cycle of continuous improvement.
Here is an example from my own teaching of Administrative Law some years ago. I have long worried about the utility of the "Review Class" – typically the last class in the semester for a "casebook" course. What often happens in those classes is that a few "gunner" students monopolize the discussion, asking the professor fairly obscure questions that the professor does not actually intend to test on the final exam. (Am I right?) But it is hard for the professor to resist the temptation to address the questions being asked: it is ostensibly the point of the review class to address questions that students have, the professor does not really want to give away that the question is not on the test, and they might like answering the more obscure (and perhaps even creative) questions that their students offer (who doesn't?). Unfortunately, the rest of the class then spends much of the review class wondering some version of: "Oh crap – I have no idea what they are talking about. Is this going to be on the final?"
Instead of this approach, I spent the penultimate class conducting a no-credit multiple-choice test using clickers. The test had 50 questions, testing knowledge and application of the central aspects of Administrative Law that we had addressed in the class (and which I planned to test on the final). The software I used had a really cool feature – I could watch the students' progress as they took the test, so I could immediately see how fast they were moving through it, and which parts were slowing them down.
Then, at the "Review" class (the last class of the semester), I addressed myself to each question on the test, spending much less time on the ones most students got right, and more on the ones many of them struggled with. Each time, I did not just provide the correct answer, but explained why it was correct, and why the others were not, and then reviewed the administrative law principles that we had studied through the course that applied and related to those questions.
While this effort could have been more individualized, at least each student knew which questions they got right or wrong, so they could tailor the instruction to their own learning gaps as I went through the test and the topics being tested. There are better types of formative assessment, but this one, I think, qualifies as a formative multiple-choice test because it did the two fundamental things an assessment should do to be formative: 1) it provided each student with feedback for their own self-motivated improvement of their learning of the material before the summative exam, and 2) it tailored my teaching and summarizing of the subject (at least on that day) based directly on the results of the assessment.
I believe that when the ABA reviews law schools for compliance with the new Standard 314, it will be looking for evidence of such formative assessments, and multiple-choice exams alone – even those conducted in the middle of the semester – will not be sufficient.