Florida, we have a problem.
Shots were fired over a year ago on this topic. On the one side was Dean Allard of Brooklyn Law School, leading the charge: "the bar exam is too hard! It's unfair!" On the other side was the NCBE: "current students are 'less able'! Buckle down!" If anyone should be on the side of mathematical analytics, it would be Bloomberg.
As is often the case in many things, the truth is likely somewhere in the middle. Given my own participation in the debate over the years and my own leanings, it is safe to say I am still in the scamblog camp. That is, I believe the charges of bar exam difficulty are heavily weighted in the "hey, this affects my livelihood, so stop looking at the data so hard" direction, not the more politically palatable "diversity is super-important, how dare you try to interfere with that using an arbitrary test" direction. When the Law School Cartel has gone from laughingly dismissing the scambloggers to calling them demons over the years, especially in the face of falling bar passage rates, I get a bit cynical. Then again, I am a product of the self-same system, so the Cartel reaps what the Cartel sows, I guess.
That said, let's talk about the major rejoinders from the Cartel, and what I find to be a fascinating debate over at the Faculty Lounge. Per my reading, they fall into three basic categories:
-more below the fold-
(1) Bar passage rates are a function of two variables: student ability AND test difficulty. Scambloggers say the problem is the former; we, the learned academics, say it is the latter.
(2) The bar exam does not properly measure or reflect the aptitudes necessary to be a successful lawyer.
(3) Various State Bars have been hiking difficulty, intentionally or unintentionally; either way, the exam has become harder to pass.
As for the first argument, yes, it is safe to assert that more than one variable can drive a calculated result. In fact, if we could quantify them reliably and accurately, I would bet there are several variables in play, not just two. The mere existence of a variable does not mean that the variable is definitively in play, however. Kyle McEntee at LST, David Frakt, Paul Campos, Matt Leichter, the NCBE, various scambloggers, and many, many others have discussed and provided evidence of declining test scores and aptitude scores. Merely stating "well, the test could also be harder" raises a possibility, but it is not proof. The current evidence (test scores, performance metrics) bears statistical and historical weight and is valuable for that reason alone. If, for example, raw MBE scores are not an "accurate" measure, what other measurement does one propose to use to draw inferences from the past to the present?
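To make the two-variable point concrete, here is a minimal sketch (hypothetical numbers, assuming a normally distributed cohort for illustration only): a drop in the pass rate looks identical whether the cohort's mean ability falls or the cut score rises, which is exactly why the pass rate alone settles nothing and independent evidence like score trends matters.

```python
from statistics import NormalDist

def pass_rate(mean_ability: float, cut_score: float, sd: float = 15.0) -> float:
    """Fraction of a normally distributed cohort scoring at or above the cut."""
    return 1.0 - NormalDist(mean_ability, sd).cdf(cut_score)

# All figures below are made up for illustration, not real MBE data.
baseline      = pass_rate(mean_ability=145.0, cut_score=135.0)
weaker_cohort = pass_rate(mean_ability=141.0, cut_score=135.0)  # ability falls
harder_cut    = pass_rate(mean_ability=145.0, cut_score=139.0)  # cut score rises

# The two explanations produce the same depressed pass rate; the rate
# by itself cannot distinguish them.
print(round(baseline, 3), round(weaker_cohort, 3), round(harder_cut, 3))
```

The point of the sketch is not the numbers but the symmetry: from the output alone, "students got weaker" and "the test got harder" are indistinguishable, so one has to look at upstream measurements to break the tie.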
As for the second argument, yes, there could be better items to test. Deborah Merritt, whom I consider a reasonably staunch law school reformer and general friend to the scamblog camp (though she may disagree), itemizes several ideas that could be tested:
1. Ask multiple choice questions that are a realistic measure of what every lawyer needs to know by heart before beginning practice...
2. Alternatively, maintain the current level of detail but make the MBE open book...
3. Replace essays with more MPT exercises...
4. Find a way to test for basic skills in interviewing, communicating with clients, negotiating, and other skills that everyone agrees are essential for a lawyer to have...
My only issue with these is that, while they may well be valid considerations, they are not exactly "mathematical" nor tied to objective measurements. Again, the charge from the Cartel is that scores are down due to difficulty, and that the subsequent statistical analysis of cut scores, equivalency mappings, harsh subjective grading, and so on generates a false negative that wrongly impugns the Cartel and student performance. A worthwhile debate, but a distraction from the overall charge. Testing different metrics does not directly address what the issue is with the current declining metrics, metrics that have been in place for decades.
As for the third charge, possibly State Bars have indeed made passing requirements more strict. Note that this is not the fault of the MBE, for example, but of decisions made by various State Boards of Bar Examiners. If Law School X believes that State Y's bar exam evaluation is unfair, that has little to do with the "math" of the bar exam. Maybe it has to do with essays, or MPTs, or the weight items carry, or something else. Perhaps Law School X, and others similarly situated, should dig in and have some real discussion with their State Bar concerning how it is interpreting the results, how it is determining passage rates, and the like. Maybe Law School X would be surprised by the results, or perhaps it would be "vindicated" and State Y would change its ways. Again, this is not necessarily "math" as far as the bar exam proper is concerned.
In any event, read the comments for yourselves, as the voices there articulate the issues better than I can. See what persuades you, personally.
In closing, one can always posit a theory. Demonstrating the validity of the theory requires evidence and supported inductive reasoning, which in my opinion is sorely lacking on the part of the Cartel defenders as of this date. Especially where, as here, pocketbooks are ultimately concerned.