In a lot of the commentary about declining LSATs and declining bar examination scores this past summer, there is, shall we say, a lack of the sort of rigor that normally attaches itself to peer-reviewed publications.
For recent examples of this, please see the work of Derek Muller (Pepperdine) and Jerry Organ (St. Thomas), both somewhat supporting the "damn you, MBE!" thesis advanced by Brooklyn Dean Nicholas Allard (more recent nonsense here).
Simple foundational statistics and healthy skepticism go a long way toward demolishing these ideas. For example, here's Organ:
[A] comparison of the LSAT profile of the Class of 2014 with the LSAT profile of the Class of 2013 would suggest that one could have anticipated a modest drop in the MBE Mean Scaled Score of perhaps .5 to 1.0. The modest decrease in the LSAT profile of the Class of 2014 when compared with the Class of 2013, by itself, does not explain the historic drop of 2.8 reported in the MBE Mean Scaled Score between July 2013 and July 2014
And here's Muller making a similar claim:
[W]e see a fairly significant correlation between my extremely rough approximation of a projected MBE score based on the LSAT scores of the matriculating classes, and the actual MBE scores, with one exception: this cycle.
Just one problem with all of this:
LSAT year-over-year comparisons are more or less baseless and have no predictive value by themselves.
About the LSAT
To learn why, we need to understand where LSAT scores come from, and to understand that they have little connection to objective reality. An LSAT score is derived from the raw number of questions one answers correctly. The administrators of the LSAT then "scale" the scores from 120 to 180 depending on the difficulty of the test, which is determined in advance using a metric based on prior recent LSAT administrations (LSAC uses what is called "Item Response Theory" to model individual performance on each question rather than treating all questions as equal, as your 7th-grade math teacher probably did). They "normalize" or "equate" the test to even things out across administrations. The process is not entirely transparent (if it is laid out clearly somewhere, please point it out and I will correct anything erroneous herein), but it is fairly clear that the median over several administrations sits around the 150 mark by design.
The idea is that students who take a "hard" LSAT should not be punished relative to students who take an "easier" LSAT, and therefore the former will have a more forgiving curve. The "curve" is set in advance because the questions are "pretested" by previous examination takers, so LSAC "knows" how hard that particular test is.
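To make the mechanics concrete, here is a toy sketch in Python of percentile-based equating. This is emphatically not LSAC's actual procedure (which relies on item response theory and pretested questions), and the pool size, question difficulty, and standard deviation below are made-up numbers; the point is only that the raw-to-scaled conversion is anchored to how a reference pool of recent test takers performed, not to any absolute standard.

```python
# Toy illustration only: "equating" a raw score against a reference pool of
# recent test takers. LSAC's real process uses item response theory and
# pretested questions; the shape of the idea is the same, though: your scaled
# score is anchored to how other people did, not to an absolute bar.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical raw scores (number correct out of ~100) from recent administrations.
reference_pool = rng.binomial(n=100, p=0.55, size=60_000)

def equate(raw_score, reference):
    """Map a raw score to the 120-180 scale by matching its percentile rank
    in the reference pool to a normal curve centered on 150 (sd of about 10)."""
    pct = np.clip((reference < raw_score).mean(), 0.001, 0.999)
    return round(float(np.clip(150 + 10 * norm.ppf(pct), 120, 180)))

for raw in (45, 55, 65, 75):
    print(raw, "->", equate(raw, reference_pool))
```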
It should be obvious that this approach makes what may be a serious mistake, and certainly rests on an assumption that undermines its extrinsic utility: it assumes that the students taking each administration of the LSAT are roughly equivalent in aptitude on a year-over-year, aggregate basis.
In statistical terms, they assume any given set of administrations is a fairly representative sample of a fairly constant population of pre-law students. This would seem to severely undercut any idea that the LSAT has any sort of non-relative value; after all, if item response theory evaluates prior responses to questions, and the prior students were either significantly brighter or significantly dumber than the current group taking the test, how can the test possibly have any year-over-year validity outside of a comparison relative to one's class?
A simple hypothetical:
In year 1, 60,000 separate students take the LSAT. The economy is especially brutal for straight-out-of-college hires, so a large cluster of elite students decides to try law school. The median IQ of the group is 115, and there is a spate of applicants from the Ivy League and comparable schools. An IQ of 105 would put someone at roughly the 25th percentile of this group.
In year 2, the economy is doing much better, the elite graduates have found something else to do, and the median IQ of the 60,000 LSAT takers is 105. In this group, a 115 IQ puts one in the 75th percentile.
Going strictly by the three-year percentile charts, in year 1 the student with the 115 IQ is going to score in the low 150s. In year 2, the 115 IQ is going to land in the high 150s, gaining 6 or 7 points just by sitting with a dumber group overall. In year 1, the 105 IQ scores in the mid-140s. In year 2, that same student shoots up into the low 150s. In the world of bar predictors, this same low scorer just greatly improved his predicted bar passage odds simply by sitting down a second time at a later date.
You can claim that certain tests will land above or below the percentile mean because the variance in item response theory and "equating" works itself out over many administrations, but when the overall mean is set at 150 and there's a nice bell curve around it, the conclusion is inescapable that a 150 in 2009 will not necessarily equal a 150 in 2014. A 150 in 2009 could be a 142 or a 163 in 2014, depending on who else has taken the test recently and who else is in the room.
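If you want to see that hypothetical play out numerically, here is a rough simulation, with IQ standing in (generously) for whatever the LSAT measures and every parameter assumed purely for illustration.

```python
# Rough simulation of the year 1 / year 2 hypothetical above. All numbers are
# assumed for illustration: each year's 60,000 takers are normed against
# themselves, and the scale is centered on 150 with an sd of about 10.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def scale_against_cohort(ability, cohort):
    """Percentile rank within the cohort, mapped onto the 120-180 scale."""
    pct = np.clip((cohort < ability).mean(), 0.001, 0.999)
    return round(float(np.clip(150 + 10 * norm.ppf(pct), 120, 180)))

year1 = rng.normal(115, 15, 60_000)   # brutal economy, unusually strong pool
year2 = rng.normal(105, 15, 60_000)   # better economy, weaker pool

for iq in (115, 105):
    print(f"IQ {iq}: year 1 scaled {scale_against_cohort(iq, year1)}, "
          f"year 2 scaled {scale_against_cohort(iq, year2)}")
```

Under these assumptions, the 115-IQ student gains roughly six or seven scaled points between year 1 and year 2, and the 105-IQ student gains about the same, without either of them getting any smarter.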
So:
- There is no inherent connection between IQ/reasoning ability/brainpower and one's LSAT score.
- The raw number of LSAT high scorers depends almost strictly on the number of people who take the LSAT within a given timeframe. A school's declining LSAT numbers are more an indication that there are fewer fish in the pool overall, and really nothing more. In an alternative universe, a school's LSAT percentiles could drop while its bar passage rate actually increased. LSAT scores are dropping almost everywhere outside the top rank of schools. This, by itself, should make us question any extrinsic value the test may have.
- No matter how dumb a large cohort is (say, a series of years where going to law school is a ridiculous idea), a certain percentage (2-3%) of students will inevitably score over 170, if only because the scaling will at some point self-correct for the "difficulty" of the test, and LSAC apparently aims for a 150 median over time. That does not mean those high scorers all have equal abilities when it comes to navigating law school and the bar examination, or subsequent law practice. In a few years, I expect to see "new associates aren't as sharp as they used to be" writings from law partners who have had their heads in the sand.
- There is no true transparency anywhere in this industry, not even on something as basic as an entrance examination based on what appear to be otherwise-sound mathematical models.
- It is entirely possible to have a four-year period where there is a steady decline in the quality of students taking the examination, but only a slight or modest decline in LSAT scores. This is basic math.
Historically, there may have been a vague correlation between LSAT scores and bar exam performance because classes were relatively stable in their distributions, and thus LSAT scores more or less mimicked a generalized measure of intelligence for relatively consistent sample populations of potential law students.
After 2008, and in the wake of the most recent bar results, those bets have to be called off. To put it bluntly, it's entirely possible the law schools slowly started enrolling collectively dumber students, and we really have no way of knowing that from LSAT medians.
It is absurd for any serious claim or inquiry about one cohort's abilities to be based on the LSAT when the LSAT has no real connection to real-world aptitude beyond providing a relative measurement against one's peers. It does not - and cannot - answer the question of whether one's peers are abnormally bright, normal, or abnormally dim.
Because of that fatal flaw, LSAT scores have no predictive value whatsoever when it comes to a slightly different population taking an unrelated test that has separate controls for year-over-year validity.
As a concluding point, here's a daily koan for you: why do Allard and friends not go back in time and ask about how the LSAT got scored?
Assuming Unknowns Are Constant
Another fundamental flaw in the analysis offered by Organ, Muller, and others is treating the 25th, 50th, and 75th percentile LSAT scores as capable of telling us anything about how a portion of that group will do on a subsequent examination, where the differences that matter occur at levels much finer than 25-percentage-point intervals (such as bar exam pass rates). The numbers between those guideposts can be highly variable, and the statistics manipulators are basically assuming some constancy or uniformity in those unknown numbers when drawing their conclusions.
Consider two law school entering classes of 10 students each.
Class A: 161, 161, 161, 156, 156, 156, 156, 154, 147, 140
Class B: 160, 160, 160, 155, 155, 155, 155, 153, 153, 153
Now, which group has the higher LSAT scores and will likely be ranked higher in the magazines? And which group would you bet on to have the higher bar passage rate, if all ten students take the bar?
Multiply this little exercise by hundreds and you can quickly see why not knowing what's at the tail end of the curve (or in the middle portions of the curve) is a huge problem. We (and that includes Muller, Organ, and their peers) have no way of gauging just how terrible the bottom-tier students are at these institutions, notwithstanding any issues with the LSAT itself. 130? 125?
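For what it's worth, you can run those two ten-student classes through the numbers yourself. (numpy's default percentile interpolation is not exactly the convention schools report, but the comparison comes out the same way.)

```python
# The two hypothetical ten-student entering classes from above. Class A posts
# the better 25th/50th/75th percentiles (the numbers that get reported), while
# Class B has the far stronger bottom of the class (the students most at risk
# of failing the bar).
import numpy as np

class_a = [161, 161, 161, 156, 156, 156, 156, 154, 147, 140]
class_b = [160, 160, 160, 155, 155, 155, 155, 153, 153, 153]

for name, scores in (("Class A", class_a), ("Class B", class_b)):
    p25, p50, p75 = np.percentile(scores, [25, 50, 75])
    print(f"{name}: 25th={p25}, median={p50}, 75th={p75}, lowest={min(scores)}")
```

Class A posts the higher number at every reported percentile, while Class B's weakest student sits 13 points above Class A's weakest. Only one of those facts shows up in the reported figures.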
What about the allowance for people who haven't even taken the LSAT?
There are, of course, other variables the MBE critics are ignoring. One is students who drop out or transfer (in or out). If students with median LSATs drop out because law school is a losing bet, the school's bar pass rate is more likely to drop than not. Similarly, if a year has abnormally high transfers, either an exodus from lower-ranked schools by high-scoring students or an influx of lower-scoring students at higher-ranked schools (both are possible), it's going to throw off any correlation between the LSAT and bar pass rates.
There are far too many variables that cannot be accounted for by law professor statistics.
Conclusion
The admissions-department heuristic that LSAT scores can predict future bar exam success rests on a statistical misunderstanding akin to the classic "correlation, not causation" mistake from Stats 101. Allard, Muller, Organ, et al. see historical correlation, assume causation, and then cry foul when a "predicted" result doesn't happen (and, in a surely-unrelated aside, when it hurts their institutions).
Out of all the measurements we have available, the bar exam is probably the most consistent at measuring raw aptitude on a year-over-year basis, given that its writers are aiming at a minimum-competence bar and not a "hey, let's get snowflake into law school" motivation.
The exam is likely not the problem; the problem is almost certainly the students these law schools are enrolling, and no manipulation of statistics or empty claims of MBE chicanery can alter that. There may be a problem with the test, but given that simpler explanations seem more likely and there is no reliable proof of an error, I don't think it's much of a credible thesis.
Ultimately, this is - yet again - number manipulation by the law schools and their friends, this time to support the idea that their open admissions policies should have as few repercussions as possible (it's basically a salvo in the coming battle the lowest-ranked schools may have with the ABA over bar passage numbers). As a concerned member of the bar, I oppose their efforts, and I oppose any effort to make the bar exam essentially match percentages with the LSAT, as all that does is ensure that a set percentage of each class WILL pass the bar no matter how dumb the cohort (or three-year cohort, or whatever). We can talk about the bar exam's utility elsewhere, but if we're going to have it, it needs to mean something beyond what the law school deans want it to mean.
They have no way of supporting any claim that the class that entered in 2011 was just as bright as its predecessors, and most available evidence suggests the opposite (for one, they went to law school in fall 2011), but the schools will be damned before they let what is likely the sorry truth get in the way of blaming someone else for the mess they've ultimately created.