How Admissions Tests Hinder Access to Graduate and Professional Schools

The Chronicle of Higher Education

From the issue dated June 8, 2001

By PETER SACKS

University of California President Richard C. Atkinson rattled cages across academe recently when he formally proposed that the university system scuttle the SAT as an undergraduate-admissions requirement. Atkinson's move was the right one at the right time for the nation's most populous and diverse state. But his proposal is not bold enough.

Consider Atkinson's reasons for dumping the SAT. First, he argued, the so-called SAT I -- which the Educational Testing Service has described as a "reasoning" test -- is too far removed from the mathematics, science, and language courses that students actually encounter in high school. Therefore, it ought to be replaced with the SAT II subject tests, which correspond more closely to the subjects California students study.

Furthermore, Atkinson suggested, undergraduate-admissions staffs at U.C. campuses should evaluate applicants far more comprehensively and rely less on test scores and grades. After all, what says more about a student -- her modest SAT score, or the fact that she has earned good grades in challenging courses and is the first in her family to attend college?

When one takes into account the concerns about the SAT as a useful predictor of college performance, the wide disparities in SAT scores by class and race, and the U.C. Board of Regents' 1995 decision to end affirmative action in admissions, it's a wonder that selective universities have gone this long without seriously questioning the assumptions behind the SAT's entrenched status in higher education.

But for all the innovation of Atkinson's proposal at the undergraduate level, why stop there? Why not also challenge the entrenched status of admissions tests that serve as gatekeepers to graduate and professional education? Indeed, following the demise of affirmative action, the need for admissions reform for advanced study is even more pressing, given the major role that tests like the Graduate Record Exam, the Law School Admission Test, and the Medical College Admission Test continue to play in admissions and financial-aid decisions at the University of California and institutions across the nation.

The recent enrollment trends for underrepresented minorities in graduate and professional schools at U.C., compared with undergraduate enrollments for those groups of students, tell the story. In the fall of 2000, freshman enrollment of African-American students was 12 percent below 1995 levels, when affirmative action was still in effect. Fall-2000 freshman enrollments of Mexican-American students, after declining slightly during 1997 (the first year the affirmative-action ban went into effect), actually increased 6.8 percent over 1995 levels. By comparison, first-year law-school enrollments of African-American students dropped 68 percent between the fall of 1995 and the fall of 2000. First-year enrollments of Mexican-American law students declined 33 percent. And so it goes for medical and business schools, as well as for other programs of graduate study.

A persistent institutional faith in cognitive tests such as the GRE and LSAT as an efficient way to sort and weed out applicants has simply worsened the problems of widening access to graduate and professional education. Indeed, the goals of equal opportunity and diversity in higher education are hampered not only by attacks on affirmative action but also by an entrenched ideology about merit and how to measure it.

Surely Atkinson, a testing expert and eminent cognitive psychologist, knows that the cognitive screening tests used by graduate and professional schools share virtually all the problems of the SAT I that he is proposing to eliminate for undergraduates. Indeed, just like the SAT I, standardized tests such as the LSAT and GRE focus on verbal- and quantitative-reasoning skills that bear little resemblance to what undergraduates actually study in college. But that's the least of the tests' problems.

Take, for instance, how well -- or, rather, how poorly -- graduate and professional tests predict performance in academic settings and beyond. Data from 1,000 graduate departments covering some 12,000 test-takers show that GRE scores could explain just 9 percent of the variation in the grades of first-year graduate students. That's according to data compiled by E.T.S., which produces the GRE. By comparison, the SAT I accounts for an average of 17 percent of the variation in first-year college grades. Another, independent meta-analysis of 22 studies covering 5,000 test-takers over nearly 40 years found that GRE scores could explain only 6 percent of the variance in first-year graduate performance. The finding prompted the study's authors, Todd Morrison at Queen's University and Melanie Morrison at the University of Victoria, to report in Educational and Psychological Measurement that the predictive validity of the GRE was so weak as to make the test "virtually useless."
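
For readers who want to see what those percentages mean, "variance explained" is simply the square of the correlation coefficient, so the figures above translate into only modest correlations. A minimal sketch of the arithmetic, using just the percentages already cited:

```python
import math

# "Variance explained" (R-squared) is the square of the correlation r,
# so each percentage reported above implies a correlation of sqrt(R^2).
# The figures are the ones cited in this article, not a new analysis.
variance_explained = {
    "GRE vs. first-year graduate grades (E.T.S. data)": 0.09,
    "SAT I vs. first-year college grades": 0.17,
    "GRE (Morrison & Morrison meta-analysis)": 0.06,
}

for label, r_squared in variance_explained.items():
    r = math.sqrt(r_squared)
    print(f"{label}: R^2 = {r_squared:.2f} -> r = {r:.2f}")

# Correlations of roughly 0.30, 0.41, and 0.24: weak-to-moderate
# associations, which is the critics' point.
```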


Even when scholars have tested the GRE against broader measures of graduate-school success, or have adjusted for the "restriction of range" phenomenon (such adjustments may be necessary because the variation in admitted students' grades and test scores is narrower than that of the full applicant pool), the GRE's predictive validity for graduate-school performance remains weak. For example, Leonard Baird, now an education professor at Ohio State University, compiled an exhaustive report in Research in Higher Education a number of years ago examining the relationship between the standardized-test scores and actual workplace performance of scientists, business executives, and other high-level professionals. He found that the GRE's relationship to job and career success was nonexistent in most cases. In fact, in some studies, the correlations were negative.
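
For the statistically inclined, here is roughly how such an adjustment works. This sketch uses Thorndike's "Case II" range-restriction correction with invented numbers; it illustrates the technique, not a recalculation of any study cited here:

```python
import math

def correct_for_range_restriction(r_observed: float, sd_ratio: float) -> float:
    """Thorndike's Case II correction: estimate what a correlation observed
    among admitted students would look like in the full applicant pool,
    given the ratio of applicant-pool SD to admitted-group SD."""
    k = sd_ratio
    return (r_observed * k) / math.sqrt(1 - r_observed**2 + (r_observed * k)**2)

# Invented numbers for illustration (not figures from the studies cited):
# a correlation of 0.25 among admitted students, with the applicant pool's
# test-score spread 1.5 times that of the admitted group.
r_corrected = correct_for_range_restriction(0.25, 1.5)
print(f"corrected r = {r_corrected:.2f}, "
      f"variance explained = {r_corrected ** 2:.2f}")
# -> corrected r = 0.36, variance explained = 0.13 -- still weak, echoing
#    the article's point that the adjustment doesn't rescue the GRE.
```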


The predictive powers of the LSAT for law school and the MCAT for medical school aren't much better. An LSAT score accounts for an average of 16 percent of the variance in first-year law grades. According to the Law School Admission Council, the variance explained by the LSAT can be as low as zero at some law schools and as high as 38 percent at others -- but even that high end leaves nearly two-thirds of the variance unexplained. The LSAT does a poor job of predicting graduation rates, and there's meager evidence that performance on the exam bears any relationship to one's later performance as a lawyer.

As for medicine, evidence has suggested that MCAT scores are modestly predictive of science grades during the first two years of medical study -- explaining 9 percent to 16 percent of the variance in medical-school grades. But the exam's predictive punch quickly vanishes as students progress to their clinical rotations in the final two years of study.

At the same time, the various cognitive tests do sort candidates quite capably along class and race lines. Indeed, test-score gaps between white students and nonwhites on exams for graduate and professional study are frightening. According to 1999 data from E.T.S., for example, a white male has a 117-point advantage, on average, over a white female on the verbal, quantitative, and analytical parts of the GRE. A white woman gets a 108-point boost over a black woman on the quantitative portion of the exam alone. A white man can expect, on average, a 139-point advantage over a black man on the GRE's quantitative test alone.

What's more, parental education and wealth are powerful predictors of test performance, not just on the SAT but also on the GRE and similar tests. If the parents of applicants are doctors, lawyers, or scientists, chances are good that those applicants' performance on admissions tests for medicine, law, and science will, in turn, be rewarded. One E.T.S. study found that among high-scoring white students who took the GRE, some 44 percent had fathers with graduate or professional degrees. More than half of high-scoring Asian-American applicants had fathers with advanced degrees.

The situation at the University of California at Los Angeles School of Law vividly illustrates how prevailing views of merit perpetuate the advantages of highly educated families, even at public institutions. Among students admitted according to a "predictive index," or "P.I." -- a formula, common at selective universities, that combines LSAT scores and undergraduate grades into a single number -- nearly 50 percent had parents with incomes of $80,000 or more; 54 percent had fathers with graduate or professional degrees; and 41 percent had mothers with such advanced degrees. "The parents of the P.I.'s are a true educational aristocracy," says the U.C.L.A. law professor Richard H. Sander, reporting the findings in the Journal of Legal Education.
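
To make the idea concrete, here is a minimal sketch of how such an index might be computed. The weights and rescaling below are hypothetical; each school sets its own formula, typically by regressing first-year grades on the two predictors:

```python
def predictive_index(lsat: int, ugpa: float,
                     lsat_weight: float = 0.6, gpa_weight: float = 0.4) -> float:
    """Combine an LSAT score and an undergraduate GPA into a single number.
    Both inputs are rescaled to a 0-1 range so the weights are comparable.
    The weights here are hypothetical, chosen only for illustration."""
    lsat_scaled = (lsat - 120) / 60   # the LSAT is scored from 120 to 180
    gpa_scaled = ugpa / 4.0           # GPA on a 4.0 scale
    return lsat_weight * lsat_scaled + gpa_weight * gpa_scaled

# Two hypothetical applicants:
print(round(predictive_index(170, 3.4), 2))  # high LSAT, good grades    -> 0.84
print(round(predictive_index(155, 3.9), 2))  # modest LSAT, stellar grades -> 0.74
```

Note that under the hypothetical weights above, a single morning's test sitting counts for more than four years of college grades -- which helps explain the aristocratic profile Sander found.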


To be sure, many cultural, economic, and political factors may explain why such cognitive screening tests remain an entrenched aspect of the admissions process for advanced study. Not the least of those reasons, I suspect, can be traced to the earliest roots of intelligence testing in Europe and the United States at the turn of the last century. If nothing else, the nascent technology of mental testing came packaged with an ideology -- one that largely survives to this day -- that a person's ability to perform any given type of work or academic subject can be predicted with a single instrument measuring general mental prowess.

In the selection of candidates for, say, law school, this ideology translates into the following procedure: Rank candidates, in substantial part, according to their performance on a cognitive screen like the LSAT. Mechanically select those with top scores, automatically reject those at the bottom, and provide a full review of the application file only for candidates in the middle. Indeed, the Law School Admission Council reports that just such an approach, known as the "presumptive admissions" model, is used by more than 90 percent of law schools.
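
Stripped to its logic, presumptive admissions is a three-way triage. A minimal sketch, with the cutoff values invented for illustration:

```python
def presumptive_review(index: float,
                       presumptive_admit: float = 0.85,
                       presumptive_deny: float = 0.55) -> str:
    """The triage described above: top index scores are admitted and
    bottom scores rejected more or less mechanically; only the middle
    band receives a full file review. Cutoffs are invented here."""
    if index >= presumptive_admit:
        return "presumptive admit"
    if index <= presumptive_deny:
        return "presumptive deny"
    return "full file review by the admissions committee"

for score in (0.90, 0.70, 0.50):
    print(score, "->", presumptive_review(score))
# 0.9 -> presumptive admit
# 0.7 -> full file review by the admissions committee
# 0.5 -> presumptive deny
```

Only the middle band, in other words, ever receives the kind of comprehensive review Atkinson urged for undergraduates.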


However, if one were to design a selection system on a completely different assumption -- focusing on qualities of applicants that might predict actual performance in the jobs of scientists, doctors, and lawyers -- the system would look radically different. For example, several years ago, innovators at the Joyce and Irving Goldman Medical School at Ben Gurion University of the Negev, in Beersheba, Israel, turned the traditional model of selecting medical students upside down. Instead of choosing students on the basis of who might have the best chance of someday winning a Nobel Prize, the admissions committee focuses on qualities that will help the school in its mission to train very good doctors of community medicine. Yes, the school uses a cognitive test, but the screen is set at a lenient 60th percentile, as opposed to the 90th percentile or so that most medical schools use. The school remains highly selective, but it accomplishes the lion's share of its sorting through a highly structured and rigorous series of personal interviews with members of the admissions committee. By picking students based on what they have actually accomplished in college or at work, consistent with the institution's mission, the medical school has become known throughout Israel and the world as a center for excellence in community medicine.

As the Beersheba experiment and similar, if less ambitious, efforts at the undergraduate level in the United States have clearly demonstrated, the prevailing ideology of merit that continues to guard access to much graduate and professional study isn't set in stone by some intangible higher power. In fact, graduate and professional schools could overturn their test-score-laden admissions models while actually enhancing academic quality, despite the dire warnings of conservative critics that placing less emphasis on standardized tests will lead to the ruination of our great universities. What's more, institutions such as Bates College and the University of Texas have retooled their merit-based admissions systems to place less emphasis on standardized tests -- whether from ethical imperative or legal mandate -- and happily discovered that they can still meet their goals for ethnic and socioeconomic diversity.

Atkinson tapped into a larger and more generous spirit of how merit should be defined than the prevailing norms of the past 75 years when he said: "The strength of American society has been its belief that actual achievement should be what matters most." Whether a high-school junior hopes to be the first in her family to attend college or aspires to study law after she graduates, let's judge her not on abstract tests of mental agility but on her actual accomplishments in endeavors of substance.

Peter Sacks is the author of Standardized Minds: The High Price of America's Testing Culture and What We Can Do to Change It (Perseus, 2000).

Copyright 2001 by The Chronicle of Higher Education