One issue sparked by the fiery debate around the police shootings of black men is the extent to which Americans simply react negatively to seeing black skin – whether it is a police officer making a split-second, life-and-death decision about the threat a black man poses, a store clerk tracking a black customer more intently than she would a white one, or an online shopper preferring to buy a device shown in a white hand rather than a black one.
Racial discrimination, often subconscious, is rarer than it once was. And such discrimination does not explain most of the black-white gaps in life circumstances such as lifespan and wealth; those largely grow from deeper and more convoluted historical roots, further fed by institutional inequalities. Still, the effects of plain old racial aversion are real – accounting, according to one recent analysis, for perhaps a third of the difference between black and white wages (pdf). And such racism certainly takes an emotional toll.
Two recent publications present yet more systematic evidence that plain old racial aversion persists and matters — despite the belief among many whites, perhaps most, that reverse discrimination is just as big a problem. (An earlier related post is here.)
For years now, researchers and government investigators have used “audit” studies of discrimination, especially in the labor and housing markets. These are largely staged experiments to see if gatekeepers make race-based decisions. Fake candidates apply – in person, on the phone, or by internet – for jobs or housing. They present identical credentials and differ only in being either black or white (and more recently, Latino). Investigators repeatedly find that decision-makers, on average, tend to choose white over the otherwise identical black “applicants.” In one of the best-known of these studies, sociologist Devah Pager found that employers were more likely to welcome white job-seekers with criminal records than to welcome blacks with no criminal records.
Critics have noted some methodological concerns in these studies, but the investigations’ number, variety, and increasing sophistication make it hard to avoid the conclusion that many decision-makers, consciously or not, decide with a racial tilt (see here and here).
In a recent New York Times article, economist Sendhil Mullainathan briefly recounts a number of such experimental studies dealing with interactions ranging from job-seeking to medical treatment.
His own study, conducted with economist Marianne Bertrand, focused on hiring. In 2001-02, they mailed out 5,000 resumes in response to over 1,300 employment advertisements in the Boston Globe and the Chicago Tribune, varying at random many features of the “applicants” such as their purported work experience.
Centrally, they varied the names, using ones that were commonly either black or white. (The researchers did a small, casual survey to confirm that people associate the names with either blacks or whites.) Applications with typically white names were notably likelier to get responses than those with typically black ones. Close to 10 percent of “whites” received an answer, compared to about six and a half percent of “blacks.”
Such racial discrimination showed up across different kinds of industries. (By the way, job-listers who labeled themselves as “Equal Opportunity Employers” showed the same race gap in responding to applications.)
The Stanford-Riverside difference
Sociologist S. Michael Gaddis, in a just-released article, reports on his own large-scale audit study of the job market, adapting some of the careful techniques used by Bertrand and Mullainathan. He explicitly looks at whether racial discrimination is mitigated when job candidates clearly have sterling credentials. The answer is no.
Gaddis targeted online job listings, analyzing employer responses to about 1,800 realistic job applications that he e-mailed. For example, Gaddis used actual home addresses. He systematically varied several candidate attributes. One was race, indicated by first names that tend to be more common among blacks versus whites – e.g., Lamar v. Charlie; Nia v. Aubrey. The key innovation he introduced was the prestige of the college that the applicant had presumably graduated from (with honors) — Harvard v. U. Mass., Amherst; Stanford v. the University of California, Riverside; and Duke v. UNC, Greensboro.
“Applicants” from the elite colleges received an answer 1.7 times as often as those from less elite colleges (15% versus 9%). White-named “applicants” received an answer 1.5 times as often as black-named ones (15% versus 10%). The results suggest that having a typically white rather than a typically black name is worth about as much as graduating from an elite rather than a good college.
Importantly, the racial factor is probably underestimated: the names matter only if employers read them as racially distinctive, a cue less obvious than college prestige. Even among the elite-college “applicants,” race made a substantial difference. Looked at another way, black-named “applicants” from elite colleges were about as likely to get a follow-up as white-named “applicants” from non-elite colleges.
In the real world of job-hunting, blacks face a far greater burden than these studies imply, because they are, on average, less likely to have the same kind of resumes as white applicants. The heavy hand of history, generations of restricted education and poverty, shapes blacks’ fates today. On top of that, then, plain old racial aversion adds insult – and sometimes consequences far more fateful.
Cross-posted from Claude Fischer’s blog, Made in America: Notes on American Life from American History.