Opinion, Berkeley Blogs

What Rolling Stone magazine should learn from social science

By Bruce Newsome

The most recent scandal about avoidably erroneous reporting is symptomatic of a trend towards agenda-driven research and away from evidence-based research.

On 19 November 2014, the magazine Rolling Stone published a 9,000-word article alleging that, on 29 September 2012, seven men sexually assaulted a fellow undergraduate (pseudonym “Jackie”) at a fraternity on the campus of the University of Virginia (UVA), after which her friends and UVA failed to help her adequately.

UVA suspended all fraternity and sorority activities and opened an internal investigation. Local police opened a criminal investigation (“Jackie” had not filed a complaint). The fraternity named in the story was subjected to attacks, and its members left campus for their own safety.

The story was extensively reproduced, often placed as a factual contribution to worthy campaigns to “break the silence” about assaults on campuses.

However, a few journalists, some of them burnt by previous journalistic errors, asked why the article had only one source. Other journalists found that “Jackie’s” friends disputed her story. The fraternity proved that it had held no party on the date she alleged.

Over the following months, journalists progressively revealed more falsehoods and contradictions in the article, while those responsible for it generally avoided and obfuscated the issues, until, on 22 December, Rolling Stone asked outsiders to investigate its own reporting.

On 23 March 2015, local police announced that they had found no evidence to corroborate “Jackie”, and plenty of contradictory evidence.

On 5 April 2015, at Rolling Stone’s request, a three-person team from Columbia Journalism School released a 12,000-word report, whose essential finding was that “The failure encompassed reporting, editing, editorial supervision, and fact-checking.”

The investigators did not attempt to explain why the responsible reporters and editors collectively slipped into such blindingly obvious errors.

The agenda

The tragedy illustrates a growing problem in public discourse, from which academia is not immune: agenda-driven research.

The author of the Rolling Stone article, Sabrina Rubin Erdely, found her source by telephoning a member of staff at UVA who was working on sexual assault issues. According to her notes, Erdely asked for a single case that would show "what it's like to be on campus now … where not only is rape so prevalent but also that there's this pervasive culture of sexual harassment/rape culture".

Base rate neglect

A single case cannot show anything “prevalent” or “pervasive”, since a single case cannot be generalized across all or even some other cases unless it can be shown to represent them. Erdely did not ask for a representative case. Nevertheless, she intended to use a single case as if it were representative, as if it would show something “prevalent” and “pervasive.”

Treating a single case as representative is a common unconscious error. Human beings tend to anchor on the most recent, proximate, and emotionally or visually captivating events, rather than on the overall population of events. Thus, people can think that the frequency (or rate) of crime has gone up just because they happened to see a crime, not because they have any evidence about the overall rate of crime (the base rate). (They can be misled even by fictional crime: people who watch more crime dramas overstate the base rate even more.)

This error is often termed “base rate neglect.” It is one of the many biases and fallacies that interfere with objective perception of risk, drawing distorted attention to objectively low risks while higher risks go neglected.
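The arithmetic of the single-case fallacy is easy to make concrete. The sketch below (in Python; the campus size and rates are illustrative assumptions, not data from the article) shows that whether the true rate is very low or comparatively high, a large campus is near-certain to contain at least one case, so finding one vivid case tells us nothing about the base rate.

    # Illustrative sketch: the population size and rates are assumptions.
    def p_at_least_one_case(base_rate, population):
        """Probability that a population contains at least one case."""
        return 1 - (1 - base_rate) ** population

    # Whether the rate is 0.1% or 5%, a campus of 20,000 students is
    # near-certain to yield at least one case for a reporter who goes
    # looking for one, so the found case says nothing about which rate
    # is true.
    for rate in (0.001, 0.006, 0.05):
        print(f"rate {rate:.1%}: P(at least one case among 20,000) = "
              f"{p_at_least_one_case(rate, 20_000):.9f}")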

Erdely’s was not a fleeting, unconscious neglect of the base rate; it was the product of months of agenda-driven errors of commission: she made the call to UVA in July. After the article was published in November, she gave an interview revealing that she believed the case was representative not only of the known base rate but of a larger population of unknown cases. She said:

Part of the reason why I chose University of Virginia is because I felt that it was really representative of what was going on at campuses across the country. When I spoke to experts, they told me that this — that, really, the scary truth is that, if you dig deep enough really in any campus, this is probably what you will find, that what happened at the University of Virginia is probably not the exception. It’s probably, this is the norm.

Taking her literally, Erdely believed that this case was probably “the norm”: in other words, normal, describing more than 50 per cent of “what was going on at campuses”, which implies that it was happening to more than 50 per cent of students on those campuses.

The actual rates

The actual rate of sexual assault amongst American college-age students is 0.6%, more than 83 times lower than the rate Erdely implied. Moreover, the base rate on campuses has been declining since trend analysis began in 1997.

Sexual assault, like all crimes, is probably under-reported. The official data suggest that four times as many students fail to report sexual assaults as report them, which would multiply the reported rate by five, raising the true base rate to 3%: still roughly 17 times lower than the rate Erdely implied.
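The arithmetic behind these comparisons is simple enough to check. A minimal sketch in Python, using the figures quoted above (the 50 per cent figure being the literal reading of “the norm” discussed earlier):

    # Arithmetic check using the figures cited in the text.
    implied_rate = 0.50      # literal reading of "the norm": over half
    reported_rate = 0.006    # official rate among college-age students, 0.6%

    print(implied_rate / reported_rate)    # ~83.3: "more than 83 times"

    # Four unreported assaults for every reported one multiplies the
    # reported rate by five:
    adjusted_rate = reported_rate * (1 + 4)    # 0.03, i.e. 3%
    print(implied_rate / adjusted_rate)        # ~16.7: roughly 17 times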

The rate of sexual assault amongst students is higher than for the general population, as it is for all persons of young adult age; but among people of that age, the rate is lower for students than for non-students, which means that young people are at lower risk on campus than off campus.

Agendas distort risks

Sexual assault is a terrible crime and deserves increased attention, but distorting any crime distorts our response to it, encouraging the allocation of more resources where the risk is lowest rather than where it is highest.

Separate from this is the injustice done to a fraternity or campus that is falsely reported to carry a “prevalent” or “pervasive” risk.

While popular culture converges on a few issues, many more are neglected. For instance, while women are safer from sexual assault on campus than off campus, men are not: 17 per cent of student victims are male, compared with 4 per cent of non-student victims. I could not find any journalistic investigation of this risk.

Selection bias

Naturally, after the revelations of Erdely’s mistakes, plenty of people have been critical of her reporting. But while many are wise after the event about her procedural errors, such as her failure to contact the alleged perpetrators, I could not find anyone criticizing the biases that contributed to those errors.

She seemed to be searching for one of the more shocking, captivating cases, which makes the chosen case even less representative. This motivated selectivity is one manifestation of selection bias.

Confirmation bias

In the interview excerpted above, Erdely suggested that her belief that more than 50 per cent of students were victims arose when she “spoke to experts.”

We know empirically that most people search for confirmation from people with similar agendas, whom they may rationalize as trustworthy but who may simply be the most available, accessible, or agreeable. This is another example of selection bias leading to confirmation bias.

Perhaps most experts on sexual assault have no agenda other than to help victims, for which I am grateful to them. However, Erdely’s agenda survives among unscientifically minded “experts.” For instance, when asked about the damning report on Erdely’s article, Alison Kiss, Executive Director of the Clery Center for Security on Campus, worried about the report’s effects on “the most under-reported crime on college and university campuses and across the board,” but dismissed concerns that “about 2 to 10 percent of reports are false reports” because “we do know they’re scrutinized.” Why is she sure that actual crimes go under-reported while false reports are reliably caught? She did not explain.

Agenda-driven versus evidence-based research

The Rolling Stone article did not contribute to public understanding of a terrible crime, but it certainly contributed to fashionable distortions of it. The real significance of the article lies not in what it says about sexual assault on campuses, but in what it reveals about fashionable agenda-driven research.

Campuses have seen the same trend towards fashionable agendas, and away from the scientific skills that could prove or disprove them. Agendas tend to be founded on opinions, beliefs, conventional wisdom, assumptions, anecdotes, biases and fallacies, and self-interests: not on the evidence.

From science to evidence

The word science refers to a replicable way of verifying knowledge. In practice, this usually involves making observations, developing theories that could explain them, and looking for evidence to support a theory, all in a replicable way.

Some researchers do not think of themselves as scientists and are critical of what they see as narrow scientific approaches. Indeed, science is not necessarily appropriate in creative, interpretive, or philosophical endeavors. Philosophy (the reasoned study of fundamental issues) is not necessarily replicable or even factual. Subjective interpretations or experiences are not perfectly replicable. Genuinely original creations are usually protected from replication (ethically and legally).

However, one should apply science wherever one wants to be replicable or evidence-based, rather than merely creative, interpretive, philosophical, or opinionated.

One does not need to be a hard scientist to use scientific skills: we were developing scientific skills as children when we tested how different objects interact, and we demonstrate them whenever we present evidence during an argument or ponder how to explain the world around us.

Scientific skills are demanded in professions and endeavors without any explicit reference to science. For instance, managerial skill sets routinely include “performance measurement”; much research now is differentiated as “evidence-based”. In each case, the approach is fundamentally scientific; if we could not replicate it, how would we know whether performance is being measured effectively or whether the research is truly evidence-based?

Agendas in social science

Unfortunately, social issues inherently attract agendas. Reports on social issues, like academic research into them, even within the social sciences, are therefore often frustratingly unscientific, yet still want the legitimacy that the claim to be social-scientific suggests.

Social science as a practice is the application of science to the study of human society. Most formal professions and academic disciplines fall within the scope of this definition, including the formal social sciences (economics, politics, psychology, sociology, anthropology), some of the humanities (academic disciplines that study human culture, such as history) and liberal arts (the traditional core disciplines, such as philosophy and literature), and the professions (such as law and business).

The hard sciences or natural sciences (such as physics, chemistry, and biology) lend themselves more readily to laboratory experiment, but science can be applied anywhere.

The trouble with the social sciences is that most social phenomena are observed as issues (things that need to be resolved). The issue that prompts research also usually prompts an agenda. By contrast, the natural sciences tend to start with observations of physical phenomena, so they are naturally more evidence-based.

Evidence-based research takes disciplined commitment to a set of skills that are not innate and do not come easily, which is why I have advocated teaching social scientific skills in the first year of every undergraduate student’s education.

In recent decades many new disciplines have been recognized on campuses, such as “war studies,” “peace studies,” and “ethnic studies.” Usually they arise from dissatisfaction with an established social science’s neglect of some field or issue; in other words, they arise from agendas. Some of these agendas are justifiable: “women’s studies” arose from the neglect of women’s issues. However, agendas tend to drive the research, which then tends to neglect evidence, to neglect other issues, and to reverse the prejudices against which the original agendas rebelled.

Additionally, a fashion has emerged in recent decades for opposing scientific skills on the grounds that they supposedly repress subjective creativity and experience, and that they perpetuate traditional “power” (because research does indeed consume resources and can benefit from privileged access to sources, data, or software).

Conveniently, many people dismiss scientific conclusions with which they disagree by mis-characterizing those conclusions as products of the powerful. (Another ironic convenience is that “power” is one of those abstract concepts that is practically impossible to observe.)

Scientific skills are not to blame; only people have agendas. Scientists, like anyone, can be biased, but they are more accountable than people who simply give in to agendas and do not insist on the evidence.