
Could Mining, Analyzing Social Media Posts Prevent Future Massacres?


Attendees pass a wooden cross as they arrive at a candlelight vigil for the victims of the shooting at Marjory Stoneman Douglas High School in Parkland, Florida, Feb. 15, 2018.

In multiple online comments and posts, Nikolas Cruz, 19, the suspect in the Valentine's Day high school shooting in Florida, apparently signaled his intent to hurt other people.

I want to "shoot people with my AR-15," a person using the name Nikolas Cruz wrote in one place. "I wanna die Fighting killing…ton of people."

As investigators try to piece together what led to the school shooting that left 17 people dead and many others wounded, they are closely examining the suspect's social media activity, as well as other information about him.

The focus on Cruz's digital footprint highlights a question that law enforcement, social scientists and society at large have been grappling with: If anyone had been paying attention to his postings, could these deaths have been prevented?

The FBI was contacted about a social media post in which the alleged gunman said he wanted to be a "professional school shooter."

A video monitor shows school shooting suspect Nikolas Cruz, center, making an appearance before Judge Kim Theresa Mollica in Broward County Court in Fort Lauderdale, Florida, Feb. 15, 2018.

But although the commenter's username was "Nikolas Cruz," the same name as the shooting suspect, the FBI could not identify the poster, according to The Associated Press.

But what if an algorithm could have sifted through all of Cruz's posts and comments to bring him to the attention of authorities?

Data mining

In an era when data can be dissected and analyzed to predict where cold medicine will most likely be needed next week or which shoes will be most popular on Amazon tomorrow, some people wonder why artificial intelligence isn't used more to sift through social media in an effort to prevent crime.

"We need all the tools we can get to prevent tragedies like this," said Sean Young, executive director of the University of California Institute for Prediction Technology.

"The science exists on how to use social media to find and help people in psychological need," he said. "I believe the benefits outweigh the risks, so I think it's really important to use social media as a prevention tool."

In the 2002 movie Minority Report, police apprehend murderers before they can act, based on knowledge provided by psychics known as "precogs." Outside of fiction, experts say, the idea of police successfully analyzing data to find a person preparing to harm others remains a far-off scenario.

Predictive policing

Increasingly, police departments are turning to "predictive policing," which involves feeding large data sets to algorithms that forecast where crimes are likely to occur, then deploying officers to those areas. One potential treasure trove of data is social media, which is often public and can show what people are discussing in real time and by location.
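
As a rough illustration of the forecasting step described above, here is a minimal sketch in Python. The incident data, grid layout and count-based scoring are hypothetical stand-ins; real predictive-policing systems work from far larger data sets and more sophisticated statistical models.

```python
from collections import Counter

# Hypothetical past incidents, each tagged with the (row, col) cell of a
# city grid. Real systems draw on much larger, richer data sets.
past_incidents = [(2, 3), (2, 3), (0, 1), (2, 3), (4, 4), (0, 1)]

def forecast_hotspots(incidents, top_n=2):
    """Rank grid cells by historical incident counts, a crude stand-in
    for the forecasting algorithms the article describes."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Cells with the most past incidents are treated as likely future hotspots,
# the areas where officers would be deployed.
print(forecast_hotspots(past_incidents))  # [(2, 3), (0, 1)]
```

Even in this toy form, the logic shows why the underlying records matter so much: the forecast simply projects past data forward, which is exactly where the bias concerns below arise.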

Predictive policing, however, comes with ethical questions over whether data sets and algorithms have built-in biases, particularly toward minorities.

A study in Los Angeles aims to see if social media postings can help police figure out where to put resources to stop hate crimes.

"With enough funds and unfettered data access and linkage, I can see how a system could be built where machine learning could identify patterns in text [threats, emotional states] and images [weapons] that would indicate an increased risk," said Matthew Williams, director of the social data science lab and data innovation research institute at Cardiff University in Wales. He is one of the Los Angeles study researchers.

"But the ethics would preclude such a system, unless those being observed consented, but then the system could be subverted."

Arjun Sethi, a Georgetown law professor, said it is impossible to divorce predictive policing from entrenched prejudice in the criminal justice system. "We found big data is used in racially discriminating ways," he said.

Using Facebook posts

Still, with the right program, it may be possible to separate someone signaling for help from all the noise on social media.

A new program at Facebook seeks to harness machine learning to get help to people contemplating suicide. Among the millions of posts published each day, Facebook can find those suggesting a user may be suicidal or at risk of self-harm, even if no one in the person's Facebook social circle has reported the posts to the company. In machine learning, computers learn to recognize such patterns from examples rather than following explicitly programmed rules.

The Facebook system relies on text, but Mark Zuckerberg, the company's chief executive, has said the firm may expand it so that photos and videos are also flagged for the Facebook team to review.
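
Facebook has not published the details of its system, but the flag-then-review workflow described here can be sketched in outline. The threshold, data structures and field names below are hypothetical, assumed only for illustration.

```python
from typing import List, NamedTuple

class ScoredPost(NamedTuple):
    post_id: str
    risk_score: float  # assumed output of a text model like the sketch above

# Hypothetical cutoff; Facebook's actual criteria are not public.
REVIEW_THRESHOLD = 0.8

def route_for_review(scored_posts: List[ScoredPost]) -> List[ScoredPost]:
    """Queue posts above the threshold for human reviewers, highest risk
    first, mirroring the flag-then-review workflow described here."""
    flagged = [p for p in scored_posts if p.risk_score >= REVIEW_THRESHOLD]
    return sorted(flagged, key=lambda p: p.risk_score, reverse=True)

batch = [ScoredPost("a1", 0.95), ScoredPost("b2", 0.40), ScoredPost("c3", 0.85)]
for post in route_for_review(batch):
    print(post.post_id, post.risk_score)
```

The design point is that the algorithm only prioritizes; the judgment about whether a post is a genuine red flag stays with people.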

Being able to figure out whether someone is going to harm himself, herself or others is difficult and raises ethical dilemmas. But, says Young of UCLA, a person's troubling social media posts can be red flags that should be checked out.
