The role of social media in the spread of hate speech

Remembering the victims in New Zealand. (Wikipedia photo by “Natecull”)

Following the murder of 50 people in Christchurch, New Zealand, by a white supremacist from Australia, China Radio International devoted its weekly news round-up on March 22 to issues raised by the massacre of Muslims during services at two mosques. The discussion focused on the role of social media in the spread of hate speech and the power of the internet to radicalize the disaffected around the world.

Here are my answers to a series of questions posed by the hosts:

Q: Lots of tough questions are being asked about the role of social media in the wake of the horrific shooting at two New Zealand mosques. First of all, do you think this is an attack made by and for the internet?

A: Yes, this is a modern terrorist attack that fits the true definition of terror. It is designed to create fear and anxiety in the global public by making people think such mass murder and mayhem can happen anywhere. If it happened in New Zealand, is my own city safe?

The gunman was introduced to white supremacy hate material on the internet and was radicalized via the internet and social media. He chose New Zealand because it was a soft target and because he thought an attack there would have maximum impact.

Q: The gunman decided he was going to use his camera as he began this terrible massacre. Was there anything in social media to stop him?

A: Facebook Live and other live-streaming services cannot be blocked in advance. The only way the broadcast could have been stopped in advance is if his account had been suspended. After the attack was underway, police called Facebook and the live stream was stopped. But millions of video clips of the massacre had already been shared. Such reactive measures don’t solve the problem.

Q: We know that underneath it all is white nationalism or white supremacy, a kind of racism that has always existed. What’s so special about social media’s role in this?

A: Social media makes it easy for haters of all political ideologies to meet like-minded people and to reinforce their worst tendencies. Radicalization, whether it is Islamic extremism, Hindu extremism or white Christian supremacism, is easier on the internet. While governments around the world, from the U.S. to Russia to China, have focused on potential Islamic terrorist threats, they have paid little attention to white supremacists in the U.S., Europe and the European colonial diaspora.

Q: What role is social media playing in the spread of extremism in today’s world?

A: Social media makes it easier to target fellow haters and share material with them. The problem is exacerbated by the algorithms of such platforms as Facebook and YouTube that suggest posts similar to the ones you are reading. Facebook and YouTube make money from the advertising, so they have little incentive to act as responsible corporate citizens. As a result, white supremacists can view one hate-inspiring video on YouTube, and YouTube abets their radicalization by suggesting other videos. I did research on anti-Jewish videos on YouTube and discovered how the YouTube algorithm opens door after door with Russian anti-Semitic videos and Middle Eastern and North African anti-Jewish diatribes.

Q: The Association of New Zealand Advertisers and the Commercial Communications Council said in a statement: “The events in Christchurch raise the question, if the site owners can target consumers with advertising in microseconds, why can’t the same technology be applied to prevent this kind of content being streamed live?” How do social media platforms like Facebook take down videos? Is it that they could not stop this or did not stop it?

A: This sharing of hate can be combatted. It requires two things. Social media platforms must spend more money and hire more humans to monitor hate speech and take down posts and videos that foment radicalization. And the platforms must be more aggressive at fighting white supremacists. Thus far, they are not nearly as committed to fighting Christian extremists as they are to fighting Muslim extremists. Both are deadly and anti-social.

Q: Critics of the companies say that Facebook and YouTube have not done enough to address the white supremacist groups on their platforms. There was a time when ISIS videos and ISIS content and propaganda were proliferating on all of these platforms. They have been quite successful at tamping down on that content and making it far less of a problem. Critics cite this as proof that the problem is well within the power of the companies. It’s just that they haven’t prioritized the problem of white supremacist content. Do you think that’s really the case? And why is that?

A: White nationalists in the U.S. have launched a public relations campaign, aided and abetted by Donald Trump, accusing Facebook, Twitter and YouTube of being liberal, anti-conservative and anti-Christian. One far-right American congressman recently sued Twitter for $250 million and accused it of anti-Republican and anti-conservative bias. The platforms must ignore these critics and their misdirection attempts and be as aggressive in combatting white supremacists as they are Islamic radicals. White nationalists have been responsible for far more deaths in the U.S. — of Jews, Muslims, Sikhs and Christians, both white and black — than Islamic terrorists. As you noted, it can be done. They are just not doing it effectively so far.

Q: These companies are American companies, and Islamophobia is somewhat widespread in the US right now. Should we buy the argument that the business model will inevitably lead to this type of content no matter what?

A: I disagree that Islamophobia is widespread. It is contained within a narrow group. But it is encouraged by the hate tweets of Donald Trump and the irresponsible television propaganda of most Fox News shows.

Q: Do you think media, especially social media, has demonized the image of Muslims since 9/11?

A: No. Not most media. Remember that then-President George W. Bush went to a mosque in Washington shortly after the September 11th attacks and called for brotherhood and understanding. There’s no doubt that anti-Muslim sentiment in the U.S. increased after 9/11, directed mostly at Saddam Hussein, who was not responsible for the attacks, and Saudi Arabia, which was home to most of the attackers and finances a radical brand of Islam. And, yes, there were sporadic attacks against Southwest Asians and South Asians, including a number of Hindus and Sikhs from India. It’s always bad to think of individuals as members of a group, whether they are Palestinians from Gaza or Uighurs from Xinjiang. That thinking, demonizing groups because of the misbehavior of a few, creates a risk of overreaction.

Q: Do these social media platforms see their responsibility as stopping this kind of material from being spread? Do they have an incentive to let extremist content remain on their platform as long as it’s profitable for them? (There’s a growing concern that the algorithms that determine what people are likely to see have become tilted toward promoting extremist content.)

A: Social media platforms must remember that they are corporate citizens and citizens of their nations and the world. Yes, they want to make money, and they have a human right to make money. But they also have a responsibility to the society at large. At this time, the scales are unbalanced and favor profits over social responsibility. That must change through persuasion and, if necessary, government regulation. That’s a dangerous road to go down, but it can’t be ruled out if self-regulation doesn’t work.

Q: New Zealand’s prime minister, Jacinda Ardern, had some strong words for the social media companies that enabled the shooter to broadcast his massacre. She said: “They are the publisher, not just the postman.” That’s a challenge to the American view on social media. The Communications Decency Act, originally passed in 1996, designates internet forums as carriers, like a telephone company or a postal worker, rather than as publishers. What do you see as the role of social media platforms like Facebook or Twitter? Should they be held responsible for the speech that occurs on their platforms?

A: American laws are outdated. I covered that 1996 debate for Business Week, and the 1996 law was outdated almost as soon as it was signed into law by then-President Bill Clinton. Telecommunications companies wanted, and received, protection against lawsuits. As the proliferation of internet and social media hate speech has shown, Google and Facebook and Twitter and Weibo and WeChat are publishers and not just mail deliverers. Facebook has replaced local newspapers, taking their readers and, even more, their advertisers. At a minimum, people who suffer damage as a result of posts on these platforms should be allowed to recover damages. The economic threat of damages might prompt the companies to enact reforms that they have not yet adopted because they face little economic risk for allowing hateful content to thrive on their platforms.

Q: There’s a similar debate in the US. Republican Devin Nunes is suing Twitter and three users of the platform for defamation, claiming the users smeared him and the platform allowed it to happen because of its political agenda. He’s challenging the Communications Decency Act, which protects internet service providers from defamation claims. How do you look at this lawsuit?

A: The suit is absurd on its face. It is not illegal to make fun of politicians and to criticize them sarcastically. This meets the definition of a frivolous lawsuit. That doesn’t mean that the Communications Decency Act of 1996 shouldn’t be changed to remove protections written for a previous generation of internet companies, when there were no Facebooks, Twitters, YouTubes or even Googles.

Q: Some see the responsibility of social media companies as providing a platform for free speech. Do they have an obligation to remove extremist content? Should there be a balance between protecting the right to freedom of speech and preventing the harm it can cause?

A: They have a moral responsibility to remove extremist hate speech. Most of the world, including the United States, protects freedom of speech. But the freedom of speech is not unlimited. You can’t threaten the life of a president or conspire to violate laws. What’s harder is to find these haters in the dark recesses of the internet and snuff out their dark conspiracies.

Q: Will it be a problem if social media platforms are given too much power over speech and thought online?

A: Yes, too much power in private hands is dangerous, as is too much power in government hands. But there’s a difference between controversial speech, like advocating Communism in the U.S. or feminism in China, and hate speech. There can be near-universal agreement that plotting violence, sharing information on building bombs or creating guns with 3D printers, or advocating violence against non-whites or non-Muslims, crosses the line into impermissible speech. Social media platforms have a moral duty to self-regulate when it comes to hate speech and violence.

Q: What do you make of the phenomenon of online radicalization? Should social media bear all the blame, or do you feel there are some deeper social problems behind this that are perhaps too large for tech companies to fix on their own?

A: There are deep social problems. Radicals, including white supremacists in the U.S., have been emboldened by the statements of politicians like Donald Trump and Congressman Steve King. The tech companies can’t fix the problem on their own. Congress must act. But that doesn’t mean that social media platforms shouldn’t do their part and shouldn’t be leaders in encouraging a new era of civility.

Q: People used to conceive of “online radicalization” as distinct from the extremism that took form in the physical world. But do you feel that nowadays more extremists are getting radicalized online? Consider how ISIS used social media to spread its propaganda, and how the “Yellow Vest” movement in France flourished on social networks.

A: As I tell my multimedia journalism students, digital platforms are merely a means to deliver your message. The root of hate speech is the same, whether it is shared in terrorist training camps in Pakistan or Somalia, in troll factories in Russia, or in basements and garages in rural America.

Q: Have extremist groups in recent years been using social media as a recruitment tool? Who are their targets?

A: Their targets are alienated people, many of them young, who feel that they’ve been left behind by society, and they blame others. Most of these people are less educated and many are struggling financially. Social media is an easy way to find a community of like-minded thinkers who make you feel better about yourself and point you toward groups to blame for your problems.

Q: An op-ed in The Wall Street Journal by Peggy Noonan said: “Social media is full of swarming political and ideological mobs. In an interesting departure from democratic tradition, they don’t try to win the other side over. They only condemn and attempt to silence.” Do you think that’s a fair judgment of the online environment today?

A: Yes, Peggy Noonan makes a good point. These haters are not trying to convert people, they are trying to convince converts to act on their worst impulses.

Q: Do you agree with government intervention to prevent online extremism or hate speech on social media?

A: It’s always dangerous for governments to become involved in free speech, but hate speech is not protected anywhere, so a combination of government action and self-regulation by tech companies is needed.

Q: What do you make of the role of social media in today’s politics? Take Donald Trump, the Twitter president, for example. Some say he has weaponized social media, using it not just to reach the masses but to control the news agenda through bluster and distraction. What are your thoughts?

A: Trump has weaponized social media. I strongly believe that there is not more prejudice in America today than when Trump became president, but the haters and provocateurs who were there before have been emboldened and empowered by Trump’s words and actions. When he defended Nazis in Charlottesville, Virginia, by saying there were good and bad people on both sides of the white supremacy debate, that sent a message not only to neo-Nazis but to far-right Christians. When he called for a ban on all Muslims entering the United States, something the American courts would not allow because it is an illegal religious test, he was sending a message to white supremacists. When he calls Mexicans rapists and drug dealers, he is sending a message. Some of this is bluster. Some of it is an attempt to dominate each day’s news cycle. But the overall message is that white supremacists have a safe space to operate in corners of Trump’s America.

Q: President Trump claimed on Tuesday that social media companies are biased against Republicans. Is that really the case? Why is he saying that?

A: Every time a far-right media personality is sanctioned by social media authorities, Trump repeats this claim. It’s specious. But he has a right to free speech. Lying is not against the law, unless you do it to Congress or the FBI or other law enforcement agencies.

Q: How do you see social media’s impact on how politicians raise money and communicate with voters?

A: One of the good things about social media is that it helps you build communities of like-minded people. It has been a very effective tool for a few politicians, led by Donald Trump. On the Democratic side, social media has allowed Beto O’Rourke, the former Texas congressman now running for president, to raise more campaign money in one day than better-known candidates such as Senators Bernie Sanders and Elizabeth Warren. It has allowed freshman Representative Alexandria Ocasio-Cortez to become the most-followed member of the U.S. Congress and to give voice to her brand of Democratic socialism. And it has allowed a humorous parody account called Devin Nunes’ Cow to attract more followers than the California congressman it is skewering with its humor.

Q: There are of course positive aspects of social media, say, transparency, respect for individual rights and rejection of power imbalances. If we look at the bigger picture, how is social media transforming the use and misuse of power?

A: Like all forms of media, social media has good and bad. Think of the power of previous media such as radio and television. Radio brought entertainment to the masses in their own homes, and it allowed President Franklin Roosevelt to reassure Americans at the depth of the Great Depression of the 1930s. But it also helped bring Hitler to power and to maintain his power. Television was hailed for its potential as an educational tool, but it later became known as “the idiot box” for its stupid programming. And the internet made research and communication easier than they had ever been, but it also monetized pornography and enabled terrorist groups to organize and thrive. Social media builds communities, but it also tears at society’s social fabric. All forms of media are a reflection of human beings, in their glory and their capacity for evil.


