In my last column, I wrote about how rapid change in the way we communicate is reshaping society and encouraging fear.
Fear easily morphs into anger and hatred, and there’s no shortage of either on the internet. I’d love to write that the online ag community is free from such nastiness. But the ag community is affected by the same movements that are shaping the rest of society.
I’m not writing here about tweets that are simply offensive. I’m referring to things like hijacking a popular ag hashtag and filling it with sexually explicit images. Or making a rape “joke” about a female advocate with a strong opinion.
There are, unfortunately, plenty of other similar examples on Ag Twitter. One of the few positives is that other people in ag intervene when the attack is aimed at a specific person.
What exactly is going through people’s heads when they say this stuff online? Some are trolls — a now-overused term that once referred to people who ignited arguments by making outrageous statements purely for their own enjoyment. There was a strong trolling culture on some internet forums, notably 4chan and Reddit. The original trolls weren’t necessarily political. They loved angering lefties and right-wingers alike. Of course, people do this offline, too. Think of the kid you knew in junior high who couldn’t resist baiting teachers and classmates just to get a reaction.
Those trolling techniques have been picked up by people whose agenda extends beyond ticking everyone off for fun. A recent report by Whitney Phillips, titled The Oxygen of Amplification, examines how people used these same methods to manipulate the public discourse during the 2016 U.S. Presidential election.
The report mainly focuses on white supremacists in the U.S., but much of what they’re doing applies to other groups, too. For example, they’ll use memes and jokes to spread their message. They love nothing better than being condemned by a public figure, as this leads to more media coverage. They hijack other cultural symbols, unrelated to white supremacy, and give them a neo-Nazi spin in memes.
I don’t know of any neo-Nazis within ag, but there are people who employ some of the same techniques. Women and mental health advocates are frequent targets. Some people will use jokes and memes to say derogatory, or even threatening, things.
How does this information help the people who just want to talk agronomy, or maybe even argue a bit without drawing threats or pornographic memes? It helps to have an idea of a person’s motive when trying to decide how to respond to a nasty comment or threat. Will arguing just give them what they want?
There’s always the block button, which prevents someone from interacting with you or even seeing your Twitter feed. Twitter has also added a mute button, which is not a bad way to quiet certain people pre-emptively. It removes all that person’s tweets from your timeline, and they don’t receive a notification that you’ve muted them. You can unmute them later, if you have a change of heart.
Twitter has policies against hateful conduct, which includes things like threatening people or inciting violence. There are also rules against abusive behaviour — for example, summoning a mob to harass someone.
I tried to get an interview with someone at Twitter about how these policies are enforced, but was told they would only speak off the record. The company’s website outlines several enforcement options, ranging from limiting a tweet’s visibility to banning a person from Twitter permanently. Twitter has done a poor job of enforcing its own rules in the past, but perhaps it will get better.
It’s also worth noting that online threats of violence might meet the threshold for criminal charges. If you’re the target, document the threats, even if you don’t plan to go that route.