I wrote a letter to a news media company about distasteful content. Here’s their response.

My email (sent September 4):

Hi, I’d like to formally complain about this article you published, regarding a serious assault that took place in South Korea.

http://www.newshub.co.nz/home/world/2017/09/woman-cut-off-husband-s-penis-because-he-played-golf-too-much.html

[Screenshot: the Newshub article]

The article concerns a man whose wife cut his penis off.

The accompanying image you elected to use – a vegetable being cut up – essentially makes a joke of the assault, trivialising it, and doesn’t demonstrate the sensitivity that is appropriate.

Domestic violence is a serious issue, one that is exacerbated by cultural attitudes that prevent people from seeking help. It is irresponsible for a media outlet to reinforce those attitudes by treating it like a joke.

It is also insensitive to other victims of domestic violence who see their circumstances treated this way.

Please treat this email as a formal complaint.

Can you please check your policy/guidelines for how you handle cases of domestic violence, and tell me whether your standards were adhered to in this case?

Yours sincerely,

David Johnston

Their response (received September 29):

Dear David,

Thank you for your complaint about the Newshub article at the attached link. The MediaWorks Standards Committee agrees that the stock image used for the article was inappropriate and we apologise for the offense it caused you. Immediately after receiving your email we raised your concerns with the editor of Newshub online. The image was removed and the staff member responsible counseled.

We thank you for bringing this matter to our attention and once again apologise for the offense the article caused you.

Kind regards,

Robert Dowd

For the MediaWorks Standards Committee

I’m pleased with this response. It’s unequivocal, and it mentions that the staff member who published the article received some kind of feedback. I would suggest that the email I sent may well have a tangible effect on how domestic violence is portrayed in the media.

 

Who’s to blame? The media.

This is part of a series where I hypothesise how Donald Trump came to be elected.


During the Republican primaries, I think the media saw Donald Trump as a boon.

He wasn’t seen as a serious contender; instead, I think he was seen as a good source of interesting soundbites that would draw viewers and sell advertising. Trump was always saying something crazy, such as attacking one of the other Republican candidates, which the media would then pick up and make the focus of that night’s news.

Essentially, the media fed the troll – giving the air of attention to the person saying the most outlandish stuff.

This has been part of an ongoing trend of dumbing down in the media. I think the media have become lazy: instead of providing intelligent, informed news, they’ve been providing the public with the news it wants to see. This is reflected in the clear trend of soundbites getting shorter. A provocative Donald Trump insult is a more attention-grabbing soundbite than an informed policy outline, and it fits neatly into nine seconds.

Addendum: The public is also to blame. The media produces inflammatory content because that’s what the public chooses to view. If the public were more discerning about what they watch or click, the media would have more incentive to produce intelligent content.

 

The future is meta-news.

In the wake of the Donald Trump presidency, there’s been a sudden new ‘fake news’ narrative appearing in social and media discourse.

This discourse tends to point at two things:

  • Individual people exist in echo chambers, and seek to consume and share content that fits their worldview, without regard for truth.
  • At the same time, there is also a general distrust of mainstream outlets, and the ‘fake news’ dog whistle is actually used to criticise the mainstream media – by suggesting that it’s the mainstream media that is getting the story wrong.
    One only needs to look at the responses to @WashingtonPost’s Twitter account to see examples of this.
    [Screenshot: replies to @WashingtonPost]
    Caveat: it’s hard to tell whether accounts like these are genuine or just troll bots.

While I don’t think the mainstream media is outright producing lies or factually incorrect content, I think it is fair to say that media outlets have a vested interest in producing certain kinds of content. A lot of what we see in the media now is opinion or ‘analysis’ – which isn’t something that needs to withstand basic fact-checking.

Recently, I’ve taken an interest in watching RT (Russia Today – a Russian state-run media outlet). It’s interesting to see the difference in what RT says about particular issues, as opposed to, say, Fox News.

For example, let’s look at Aleppo:

Fox News

[Screenshot: the Fox News article on Aleppo]
The Fox News narrative mentions deaths caused by rebels and government forces.

RT

[Screenshot: the RT article on Aleppo]
The RT article talks about ‘liberation’ and makes no mention of civilian casualties caused by government forces.

So we have two problems:

  • People are just going to share whatever suits them.
  • The media have their own agendas, which influence the content they produce.

Which leaves us with a question – how do we decide what content to consume?

And we’re not just concerned with the actual truth of the matter – we also need to know what other people are thinking and reading.

The answer: meta-news. 

Instead of reading news from your favourite news site – whether that’s RT, Fox News, Al Jazeera, The Guardian or The Washington Post – you read a factual, algorithmic aggregate of all news websites.

How this would work is that some kind of web crawler would read news content as it is released and analyse the frequency of certain words, the general meaning and so on. It would then present each story with a breakdown of the various narratives being presented, who is presenting them, and so forth. For example, on the subject of Aleppo, as well as giving the facts of what happened (and who is reporting which facts), it would report which outlets are using the term ‘liberate’ and which are focusing on civilian deaths caused by government forces.
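To make that a little more concrete, here’s a minimal sketch in Python of the word-frequency part of the idea. The outlet URLs, the list of ‘narrative’ terms and the crude HTML stripping are all invented placeholders – a real crawler would need proper parsing and a far better taxonomy.

```python
# Minimal sketch of the 'meta-news' idea: fetch articles from a few outlets
# and count which loaded 'narrative' terms each outlet uses.
# The outlet URLs and the term list below are placeholders, not real sources.
from collections import Counter
import re

import requests  # any HTTP client would do


OUTLETS = {
    "rt": ["https://example.com/rt/aleppo-story"],
    "fox": ["https://example.com/fox/aleppo-story"],
}

NARRATIVE_TERMS = ["liberate", "liberation", "civilian", "casualties", "rebels"]


def fetch_text(url: str) -> str:
    """Download an article and strip the markup very crudely."""
    html = requests.get(url, timeout=10).text
    return re.sub(r"<[^>]+>", " ", html).lower()


def narrative_profile(text: str) -> Counter:
    """Count how often each narrative term appears in the article text."""
    counts = Counter(re.findall(r"[a-z']+", text))
    return Counter({term: counts[term] for term in NARRATIVE_TERMS})


if __name__ == "__main__":
    for outlet, urls in OUTLETS.items():
        profile = Counter()
        for url in urls:
            profile.update(narrative_profile(fetch_text(url)))
        print(outlet, dict(profile))
```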

The tool could also be used to report sentiment on social media. For example, as a story breaks, it could report ‘users on Twitter are saying …’. Further investigation could show that users who say one thing about subject x tend to say such-and-such about subject y.
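Again, purely as a sketch of what that cross-subject view could look like – the records below are made up, and in practice the sentiment labels would have to come from some classifier:

```python
# Sketch: given per-user sentiment labels on two subjects, report how opinion
# on subject x lines up with opinion on subject y. All records are invented.
from collections import Counter

records = [
    {"user": "a", "subject": "x", "sentiment": "positive"},
    {"user": "a", "subject": "y", "sentiment": "negative"},
    {"user": "b", "subject": "x", "sentiment": "positive"},
    {"user": "b", "subject": "y", "sentiment": "negative"},
    {"user": "c", "subject": "x", "sentiment": "negative"},
    {"user": "c", "subject": "y", "sentiment": "positive"},
]

# Index each user's sentiment per subject.
by_user = {}
for r in records:
    by_user.setdefault(r["user"], {})[r["subject"]] = r["sentiment"]

# Count how sentiment on x pairs with sentiment on y.
pairs = Counter(
    (s["x"], s["y"]) for s in by_user.values() if "x" in s and "y" in s
)
for (on_x, on_y), n in pairs.items():
    print(f"{n} user(s) who are {on_x} on x are {on_y} on y")
```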

This tool isn’t a solution to finding the actual truth about a matter – that still depends on journalists publishing the truth. It does, however, reveal a different kind of truth, and reliably so (if you trust the algorithm): what the world is saying about certain subjects. Perhaps that’s a way of breaking free of our echo chambers.

Stop sharing your outrage about Brian Tamaki.

It started like this – a tweet by a New Zealand journalist appeared on my Twitter newsfeed:

[Screenshot: the tweet]

If you don’t know, Brian Tamaki is a relatively fundamentalist preacher in New Zealand who follows the ‘megachurch’ model of collecting donations from his parishioners and living a lavish lifestyle.

From context, we can see that he’s said something offensive, and here we go – we’ve got a reaction building from people outraged by it.

Let’s look at Brian Tamaki’s Twitter:

[Screenshot: Brian Tamaki’s Twitter]

Who does he sound like? Rich, has a large following, says hateful things, believes there’s a media conspiracy against him?

[Screenshot: Donald Trump’s Twitter]

And it looks like he’s a fan too!

[Screenshot: Brian Tamaki’s tweet about Donald Trump]

I would argue that a huge part of Donald Trump’s success was that he would say controversial things, which would then get airtime, and his popularity grew.

I would say the best way to combat this kind of hate speech is to not give it the air of attention. 

Let’s think about it – what’s actually wrong with what Brian Tamaki or Donald Trump said? It disrespects people? It encourages hate?

Ultimately it comes down to this: words hurt.

If that’s the case, then repeating those words further perpetuates the hurt. It reminds me of a joke where a man walks into a bar holding some dog shit and says ‘Look what I almost stepped in!’

What seems to happen is that some hateful person says some hateful thing; people see it, get outraged, and repeat the hateful thing to express their opposition to it; then more people see it and become outraged too!

This Twitter flurry does highlight the pervasive spread of hate, though:

Step 1. Someone initially says something hateful (hate).
Step 2. People respond with outrage. (outrage).
Step 3. People like myself respond with outrage to the outrage. (meta-outrage).
Step 4. No doubt there’s somebody out there annoyed that I’m complaining about this. (hyper-meta-outrage).

And in the end we’re all feeling agitated. Ffs.

[Image: zen]