My name is David Johnston. I’m a 31-year-old New Zealand-born citizen.
I have a bachelor’s degree in computer science and four years’ commercial experience as a data transformation and web developer. I spent two years at Acme transforming documents to and from a spine, and the last two as a more general frontend and full stack web developer.
With the recent election of Donald Trump and Russian manipulation of social media, I feel I’d rather be working in this space than in my current job.
What I want to do is create an API for scraping news websites and social media, gathering sentiment, and presenting it in wordcloud format or similar.
I don’t have a particularly theoretical approach in mind – I’m not educated in linguistics; my interest is in the software engineering solution.
Have a look at this web application I made: www.blacksheepcode.com – this is something I threw up pretty quickly and haven’t refined, but it shows the possibilities of web browsers as a good application interface.
I want to create an open source API, free for the world to use, but something you might find useful.
Essentially, a good way to do this would be for me to go to university, and make this project the subject of my postgrad thesis.
What I’m asking of you is whether you offer sponsorship or scholarships that would allow me to do this. I’d be looking for $60,000 living costs/year + study costs.
I hope this finds you well. I’ve attached my CV in case you’d like more detail on my technical experience.
At the same time, there is also a general distrust of mainstream outlets, and the ‘fake news’ dog whistle is itself used to criticise the mainstream media – by suggesting that it’s the mainstream media that is reporting things wrong.
One only needs to look at the responses to @WashingtonPost’s Twitter account to see examples of this.
Caveat: it’s hard to tell whether accounts like these are troll or bot accounts.
While I don’t think the mainstream media is outright producing lies or flatly factually incorrect content, I think it is fair to say that the media has a vested interest in producing certain kinds of content. A lot of what we see in the media now is opinion or ‘analysis’ – which isn’t something that needs to withstand basic fact checking.
Recently, I’ve taken an interest in watching RT (Russia Today – a Russian state-run media outlet). It’s interesting to see the difference in what RT says about particular issues, as opposed to, say, Fox News.
For example, let’s look at Aleppo:
So we have two problems:
People are just going to share whatever suits them.
The media have their own agendas, which influence the content they produce.
Now we have a problem – how do we decide what content to consume?
Also – we’re not concerned only with the actual truth of the matter; we also need to know what other people are thinking and reading.
The answer: meta-news.
Instead of reading news from your favourite news site, whether that’s RT, Fox News, Al Jazeera, The Guardian, The Washington Post – you read a factual, algorithmic aggregate of all news websites.
How this would work is that some kind of web crawler reads and views news content as it is released, and analyses the frequency of certain words, the general meaning, etc. It then presents that story with a breakdown of the various narratives being presented, who is presenting them, and so on. For example, on the subject of Aleppo, as well as giving the facts of what happened (and who’s reporting what facts), it would report which outlets are using the term ‘liberate’ and which are focused on civilian deaths by government forces.
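As a rough sketch of the frequency-and-attribution idea: the snippet below tracks which outlets use which terms in their coverage of the same story. The outlet names and headline texts are invented for illustration; a real crawler would fetch and tokenise live articles.

```python
# Sketch: map tracked terms to the outlets that use them.
# The coverage data here is entirely illustrative.
coverage = {
    "Outlet A": "government forces liberate eastern districts",
    "Outlet B": "civilian deaths mount as government forces advance",
}

def term_usage(coverage, terms):
    """For each tracked term, list the outlets whose text contains it."""
    usage = {term: [] for term in terms}
    for outlet, text in coverage.items():
        words = text.lower().split()
        for term in terms:
            if term in words:
                usage[term].append(outlet)
    return usage

print(term_usage(coverage, ["liberate", "civilian"]))
# → {'liberate': ['Outlet A'], 'civilian': ['Outlet B']}
```

A production version would need real tokenisation and stemming rather than a naive `split()`, but the shape of the output – term, then who said it – is the point.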
The tool could also be used to report sentiment on social media. For example, as the story breaks, it can report ‘users on Twitter are saying …’. Further investigation can show that ‘users who say this about subject x are saying such and such about subject y’.
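The cross-topic idea above can be sketched as a simple aggregation: given users’ stances on one subject, count their stances on another. The users, topics, and stance labels below are all hypothetical placeholders.

```python
from collections import defaultdict

# Illustrative (user, topic, stance) records; none of this is real data.
posts = [
    ("user1", "x", "pro"), ("user1", "y", "anti"),
    ("user2", "x", "pro"), ("user2", "y", "anti"),
    ("user3", "x", "anti"), ("user3", "y", "pro"),
]

def cross_topic(posts, topic_a, stance_a, topic_b):
    """Among users with stance_a on topic_a, count stances taken on topic_b."""
    selected = {u for u, t, s in posts if t == topic_a and s == stance_a}
    counts = defaultdict(int)
    for u, t, s in posts:
        if u in selected and t == topic_b:
            counts[s] += 1
    return dict(counts)

print(cross_topic(posts, "x", "pro", "y"))
# → {'anti': 2}  i.e. users pro on x are anti on y
```

Real stance detection is of course the hard part; this only shows how the correlations would be surfaced once stances are labelled.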
This tool isn’t a solution to finding the actual truth about a matter; that still depends on journalists publishing the truth. It does, however, reveal a different kind of truth, and a reliable one at that (if you trust the algorithm) – what the world is saying about certain subjects. Perhaps that’s a way of breaking free of our echo chambers.
This tutorial outlines a technique, using the Chrome browser, that allows you to map webapp resources to local copies of those resources.
In this tutorial I will be demonstrating this technique for the purpose of modifying .scss and .css resources.
Effectively – the Chrome browser will use your local copies of the resources to render the webpage. This will allow you to modify your version controlled resources in the browser developer tools, and see the changes immediately.
No more tweaking in the browser and then copying to version controlled resources. No more redeploying each time you want to see your changes. This may greatly speed up your development process.
For this tutorial I assume you are generally familiar with creating webapps, deploying to web servers, and working with CSS and SCSS. If you don’t know Sass/SCSS, don’t worry – it’s very simple, and you should learn it, because it’s awesome.
In this tutorial I’m deploying a Java webapp to Tomcat 8, and I’m using Maven for my dependency management and deployment.
If you’re not using these technologies, don’t be too worried, so long as you know how to deploy your own webapp.