I flicked through a Google research paper last week that I think some of you might find interesting. In the study, researchers analyse sharing and its relationship to a video’s popularity, and while the whole paper is worth a read, I found the discussion on the ‘socialness’ of popular videos to be the most interesting.
The key takeaways from the discussion (section 6.1 if you’re interested) were:
1) Not all popular videos are highly social
2) Most videos become popular on YouTube through search and related videos (not through sharing/referrals).
3) Viral videos rarely make it into YouTube discovery mechanisms such as search/related videos.
3.1) The data suggests the way YouTube computes related videos does not apply well to viral videos.
If you want to read more check out the full paper.
It’s no secret that content farms are crap-holes, but love em or hate em they can still teach us a thing or two. Let me explain:
Journalists and media pundits like to think that only quality content sells. They think that anything whipped up by a drone (human or otherwise) can’t draw an audience and can’t make money. A lot of these pundits are oblivious to the amount of churn and burn their own outlets do, but more than that I think this approach is short-sighted because it shuts down some interesting entrepreneurship in machine writing.
A case in point is Forbes. They’ve been experimenting on this front for some time, and there’s definitely potential to ramp up that approach in other financial news outlets. Whether it’s aggregating earnings forecasts or writing up press releases, most of that stuff is bland and formulaic, and it’s not hard to envisage an algorithm putting it together automatically.
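To make the idea concrete, here’s a toy sketch of what template-based machine writing can look like. Everything in it is invented for illustration (the company, the figures, the wording), and production systems are far more sophisticated, but it shows how formulaic earnings coverage lends itself to automation:

```python
# A minimal sketch of template-based machine writing, assuming a structured
# earnings feed of (company, ticker, actual EPS, consensus estimate).
# All names and figures below are hypothetical.

def earnings_blurb(company, ticker, eps, estimate):
    """Turn one row of structured earnings data into a formulaic sentence."""
    surprise = eps - estimate
    direction = "beat" if surprise > 0 else "missed" if surprise < 0 else "met"
    sentence = (
        f"{company} ({ticker}) reported earnings of ${eps:.2f} per share, "
        f"which {direction} the consensus estimate of ${estimate:.2f}."
    )
    if direction != "met":
        sentence += f" The surprise was ${abs(surprise):.2f} per share."
    return sentence

print(earnings_blurb("Acme Corp", "ACME", 1.32, 1.25))
```

Swap the hard-coded values for a real data feed and you have a pipeline that writes bland-but-serviceable copy while the journos do something more interesting.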
For a more recent example we can look at how the LA Times was able to beat its competitors on a breaking news story due to its robot writer.
If I had the resources, I’d be putting a lot of time and money into exploring how we can use algorithms and machine writing to introduce an element of classy content farming. If we can leverage this technology, it would mean outlets could have their journos spending more time on quality reporting without shirking the boring stuff.
It would mean more content, more pageviews and more quality.
Earlier this week I read a great discussion on the problem of online abuse and what publishers are doing to clean up their comments and social chatter. While the article itself was a great read, for me what was even more exciting was the fact that it had made it online in the first place.
Online comments and their quality, or lack thereof, are a perennial topic of debate in media circles, but for me the quality and direction of this article at Journalism.co.uk represented a turning point for the topic. Twelve months ago the idea that you could clean up even the dirtiest threads (and that this was a worthwhile pursuit) would have been almost laughable at many traditional outlets. But study after study (not to mention common sense) has made it clear that discussion/debate is a valuable resource, and cleaning it up is an achievable outcome.
All of this is happening at the same time as several new media heavyweights roll out some exciting initiatives. The most obvious are Nick Denton’s latest tweaks to the Kinja system, which Matthew Ingram labelled one of 2013’s most disruptive moves in online media. But Denton is not alone. Quartz has also been rethinking how reader comments work, and broader but no less impressive changes have been afoot across a larger swathe of Atlantic Media titles. Medium’s new approach to user comments has also been taking hold in other parts of the web.
There are dozens more I could add to this list, and I think it’s plausible that the cumulative effect of these efforts will number the days of troll-infested communities. There is still much work to be done, and it is by no means a problem that can be completely eradicated. Nevertheless community management has reached a crossroads, and all publishers need to make sure they’re part of this new wave.
Around a year ago I opened an account on Buzzfeed with the goal of shaping a post that would make the front page, and I succeeded on my first attempt after spending an hour browsing cat videos on YouTube.
It’s hard to draw any wide-reaching conclusions from that post, but I still think it hints at a few truths about Buzzfeed and its audience… truths that Jonah Peretti has spent a lot of time and effort obscuring.
Truth 1: Buzzfeed likes to build up mystery and hype surrounding how it writes for its audience, but the cold hard truth is that filtering through a ‘cat videos’ search on YouTube (my exact keyphrase) is sometimes enough to garner 20K+ views and make it onto the homepage.
Truth 2: Data science and research have helped define broad categories that work on Buzzfeed, but there’s no algorithm that dictates its content. Buzzfeed has not ‘cracked the code’ for making viral content… it relies on guesswork and creativity (within well-researched boundaries) to produce its content. Sometimes it works and sometimes it doesn’t, and this is only slightly different to how traditional newsrooms have worked since time immemorial.
I’m not saying it’s always this easy, and compared to the staff posts my cat-post metrics are modest to say the least. But I think Peretti’s rhetoric about the science behind Buzzfeed needs to be recognised for what it is, which is nothing more than a marketing pitch. What worked about my post was not that it had the backing of Buzzfeed’s viral magic, but that it was fresh content that hit the buttons of a niche audience (luck also plays a big part). It’s not easy but it’s not rocket science, and it’s not something that Buzzfeed does better than anyone else.
I’ve been working on data mining Twitter for a while now, and while it’s taken a fair amount of blood, sweat, and tears… the results have been worth the effort. Below is a snapshot of the interactions between ~2,000 Twitter users over seven days. It maps out a week of discussion on the #BBAU hashtag (from 2012) and you can explore the full dynamic map online here (it takes a minute or so to load).
This visualisation won’t mean much to you unless you watched the show, and it’s still not completely finished. It’s a good proof of concept though, and as a close watcher of the community I was amazed how much more sense it made when looking at it like this.
It’s fascinating to see how people cluster around influential users to form micro-communities within the broader picture. It goes without saying that this kind of visualisation has great potential, and aside from that it looks pretty cool :).
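For the curious, the core of the graph-building step behind a map like this can be sketched in a few lines of Python. The tweets below are invented stand-ins for the real #BBAU data, and I’m using mentions as the interaction signal for simplicity:

```python
# A rough sketch of building an interaction graph from hashtag tweets,
# using only the stdlib. The tweet records here are made up; in practice
# each one would come from the Twitter API.
from collections import Counter

tweets = [
    {"user": "alice", "mentions": ["bob"]},
    {"user": "carol", "mentions": ["bob", "dave"]},
    {"user": "dave",  "mentions": ["bob"]},
    {"user": "erin",  "mentions": ["carol"]},
]

# Each (author, mentioned_user) pair is a directed edge; repeated
# interactions between the same pair increase the edge weight.
edges = Counter((t["user"], m) for t in tweets for m in t["mentions"])

# In-degree (how often a user is mentioned) is a crude proxy for influence,
# i.e. the users that micro-communities cluster around in the visualisation.
in_degree = Counter(m for t in tweets for m in t["mentions"])

print(in_degree.most_common(1))  # → [('bob', 3)]
```

Feed the weighted edge list into a force-directed layout (Gephi or similar) and the clusters around influential users fall out on their own.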
I’ve put together an interesting map showing the overlap between native title claims and mining operations. The map is based on data from Geoscience Australia, and it shows active mines, deposits, and historical operations on one layer, and native title claims on the other (you can click on points of interest to get more info).
It’s interesting to note the extent of active native title in Australia. Also of interest are the claims that extend into the ocean above Cape York.
With its multiple layers and datasets, this visualisation is starting to test the limits of what Google Maps can achieve. Pretty cool stuff.
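For anyone wondering how the overlap itself gets tested, the basic geometric check is a point-in-polygon test. Here’s a stripped-down sketch using the standard ray-casting algorithm; the claim boundary and mine coordinates are made up for illustration, not the real Geoscience Australia data:

```python
# Does a mine site fall inside a native title claim polygon?
# Ray-casting test: cast a ray to the right from the point and count
# how many polygon edges it crosses; an odd count means "inside".

def point_in_polygon(x, y, polygon):
    """Return True if (x, y) lies inside the polygon (list of vertices)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the ray's height can cross it.
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

claim = [(0, 0), (10, 0), (10, 10), (0, 10)]   # hypothetical claim boundary
print(point_in_polygon(5, 5, claim))   # mine inside the claim → True
print(point_in_polygon(15, 5, claim))  # mine outside the claim → False
```

Real GIS tooling does this (and a lot more) for you, but running a check like this over every mine point against every claim polygon is all the “overlap” layer fundamentally is.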