
Currybet.net: No mention of blogs/Google/Twitter in Ofcom report

Martin Belam has run a quick search through the ‘Putting Viewers First’ section of Ofcom’s public service broadcasting review.

Despite looking at ‘emerging trends involved with Internet delivery of content to an ever more interactive British audience’, there’s no mention of search, Twitter, blogs, Google…

Full story at this link…

New study measures social media success of national newspapers

This week Martin Belam, of Currybet.net, released his study into the national newspapers’ use of web 2.0 tools, such as news aggregation and social media sites.

His aim was this:

“I wanted to examine, firstly, how well British newspaper content was performing on prominent social media sites, and secondly, see if there was any correlation between the placement of icons, widgets and links, and the presence of newspaper content on these services. In short, I wanted to measure UK newspaper success with social media services.”

To do this he monitored eight popular social bookmarking and link-sharing sites for a month, checking for the presence of UK newspaper URLs on their front or most ‘popular’ pages. Between July 15 and August 14 he counted just over 900 URLs from 12 major newspapers across the services (the Daily Express, Daily Mail, Daily Star, Financial Times, The Guardian, The Independent, The Mirror, News Of The World, The Scotsman, The Sun, The Telegraph and The Times).
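Belam has not published the scripts behind the count, but the tallying step he describes is simple enough to sketch. Here is a minimal illustration in Python, assuming each day’s crop of front-page URLs has already been captured by some other means; the domain list is hypothetical and trimmed to three titles:

    from collections import Counter
    from urllib.parse import urlparse

    # Illustrative mapping only; the study tracked 12 national titles.
    NEWSPAPER_DOMAINS = {
        "telegraph.co.uk": "The Telegraph",
        "guardian.co.uk": "The Guardian",
        "dailymail.co.uk": "Daily Mail",
    }

    def paper_for(url):
        """Return the newspaper a URL belongs to, or None."""
        host = urlparse(url).netloc.lower()
        for domain, paper in NEWSPAPER_DOMAINS.items():
            if host == domain or host.endswith("." + domain):
                return paper
        return None

    def tally(front_page_urls):
        """Count prominent newspaper URLs in one batch of snapshots."""
        counts = Counter()
        for url in front_page_urls:
            paper = paper_for(url)
            if paper:
                counts[paper] += 1
        return counts

Repeated daily against each service’s front or ‘popular’ page and summed over the month, a tally along these lines would produce the per-paper totals quoted below.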

Here’s a peek at some of the findings:

  • The Telegraph was the most successful UK newspaper in this study, with 243 prominent URLs on social media sites between July 15 and August 14 2008.
  • The poorest performances amongst the nationals were from the Daily Star (4 links), and the Daily Express and The Mirror (3 links each).
  • The correlation between having an ‘icon’ or ‘button’ for a specific social media service and success on that service appears to be weak or non-existent (a rough sketch of how such a correlation might be tested follows this list).
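On the last point, the test being described is whether papers that display a service’s button actually fare better on that service. A rough way to check, sketched here in Python with invented figures rather than Belam’s data, is to correlate a has-button indicator with the link counts (statistics.correlation requires Python 3.10 or later):

    from statistics import correlation  # Python 3.10+

    # Invented figures for 12 papers on one hypothetical service:
    # 1 if the paper displays that service's button, 0 if not...
    has_button = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0]
    # ...and how many of its URLs reached the service's popular page.
    link_count = [12, 5, 96, 3, 58, 4, 3, 41, 243, 70, 8, 88]

    # Pearson's r: a coefficient near zero is what "weak or
    # non-existent" correlation looks like on real data.
    print(round(correlation(has_button, link_count), 2))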

The full study can be downloaded at this link, for £25.

Early problems with ACAP

ACAP (the Automated Content Access Protocol) was designed as a system that lets content publishers embed in their websites machine-readable statements of their access and use policies, expressed in a language that search engines can understand.
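In practice, ACAP 1.0 was issued as a set of extensions to robots.txt, with publishers adding ACAP-prefixed fields (ACAP-crawler, ACAP-disallow-crawl and so on) alongside the familiar rules. As a hedged sketch rather than a rendering of the spec, a crawler that wanted to see what a site declares might do something like this in Python:

    import urllib.request

    def fetch_acap_fields(site):
        """List ACAP-prefixed fields from a site's robots.txt.

        Deliberately naive: it surfaces any 'ACAP-*' field/value
        pairs without interpreting or enforcing them.
        """
        with urllib.request.urlopen(site.rstrip("/") + "/robots.txt") as resp:
            text = resp.read().decode("utf-8", errors="replace")
        fields = []
        for line in text.splitlines():
            line = line.split("#", 1)[0].strip()  # drop comments
            if line.lower().startswith("acap-"):
                name, _, value = line.partition(":")
                fields.append((name.strip(), value.strip()))
        return fields

    # A site declaring "ACAP-crawler: *" and "ACAP-disallow-crawl: /archive/"
    # would come back as [("ACAP-crawler", "*"), ("ACAP-disallow-crawl", "/archive/")].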

Over on Currybet.net Martin Belam has outlined some of the major flaws, as he sees them, of ACAP – which launched in New York last week.

Here’s a brief outline, but you have to go to his blog to get the full picture:

It isn’t user-centred

“On the ACAP site I didn’t see anything that explained to me why this would currently be a good thing for end users.

“It seems like a weak electronic online DRM – with the vague promise that in the future more ‘stuff’ will be published, precisely because you can do less with it.”

It isn’t technically sound

“I’ve no doubt that there has been technical input into the specification.

“It certainly doesn’t seem, though, to have been open to the round-robin peer review that the wider Internet community would expect if you were introducing a major new protocol you effectively intended to replace robots.txt.”

The ACAP website tools don’t work

“I was unaware that there was a ‘known bug in Mozilla Firefox’ that prevented it saving a text file as a text file.

“I was going to make a cheap shot at the way that was phrased, as it clearly should have been ‘there is a known bug in our script which affects Mozilla Firefox’.

“I thought though that I ought to check it in Internet Explorer first – and found that the ACAP tool didn’t work in that browser either.”

Update:

Ian Douglas, at the Telegraph, seems to have similar reservations about ACAP being too publisher-centric:

“Throughout Acap’s documents I found no examples of clear benefits for readers of the websites or increased flexibility of uses for the content or help with making web searches more relevant.

“The new protocol focuses entirely on the desires of publishers, and only those publishers who fear what web users will do with the content if they don’t retain control over it at every point.”