Tag Archives: search engine

ACAP answers its critics

The ACAP project launched in November with the hope of being the technological solution to end clashes between news publishers and search engines over content use.

In addition to the back-slapping and the pomp, the launch brought with it hefty criticism of the new system.

The team behind the project has now attempted to satisfy some of the criticism thrown its way by responding to what it considers the main thrust of the argument against it.

Here is a summary of the main criticisms ACAP has singled out and its responses (full list):

Criticism: “Publishers should not be allowed to control their content”

Response: Well, you would hardly expect us to agree with this…

Criticism: “This is simply a way for publishers to ‘lock up’ their content”

Response: …Publishers who implement ACAP will have the confidence to make content available much more widely than is currently the case. Few would condone stealing a pile of newspapers from a newsstand and giving them away to passers-by for free, yet, there are those who think that this behaviour is completely acceptable – indeed normal – in the online environment…

Criticism: “Robots.txt works perfectly well”

Response: …We recognise that robots.txt is a well-established method for communication between content owners and crawler operators. This is why, at the request of the search engines, we worked to extend the Robots Exclusion Protocol, not to replace it (although this posed us substantial problems)… ACAP provides a standard mechanism for expressing conditional access, which is what is now required. At the beginning of the project, search engines made it clear that ACAP should be based on robots.txt. ACAP therefore works smoothly with the existing robots.txt protocol…
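
To make that relationship concrete, here is a minimal sketch of how ACAP-style directives might sit alongside classic robots.txt rules in a single file; the directive names and paths below are illustrative assumptions rather than a definitive rendering of the ACAP specification.

```python
# Minimal sketch: one robots.txt body combining classic Robots Exclusion
# Protocol rules with hypothetical ACAP-style extensions.
# Directive names and paths are illustrative assumptions only.

CLASSIC_RULES = [
    "User-agent: *",
    "Disallow: /private/",
]

# Assumed ACAP-style lines expressing conditional access, e.g. allowing
# crawling of news pages while keeping an archive section out of the index.
ACAP_RULES = [
    "ACAP-crawler: *",
    "ACAP-allow-crawl: /news/",
    "ACAP-disallow-index: /news/archive/",
]


def build_robots_txt(classic, acap):
    """Join classic and ACAP-style rules into a single robots.txt body."""
    return "\n".join(classic + [""] + acap) + "\n"


if __name__ == "__main__":
    print(build_robots_txt(CLASSIC_RULES, ACAP_RULES))
```

Because the classic lines remain untouched, a crawler that does not understand the extra directives can simply ignore them – which is the compatibility the response above claims.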

Criticism: “This is just about money for publishers”

Response: No: but no one would deny that it is partly about money.

Publishers are not ashamed of making money from publishing – that is their business… Business models are changing, and publishers need a tool that is flexible and extensible as new business models arise. ACAP will be entirely agnostic with respect to business models, but will ensure that content owners can adopt the business model of their choice…

Criticism: “The big search engines aren’t involved so don’t waste your time”

Response: Major search engines are involved in the project. Exalead, the world’s fourth largest search engine, has been a full participant in the project.

Any lack of public endorsement by the major search engines has not meant a lack of involvement – indeed, quite the opposite…

Food for thought on feeds (but only a third fed)

Yesterday was a day of thirds for me. Two thirds good, one third not so good. In the first two thirds, I attended a roundtable discussion on RSS hosted by MediaFed, a provider of RSS feed tools and services.

It would have been topped off with an excellent three-course meal had I not had to leave for another meeting after the starter (so only one third of a lunch for me, and those that know me well will appreciate how I grieved for the loss of that sticky ginger pudding).

Ahem, but I digress. The purpose of the first discussion was to get some representatives from the UK publishing industry around a table to discuss their current implementation of RSS feeds and how they expect the platform to develop in the future. Before I summarise the points of the discussion, I think it would be useful to summarise what I think are the key RSS requirements from both readers and publishers.
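
As background to that discussion, here is a minimal sketch of what consuming a publisher’s RSS 2.0 feed involves; the feed URL is a placeholder assumption, not a feed mentioned at the roundtable.

```python
# Minimal sketch: fetch an RSS 2.0 feed and list its items.
# The feed URL is a placeholder assumption.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://www.example.com/feed.xml"


def fetch_items(url):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    channel = tree.getroot().find("channel")
    return [
        (item.findtext("title", default=""), item.findtext("link", default=""))
        for item in channel.findall("item")
    ]


if __name__ == "__main__":
    for title, link in fetch_items(FEED_URL):
        print(f"{title} -> {link}")
```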

Why the front page is still relevant

When the incremental overhaul of Guardian.co.uk enveloped the site’s homepage earlier this year, there was much talk of the growing irrelevance of newspaper websites having a ‘front’.

Why a front when so many readers/users/visitors/viewers come in through the side door of search and RSS feeds?

Jeff Jarvis quoted figures suggesting that as few as 20 per cent of daily visitors get to see it.

Search engine optimisation – that’s the key, isn’t it? With ubiquitous navigation from all parts of the site? Yes, truly it’s important. But is that the case for every user of a newspaper website?

Well, up to a point, Sir – as Mr Salter might say.

Let’s take that magic 20 per cent (I have to apologise for not knowing what this figure actually relates to, but I’ll use it as a starting point rather than a crux). Why would a fifth of daily users want to go in via the front door?

Perhaps they’re not fans of the Google hegemony, so avoid its referrals like the plague? Or not tech-savvy enough to master RSS feeds? Or pretty much only want news from a single perspective, so rely on just one site as ‘the news’?

But what if accessing the news for them wasn’t as simple as scanning NewsFire or banging a search term into Google and quickly scanning a dozen or so relevant links?

What if navigating all the non-uniform sites linked to from Google News was a cripplingly slow nightmare?

What if the architecture of the sites they visit is as relevant as – if not more relevant than – the slant those sites put on the news?

Well, if you’re a blind or partially sighted internet user that’s pretty much how it works.

Over the course of this week Journalism.co.uk is running a series of reports looking at difficulties blind and partially sighted users have accessing leading UK national newspaper websites.

To this end we asked a number of volunteers to show us, first-hand, the common problems they face. During our assessments the value of a homepage became strikingly obvious.

Our volunteers tended to start their internet news searches from the homepage of a favoured news site, rather than a search engine.

Our principal volunteer John Allnutt told us that he tended to glean his news from the BBC News site as it had simple navigation that he was used to using and its accessibility information was easily available.

Nothing so strange in that. Most people have favourites. But the tendency to surf differing sources of news isn’t common, we found, amongst those with visual impairment.

It became clear that once a user had got used to the unique and sometimes esoteric navigation of a news site using screen-reading technology, logic prevailed. It’s easier and quicker just to go to the site where you know all the idiosyncrasies and curios, rather than getting stuck in the frustrating hamster-wheel of figuring out the complexities of other sites.

Furthermore, many news sites don’t have standardised design throughout, making it harder still to jump into a certain section and expect it to be laid out and navigable in the same way as the rest of the site. Easier, then, just to enter through the home page and use that as the fulcrum for all your movements around the site.

Our observation isn’t just limited to the individuals we worked with on the project.

Trenton Moss, director of Web Credible, a web usability and accessibility consultancy that helped us in the early part of the project, told us that this is a common phenomenon.

Blind and visually impaired individuals will continue to use these sites in spite of their flaws, he told us, perfecting their use of the imperfect navigation of one or a few sites, starting from the homepage, to access news online.

There is no ubiquity of design that would allow blind and visually impaired users to float easily between news sites and use search engines as the quick and easy route to the news they want.

Ubiquitous design across a range of news websites isn’t something that’s likely to happen soon, if ever.

It’s because of this that front pages remain important as a point of entry for navigation and an easily accessible summation of all that is important.

OMG – it’s Omgili

Approaching its first birthday is “subjective” search engine Omgili, which aims to give users ‘consumer opinions, debates, discussions, personal experiences, answers and solutions’ in response to search terms.

It works by crawling over 100,000 boards, forums and other discussion groups and is clever enough to differentiate between discussion entities such as topic, title, replies and discussion date.
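
To make the idea of ‘discussion entities’ concrete, here is a minimal sketch of the kind of structured record such a crawler might extract from a forum thread; the field names and matching logic are assumptions for illustration, not Omgili’s actual pipeline.

```python
# Minimal sketch of a structured record a discussion crawler might extract.
# Field names and matching logic are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Discussion:
    topic: str                                    # board or forum section
    title: str                                    # title of the opening post
    replies: list = field(default_factory=list)   # reply texts
    discussion_date: date = date(2007, 11, 1)     # date the thread started


def mentions(discussion, term):
    """Crude relevance check: does the term appear in the title or replies?"""
    term = term.lower()
    return term in discussion.title.lower() or any(
        term in reply.lower() for reply in discussion.replies
    )


# Usage: a single thread scored against a search term such as "widgets".
thread = Discussion(
    topic="Online journalism",
    title="Are widgets worth the effort?",
    replies=["We added one last month", "Readers barely noticed it"],
)
print(mentions(thread, "widgets"))  # True
```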

Here are the results for a Journalism.co.uk hot topic of the week – widgets:

Not a completely scientific study, it’s true, but a rather neat graphic. You can compare the ‘chatter’ of up to five search terms and plot them over three days, seven days or a month.


Time to clean up your online image

Reuters today reported on the growing need for candidates to polish not only their CVs but also their online image before applying for a job.

With the growing popularity of social network sites – and the use of search engines like Google – employers are using online searches to see whether the well-rounded image you portray on your CV matches reality.

As a result, new companies have been emerging to offer clean-up services for jobseekers. For a monthly fee, ReputationDefender.com goes beyond search engines into the realms of password-protected areas to report back on your online profile.

We here at Journalism.co.uk have searched online for information about candidates, with some very interesting results. However, unlike the 40 per cent of employers cited in the article, we haven’t let it be the deciding factor in whether we hire someone.