ACAP (the Automated Content Access Protocol) was designed as a system that lets content publishers embed in their websites information detailing access and use policies, in a language that search engines can understand.
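For context, ACAP 1.0 expresses those policies as an extension of the familiar robots.txt syntax. A rough, illustrative sketch (the paths here are invented, and the exact directive set should be checked against the ACAP 1.0 specification) might look like this:

```text
# Conventional robots.txt directives, as before
User-agent: *
Disallow: /private/

# ACAP directives for ACAP-aware crawlers (illustrative)
ACAP-crawler: *
ACAP-disallow-crawl: /private/
ACAP-allow-crawl: /news/
```

Crawlers that don't understand ACAP are expected to fall back on the ordinary robots.txt directives, which is why the two sets sit side by side in the same file.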
Over on Currybet.net Martin Belam has outlined some of the major flaws, as he sees them, of ACAP – which launched in New York last week.
Here’s a brief outline, but you have to go to his blog to get the full picture:
It isn’t user centred
“On the ACAP site I didn’t see anything that explained to me why this would currently be a good thing for end users.
“It seems like a weak electronic online DRM – with the vague promise that in the future more ‘stuff’ will be published, precisely because you can do less with it.”
It isn’t technically sound
“I’ve no doubt that there has been technical input into the specification.
“It certainly doesn’t seem, though, to have been open to the round-robin peer review that the wider Internet community would expect if you were introducing a major new protocol you effectively intended to replace robots.txt”
The ACAP website tools don’t work
“I was unaware that there was a ‘known bug in Mozilla Firefox’ that prevented it saving a text file as a text file.
“I was going to make a cheap shot at the way that was phrased, as it clearly should have been ‘there is a known bug in our script which affects Mozilla Firefox’.
“I thought though that I ought to check it in Internet Explorer first – and found that the ACAP tool didn’t work in that browser either.”
Ian Douglas, on the Telegraph, seems to have similar feelings about ACAP being too publisher-centric:
“Throughout Acap’s documents I found no examples of clear benefits for readers of the websites or increased flexibility of uses for the content or help with making web searches more relevant.
“The new protocol focuses entirely on the desires of publishers, and only those publishers who fear what web users will do with the content if they don’t retain control over it at every point.”