Three Birmingham City journalism students have created a hyperlocal news agency for their final year dissertation project.
Newswaves aims to provide content for hyperlocal blogs around the West Midlands and drive traffic in their direction by publishing links and excerpts.
Most people start hyperlocal blogs purely out of love for their area and run them as a hobby, meaning they don’t always have the time or the means to cover all the stories they’d like to. That’s where the Newswaves team come in.
Online journalism innovator Paul Bradshaw has taken voluntary redundancy from his post as course leader for the online journalism MA at Birmingham City University, in what he says was a “complicated decision”.
“It was a very complicated decision,” he told Journalism.co.uk. “There are a lot of opportunities around data journalism that I want to explore and I want to spend more time on Help Me Investigate. I felt it was probably the right time to dive in to more of those opportunities and now I have time to accept offers I have been made. But I am wary of taking too much work on. Part of the point is to invest more time in Help Me Investigate. I plan to start some development work and explore business models soon.”
Bradshaw is also already working on two books: his own, on magazine editing, is set to be completed by the end of the year; another, dedicated to online journalism and co-written with former FT.com news editor Liisa Rohumaa, is likely to be out early next year.
On top of all that, he admits he may keep a toe in the teaching pool.
“I will certainly miss parts of teaching,” he told Journalism.co.uk. “I absolutely, enormously enjoyed teaching the students this year. Some of their work has been the best so far. I may still do a bit of teaching, but I think I have always wanted to keep growing and developing. The students say they are gutted, but they were quite excited and positive about what I am doing. I am experiencing a huge jumble of emotions. I am excited about the possibilities but I am really going to miss the students and staff.”
The panel included: Martin Belam (information architect, the Guardian; blogger, Currybet); John O’Donovan (chief architect, BBC News Online); Dan Brickley (Friend of a Friend project; VU University, Amsterdam; SpyPixel Ltd; ex-W3C); and Leigh Dodds (Talis).
“Linked Data is about using the web to connect related data that wasn’t previously linked, or using the web to lower the barriers to linking data currently linked using other methods.” (http://linkeddata.org)
I talked about how 2009 was, for me, a key year in data and journalism – largely because it has been a year of crisis in both publishing and government. The seminal point in all of this has been the MPs’ expenses story, which demonstrated both the power of data in journalism and the need for transparency from government. Hence, for example, the government’s appointment of Sir Tim Berners-Lee, its call for developers to suggest uses for public data, and the imminent launch of Data.gov.uk.
Even before that, the New York Times and the Guardian both launched APIs at the beginning of the year, MSN Local and the BBC have both been working with Wikipedia, and we’ve seen the launch of a number of startups and mashups around data, including Timetric, Verifiable, BeVocal, OpenlyLocal, MashTheState, the open source release of EveryBlock, and Mapumental.
Q: What are the implications of paywalls for Linked Data?
The general view was that Linked Data – specifically standards like RDF [Resource Description Framework] – would allow users and organisations to access information about content even if they couldn’t access the content itself. To give a concrete example, rather than linking to a ‘wall’ that simply demands payment, it would be clearer what the content beyond that wall related to (e.g. key people, organisations, author, etc.).
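To make that idea concrete, here is a minimal sketch of publishing machine-readable metadata about a paywalled article as RDF-style (subject, predicate, object) triples. The article URI and all the values are invented for illustration; only the Dublin Core vocabulary URI is real, and a production system would use a proper RDF library rather than hand-rolled serialisation.

```python
# Expose public metadata about a paywalled article as RDF-style
# triples, serialised in (simplified) N-Triples syntax.
# The article URI and all values below are hypothetical.

DC = "http://purl.org/dc/terms/"  # Dublin Core terms vocabulary

article = "http://news.example.com/articles/1234"  # hypothetical URI

# Metadata that stays public even though the article body is paywalled.
triples = [
    (article, DC + "title", '"Council spending under scrutiny"'),
    (article, DC + "creator", '"Jane Reporter"'),
    (article, DC + "subject", '"local government finance"'),
    (article, DC + "accessRights", '"subscription required"'),
]

def to_ntriples(triples):
    """Serialise (s, p, o) triples: URIs in angle brackets,
    quoted strings passed through as literals."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

A crawler or aggregator reading these triples could tell its users who wrote the piece and what it is about before sending them to the paywall.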
Leigh Dodds felt that using standards like RDF would allow organisations to more effectively package content in commercially attractive ways, e.g. ‘everything about this organisation’.
Q: What can bloggers do to tap into the potential of Linked Data?
This drew some blank responses, but Leigh Dodds was most forthright, arguing that the onus lay with developers to build things that would make it easier for bloggers to, for example, visualise data. He also pointed out that currently, when someone does something with data, it is not possible to trace that back to the source, and that better tools would effectively allow an equivalent of pingback for data included in charts (the person who created the data would know it had been used, as would others).
Q: Given that the problem for publishing lies in advertising rather than content, how can Linked Data help solve that?
Dan Brickley suggested that OAuth technologies (where you use a single login identity across multiple sites that carries information about your social connections, rather than creating a new ‘identity’ for each) would allow users to specify more precisely how they experience content, for instance: ‘I only want to see article comments by users who are also my Facebook and Twitter friends.’
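A toy sketch of that filtering idea, with invented names and data: in a real system the friend list would be fetched over OAuth from the user’s linked Facebook or Twitter accounts rather than hard-coded.

```python
# Filter article comments so only those by people in the reader's
# own social graph are shown. All names/data here are invented.

comments = [
    {"author": "alice", "text": "Great piece."},
    {"author": "mallory", "text": "First!!!"},
    {"author": "bob", "text": "See also last week's report."},
]

# In practice: discovered via the reader's OAuth-linked accounts.
my_friends = {"alice", "bob"}

def comments_from_friends(comments, friends):
    """Keep only comments whose author is in the reader's network."""
    return [c for c in comments if c["author"] in friends]

for c in comments_from_friends(comments, my_friends):
    print(c["author"], "-", c["text"])
```

The same friend set could then drive the personalised advertising the panel discussed.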
The same technology would allow for more personalised, and therefore more lucrative, advertising. John O’Donovan felt the same could be said about content itself – more accurate data about content would allow for more specific selling of advertising.
Martin Belam quoted James Cridland on radio: ‘[The different operators] agree on technology but compete on content’. The same was true of advertising but the advertising and news industries needed to be more active in defining common standards.
Leigh Dodds pointed out that semantic data was already being used by companies serving advertising.
I asked members of the audience who they felt were the heroes and villains of Linked Data in the news industry. The Guardian and BBC came out well – The Daily Mail were named as repeat offenders who would simply refer to ‘a study’ and not say which, nor link to it.
Martin Belam pointed out that the Guardian is increasingly asking itself ‘how will that look through an API?’ when producing content, representing a key shift in editorial thinking. If users of the API are swallowing up significant bandwidth or driving significant traffic, that would probably warrant talking to them about a more formal relationship (either customer-provider or partnership).
A number of references were made to the problem of provenance – being able to identify where a statement came from. Dan Brickley specifically spoke of the problem with identifying the source of Twitter retweets.
Dan also felt that the problem of journalists not linking would be solved by technology. In an earlier conversation, he had talked of ‘subject-based linking’ and the impact of SKOS [Simple Knowledge Organisation System] and Linked Data-style identifiers. He saw a problem in that, while new articles might link to older reports on the same issue, older reports were not updated with links to the newer ones. Tagging individual articles was problematic in that you then had the equivalent of an overflowing inbox.
Finally, here’s a bit of video from the very last question addressed in the discussion (filmed with thanks by @countculture):
Birmingham City University’s School of Media will exhibit final projects from its media and communications students at an event this week – viewing on June 11 for media professionals; public view on June 12 – at Fazeley Street Studios in Birmingham.
The event will highlight the skills and abilities acquired by this year’s graduates during the three-year course.
Students will show their final year production projects from their specialist areas, which include television, radio, PR, journalism, photography, new media and music industries.
“I think this is a good opportunity to make ourselves known to the media industry and to meet potential professionals who are interested in our work,” says Mohammed Adnan (final year student on the media and communication degree).