Tag Archives: simon willison

Future of News meet-ups in Brighton and Birmingham

Inspired by the first UK Future of News meet-ups in London, a couple of regional groups have been formed, with Brighton and the West Midlands holding their inaugural meetings last night.

My colleague Laura Oliver live-blogged some notes from our Brighton event, which featured the Argus online editor, Jo Wadsworth, and the Guardian’s software architect, Simon Willison.

Willison, who was the lead developer for the Guardian’s crowdsourced MPs’ expenses project, talked about the ups and downs of user-driven information gathering, and about his latest collaborative launch, Wildlifenearyou.com, which collects users’ animal photographs for an online wildlife mapping project. Users can rank and identify photographs, building their site profiles. The feature allowing users to pick their favourite of two pictures (for example, what’s your favourite meerkat?) accumulated more than 5,000 votes within a few hours.
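A favourite-of-two feature like this can be turned into a ranking with something as simple as a win count per photo. The sketch below is a minimal illustration in Python; the names and scoring are hypothetical, not Wildlifenearyou’s actual code.

```python
from collections import Counter

def rank_photos(votes):
    """votes: list of (winner, loser) pairs from favourite-of-two comparisons.
    Returns photo ids ordered by number of wins (a crude but simple ranking)."""
    wins = Counter(winner for winner, _ in votes)
    seen = {photo for pair in votes for photo in pair}
    return sorted(seen, key=lambda photo: wins[photo], reverse=True)

# Three "which do you prefer?" votes between three meerkat photos
votes = [("meerkat_a", "meerkat_b"),
         ("meerkat_a", "meerkat_c"),
         ("meerkat_b", "meerkat_c")]
print(rank_photos(votes))  # meerkat_a first, with two wins
```

A win count ignores who each photo was compared against; a production site would more likely use a pairwise rating scheme, but the data collected is the same.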

As Laura notes, a species-specific version of Wildlifenearyou.com, Owlsnearyou.com, launched just a few weeks ago. To get the site some extra coverage, Owlsnearyou cannily “piggybacked” on the Superbowl hashtag on Twitter by creating “Superb Owl Day”… Geddit?

Willison also told the group about OpenStreetMap, the first free, wiki-style, editable map of the whole world. He said that the project has become adept at responding to crises.

When the earthquake occurred, OpenStreetMap was given some high-resolution photographs of Haiti, and the team traced them to create the best digital map of Haiti available. It has become the default map for rescue teams, Willison added.

Read Laura’s full post at this link…

UK Future of News gets local

Future of News group organiser Adam Westbrook has summarised last week’s meet-up on his blog and also updates on the birth of three UK splinter groups: in Brighton, South Wales and the West Midlands. Full post at this link…

On Sarah Booker’s suggestion, I set up a page for the Brighton group: places are filling fast for our first meeting on 8 February, featuring developer Simon Willison (behind the Guardian’s MPs’ expenses crowdsourcing project and wildlifenearyou.com) and the Argus online editor, Jo Wadsworth. So put your name down quickly!


Journalism Daily: Candy Box billboards; Chicago Tribune’s new innovators; VentnorBlog reports Vestas

Journalism.co.uk is trialling a new service via the Editors’ Blog: a daily round-up of all the content published on the Journalism.co.uk site.

We hope you’ll find it useful as a quick digest of what’s gone on during the day (similar to our e-newsletter) and to check that you haven’t missed a posting.

We’ll be testing it out for a couple of weeks, so you can subscribe to the feed for the Journalism Daily here.

Let us know what you think – all feedback much appreciated.


#datajourn: Simon Willison’s ‘hack day’ tools for non-developers

The Guardian’s second (internal) hack day is imminent; the development team, members of the tech department and even journalists get together to play and build.

Read about the first one here. Remember this effort by guest hacker, Matthew Somerville: http://charlian.dracos.co.uk/?

In preparation for the second, Simon Willison (@simonw), the lead developer behind the Guardian’s MPs’ expenses crowdsourcing application, has helpfully put together an (external) list of tools for non-developers: “sites, services and software that could be used for hacking without programming knowledge as a pre-requisite.”

Full list at this link…

NewsInnovation videos from @newsmatters: featuring @kevglobal, @currybet, @markng, @simonw, @willperrin

The Media Standards Trust has finished uploading content from its NewsInnovation event, held earlier this month in association with NESTA and the WSRI, to its YouTube channel.

[Previous Journalism.co.uk coverage at this link]

We’ll embed the first segment of each session; for further instalments, follow the links below each video.

Part 2, Part 3, Part 4, Part 5.

  • Kevin Anderson (@kevglobal), Guardian blogs editor, talks about news business models.

Part 2, Part 3, Part 4.

  • Ben Campbell talks about the Media Standards Trust website, Journalisted.

Part 2, Part 3, Part 4.

  • Will Perrin (@willperrin) on digital possibilities for the Chilcot Inquiry into the Iraq War.

Part 2.

  • Simon Willison (@simonw) of The Guardian talks about using the crowd to sift through MPs’ expenses.

Part 2, Part 3, Part 4.

  • Martin Belam (@currybet), information architect at the Guardian, on ‘The tyranny of chronology’.

Part 2, Part 3.

Newsinnovation London: Audio from the event

Journalism.co.uk had a great day at Friday’s inaugural Newsinnovation event hosted by the Media Standards Trust (MST).

As well as discussing the MST’s plans with the Associated Press for a new industry standard for story metadata, sessions covered the use of data for newsgathering and storytelling, hyperlocal publishing and communities and open source technology.

Have a read of Adam Tinworth’s posts on the event; watch Kevin Anderson’s video vox pops on the future of news; and check out Martin Belam’s handy list of links that were circulating during the sessions.

Below is some rough-and-ready audio from a few of the talks at the event:

The Guardian’s Simon Willison on its MPs’ expenses crowdsourcing experiment

Will Perrin on ‘hyperlocal’ and Talk About Local

My Football Writer’s Rick Waghorn on local online advertising system Addiply

Toby Moores and Reuters’ Mark Jones on social media, news and politics

Let the expenses data war commence: Telegraph begins its document drip feed

Andy Dickinson from the Department of Journalism at UCLAN sums up today’s announcement in this tweet: ‘Telegraph to drip-publish MP expenses online’.

[Update #1: Editor of Telegraph.co.uk, Marcus Warren, responded like this: ‘Drip-publish? The whole cabinet at once….that’s a minor flood, I think’]

Yes, let the data war commence. The Guardian yesterday released its ‘major crowdsourcing tool’, as reported by Journalism.co.uk at this link. As described by one of its developers, Simon Willison, on his own blog, the Guardian is ‘crowdsourcing the analysis of the 700,000+ scanned [official] MP expenses documents’. It’s the Guardian’s ‘first live Django-powered application’. It’s also the first time the news site has hosted something on Amazon EC2, he says. Within 90 minutes of launch, 1,700 users had ‘audited’ its data, reported the editor of Guardian.co.uk, Janine Gibson.
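The core mechanic of such a crowdsourcing tool is simple: hand each visitor an unreviewed page, record their verdict, and track overall progress. A minimal sketch in plain Python is below; the class and field names are hypothetical, and the Guardian’s actual application was a full Django web app rather than anything this bare.

```python
import random

class ReviewQueue:
    """Minimal sketch of a crowdsourced document-review queue:
    serve a random unreviewed page, record each verdict, report progress."""

    def __init__(self, page_ids):
        self.pending = set(page_ids)
        self.verdicts = {}  # page_id -> list of (user, verdict)

    def next_page(self):
        """Pick a random page that still needs a look, or None when done."""
        return random.choice(sorted(self.pending)) if self.pending else None

    def submit(self, page_id, user, verdict):
        """Record a verdict (e.g. 'interesting' / 'not interesting')."""
        self.verdicts.setdefault(page_id, []).append((user, verdict))
        self.pending.discard(page_id)

    def progress(self):
        """Fraction of pages audited so far."""
        total = len(self.pending) + len(self.verdicts)
        return len(self.verdicts) / total if total else 1.0

queue = ReviewQueue(["page1", "page2", "page3", "page4"])
page = queue.next_page()
queue.submit(page, "alice", "interesting")
print(f"{queue.progress():.0%} audited")  # 25% audited
```

The progress figure is what made the live counter (1,700 auditors in 90 minutes) possible: every verdict shrinks the pending set, so completion is just a ratio.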

The Telegraph was keeping mum, save for a few teasing tweets from Telegraph.co.uk editor Marcus Warren. A version of its ‘uncensored’ data was coming, but the paper would not say what or how much.

Now we know a bit more. As well as publishing the data in a supplement with Saturday’s newspaper, the Telegraph will gradually release the information online. So far, copies of claim forms have been published using Issuu software, beneath each cabinet member’s name. See David Miliband’s 2005-6 expenses here, for example. From the Telegraph’s announcement:

  • “Complete records of expense claims made by every Cabinet minister have been published by The Telegraph for the first time.”
  • “In the coming weeks the expense claims of every MP, searchable by name and constituency, will be published on this website.”
  • “There will be weekly releases region by region and a full schedule will be published on Tuesday.”
  • “Tomorrow [Saturday], the Daily Telegraph will publish a comprehensive 68-page supplement setting out a summary of the claims of every sitting MP.”

Details of what is and is not included in the official data are at this link. “Sensitive information, such as precise home addresses, phone numbers and bank account details, has been removed from the files by the Telegraph’s expenses investigation team,” the Telegraph reports.

So who is winning in the data wars? Here’s what Paul Bradshaw had to say earlier this morning:

“We may see more stories, we may see interesting mashups, and this will give The Guardian an edge over the newspaper that bought the unredacted data – The Telegraph. When – or if – they release their data online, you can only hope the two sets of data will be easy to merge.”
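Merging the two releases, as Bradshaw hopes, is in essence a join on a shared key. A minimal sketch in Python is below, keying records on MP name and constituency; the field names are hypothetical and not either paper’s actual schema.

```python
def merge_expenses(guardian_rows, telegraph_rows):
    """Join two expense datasets on (mp_name, constituency).
    Each input is a list of dicts; the output combines fields from both,
    keeping rows that appear in either set."""
    def key(row):
        # Normalise the join key so trivial differences don't split records
        return (row["mp_name"].strip().lower(),
                row["constituency"].strip().lower())

    merged = {}
    for row in guardian_rows:
        merged[key(row)] = dict(row)
    for row in telegraph_rows:
        merged.setdefault(key(row), {}).update(row)
    return list(merged.values())

guardian = [{"mp_name": "A. Member", "constituency": "Anytown",
             "claimed_total": 12000}]
telegraph = [{"mp_name": "A. Member ", "constituency": "anytown",
              "second_home": "flat"}]
rows = merge_expenses(guardian, telegraph)
print(rows[0]["claimed_total"], rows[0]["second_home"])  # 12000 flat
```

In practice the hard part is exactly what the normalisation hints at: names and constituencies are rarely spelled identically across two independently compiled datasets, which is why Bradshaw says one can “only hope” the sets merge easily.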

Update #2: Finally, Martin Belam’s post on open and closed journalism (published Thursday 18th) ended like this:

“I think the Telegraph’s bunkered attitude to their scoop, and their insistence that they alone determined what was ‘in the public interest’ from the documents is a marked contrast to the approach taken by The Guardian. The Telegraph are physically publishing a selection of their data on Saturday, but there is, as yet, no sign of it being made online in machine readable format.

“Both are news organisations passionately committed to what they do, and both have a strategy that they believe will deliver their digital future. As I say, I have a massive admiration for the scoop that The Telegraph pulled off, and I’m a strong believer in media plurality. As we endlessly debate ‘the future of news™’ I think both approaches have a role to play in our media landscape. I don’t expect this to be the last time we end up debating the pros and cons of the ‘closed’ and ‘open’ approaches to data driven journalism.”

It has provoked an interesting comment from Ian Douglas, the Telegraph’s head of digital production.

“I think you’re missing the fundamental difference in source material. No publisher would have released the completely unredacted scans for crowdsourced investigation, there was far too much on there that could never be considered as being in the public interest and could be damaging to private individuals (contact details of people who work for the MPs, for example, or suppliers). The Guardian, good as their project is, is working solely with government-approved information.”

“Perhaps you’ll change your mind when you see the cabinet expenses in full on the Telegraph website today [Friday], and other resources to come.”

Related Journalism.co.uk links:

Q&A with an information architect (aka @currybet aka Martin Belam)

Martin Belam, of the CurryBet blog, has recently been appointed as ‘information architect’ for Guardian.co.uk. Journalism.co.uk asked him what he’ll be doing for the site…

For those who don’t know what you do, fill us in on your background and the new gig…
[MB] I was at the Hack Day that the Guardian’s technology department ran back in November 2008, and the talent and enthusiasm that day really shone. I’ve really enjoyed the freedom of working as a consultant over the last three years, much of the time based either in Crete or in Austria, but the opportunity to come and work more permanently for an organisation as forward-thinking as the Guardian, as initiatives like the Open Platform show, was too much to resist.

So, ‘information architect’: what does that mean, and what are you doing?
Information Architecture has been defined as ‘the emerging art and science of organising large-scale websites’.

All websites have an inherent information structure – the navigation, the contextual links on a page, whether there are tags describing content, and so forth. It is about how people navigate and find their way through the information presented on a site.

What I’ll be doing at the Guardian is influencing that structure and functionality as new digital products are developed. It involves working closely with design and editorial teams to produce ‘wireframes’, the blueprints of web design, and also involves being an advocate for the end user – carrying out lots of usability and prototype testing as ideas are developed.

Is it a full-time role?
I’m working four days a week at The Guardian, as I still have some other commitments – for example as contributing editor for FUMSI magazine – although already it feels a bit like cramming a full-time job into just 80 per cent of the time!

It’s not happy times for mainstream media brands: where are they going wrong?
I don’t think it is only mainstream media brands that are suffering from the disruption caused by digital transition, but we do see a lot of focus on this issue for print businesses at the moment. I think one of the things that strikes me, having worked at several big media companies now, including the BBC and Sony, is that you would never set these organisations up in this way in the digital era if you were doing it from scratch.

One of the things that appealed most about joining the Guardian was that the move to Kings Place has brought together the print, online and technical operations in a way that wasn’t physically possible before in the old offices. I’m still very optimistic that there are real opportunities out there for the big media brands that can get their business structures right for the 21st century.

What kind of things do you think could re-enthuse UK readers for their newspapers?
I think our core and loyal readers are still enthusiastic about their papers, but that as an industry we have to face the fact that there is an over-supply of news in the UK, and a lot of it – whether it is on the radio, TV, web or thrust into your hand as a freebie – is effectively free at the point of delivery. I think the future will see media companies who concentrate on playing to their strengths benefit from better serving a narrower target audience.

Do you see print becoming the by rather than primary product for the Guardian – or has that already happened?
I think there might very well be a ‘sweet spot’ in the future where the display quality on network-enabled mobile devices and the ubiquity of data through-the-air means that the newspaper can be delivered primarily in that way, but I don’t see the Guardian’s presses stopping anytime soon. Paper is still a very portable format, and it never loses connection or runs out of batteries.

Your background is in computer programming rather than journalism, will the two increasingly overlap?
I grew up in the generation that had BBC Micros and ZX Spectrums at home, so I used to program a lot as a child, but my degree was actually in History, which in itself is a very journalistic calling. I specialised in the Crusades and the Byzantine Empire, which is all about piecing together evidence from a range of sources of varying degrees of reliability, and synthesising a coherent narrative and story from there. And, of course, I’ve spent most of this decade blogging, which utilises ‘some’ of the journalist’s skill-set ‘some’ of the time.

Whilst I’d never suggest that journalists need to learn computer programming much beyond a smattering of HTML, I think there is something to be gained from understanding the software engineering mindset. There are a lot of tools and techniques that can really help journalists plough through data to get at the heart of a story, or to use visualisation tools to help tell that story to their audience.

One of the most interesting things about working at the Guardian is the opportunity to work alongside people like Kevin Anderson, Charles Arthur and Simon Willison, who I think really represent that blending of the technical and journalistic cultures.

You’ve spoken out about press regulation before; why do you feel strongly about it?
In a converged media landscape, it seems odd that Robert Peston’s blog is regulated by the BBC Trust, Jon Snow’s blog is regulated by Ofcom, and Roy Greenslade’s blog is regulated by the PCC.

At the moment, I believe that the system works very well for editors, and very well for the ‘great and the good’ who can afford lawyers, but does absolutely nothing for newspaper consumers. If I see something that offends me on TV, I can complain to Ofcom. If I see an advert that offends me in the street, I can complain to the ASA. If I see an article in a newspaper that I think is wrong, inaccurate, in bad taste or offensive, unless I am directly involved in the story myself, the PCC dismisses my complaint out of hand without investigating it.

I don’t think that position is sustainable.

The last thing I want to see is some kind of state-sponsored Ofpress quango, which is why I think it is so important that our industry gets self-regulation right – and why I believe that a review of how the PCC works in the digital era is long overdue.