
#Tip: Remember these data-scraping tools for journalists

April 22nd, 2014 | Posted in Top tips for journalists

The abundance of data in all its forms – from government departments and company accounts to sports measurements and social statistics – provides a constant stream of potential stories for the numerate and curious journalist.

As any experienced data journalist will tell you, not all data is easy to get at: it is often published in formats that are awkward to work with. This is where scraping tools come in.

Shelly Tan spent much of last autumn working on visualisations at NPR and discovered a number of handy web-scraping tools to make the job easier, as she explains in this blog post.

For further information on getting started in data journalism, this guide to sources and tools for the job is a good place to begin.

Similar posts:

Poynter Online: How to get data from websites ‘without programming skills’

It’s not enough to copy those numbers into a story; what differentiates reporters from consumers is our ability to analyse data and spot trends. To make data easier to access, reorganise and sort, those figures must be pulled into a spreadsheet or database. The mechanism to do this is called web scraping, and it’s been a part of computer science and information systems work for years.
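To give a sense of what that "pulling into a spreadsheet" step looks like when done with code, here is a minimal Python sketch using the widely used requests and BeautifulSoup libraries. The URL and table layout are hypothetical placeholders, and this is not the method the Poynter guide describes (which uses OutWit Hub, a point-and-click tool); a real scrape would need selectors matched to the page being targeted.

```python
# Minimal web-scraping sketch: pull the first HTML table on a page into a CSV file.
# The URL is a hypothetical placeholder and the page is assumed to contain a <table>.
import csv

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/stats.html"  # hypothetical source page

response = requests.get(URL, timeout=30)
response.raise_for_status()  # stop early if the page did not load

soup = BeautifulSoup(response.text, "html.parser")
table = soup.find("table")  # grab the first table on the page (assumed to exist)

rows = []
for tr in table.find_all("tr"):
    # Header rows use <th>, data rows use <td>; capture both.
    cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
    if cells:
        rows.append(cells)

with open("scraped_data.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f).writerows(rows)

print(f"Wrote {len(rows)} rows to scraped_data.csv")
```

Once saved as a CSV file, the figures can be opened, sorted and reorganised in any spreadsheet program, which is exactly the workflow the excerpt above describes.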

It often takes a lot of time and effort to produce programs that extract the information, so this has long been a specialist task. But what if there were a tool that didn’t require programming?

Michelle Minkoff offers a simple guide for journalists who want to learn how to scrape data from websites, but don’t know how to start, using OutWit Hub – an extension for the Firefox browser.

Full post at this link…

Yesterday, Journalism.co.uk attended a Digital Editors Network meeting to discuss data for journalism and journalists – more to follow on Journalism.co.uk.
