This month, MySociety released the latest metrics from writetothem.com, a civic engagement tool designed to make it easier for constituents to raise issues with their MP. As ever, the stats make for fascinating reading, not least because they give a glimpse of what accountability in a networked democracy might look like.
There are league tables, metrics on how good MPs are at responding to the issues raised, and even graphs showing how well MPs are doing compared to the same time last year. From writetothem.com, visitors can easily jump to a sister site, theyworkforyou.com, to see how the people they elect have voted in the Houses, and what they've been saying in parliamentary debates on their behalf.
In short, there is ample scope here for scrutiny and further analysis, and this level of transparency sets a high benchmark for other public sector services. MySociety rightly advises caution about what deductions can be drawn from its metrics, and the data it collects is resolutely quantitative: how many letters are written, what proportion of writers are contacting their MP for the first time, and what percentage of them hear back within three weeks.
Other services take a more qualitative approach to feedback. Patient Opinion shines a light on the treatment of patients in NHS trusts, letting them write reviews of their visit which, after moderation, are published online. Ratemyteachers.co.uk allows pupils and parents to grade a teacher's performance. In America, ratemycop.com is attempting to let the public rate their interactions with individual police officers. And one of the projects that won funding at Social Innovation Camp aims to let families rate how well their prison visits are managed.
All of these services start from a 'user review' model for public services, much like that of amazon.com; what they do with the reviews varies from site to site. As the technical barriers to creating direct feedback mechanisms get lower, the challenge is to find an elegant way of making that feedback as valuable as possible, and of ensuring it is submitted in a way that doesn't cause unnecessary harm.
Simply mapping a five-star rating system onto complex social interactions is a seductive idea, because data like this makes it easy to pull out statistics and lend a veneer of objectivity when presenting a case for funding or support. But if the initial process of collecting feedback isn't thought through properly, the data ends up being meaningless and, more importantly, useless for informing any kind of social change.
To use the old programming adage: garbage in, garbage out.