Conference programme out now!

by admin on June 21, 2011

It’s only a week until the How’s My Feedback? conference in Oxford. We now have a preliminary programme for the day with a great line-up of speakers, talks and topics. Specifically, watch out for the world premiere of a brand-new feedback technology — and of course the prototype, which we will share with you shortly.

How's My Feedback? Conference Programme

If you haven’t signed up yet, please do so as soon as possible. There are only a few spaces left.


Registration is now open for How’s My Feedback? – The Technology and Politics of Evaluation, a one-day international conference at the Institute for Science, Innovation and Society, Oxford University.

Tuesday, 28 June 2011
9.00 – 17.30
Saïd Business School, University of Oxford

About the conference:

There is hardly anything these days that is not being evaluated on the web. Books, dishwashers, lawyers, teachers, health services, ex-boyfriends, haircuts, prostitutes and websites are just some examples targeted by novel review, rating and ranking schemes. Used in an increasing number of areas, these schemes facilitate public assessment by soliciting and aggregating feedback and distributing it as comments, ranks, scales and stories. While some have greeted this development as an innovative way of fostering transparency, accountability and public engagement, others have criticized the forced exposure and alleged lack of accuracy and legitimacy, pointing to the potentially devastating consequences of negative evaluations.

Now research is under way to tackle these issues head-on and evaluate the various types of review, rating and ranking schemes in a collaborative design experiment. Under the title ‘How’s my feedback?’, a group of experts, including designers, managers, reviewers, policy-makers, consumer spokespeople, academics and users are currently exploring the idea of a website that allows users to publicly assess their experience with review and rating schemes – a feedback website for feedback websites.

The goal of the conference is to reflect on this process and the emerging prototype. How are we to judge the effectiveness of these schemes? What modes of governance are implicated in their operation? What strategies and methodologies are employed in their development, maintenance and use? How successful is this project as a design intervention? What is it to evaluate the evaluators – and will this business ever end?

Speakers include:

Malte Ziewitz and Steve Woolgar, University of Oxford, in cooperation with James Munro, Patient Opinion
The event is free of charge, but registration is required: REGISTER HERE


For more information, contact

The conference is generously supported by an ESRC Knowledge Exchange Small Grant and the Institute for Science, Innovation and Society.

UPDATE 27/5/2011: We just moved registration to a new site. If you already registered, no worries. You are still signed up.


How to make sense of the recent explosion of online review and rating schemes? What are the issues and how can we think about them? At the first expert workshop, I gave a brief introduction to the ‘How’s my feedback?’ project and the puzzles that have got us thinking. This is a brief overview of the argument, which was meant to open up the discussion. Feel free to download the slides, though the text below might give you a better idea of what this was about.

Understanding feedback schemes: A simple model and six puzzles

The presentation starts with an obviously incomplete map of existing web-based review and rating schemes. The purpose is not to provide the ultimate taxonomy, but rather to emphasise the wide range of areas in which online feedback has been implemented. There are endless ways to order these schemes: who runs them (government, business, third sector), which inputs are required (direct through reviews and ratings, indirect through tracing and tracking), what methodology is used (aggregation, statistical modelling, algorithms), how results are presented (full text, snippets; ratios, scores, stars) and organised (chronological order, ranked, filtered), etc. However, what all of them have in common is that they use electronic technologies to generate and mobilise public evaluations.

Now, it is interesting to think about the kinds of social ordering these schemes bring about. There is a long history of thinking about evaluation technologies along these lines, especially accounting, audit and polling devices. However, what is interesting about web-based review and rating schemes is that they tend to oscillate between three common claims: (1) transparency: feedback makes hidden qualities transparent and allows better-informed decisions; (2) accountability: the publicity of evaluations holds their targets to account, thus facilitating social control; and (3) participation: the capacity to involve a large number of people and foster engagement, empowerment and scale.

Against this backdrop, different schemes emphasise different aspects. While a search engine like Google may run on a transparency ticket (“organize the world’s information and make it universally accessible and useful”), it also invokes ideas of participation (counting links as votes) and accountability (if you want to rank high, produce ‘relevant’ content). Conversely, a website like MyPolice may aim to improve police services (“a neutral space where you can tell your story secure in the knowledge that the people who count will read it”; accountability) by making people’s experiences visible (“understand what the public needs and wants”) and giving everyone a voice (“You pay for your police service. … You’re the one who matters.”).

However, while this simple model may be helpful for understanding the range of feedback schemes and their corresponding claims, the everyday practices of reviewing and rating look quite different:

1. Users sometimes behave strangely: Technology tends to be designed with a certain user in mind. In a sense, it ‘configures’ the user as Steve argued previously. So what happens if these users talk back and use a platform in ways we did not anticipate (or appreciate)? What assumptions about users come with certain schemes – and how do they work out in practice?

2. Not everybody wants to be empowered: A key issue for a lot of schemes is actually attracting a sufficient volume of feedback. For example, a common sight these days is the invitation to “Be the first to write a review!”. What does this tell us about calls for ‘democracy’ and ‘participation’?

3. There is more than one truth: Take a look at Phil’s exchange with a reviewer on TripAdvisor. Who is the expert in this case: the person claiming to be the guest or the owner responding to the feedback? What counts as ‘valid’ knowledge – and to whom?

4. Not all publicity is good publicity: A common set of concerns these days revolves around the risk of libel and exposure. For example, what is it to be named and called a “habitual convict”, a “liar” and a “drug addict”, as happens on DontDateHimGirl? A certain degree of exposure may be a nifty way to attract more people to your website. But where does ‘fair’ marketing end and where does ‘unfair’ libel start?

5. Metrics become targets: And not just metrics, but also stories, comments and other forms of feedback. Is it a problem when you can buy a review for $5? What if you organise your friends to express their protest against landmines by participating in a poll on the New York Times website? What counts as ‘legitimate’ or ‘ethical’ engagement?

6. Resistance is tricky: Finally, there are increasing concerns about due process. Who holds the evaluators to account — and how? A number of strategies can be observed, including legal action (i.e. invoking a ‘higher’ evaluation scheme), counter schemes (see, e.g. Guestscan as a response to hotel reviews) and watchblogs. So how to evaluate the evaluators — and does this business ever stop?

Overall, it seems that many schemes defy the simple logic of the underlying models. Web-based evaluation is not just about ‘data’ and ‘information’, but deeply embedded in and constitutive of social relations. Qualities are not simply measured or described, but negotiated and performed in an ongoing and messy social process. As a result, public evaluation is never innocent, but highly political with sometimes devastating consequences for those reviewed, ranked and rated. And this is exactly where How’s my feedback? comes in — and our task to design and tinker with a prototype that allows people publicly to evaluate their experiences with review and rating schemes.


The first expert workshop is over, and the ideas are piling up on our desks. The range and richness of views and insights was absolutely amazing. This promises to be a great project, and we are already looking forward to the second workshop on 11 April.

The openness and energy of the expert group was quite impressive. Although the workshops bring together a rather diverse group of people, this did not prevent anyone from jumping right into the discussion, sharing and challenging expertise in marketing, monetizing, facilitating, soliciting, moderating, preventing, evaluating, giving and receiving feedback online. We are currently working on a detailed summary to prepare the ground for the design work in the second workshop. So for the moment, this is just a brief overview.

John guides the group through NHS Choices. Coffee keeps feedback experts going.

Introduction and background: In an attempt to introduce the project, I mapped the current landscape of web-based review and rating schemes and sketched six puzzles that had got us thinking in the project team. We will write more about this soon.

Stories from the field: Among the highlights of the afternoon were certainly the talks of three expert group members, who had volunteered to kick us off.

  • First up was Jason Smith, Client Partner at Bazaarvoice, who talked about his experience with designing and managing feedback systems for big companies like Argos and Expedia. Bazaarvoice has only been around for a bit more than five years, but has already generated 196,224,118,952 conversations across its platforms (and counting). Jason showed how this data can be crunched and analysed with custom-made tools, including an early warning system for identifying product failures and tools for capturing customer sentiment to improve marketing strategies.
  • Next, Peter Harris provided us with a fascinating inside view of what it takes to be a top reviewer on Amazon. A quick look at the top reviewer table confirms that Peter knows what he is talking about. Interestingly, it turned out that it is not always a blessing to lead the pack. Being a no. 1 reviewer comes with its own pitfalls and politics, such as receiving more critical comments or being offered free products that do not interest him. Other aspects Peter covered in his talk concerned the differences between country versions of the Amazon website, the implications of the change from the old to the new ranking system and the many ways in which reviewers interact with each other through forums, e-mails and comments.
  • Finally, John Robinson, User-generated Content Lead at NHS Choices, gave a guided tour through the comment functionalities of the government-run NHS Choices website. John talked about the challenges of designing a scheme that meets the expectations of both policy-makers and users. One example is the difficult question of moderation: what is OK to mention on a public health website and what might interfere with complaints procedures or even legal proceedings? How do negative comments affect small GP practices as opposed to big hospitals? And how to make sure that changes resulting from online reviews are sufficiently visible to the patient and the feedback loop is closed?

William and Harry sort things out at the flipchart.

Group work: In the last hour of the workshop, we split up into two groups and discussed four questions based on the presentations: What is online feedback for? What are the benefits — and to whom? What are the harms — and to whom? And what counts as a “good” and a “bad” feedback scheme? Again, we are currently working on a summary of the discussion. Quite a daunting task, but an essential step on our way to the prototype.

Many thanks to David Albury, Sarah Drinkwater, Peter Durward Harris, William Heath, Helen Hancox, Harry Metcalfe, John Robinson, Stefan Schwarzkopf, Jason Smith, Marcus Taylor and Elizabeth Forrester at Which? for making the first Expert Workshop a success.