
How to make sense of the recent explosion of online review and rating schemes? What are the issues and how can we think about them? At the first expert workshop, I gave a brief introduction to the ‘How’s my feedback?’ project and the puzzles that have got us thinking. This is a brief overview of the argument, which was meant to open up the discussion. Feel free to download the slides, though the text below might give you a better idea of what this was about.

Understanding feedback schemes: A simple model and six puzzles

The presentation started with an obviously incomplete map of existing web-based review and rating schemes. The purpose was not to provide the ultimate taxonomy, but rather to emphasise the wide range of areas in which online feedback has been implemented. There are endless ways to order these schemes: who runs them (government, business, third sector), which inputs are required (direct through reviews and ratings, indirect through tracing and tracking), what methodology is used (aggregation, statistical modelling, algorithms), how results are presented (full text, snippets; ratios, scores, stars) and organised (chronological order, ranked, filtered), and so on. What all of them have in common, however, is that they use electronic technologies to generate and mobilise public evaluations.
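
For concreteness, these dimensions can be read as a small data structure. The following Python sketch is only an illustration of the mapping exercise, with class name, fields and example values of our own invention, not a description of how any actual scheme stores its data:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class FeedbackScheme:
        """One web-based review or rating scheme, described along the
        dimensions of the map (all names and values illustrative)."""
        name: str
        operator: str             # government, business or third sector
        inputs: List[str]         # direct (reviews, ratings) or indirect (tracking)
        methodology: str          # aggregation, statistical modelling, algorithms
        presentation: List[str]   # full text, snippets, ratios, scores, stars
        ordering: str             # chronological, ranked or filtered

    # Two quite different schemes fit the same description:
    tripadvisor = FeedbackScheme(
        name="TripAdvisor", operator="business",
        inputs=["reviews", "ratings"], methodology="aggregation",
        presentation=["full text", "stars"], ordering="ranked")
    nhs_choices = FeedbackScheme(
        name="NHS Choices", operator="government",
        inputs=["reviews", "ratings"], methodology="moderated aggregation",
        presentation=["full text", "scores"], ordering="chronological")

Even at this toy level, the point of the map stands: schemes run by very different operators, with different methodologies and forms of presentation, are variations on the same basic form.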

Now, it is interesting to think about the forms of social ordering these schemes bring about. There is a long history of thinking along these lines about evaluation technologies, especially accounting, audit and polling devices. What is interesting about web-based review and rating schemes, however, is that they tend to oscillate between three common claims: (1) transparency: feedback makes hidden qualities transparent and allows better informed decisions; (2) accountability: the publicity of evaluations holds their targets to account, thus facilitating social control; and (3) participation: feedback schemes involve large numbers of people and foster engagement, empowerment and scale.

Against this backdrop, different schemes emphasise different aspects. While a search engine like Google may run on a transparency ticket (“organize the world’s information and make it universally accessible and useful”), it also invokes ideas of participation (counting links as votes) and accountability (if you want to rank high, produce ‘relevant’ content). Similarly, a website like MyPolice may aim to hold police services to account (“a neutral space where you can tell your story secure in the knowledge that the people who count will read it”) by making people’s experiences transparent (“understand what the public needs and wants”) and giving everyone a voice (“You pay for your police service. … You’re the one who matters.”).

However, while this simple model may be helpful for understanding the range of feedback schemes and their corresponding claims, the everyday practices of reviewing and rating look quite different:

1. Users sometimes behave strangely: Technology tends to be designed with a certain user in mind. In a sense, it ‘configures’ the user, as Steve argued previously. So what happens if these users talk back and use a platform in ways we did not anticipate (or appreciate)? What assumptions about users come with certain schemes – and how do they play out in practice?

2. Not everybody wants to be empowered: A key issue for a lot of schemes is actually attracting a sufficient volume of feedback. For example, a common sight these days is the invitation to “Be the first to write a review!”. What does this tell us about calls for ‘democracy’ and ‘participation’?

3. There is more than one truth: Take a look at Phil’s exchange with a reviewer on TripAdvisor. Who is the expert in this case: the person claiming to be the guest or the owner responding to the feedback? What counts as ‘valid’ knowledge – and to whom?

4. Not all publicity is good publicity: A common set of concerns these days revolves around the risk of libel and exposure. For example, what is it like to be named and called a “habitual convict”, a “liar” and a “drug addict”, as happens on DontDateHimGirl? A certain degree of exposure may be a nifty way to attract more people to your website. But where does ‘fair’ marketing end and where does ‘unfair’ libel start?

5. Metrics become targets: And not just metrics, but also stories, comments and other forms of feedback. Is it a problem when you can buy a review for $5? What if you organise your friends to express their protest against landmines by participating in a poll on the New York Times website? What counts as ‘legitimate’ or ‘ethical’ engagement?

6. Resistance is tricky: Finally, there are increasing concerns about due process. Who holds the evaluators to account — and how? A number of strategies can be observed, including legal action (i.e. invoking a ‘higher’ evaluation scheme), counter schemes (see, e.g. Guestscan as a response to hotel reviews) and watchblogs. So how to evaluate the evaluators — and does this business ever stop?

Overall, it seems that many schemes defy the simple logic of the underlying models. Web-based evaluation is not just about ‘data’ and ‘information’, but deeply embedded in and constitutive of social relations. Qualities are not simply measured or described, but negotiated and performed in an ongoing and messy social process. As a result, public evaluation is never innocent, but highly political with sometimes devastating consequences for those reviewed, ranked and rated. And this is exactly where How’s my feedback? comes in — and our task to design and tinker with a prototype that allows people publicly to evaluate their experiences with review and rating schemes.


The first expert workshop is over, and the ideas are piling up on our desks. The range and richness of views and insights were absolutely amazing. This promises to be a great project, and we are already looking forward to the second workshop on 11 April.

The openness and energy of the expert group were quite impressive. Although the workshop brought together a rather diverse group of people, this did not prevent anyone from jumping right into the discussion, sharing and challenging expertise in marketing, monetizing, facilitating, soliciting, moderating, preventing, evaluating, giving and receiving feedback online. We are currently working on a detailed summary to prepare the ground for the design work in the second workshop. So for the moment, this is just a brief overview.

John guides the group through NHS Choices. Coffee keeps feedback experts going.

Introduction and background: In an attempt to introduce the project, I mapped the current landscape of web-based review and rating schemes and sketched six puzzles that had got us thinking in the project team. We will write more about this soon.

Stories from the field: Among the highlights of the afternoon were certainly the talks of three expert group members, who had volunteered to kick us off.

  • First up was Jason Smith, Client Partner at Bazaarvoice, who talked about his experience of designing and managing feedback systems for big companies like Argos and Expedia. Bazaarvoice has only been around for a bit more than five years, but has already generated 196,224,118,952 conversations across its platforms (and counting). Jason showed how this data can be crunched and analysed with custom-made tools, including an early warning system for identifying product failures and sentiment analysis to improve marketing strategies (a rough sketch of such an early-warning check follows this list).
  • Next, Peter Harris provided us with a fascinating inside view of what it takes to be a top reviewer on Amazon. A quick look at the top reviewer table confirms that Peter knows what he is talking about. Interestingly, it turned out that it is not always a blessing to lead the lot. Being the no. 1 reviewer comes with its own pitfalls and politics, such as receiving more critical comments or being offered free products that do not interest him. Peter also covered the differences between country versions of the Amazon website, the implications of the change from the old ranking system to the new one, and the many ways in which reviewers interact with each other through forums, e-mails and comments.
  • Finally, John Robinson, User-generated Content Lead at NHS Choices, gave a guided tour of the comment functionality of the government-run NHS Choices website. John talked about the challenges of designing a scheme that meets the expectations of both policy-makers and users. One example is the difficult question of moderation: what is OK to mention on a public health website, and what might interfere with complaints procedures or even legal proceedings? How do negative comments affect small GP practices as opposed to big hospitals? And how can we make sure that changes resulting from online reviews are sufficiently visible to patients and that the feedback loop is closed?
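
Jason’s early-warning example can be made a little more tangible with code. The Python sketch below is not Bazaarvoice’s actual tooling, just a minimal illustration under assumed conditions: a time-ordered stream of star ratings for a single product, with invented window sizes and alert threshold, flagging the point where recent ratings sag well below the longer-run average.

    from collections import deque

    def early_warning(ratings, window=50, recent=10, drop=1.0):
        """Return the index at which the recent average rating falls
        at least `drop` stars below the running average, else None.
        A crude stand-in for a product-failure alert; all thresholds
        are invented for illustration."""
        history = deque(maxlen=window)
        for i, rating in enumerate(ratings):
            history.append(rating)
            if len(history) == window:
                baseline = sum(history) / window
                latest = sum(list(history)[-recent:]) / recent
                if baseline - latest >= drop:
                    return i
        return None

    # A product that averages ~4.6 stars, then starts failing:
    stream = [5, 4, 5, 4, 5] * 12 + [2, 1] * 5
    print(early_warning(stream))  # fires once the recent window sags

In a real system the alert would presumably feed a dashboard or a review queue rather than a print statement, but the basic logic (compare the recent signal against a baseline and act on the gap) is the same.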

William and Harry sort things out at the flipchart.

Group work: In the last hour of the workshop, we split up into two groups and discussed four questions based on the presentations: What is online feedback for? What are the benefits — and to whom? What are the harms — and to whom? And what counts as a “good” and a “bad” feedback scheme? Again, we are currently working on a summary of the discussion. Quite a daunting task, but an essential step on our way to the prototype.

Many thanks to David Albury, Sarah Drinkwater, Peter Durward Harris, William Heath, Helen Hancox, Harry Metcalfe, John Robinson, Stefan Schwarzkopf, Jason Smith, Marcus Taylor and Elizabeth Forrester at Which? for making the first Expert Workshop a success.


Sneak preview of the first Expert Workshop

by admin on March 28, 2011

Time has flown by since the project started last fall, and the first expert workshop is only two days away. Everyone who has ever organised such an event (or, in fact, any event) knows the strange tension between happy anticipation and panic attacks caused by unwell speakers or the discovery of an overlooked e-mail at the bottom of the inbox.

Which? Head Office, London

This time, however, there is not much to complain about. First of all, the line-up for Thursday looks terrific. The expert group is now about 20 members strong, covering a broad range of backgrounds and experiences from all shades of business, government and civil society. Social commerce managers, government innovators, academics, reputation consultants, web developers, social media geeks, consumer spokespeople, top-rated reviewers and the targets of reviews — this promises to be an engaging discussion. Furthermore, we are very grateful for the opportunity to meet in a wonderful venue in the heart of London, thanks to the generous support of Which?, the consumers’ association.

Finally, four members of the expert group volunteered to kick us off with short presentations, highlighting different perspectives on online reviews and ratings:

  • Jason Smith, Client Partner at Bazaarvoice, is a long-time expert on consumer feedback in social commerce: “Reviews and Social Commerce: Learnings from 1000 Brands”.
  • Peter Durward Harris is a Top-10 Amazon Reviewer and will share some of his stories: “My experience as a Top 10 Amazon Reviewer”.
  • Chris Emmins, Co-founder of Kwikchex, will highlight the consequences of public evaluations for individuals and businesses: “Online Reviews – The Good, the Bad and the Downright Ugly”.
  • John Robinson, User-generated Content Lead at NHS Choices, will share his experience with user reviews in the public sector: “Introducing patient feedback on NHS Choices: the challenges and what we’ve learned”.

Feel free to have a look at the preliminary workshop programme. We will make sure to take a lot of notes and post a summary soon after the event.
