
Can you spare a click for us?

by admin on April 4, 2012

If you have five seconds, you could do us a big favour and vote for us in the EngageU awards. This is a new “European Competition for Best Innovations in University Outreach and Public Engagement”.

Just go to our submission and click on the red box in the top right corner.

And while you’re there, have a look at the other projects, too. There is some really good stuff going on.


Lots of chatter this morning about yesterday’s Channel 4 programme “Attack of the Trip Advisors”. This is an entertaining and informative documentary about TripAdvisor reviews and ratings. As the programme blurb puts it:

The British hospitality industry is under attack. Businesses are being assaulted by ever-more nit-picking and abusive reviews. It’s bad for their livelihoods and their sanity.

But it’s not the professional critics who are reviewing their hosts; a nation of virtual AA Gills and Michael Winners are using the Trip Advisor website to get their own back on hotels and restaurants.

With more than 40 million users a month, Trip Advisor is the largest and most powerful travel guide in the world.

But is it a force for good that gives the customer a voice, or an abuse of power that undermines businesses and ruins lives?

How long can Britain’s small businesses cope with relentless criticism before they pack it all in?

This Cutting Edge film reveals Britain’s most meticulous Trip Advisors and meets some of the hoteliers and restaurateurs at war with the site.

The documentary features some interesting cases and highlights many of the issues we have been dealing with in the How’s My Feedback? project. Not surprisingly, the debate has already started in the TripAdvisor section. Opinions and experiences differ wildly, so please join in and share your story. The issues clearly go beyond TripAdvisor, but it is a good case in point to discuss some timely and important problems around online reviews, ratings and rankings.

If you missed the show, you can still watch it on Channel 4’s on-demand service, 4oD.


It took a while, but the videos of the conference talks are now online. Have a look and indulge in some challenging talks around online reviews, ratings and rankings.


The slides are mostly captured in the videos, but it might be more convenient to view them separately. Here are the decks that are currently available (all PDFs):

You can also watch all the videos on our Vimeo account.


Thank you very much to everyone who made yesterday’s conference a success. The day was packed with discussions, provocations, insight and — who would have thought — feedback. All sessions were recorded so we will post the videos soon. For the moment, please find below a collection of snapshots and comments.

How's My Feedback?

1. The speakers

At the heart of the conference were five talks, which offered different perspectives on the phenomenon of online reviews, ratings and rankings. After a brief introduction by Steve Woolgar and me, Stefan Schwarzkopf drew some interesting connections between his attempts to review a hotel and ideas from political theory: “Feedback, democracy and conflicting consumption in a New York hotel: A journey from theory to micro-study, and back”. His presentation was followed by remarks from Daniel Neyland, who challenged the idea of a preexisting object of evaluation and shared a story about his own struggles with a review of a recent book of his. Next, Alex Wilkie from Goldsmiths reported on his research on “User involvement in design” and specifically the role of personas and user assemblages in the design process. In his comments, Tim Webmoor reflected on Danah Boyd’s Twitter debacle and the changing conception of the expert in evaluations.

After the lunch break, Andy Balmer reinvigorated the audience with an autoethnographic account of “Being 6.1: My life on HotorNot.com”, followed by Sally Wyatt’s remarks and provocations, which were partially delivered in the form of a t-shirt. Finally, Malcolm Ashmore reflected on the notion of reflexivity in his talk “What is it to evaluate the evaluators? A fairly formal reflexive analysis”, to which Javier Lezaun responded with some (in his own words) “unfairly informal” comments.

Christine Hine and James Munro skillfully summed up the day and offered concluding remarks, including individual ratings of speakers on a 10-point scale as well as an analysis of the speakers’ hotel and its mixed reviews.

2. The audience

The list of delegates was long and diverse. Besides academics, practitioners also came to Oxford, including Amazon reviewers, university administrators and social entrepreneurs from organizations like How’s My Driving Ltd, GAF Materials Corporation, MMU business school, VU University, Patient Opinion, IE Business School, BestSoftwareIndex.com, Bazaarvoice, BPP University College, ECI, London Business School, Arizona State University, University of Cambridge, University of Bedford, Research in Practice, Infosys Technologies, aporia, University of Tasmania, University of Reading, Scientific Council for Government Policy, University of Lincoln, New York University, HealthUnlocked, University of Kent, Manchester Metropolitan University, University of Leeds, UNED National Distance University of Spain, University of Leicester, The Open University, Imperial College, GHK Consulting Limited, and the London School Of Economics.

3. The worm experiment

In order to get a better sense of the dynamics of evaluation, we engaged in a live experiment, using the latest development in feedback technology. Andy Balmer generously volunteered (and actually was quite keen) to participate in a real-time worm poll. Members of the audience could indicate whether they “Liked” or “Disliked” Andy’s talk at any time during the session. The individual votes were then aggregated into an evolving worm graph, displayed next to the speaker’s slides.

Andy Balmer being worm-polled
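For those curious about the mechanics, here is a minimal sketch of how such a worm might be computed. This is purely illustrative and not Insight 4 Labs’ actual implementation: it assumes a stream of timestamped like/dislike votes and averages the votes from the last thirty seconds, so the worm rises with approval, falls with disapproval and drifts back to neutral when voting stops.

```python
from collections import deque

class WormPoll:
    """Aggregate like/dislike votes into a rolling 'worm' score.

    Illustrative sketch only: votes from the last `window` seconds
    are averaged, so the score sits between -1 (all dislikes) and
    +1 (all likes) and returns to 0 when voting stops.
    """

    def __init__(self, window=30.0):
        self.window = window      # how many seconds of votes count
        self.votes = deque()      # (timestamp, +1 or -1) pairs

    def vote(self, timestamp, liked):
        self.votes.append((timestamp, 1 if liked else -1))

    def score(self, now):
        # Drop votes that have aged out of the window.
        while self.votes and self.votes[0][0] < now - self.window:
            self.votes.popleft()
        if not self.votes:
            return 0.0            # neutral when nobody is voting
        return sum(v for _, v in self.votes) / len(self.votes)

# Example: three likes and one dislike in quick succession.
poll = WormPoll(window=30.0)
for t, liked in [(0, True), (1, True), (2, False), (3, True)]:
    poll.vote(t, liked)
print(poll.score(now=5))          # 0.5, i.e. above neutral
```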

The worm experiment offered an excellent opportunity to experience and reflect on what it is to be evaluated as well as to evaluate. In a discussion after the session, participants shared their observations: the effort required to focus on both the talk and the worm, ethical concerns and uneasiness about anonymously judging the speaker, technical difficulties of accessing the school’s Wi-Fi, the project team’s pre-conference discussions about the temporal-spatial arrangement of screens, speaker slides and audience, as well as moments of gaming when participants chased each other’s movements up or down. While the worm poll may not become a standard feature of academic talks, the experiment generated some challenging questions that would otherwise have gone unnoticed.

A special thank you is due to the outstanding developers and designers at Insight 4 Labs, who made the experiment possible. A beta version of the technology is now available for public use, so if you are interested in trying it out, have a look at SocialPoll.tv. It is great fun to play with, and you will certainly come up with novel applications, such as voting on live TV shows.

4. The reactions

As is quite common these days, the conference was accompanied by a more or less lively backchannel of gossip and commentary on Twitter. Here is a selection of tweets that circulated on the day:

@scottywoodhouse @howsmyfeedback can’t wait to see the results from tomorrow’s #hmfconf

@AndyBalmer I give it a ‘4’ so far, though out of what I don’t know. “@InSIS: #hmfconf kicks off http://t.co/O2weAgf”

@InSIS Stefan Schwarzkopf – online feedback and review systems channel reviews into triviality #hmfconf

@AuntieHelen “All feedback is rubbish and leads to prostitution and bad book reviews.” Daniel Neyland. A good discussion point!

@scmward Thought provoking sessions on evaluation: rating what and for whom? The event included a real experiment using worm technology #hmfconf

@webmoor Interesting real-time experiment in academic lecture at Oxford with real-time feedback projected behind speaker using ‘worm’ rating #hmfconf

@AuntieHelen Now worming Andrew Balmer of Sheffield University… #hmfconf

@valfazel #hmfconf playing around with worms

@patientopinion “You can’t fatten a pig by weighing it” – lively debate on value of online feedback #hmfconf

@AuntieHelen Fascinating – pirate ships in 17th and 18th centuries were early democracies! #hmfconf

@valfazel #hmfconf Javier Lezaun “In ethnomethodological research there are no rhetorical questions”

5. The prototype

Last but not least, an important actor at the conference was of course the How’s My Feedback? prototype. If you haven’t done so yet, please drop by and give us (or others) feedback:

https://www.howsmyfeedback.org

Many thanks to the ESRC, the Institute for Science, Innovation and Society and our project partner Patient Opinion for their generous and ongoing support.


Conference programme out now!

by admin on June 21, 2011

It’s only a week until the How’s My Feedback? conference in Oxford. We now have a preliminary programme for the day with a great line-up of speakers, talks and topics. Specifically, watch out for the world premiere of a brand-new feedback technology — and of course the prototype, which we will share with you shortly.

How's My Feedback? Conference Programme

If you haven’t signed up yet, please do so as soon as possible. There are only a few spaces left.


Project update: What’s new in May?

by admin on May 22, 2011

It’s been a few weeks since our second expert workshop, so here comes a brief update on what has happened since:

  • As you might have noticed, the announcement for the one-day conference on 28 June is out. We have a stellar line-up of speakers, including Malcolm Ashmore, Andrew Balmer, Alex Wilkie, Ian Stronach and Stefan Schwarzkopf. While they all come from academic backgrounds, they promise to give some interesting (and, I have been told, entertaining) feedback on the project and issues of online evaluation more generally. If you would like to participate, please register soon. The event is free, but places are limited.
  • We have also been out and about talking about the project. Two occasions were particularly interesting. On 19 April 2011, I participated in a panel discussion at the Internet Freedom Conference in Strasbourg, organised by the Council of Europe. ‘Multistakeholderism’ is a popular idea in this context, and the group was particularly interested in the potential of web-based reviews and ratings for fostering participation and engagement in policy-making. The video of Panel 6 is still online.
  • On a very different occasion, I presented the project at the Harvard-MIT-Yale Cyberscholar Working Group at the MIT Media Lab. This was a great opportunity to get feedback from a very diverse crowd of people, including media designers, HCI researchers, lawyers and social scientists. There was also a second presentation by Nick Bramble, which very nicely highlighted the important legal issue of third-party liability for content posted on review and rating websites.
  • Of course, we have also been working on the prototype. It has been far from easy, given the shoestring budget and tight timeframe we are working with. However, while things are moving slowly, they are moving, and we hope to have something to tinker with soon. If you think you can contribute anything to the process, from design skills to a developer brain, it’s not too late.
  • Finally, a lot of people got in touch and offered their support or simply showed interest in the project. In this context, have a look at other initiatives, such as Eric Goldman’s and Jason Schultz’s new project Doctored Reviews that aims to help people deal with restrictions on online patient reviews.

More updates soon. Again, don’t forget to register for the conference.


Registration is now open for How’s My Feedback? – The Technology and Politics of Evaluation, a one-day international conference at the Institute for Science, Innovation and Society, Oxford University.

Tuesday, 28 June 2011
9.00 – 17.30
Saïd Business School, University of Oxford

About the conference:

There is hardly anything these days that is not being evaluated on the web. Books, dishwashers, lawyers, teachers, health services, ex-boyfriends, haircuts, prostitutes and websites are just some examples targeted by novel review, rating and ranking schemes. Used in an increasing number of areas, these schemes facilitate public assessment by soliciting and aggregating feedback and distributing it as comments, ranks, scales and stories. While some have greeted this development as an innovative way of fostering transparency, accountability and public engagement, others have criticized the forced exposure and alleged lack of accuracy and legitimacy, pointing to the potentially devastating consequences of negative evaluations.

Now research is under way to tackle these issues head-on and evaluate the various types of review, rating and ranking schemes in a collaborative design experiment. Under the title ‘How’s my feedback?’, a group of experts, including designers, managers, reviewers, policy-makers, consumer spokespeople, academics and users are currently exploring the idea of a website that allows users to publicly assess their experience with review and rating schemes – a feedback website for feedback websites.

The goal of the conference is to reflect on this process and the emerging prototype. How are we to judge the effectiveness of these schemes? What modes of governance are implicated in their operation? What strategies and methodologies are employed in their development, maintenance and use? How successful is this project as a design intervention? What is it to evaluate the evaluators – and will this business ever end?

Speakers include:

Organisers:

Malte Ziewitz and Steve Woolgar, University of Oxford, in cooperation with James Munro, Patient Opinion

Registration:

The event is free of charge, but registration is required: REGISTER HERE

Project website: https://www.howsmyfeedback.org/
Twitter: http://twitter.com/howsmyfeedback/
Download poster: https://www.howsmyfeedback.org/poster.pdf
How to find us: http://goo.gl/maps/hLW8

For more information, contact insisevents@sbs.ox.ac.uk.

The conference is generously supported by an ESRC Knowledge Exchange Small Grant and the Institute for Science, Innovation and Society.

UPDATE 27/5/2011: We just moved registration to a new site. If you already registered, no worries. You are still signed up.


Here are some quick impressions from yesterday’s second expert workshop. Again, we met in London at the offices of Which? — this time to discuss and imagine the prototype we are supposed to build over the next couple of weeks.

Building on insights from the first expert workshop, we focused on a number of rapid design exercises. We split up into two groups and equipped ourselves with pens, paper, flipcharts and sticky notes. Under James’ guidance, we started by devising personas, i.e. concrete individuals who might use How’s my feedback?, and then sketched their journeys to and through the website. This also led us to consider some of the functionalities of the prototype and rethink its scope and purpose.

A cup of pens. Jonathan and Stefan tracing user journeys. Dixon making a point.

As expected, the design process was much messier, but also more interesting, than any textbook could have prepared us for. A few examples:

  • Working with constraints: Although it was tempting to assume a world of unlimited resources, we constantly had to remind ourselves of the shoestring budget we are on. For example, why not negotiate partnerships with major operators of feedback platforms? Why not devise an algorithm that would crawl, collect and crunch transactional data to arrive at useful feedback scores for specific platforms (see the sketch after this list)? Or why not have an independent team of researchers explore and evaluate the schemes according to a set of universal principles? While all these ideas seemed great, we would hardly be able to pursue any of them with the time and resources at our disposal. So we were forced to think creatively about less expensive alternatives.
  • Increasing ambiguity: Another interesting phenomenon we encountered can be described in terms of MacKenzie’s certainty trough, a schematic representation of how certainty about an established technology might be distributed. Put simply, what looks like a clear case from a reasonable distance becomes more and more uncertain and ambiguous the deeper you become involved. This happened — among other things — when we dived into the website and its possible uses and features. When thinking about ‘How’s my feedback?’ in terms of quite specific situations, we sometimes feared losing focus and needed to remind ourselves of the reasons we set out to do this.
  • Configuring users: Designing a website requires a lot of imagination. Specifically, imagining individual users turned out to be more difficult than expected. We often slipped back into talking about what ‘users’ or ‘consumers’ generally want, how smart or dumb they are and what problems they have. Built into these claims were a number of rather strong assumptions that did well in strengthening our respective arguments, but less so in helping us think from the perspective of ‘actual’ users. So thank you very much to the imaginary “Caroline (57)”, “John (35)” and “Fantom (43)” for keeping us on track.
  • Steering clear of ‘ideal types’: Another challenge was to avoid ideal conceptions of existing schemes and not use them as the only reference point for ‘How’s my feedback?’. For example, we often found ourselves inadvertently referring to a ‘Tripadvisor-like’ system, even though the challenge is certainly much broader. This also brought up the critical question of the target of the scheme: what kind of object is ‘feedback’ and how can it usefully be assessed?
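As an aside, here is roughly what the crawl-and-score idea from the first bullet might have looked like in code. Everything in this sketch is hypothetical: the signals, the weights and the numbers are invented, which neatly illustrates why the idea is harder than it sounds: someone has to decide what counts as good feedback practice and how much each signal weighs.

```python
# A hypothetical scoring function for feedback platforms. The signals
# (response rate, moderation transparency, review volume) and their
# weights are invented for illustration; choosing them IS the hard
# design problem the workshop kept running into.

WEIGHTS = {
    "owner_response_rate": 0.4,   # do businesses get a right of reply?
    "moderation_disclosed": 0.3,  # are removal policies public?
    "review_volume": 0.3,         # is there enough feedback to mean anything?
}

def platform_score(signals):
    """Combine normalised signals (each 0..1) into a 0..1 score."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

# Two made-up platforms, scored on made-up data:
print(platform_score({"owner_response_rate": 0.9,
                      "moderation_disclosed": 1.0,
                      "review_volume": 0.4}))   # ~0.78
print(platform_score({"owner_response_rate": 0.2,
                      "moderation_disclosed": 0.0,
                      "review_volume": 0.9}))   # ~0.35
```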

Despite (or perhaps because of) these difficulties, we managed to explore a range of cases, uses and functionalities. Among other things, we discovered new aspects like “the potential to cause trouble” or the possibility of becoming a clearinghouse that might help inquiry and research more than decision-making. There is still a long way to go, but discussing the issues with a terrific group of experts is a more than worthwhile (and fun!) part of the journey.

Many thanks to everyone who contributed, especially Lorraine Aziz, Melanie Dowding, Peter Harris, Chris Emmins, Kirsten Guthrie, Helen Hancox, Dixon Jones, Noortje Marres, Stefan Schwarzkopf, Paul Smith, Marcus Taylor, Esther Vicente, Jonathan Wolf.


How to make sense of the recent explosion of online review and rating schemes? What are the issues and how can we think about them? At the first expert workshop, I gave a brief introduction to the ‘How’s my feedback?’ project and the puzzles that have got us thinking. This is a brief overview of the argument, which was meant to open up the discussion. Feel free to download the slides, though the text below might give you a better idea of what this was about.

Understanding feedback schemes: A simple model and six puzzles

The presentation starts with an obviously incomplete map of existing web-based review and rating schemes. The purpose of this was not to provide the ultimate taxonomy, but rather to emphasise the wide range of areas in which online feedback has been implemented. There are endless ways to order these schemes: who runs them (government, business, third sector), which inputs are required (direct through reviews and ratings, indirect through tracing and tracking), what methodology is used (aggregation, statistical modelling, algorithms), how results are presented (full text, snippets; ratios, scores, stars) and how they are organised (chronological order, ranked, filtered), etc. However, what all of them have in common is that they use electronic technologies to generate and mobilise public evaluations.
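To make these ordering dimensions a little more concrete, here is one way of writing them down as a data structure. This is a sketch rather than a taxonomy, and the example values are rough placeholders, not definitive classifications:

```python
from dataclasses import dataclass

@dataclass
class FeedbackScheme:
    """One scheme, described along the ordering dimensions above."""
    name: str
    operator: str      # government, business, third sector
    inputs: str        # direct (reviews, ratings) or indirect (tracking)
    methodology: str   # aggregation, statistical modelling, algorithms
    presentation: str  # full text, snippets, ratios, scores, stars
    organisation: str  # chronological, ranked, filtered

# Two rough (and debatable) placements on the map:
schemes = [
    FeedbackScheme("TripAdvisor", "business", "direct",
                   "aggregation", "stars and full text", "ranked"),
    FeedbackScheme("NHS Choices", "government", "direct",
                   "moderated aggregation", "full text", "chronological"),
]
```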

Now, it is interesting to think about the ways of social ordering these schemes bring about. There is a long history of thinking along these lines about evaluation technologies, especially accounting, audit and polling devices. However, what is interesting about web-based review and rating schemes is that they tend to oscillate between three common claims: (1) transparency: feedback makes hidden qualities transparent and allows better informed decisions; (2) accountability: the publicity of evaluations holds their targets to account, thus facilitating social control; and (3) participation: the capacity to involve a large number of people and foster engagement, empowerment and scale.

Against this backdrop, different schemes emphasise different aspects. While a search engine like Google may run on a transparency ticket (“organize the world’s information and make it universally accessible and useful”), it also invokes ideas of participation (counting links as votes) and accountability (if you want to rank high, produce ‘relevant’ content). Conversely, a website like MyPolice may aim to improve police services (“a neutral space where you can tell your story secure in the knowledge that the people who count will read it”; accountability) by making people’s experiences visible (“understand what the public needs and wants”) and giving everyone a voice (“You pay for your police service. … You’re the one who matters.”).

However, while this simple model may be helpful for understanding the range of feedback schemes and their corresponding claims, the everyday practices of reviewing and rating look quite different:

1. Users sometimes behave strangely: Technology tends to be designed with a certain user in mind. In a sense, it ‘configures’ the user as Steve argued previously. So what happens if these users talk back and use a platform in ways we did not anticipate (or appreciate)? What assumptions about users come with certain schemes – and how do they work out in practice?

2. Not everybody wants to be empowered: A key issue for a lot of schemes is actually attracting a sufficient volume of feedback. For example, a common sight these days is the invitation to “Be the first to write a review!”. What does this tell us about calls for ‘democracy’ and ‘participation’?

3. There is more than one truth: Take a look at Phil’s exchange with a reviewer on TripAdvisor. Who is the expert in this case: the person claiming to be the guest or the owner responding to the feedback? What counts as ‘valid’ knowledge – and to whom?

4. Not all publicity is good publicity: A common set of concerns these days revolves around the risk of libel and exposure. For example, what is it to be named and called a “habitual convict”, “liar” and a “drug addict”, as happens on DontDateHimGirl? A certain degree of exposure may be a nifty way to attract more people to your website. But where does ‘fair’ marketing end and where does ‘unfair’ libel start?

5. Metrics become targets: And not just metrics, but also stories, comments and other forms of feedback. Is it a problem when you can buy a review for $5? What if you organise your friends to express their protest against landmines by participating in a poll on the New York Times website? What counts as ‘legitimate’ or ‘ethical’ engagement?

6. Resistance is tricky: Finally, there are increasing concerns about due process. Who holds the evaluators to account — and how? A number of strategies can be observed, including legal action (i.e. invoking a ‘higher’ evaluation scheme), counter schemes (see, e.g. Guestscan as a response to hotel reviews) and watchblogs. So how to evaluate the evaluators — and does this business ever stop?

Overall, it seems that many schemes defy the simple logic of the underlying models. Web-based evaluation is not just about ‘data’ and ‘information’, but deeply embedded in and constitutive of social relations. Qualities are not simply measured or described, but negotiated and performed in an ongoing and messy social process. As a result, public evaluation is never innocent, but highly political with sometimes devastating consequences for those reviewed, ranked and rated. And this is exactly where How’s my feedback? comes in — and our task to design and tinker with a prototype that allows people publicly to evaluate their experiences with review and rating schemes.


The first expert workshop is over, and the ideas are piling up on our desks. The range and richness of views and insights was absolutely amazing. This promises to be a great project, and we are already looking forward to the second workshop on 11 April.

The openness and energy of the expert group was quite impressive. Although the workshops bring together a rather diverse group of people, this did not prevent anyone from jumping right into the discussion, sharing and challenging expertise in marketing, monetizing, facilitating, soliciting, moderating, preventing, evaluating, giving and receiving feedback online. We are currently working on a detailed summary to prepare the ground for the design work in the second workshop. So for the moment, this is just a brief overview.

John guides the group through NHS Choices. Coffee keeps feedback experts going.

Introduction and background: In an attempt to introduce the project, I mapped the current landscape of web-based review and rating schemes and sketched six puzzles that had got us thinking in the project team. We will write more about this soon.

Stories from the field: Among the highlights of the afternoon were certainly the talks by three expert group members, who had volunteered to kick us off.

  • First up was Jason Smith, Client Partner at Bazaarvoice, who talked about his experience with designing and managing feedback systems for big companies like Argos and Expedia. Bazaarvoice has only been around for a bit more than five years, but has already generated 196,224,118,952 conversations across its platforms (and counting). Jason showed how this data can be crunched and analysed with custom-made tools, including an early warning system for identifying product failures and ways of capturing customer sentiment to improve marketing strategies (a toy version of such an early warning system is sketched after this list).
  • Next, Peter Harris provided us with a fascinating inside view of what it takes to be a top reviewer on Amazon. A quick look at the top reviewer table confirms that Peter knows what he is talking about. Interestingly, it turned out that it is not always a blessing to lead the lot. Being a no. 1 reviewer comes with its own pitfalls and politics, such as receiving more critical comments or being offered free products that do not interest him. Other aspects Peter covered in his talk concerned the differences between country versions of the Amazon website, the implications of the change from the old to the new ranking system, and the many ways in which reviewers interact with each other through forums, e-mails and comments.
  • Finally, John Robinson, User-generated Content Lead at NHS Choices, gave a guided tour through the comment functionalities of the government-run NHS Choices website. John talked about the challenges of designing a scheme that meets the expectations of both policy-makers and users. One example is the difficult question of moderation: what is OK to mention on a public health website and what might interfere with complaints procedures or even legal proceedings? How do negative comments affect small GP practices as opposed to big hospitals? And how to make sure that changes resulting from online reviews are sufficiently visible to the patient and the feedback loop is closed?
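To give a flavour of what an early warning system of the kind Jason described might do, here is a toy sketch. It is our own illustration, not Bazaarvoice’s actual tooling, and the threshold is arbitrary: a product is flagged when the average of its most recent ratings falls well below its long-run average.

```python
def flag_product(ratings, recent=20, drop=1.0):
    """Flag a possible product failure when the average of the most
    recent `recent` ratings falls at least `drop` stars below the
    long-run average. A toy illustration, not Bazaarvoice's method."""
    if len(ratings) <= recent:
        return False                      # not enough history to compare
    baseline = sum(ratings[:-recent]) / (len(ratings) - recent)
    latest = sum(ratings[-recent:]) / recent
    return baseline - latest >= drop

# 80 happy reviews, then 20 angry ones after a faulty batch ships:
history = [5] * 80 + [2] * 20
print(flag_product(history))   # True: average dropped from 5.0 to 2.0
```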

William and Harry sort things out at the flipchart.

Group work: In the last hour of the workshop, we split up into two groups and discussed four questions based on the presentations: What is online feedback for? What are the benefits — and to whom? What are the harms — and to whom? And what counts as a “good” and a “bad” feedback scheme? Again, we are currently working on a summary of the discussion. Quite a daunting task, but an essential step on our way to the prototype.

Many thanks to David Albury, Sarah Drinkwater, Peter Durward Harris, William Heath, Helen Hancox, Harry Metcalfe, John Robinson, Stefan Schwarzkopf, Jason Smith, Marcus Taylor and Elizabeth Forrester at Which? for making the first Expert Workshop a success.
