Rating, reviewing and ranking systems: Part 2

This is a continuation of my previous post on review systems. Having looked at some of the things that make your average review system less than useful, I started to wonder how improvements might be made. I completely ignore the problem of dealing with long-form text, so erm… ‘machine-reading’ of actual reviews is outside the scope of this solution.

(updated 2009-01-06).

To recap, a ‘Seeker’ is someone who is looking for information and comparing products or services in ‘research mode’ or ‘decision mode’. I’m not too concerned with ‘research mode’ behaviour, where you’re just browsing open-mindedly through available reviews. I’m more concerned with the goal-oriented, ‘decision mode’ behaviour.

A forward-looking approach to increasing the usefulness of an RS

This is my quick sketch-out of another approach for building an RS:

– a ReviewableItem is any entity that can be reviewed (it implements the ‘Reviewable’ interface). There will be several different types of ReviewableItem, and certainly tons and tons of instances.

– several relevant Attributes may be tracked for each ReviewableItem. In the simplified scenario, these attributes are global and managed centrally (at least initially). This simplifies the resulting architecture, at the cost of making it harder to override something like an attribute’s rating scale on a per-item basis.

– a Seeker arriving at the system and requesting a ranking first needs to create and provide a ValueSystem to the review engine. Since most people are lazy, sensible defaults can perhaps be put in place. The ValueSystem is simply a collection of weighted attributes; the end-user chooses the weightings and, in so doing, indicates to the system what they value most (e.g. I care about taste and color, but I don’t care about price).

– A Review assigns values to the attributes of a particular ReviewableItem, and those values will normally be delimited and one-dimensional (along a predefined scale) or take predefined values. In a more complex scenario, perhaps these values aren’t delimited or predefined, in which case you’d expect to build some type of normalizing engine. But that’s outside the scope of this simplified case. (Also out of scope but interesting is the idea of each RS having its own rules about how Reviewers can add or modify attributes).

Below is a rough sketch of how it might work:
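To make the moving parts concrete, here is a minimal sketch in TypeScript. Everything in it is hypothetical (the names, the scales, the decision to average ratings per attribute); it’s a thought experiment about how the pieces above might fit together, not a reference implementation.

```typescript
// Hypothetical sketch of the simplified scenario described above.

// Attributes are global and managed centrally, each with a predefined scale.
interface Attribute {
  id: string;        // e.g. "taste", "color", "price"
  scaleMin: number;  // ratings are one-dimensional, along this scale
  scaleMax: number;
}

// Anything that can be reviewed implements Reviewable.
interface Reviewable {
  id: string;
  kind: string;      // many types of ReviewableItem, tons of instances
}

// A Review assigns values to (some of) the attributes of a particular item.
interface Review {
  itemId: string;
  ratings: Record<string, number>;  // attribute id -> rating on its scale
}

// A ValueSystem is simply a collection of weighted attributes; the Seeker
// chooses the weightings ("I care about taste and color, not price").
interface ValueSystem {
  name: string;
  weights: Record<string, number>;  // attribute id -> weight (0 = don't care)
}

// Rank items for a Seeker: average the ratings each item has received per
// attribute, then combine the averages using the Seeker's weightings.
function rank(items: Reviewable[], reviews: Review[], vs: ValueSystem): Reviewable[] {
  const score = (item: Reviewable): number => {
    const mine = reviews.filter(r => r.itemId === item.id);
    let total = 0;
    for (const [attrId, weight] of Object.entries(vs.weights)) {
      const ratings = mine
        .map(r => r.ratings[attrId])
        .filter((v): v is number => v !== undefined);
      if (ratings.length === 0) continue;  // no data for this attribute yet
      const avg = ratings.reduce((a, b) => a + b, 0) / ratings.length;
      total += weight * avg;
    }
    return total;
  };
  return [...items].sort((a, b) => score(b) - score(a));  // best first
}
```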

Points of interest:

1 – Unfolding of consensus: because both Seekers and Reviewers expose value systems that can be maintained and expanded over the long term, and because attributes themselves morph, reviewing and ranking can be an ongoing, real-time process.

Perhaps standing requests for rankings / recommendations can be left with the system, with the owner of that request simply looking in at the results as time goes by (daily, monthly, yearly) to see what ranks well, according to the Value System they provided, and what doesn’t.
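A standing request could be as simple as a saved (value system, item type) pair that the engine re-evaluates against the latest reviews. Building on the hypothetical sketch above:

```typescript
// Hypothetical: a saved request that gets re-run on a schedule.
interface StandingRequest {
  owner: string;
  kind: string;                            // which type of Reviewable to rank
  valueSystem: ValueSystem;
  cadence: "daily" | "monthly" | "yearly"; // how often to refresh the ranking
}

// Re-running the same request against the latest reviews yields a fresh
// ranking; as reviews accumulate and value systems evolve, so does the result.
function evaluate(req: StandingRequest, items: Reviewable[], reviews: Review[]): Reviewable[] {
  return rank(items.filter(i => i.kind === req.kind), reviews, req.valueSystem);
}
```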

2 – Easily ‘mine’ information from different perspectives: this supports the open-ended, ‘research mode’ behaviour described earlier. Users can create as many value systems as they want (which, by the way, would definitely need to be called something else at the front-end; maybe ‘search profile’?). E.g. if I were buying a new computer I could create a value system called ‘spendthrift’ and another called ‘gamer’, each based on my interpretation of what those words mean, not the RS’s interpretation. (Although, come to think of it, what’s to prevent the RS from having its own value systems, letting all parties discover what it thinks a gamer would want in a machine?)
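Continuing the hypothetical sketch, ‘spendthrift’ and ‘gamer’ are just two ValueSystems pointed at the same data; the attribute names and figures below are made up:

```typescript
// Two value systems ("search profiles") for the same laptop search.
const spendthrift: ValueSystem = {
  name: "spendthrift",
  weights: { price: 1.0, performance: 0.2, screen: 0.1 },
};
const gamer: ValueSystem = {
  name: "gamer",
  weights: { price: 0.1, performance: 1.0, screen: 0.7 },
};

// Made-up items and reviews, just to show the mechanics.
const laptops: Reviewable[] = [
  { id: "l1", kind: "laptop" },
  { id: "l2", kind: "laptop" },
];
const laptopReviews: Review[] = [
  { itemId: "l1", ratings: { price: 9, performance: 4, screen: 5 } },
  { itemId: "l2", ratings: { price: 3, performance: 9, screen: 8 } },
];

// Same items, same reviews; the rankings differ because the weightings do.
const cheapFirst = rank(laptops, laptopReviews, spendthrift);  // l1 wins
const fastFirst = rank(laptops, laptopReviews, gamer);         // l2 wins
```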

3 – Open standards: to make it worthwhile to build and populate value systems, it would be nice if the end-user didn’t have to start from scratch each time they visited a new site / RS. Maybe users could keep their value systems on central servers, so they follow them around wherever they go (kind of like the way your Gravatar follows you around).
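In the simplest case, a portable value system is just a small document that any participating RS agrees to understand. A hypothetical serialization of the ‘gamer’ profile from above:

```typescript
// Hypothetical: serialize a ValueSystem so it can live on a central server
// and follow the user from one RS to another (Gravatar-style).
const exported = JSON.stringify(gamer);
// => {"name":"gamer","weights":{"price":0.1,"performance":1,"screen":0.7}}

// Any RS that understands the (hypothetical) shared schema can rehydrate it.
const imported: ValueSystem = JSON.parse(exported);
```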

4 – Trust: maybe in the future, review systems can themselves be ranked (oh dear… chicken-and-egg alert) according to the validity of their recommendations. Maybe the good ones could put their money where their mouth is and ‘guarantee’ your purchase if it resulted from a recommendation from their own engine? Hmm…

Either way, review systems need to evolve.

*****

Updates 2009-01-06:

  • I think I used the word ‘Perspective’ by accident; since it was capitalized but not ‘defined’ anywhere, I got rid of it.
  • Just wanted to clarify (because I didn’t) how a value system differs from a search form: the latter is static, in that its parameters take on fixed values (or fixed ranges of values), whereas the proposed Value System applies a weighting to the ratings / scores provided for each attribute of interest. Search forms generally have no persistence (other than the ‘save your search’ functionality that exists on some RSes); Value Systems are built with persistence in mind, and can be applicable to a wide range of searches across different types of Reviewables. To re-use the ‘spendthrift’ and ‘gamer’ examples from earlier: if I created those Value Systems to help me buy a laptop, they may be reused to inform my next (goal-oriented) search, say for a digital camera this time.
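A tiny (still hypothetical) illustration of that difference, reusing the types from the sketches above:

```typescript
// A search form is static and throwaway: fixed parameter values that filter
// items in or out, e.g. (hypothetically) laptops.filter(l => priceOf(l) < 1000).

// A Value System persists and transfers across Reviewable types: the "gamer"
// profile built for the laptop search re-ranks cameras without being redefined.
const cameras: Reviewable[] = [
  { id: "c1", kind: "camera" },
  { id: "c2", kind: "camera" },
];
const cameraReviews: Review[] = [
  { itemId: "c1", ratings: { price: 8, performance: 5 } },
  { itemId: "c2", ratings: { price: 4, performance: 9 } },
];

// Attributes the profile weights but cameras lack (e.g. "screen") are simply
// skipped by rank(), so the same weighting scheme applies cleanly here too.
const rankedCameras = rank(cameras, cameraReviews, gamer);
```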
