Subject: Re: Why sets receive a ZERO?
Newsgroups: lugnet.general, lugnet.admin.general
Date: Mon, 18 Nov 2002 18:40:16 GMT
In lugnet.general, Kerry Raymond writes:
> If we have a 0-100 scale, then assuming some kind of normal distribution, we
> would expect the average rating to be about 50 with a standard deviation of
> (say) 20. Therefore, it becomes possible to moderate a member's ratings
> accordingly. Basically you take the set of actual ratings they have
> contributed, and then calculate the actual mean and standard deviation of
> those ratings. Then you adjust each actual rating N as follows:
>
> X * N + Y
>
> where X is the relative standard deviation and Y is the relative mean. The
> effect of doing this is that the adjusted ratings should produce the desired
> mean and standard deviation. Thus *overall* set ratings will have a mean of
> 50 and s.d. of 20.

Heh-- I actually did something very similar for our company when we were
sending out our customer service surveys (rated 1-5). Obviously some clients
were overly thrilled with us and just gave us straight 5's, and some were mad
at us and gave us straight 1's. (In our company's defense, we had far more
people who gave us all 5's than all 1's.) One averaging system I tried
effectively turned all of these people's votes into 3's. It was very
interesting. Needless to say, I like the idea.
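
For concreteness, here's a minimal Python sketch of that "X * N + Y"
adjustment as I read it: X and Y are chosen per member so that member's
ratings land on the target mean of 50 and s.d. of 20 (the exact targets and
the handling of zero-spread voters are my assumptions, not spelled out above):

    from statistics import mean, stdev

    def adjust_ratings(ratings, target_mean=50.0, target_sd=20.0):
        # Rescale one member's ratings to the target mean and s.d.
        # via the per-member adjustment N -> X * N + Y.
        m = mean(ratings)
        sd = stdev(ratings) if len(ratings) > 1 else 0.0
        if sd == 0:
            # A voter who gives straight 100's (or straight 1's) carries
            # no relative information; every vote collapses to the mean.
            return [target_mean] * len(ratings)
        x = target_sd / sd        # the "relative standard deviation"
        y = target_mean - x * m   # the "relative mean"
        return [x * n + y for n in ratings]

On a 1-5 survey with target_mean=3.0, a client who answered straight 5's
gets every vote adjusted to 3-- exactly the collapse described above.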

> Because the process is entirely mechanical, it requires only programming and
> no human intervention.

I'm not entirely sure to what this is referring-- was there a system
elsewhere mentioned that was in part manual?

> Note: this system is still based on the assumption that a high rating
> indicates a set preferred by the user to a low rating. If people are
> reverse-rating sets (through error, malice, or a desire to deliberately
> distort the results in some way), then nothing can solve that problem, but
> the more extreme results will probably be scaled back to something more
> reasonable and the consequent impact on the overall rating of a set will be
> reduced.

To some extent this is deal-with-able with a "remove out-of-whack votes"
system. If a set gets 45 100's, 37 90's, 18 80's, 2 70's, and 1 0 rating,
the 0 rating falls far enough outside the standard bell curve to
automagically throw it out. Sets whose data didn't follow such a nice curve
would be tougher to prune misfit votes from, but then again, if a set has
erratic data, it might very well have earned a 100 and/or 0 vote from someone.
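
A quick sketch of that throw-out rule (the three-s.d. cutoff is my guess;
nothing above pins down exactly what counts as out-of-whack):

    from statistics import mean, stdev

    def drop_out_of_whack(votes, k=3.0):
        # Discard votes more than k standard deviations from the mean.
        if len(votes) < 3:
            return votes
        m, sd = mean(votes), stdev(votes)
        if sd == 0:
            return votes
        return [v for v in votes if abs(v - m) <= k * sd]

    votes = [100] * 45 + [90] * 37 + [80] * 18 + [70] * 2 + [0]
    kept = drop_out_of_whack(votes)

On the example distribution, the lone 0 sits roughly 7.5 standard deviations
below the mean and gets discarded, while the two 70's (about 1.8 s.d. out)
survive.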

Another system I experimented with was "remove the lowest and highest vote
for every N votes that exist for a given set, disregarding the default '50'
vote". It worked OK.
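
Something like this, I assume (N=10 and the reading of "disregarding the
default '50' vote" as dropping untouched default ratings are both my guesses
at the details):

    def trimmed_mean(votes, n=10, default=50):
        # Trim one lowest/highest pair per n votes, ignoring votes
        # left at the default value, then average what's left.
        real = sorted(v for v in votes if v != default)
        k = len(real) // n
        trimmed = real[k:len(real) - k] if k else real
        return sum(trimmed) / len(trimmed) if trimmed else float(default)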

The real issue (I think) isn't in overall individual set ratings, however,
but in set *rankings*. In the above system, for example, the 0 vote brings
the mean down from 92.255 to 91.359.  No big whoop. But when you show the
"Top X sets of all time based on Lugnet rankings", it's enough to bring it
down SIGNIFICANTLY. I've played around with the "remove N votes" system on
the Lugnet guide rankings with some interesting results. Some sets that
"should" be on the top 3 don't even make it to the top 10, etc. IIRC the
first time I tested (with all sorts of values for N), the Guarded Inn ranked
#1 the most times, but in the straight mean ranking (as shown on the Lugnet
page) it rated somewhere around #18. A more recent test (September 16th this
year) revealed the Black Seas Barracuda 'should' have been #1 but was ranked
#16 on Lugnet. Tsk tsk. I'd be curious to see what your suggested system
would reveal...
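
For anyone who wants to reproduce that kind of experiment, it boils down to
ranking the same vote data under two scoring functions. A sketch (the vote
data here is made up for illustration, not a real Guide export):

    # Hypothetical vote data -- stand-ins, not actual LUGNET numbers.
    guide_votes = {
        "Guarded Inn": [100] * 20 + [90] * 5,
        "Black Seas Barracuda": [100] * 40 + [90] * 10 + [0] * 2,
    }

    def rankings(votes_by_set, score):
        # Best-first list of set names under the given scoring function.
        return sorted(votes_by_set, key=lambda s: score(votes_by_set[s]),
                      reverse=True)

    straight = rankings(guide_votes, lambda v: sum(v) / len(v))
    trimmed = rankings(guide_votes, trimmed_mean)  # sketch from above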

> I've argued elsewhere on LUGnet that most recent releases get rated
> (good/bad as appropriate), but only the older *better* sets get rated (and
> hence rated favourably), as everyone forgets about the also-ran sets of the
> past and doesn't bother to rate them. IMHO this is why older sets tend to
> overrank newer sets.

I tend to rate according to:
- 40 points for piece selection, price per piece, etc
- 40 points for set design, set features, etc
- 20 points for set appeal (bonus points!)

I usually start each set off at 50 and go from there-- i.e. a 'regular' set
starts with 20 points for piece selection, 20 points for set design, and 10
points for appeal... But I will admit my own 'average' rating that I've
given is probably somewhere around 70.
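
As a formula, that rubric is just a sum of three capped categories (the
function name and the caps mirror my breakdown above):

    def rubric_score(piece_selection, set_design, appeal):
        # piece_selection and set_design out of 40, appeal out of 20;
        # a "regular" set scores 20 + 20 + 10 = 50.
        assert 0 <= piece_selection <= 40 and 0 <= set_design <= 40
        assert 0 <= appeal <= 20
        return piece_selection + set_design + appeal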

One of the flaws of course being what you mention-- people don't rate junky
sets as much as they rate great sets. Personally, I only rate what I own.
And (obviously) if I think a set is junky, I'm not going to want to buy it,
and therefore I probably won't own it. I don't own any of the silly racers--
and if I did, I'd probably rate them at 0-20 each. But I don't own them, so
I won't rate them. As a result, my average set rating (and presumably
others') is probably somewhere closer to 50 or 60, but we'll most likely
never know...

DaveE


