more aggrevaluation

[this is a comment on Brayden’s post, which got long enough that I’m reposting the comment here. Go read that thread though, it’s quite interesting.]

Ha, I step out for an evening and realize I’m being called out by name, even praised, after I was so unkind to orgtheory. Brayden, you’re trying to make me reconsider…

A few things. First, I’m lumping together under ‘aggregation’ and ‘wisdom of crowds’ a number of different types of activities. Recommendation engines and the Hollywood Stock Exchange (where I’m currently ranked 47,943rd, with a lifetime ROI of +1,164.95%) are very different, and rather than go with the ‘it’s complicated’ routine, I lumped a bunch together. I’d go a step further than Lena and say it’s a category error to suggest that expert opinion and crowd-sourced outcomes are generated by the same logic. I’d like to hear more about the relative value of experts vs. the crowd across these differences.

Second, design-wise, I might be wrong about exploitation v. exploration. I had in mind that if you put a movie on HSX and try to value it, you will be way off on movies that don’t already conform to existing kinds of movies. Or, if you base your decisions on what kinds of things people like/want, you end up with the PlayStation 3 and the Xbox 360, and miss the Wii – because people didn’t ‘want’ it before it existed. Ugh, I’m getting muddy.

But I might be wrong – I mean, Paul DePodesta (of Moneyball, soon to be a movie with Brad Pitt, and ironically undervalued at $31.75 on HSX) explicitly noted that he used data, and none of the existing scouting experts’ assumptions, to come up with a new way to assess player value. Clearly exploration through data mining.
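
(For the terminally curious: the exploitation/exploration tradeoff I’m gesturing at is usually formalized as a multi-armed bandit. Here is a minimal epsilon-greedy sketch, with made-up, deterministic payoffs standing in for product concepts; it’s my illustration of the tradeoff, not anything from Brayden’s post. The point is just that a pure exploiter keeps funding whatever looked good early and never discovers the Wii-like option nobody asked for.)

```python
import random

# Hypothetical illustration: an epsilon-greedy bandit. "Arms" stand in for
# product concepts whose payoffs the decision maker does not know in advance.
# Payoffs are made up and kept deterministic so the example stays transparent.

def average_payoff(true_payoffs, epsilon, rounds=10_000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_payoffs)
    estimates = [0.0] * len(true_payoffs)
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_payoffs))   # explore: try something untested
        else:
            arm = max(range(len(true_payoffs)), key=estimates.__getitem__)  # exploit the current favorite
        reward = true_payoffs[arm]
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean per arm
        total += reward
    return total / rounds

# Arms 0 and 1 are the familiar console concepts; arm 2 is the Wii nobody asked for.
payoffs = [1.0, 1.1, 2.0]
print("pure exploitation:", round(average_payoff(payoffs, epsilon=0.0), 2))  # locks onto arm 0, ~1.0
print("10% exploration:  ", round(average_payoff(payoffs, epsilon=0.1), 2))  # discovers arm 2, ~1.9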

(and apropos of Mike’s comment above, on why 538 and Netflix do well: it seems to be something about the difference between regression analysis and Singular Value Decomposition/factor analysis, but that requires more explanation)
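
(To unpack that parenthetical a little, here is a toy contrast of my own, not a description of how 538 or Netflix actually work: regression predicts an outcome from covariates you named in advance, while SVD/factor analysis recovers latent dimensions nobody named in advance. All numbers below are fabricated for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Regression: we posit the covariate (polls) and the outcome (vote share),
# then estimate the relation between them.
polls = rng.normal(50, 5, size=200)
vote = 0.9 * polls + 5 + rng.normal(0, 2, size=200)
slope, intercept = np.polyfit(polls, vote, 1)
print("regression recovers roughly the relation we posited:", round(slope, 2), round(intercept, 2))

# Factor model: a ratings matrix secretly generated by two hidden "tastes".
# SVD finds those dimensions without our having named them.
users, movies, k = 100, 40, 2
taste = rng.normal(size=(users, k))      # latent user preferences
appeal = rng.normal(size=(k, movies))    # latent movie attributes
ratings = taste @ appeal + rng.normal(0, 0.1, size=(users, movies))

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
print("leading singular values:", np.round(s[:5], 1))  # two dominant values = two latent factors
```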

But my problems are theoretical, and here I enlist Hahn and Tetlock’s definitive Information Markets: A New Way of Making Decisions. Which, for a theoretical basis, starts (on p2) with “Why do information markets work as well as they do?” And then references… The Wisdom of Crowds. And then moves on to design. And Surowiecki’s theoretical answer?

“At heart, the answer rests on a mathematical truism. If you ask a large enough group of diverse, independent people to make a prediction or estimate a probability, and then average those estimates, the errors each of them makes in coming up with an answer will cancel themselves out. Each person’s guess, you might say, has two components: information and error. Subtract the error, and you’re left with information…With most things the average is mediocrity. With decision making, it’s often excellence. You could say it’s as if we’ve been programmed to be collectively smart” (p10-11).

Not really obvious at all. There is no answer there as to why a market metaphor would result in something better than experts. But we have the technology to do it, it seems experimentally to work, and in web 2.0 we can get users to do it all for free! And so screw it – off with the design team and on with the user testing.
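
(Postscript, since I leaned on the quote above: the ‘mathematical truism’ really is just the law of large numbers. A toy simulation, my sketch and not Surowiecki’s, with fabricated numbers, shows both the magic and the catch: independent, unbiased errors average away, but a bias the whole crowd shares does not.)

```python
import numpy as np

# Crowd of 10,000 guessers estimating a true value of 100.
rng = np.random.default_rng(1)
truth = 100.0
n = 10_000

independent = truth + rng.normal(0, 20, size=n)        # diverse, independent errors
biased = truth + 15 + rng.normal(0, 20, size=n)        # everyone shares the same +15 bias

print("average of independent guesses:", round(independent.mean(), 2))  # close to 100
print("average of biased guesses:     ", round(biased.mean(), 2))       # near 115; the bias survives
```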
