Aggregation aggravation

It seems to me that one of the fundamental advances, and problems, of web 2.0 is that it pits expertise against aggregation. The ‘old’ system (and here I would say that these are overlapping, not coterminous, ways of doing things) is one of expert reviewers, or critics. You want to know what movie to see, so you ask Roger Ebert (though his recent review and ongoing defense of Nicolas Cage’s Knowing strike me as bizarre). If you want to know what music to listen to, you turn to Sasha Frere-Jones. For consumer goods, Consumer’s Guide. For electronics, David Pogue. And so on.

The point is fractal, incidentally. In this ‘old’ system, for policy advice you would call on sociological experts (naturally, though maybe other, lesser social-scientific experts if you’re interested in worse advice). In organizations, you would look for marketing advice from your marketing division, operations from operations, finance from finance. Obviously, the more general the point I make, the more fault you can find with it. And you would be right. But bear with me for a moment.

The ‘new’ system rests on Wisdom of Crowds-style knowledge. That is, if you take a bunch of people and ask them their opinions, you can get a better fix on uncertain knowledge than you can with a small number of experts. Now, Surowiecki himself is not this simplistic: at minimum one must overcome problems of cognition, coordination, and cooperation. But this said, proponents of this kind of system point to rather stark indicators of success: Google’s PageRank (though I find the idea that they use 500 million variables and 2 billion terms absurd); Yelp; the Iowa Electronic Markets; Metacritic. And more generally, we see ‘markets’/crowdsourcing/data-mining substituted for design, marketing, and strategy. Here I mean A/B testing ad absurdum as a substitute for design. Data-mining as a substitute for marketing. Quantitative finance as a substitute for market forecasting.
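The statistical core of the Wisdom of Crowds claim fits in a few lines. Everything below (the true value, the noise level, the crowd size) is made up for illustration, not data:

```python
import random

random.seed(42)

TRUE_VALUE = 100.0   # the unknown quantity (say, jellybeans in a jar)
N_GUESSERS = 1000    # size of the crowd

# Each guesser sees the truth through independent, unbiased noise.
guesses = [TRUE_VALUE + random.gauss(0, 25) for _ in range(N_GUESSERS)]

# The crowd's answer is the average of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Compare with the typical error of any single guesser.
individual_errors = [abs(g - TRUE_VALUE) for g in guesses]
avg_individual_error = sum(individual_errors) / len(individual_errors)

print(f"crowd error:          {crowd_error:.2f}")
print(f"avg individual error: {avg_individual_error:.2f}")
# With independent, unbiased errors, the crowd's error shrinks
# roughly like 1/sqrt(N). That independence assumption is doing
# all the work -- which is exactly where the trouble starts.
```

Note that the whole result hangs on the errors being independent and unbiased; correlated or systematically skewed guessers (see below) break it.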

This whole edifice actually rests on a kind of efficient markets hypothesis, or more specifically a Friedrich Hayek-type consolidation of ‘adverse’ knowledge (meaning, in this context, private knowledge) via a market mechanism. While Hayek wanted to argue that market-based societies are better than centrally planned societies, his work has become the intellectual touchstone of all things information market. And really that’s what it comes down to. Crowd-sourcing: a replacement of expertise with the market.

However, there are some things to think about here that make this ‘new’ system quite problematic. And I ain’t sayin’ so just because I’m an expert (after all, the policy people really don’t come talking to sociologists, despite my preferences). There is one specific problem and one theoretical one.

The specific problem is that some people are just crazy, and aside from filling out the tail end of a distribution curve, it’s not at all clear what these folks contribute to the crowd. Old but still hilarious is Andy Baio’s Amazon Knee-jerk Contrarian Game. Personally, I like the ratings game at Yelp, an often-loved but massively crowd-sourced guide. Take, for instance, the Museum of Modern Art in NYC (i.e., one of the best modern art museums in the US, if not the world):

Why 1 star? Its just a horrible place to visit never ever again, screw this contemporary art thing, the exhibits they had going on were……… yeah no way to describe the sheer disappointment in the place. The place is designed to shock and awe you, all it did was bore me.
Most of the exhibits at MoMA are just random objects or B.S. paintings–hardly classifiable as art.
I could just go down to my garage or get a toddler to paint on a canvas to receive the MoMA experience. No crowds or superinflated entrance fees there, either.

I was so jazzed to go there. Many people I know raved about it.
All I came away with from this place was one word: Overrated.
Quality Modern Art is subjective. In my mind, for the hype this place gets is unwarranted. So sad…

So how do these reviews contribute to overall ratings systems? More broadly, what if the feedback/view/idea/opinion from your customers is just wrong? In the 2.0 way of thinking about things, this is like saying that a market price is incorrect – it is axiomatically impossible, barring something wrong with the system (an information problem being the first culprit). And there is no ‘expert’ to say otherwise.
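To make the contrarian problem concrete, here is a minimal sketch (with made-up numbers, not actual Yelp data) of what a handful of knee-jerk one-star reviews does to a venue’s aggregate rating, depending on how you aggregate:

```python
# A hypothetical venue: mostly delighted reviewers, a few contrarians.
ratings = [5] * 80 + [4] * 10 + [1] * 10

# Mean rating: every one-star review drags the aggregate down.
mean_rating = sum(ratings) / len(ratings)

# Median rating: the contrarian tail is ignored entirely.
sorted_ratings = sorted(ratings)
median_rating = sorted_ratings[len(sorted_ratings) // 2]

print(f"mean:   {mean_rating:.2f}")  # 4.50 -- pulled down by the tail
print(f"median: {median_rating}")    # 5 -- unmoved by the contrarians
```

The choice of aggregation rule is itself an editorial judgment, which is the point: even the ‘pure’ crowd mechanism smuggles an expert decision in through the back door.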

More theoretically, it has never really been adequately explained why ‘market-like’ information crowd-sourcing should work. I understand why markets might produce a price that incorporates most public and private information about a commodity. But the widespread substitution of expertise with data mining and crowd-sourcing is a market metaphor more than a market. Why should a metaphor work? This is at the heart of someone like Daniel Davies’ criticism. And I get that sometimes aggregation does work. But there’s no good reason why.

My own feeling is that, using March’s metaphor of ‘exploitation’ and ‘exploration’ (where the first is the plumbing of existing knowledge/arenas, and the second is the seeking out of new opportunities), aggregation mechanisms are better at exploitation than exploration. They do better with existing standards of knowledge, of tastes, of commodities, than they do with something that is new. You know, Blue Ocean and such. I think there are better solutions for a 2.0 world that combine expertise and aggregation (for instance, FiveThirtyEight’s work on the 2008 elections, which combined data mongering with theoretically-driven and field-visit-driven analysis). But this post is already too long.
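For what it’s worth, March’s tradeoff maps neatly onto the multi-armed bandit problem. A toy epsilon-greedy sketch (two hypothetical options with made-up payoffs, standing in for a known-good choice and an unknown-better one) shows why pure exploitation of what the crowd already ‘knows’ leaves value on the table:

```python
import random

random.seed(1)

# Two hypothetical options: option 0 is decent and well-reviewed,
# option 1 is actually better but nobody has tried it yet.
TRUE_MEANS = [0.6, 0.8]

def pull(arm):
    """Return a 0/1 payoff from the chosen option."""
    return 1.0 if random.random() < TRUE_MEANS[arm] else 0.0

def run(epsilon, rounds=5000):
    """Epsilon-greedy: explore with prob. epsilon, else pick the best so far."""
    counts = [0, 0]
    values = [0.0, 0.0]  # running average payoff per option
    total = 0.0
    for _ in range(rounds):
        if random.random() < epsilon:
            arm = random.randrange(2)                 # explore
        else:
            arm = 0 if values[0] >= values[1] else 1  # exploit
        reward = pull(arm)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total += reward
    return total / rounds

avg_exploit = run(0.0)  # never explores: stuck on the first decent option
avg_explore = run(0.1)  # 10% exploration: eventually finds the better one
print(f"pure exploitation: {avg_exploit:.3f}")
print(f"with exploration:  {avg_explore:.3f}")
```

The pure exploiter locks onto option 0 and never learns that option 1 exists; a little exploration discovers it. That, in miniature, is why aggregation over existing opinion is structurally biased toward what is already known.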
