Judging the wine judges


The Royal Sydney Wine Show completed its annual awards tasting a couple of days ago, and the results can be downloaded from the web. Browsing through the long and impressive list of award winners, I contemplate the fact that the fate of a wine is fickle.

I cannot help but think of a recent article by Prof. Robert T. Hodgson, published on the American Association of Wine Economists website, about the performance of wine judges. The title of his paper: “An Examination of Judge Reliability at a Major U.S. Wine Competition”.

My burning question is: are these findings on the reliability of American wine judges also applicable to our Australian wine judges, or to the Europeans? I am afraid the most likely answer is “yes”.


And what were the main findings of Prof. Hodgson’s experiment? In short, the wine judges proved quite unreliable. Only about 10% of them judged wines consistently, meaning they were able to replicate their score for the same wine within a single medal group. To put it another way: 90% of the judges could not give the same wine the same rating when it was served to them more than once.
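To make the consistency criterion concrete, here is a minimal sketch of what “replicating a score within a single medal group” means. The medal bands below are hypothetical illustrations on a 100-point scale; the paper does not publish the competition’s exact cut-offs.

```python
# Hypothetical medal bands (assumed for illustration, not from Hodgson's paper).
MEDAL_BANDS = [
    ("Gold", 92, 100),
    ("Silver", 86, 91),
    ("Bronze", 80, 85),
    ("No award", 0, 79),
]

def medal(score):
    """Map a numeric score to its medal group."""
    for name, low, high in MEDAL_BANDS:
        if low <= score <= high:
            return name
    raise ValueError(f"score out of range: {score}")

def consistent(triplicate_scores):
    """A judge is 'consistent' on a wine if all three scores for the
    same wine (poured three times from one bottle) land in one medal group."""
    return len({medal(s) for s in triplicate_scores}) == 1

# Example: two judges scoring the same wine served three times.
print(consistent([84, 82, 85]))  # True  -- all three scores are Bronze
print(consistent([84, 88, 93]))  # False -- Bronze, Silver, Gold
```

By this definition, the study found that only around one judge in ten passed the check, and a further one in ten occasionally spread the same wine from Bronze all the way to Gold.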

The implications of this are clear. A wine awarded a bronze medal might have received a different award (or none at all) from the same judges in the same session had it been presented moments earlier or later. This is of course great news. If the cost of entering wine competitions is low for you, just submit. Otherwise, do not bother with these kinds of quality assessments. Just go with your clients: if they are happy and buy your wine, why should you care what wine judges say about your product?


An Examination of Judge Reliability at a Major U.S. Wine Competition
by
Robert T. Hodgson
Journal of Wine Economics, Vol. 3, No. 2, 105-113

Abstract

Wine-judge performance at a major wine competition has been analyzed from 2005 to 2008 using replicate samples. Each panel of four expert judges received a flight of 30 wines imbedded with triplicate samples poured from the same bottle. Between 65 and 70 judges were tested each year. About 10 percent of the judges were able to replicate their score within a single medal group. Another 10 percent, on occasion, scored the same wine Bronze to Gold. Judges tend to be more consistent in what they don’t like than what they do. An analysis of variance covering every panel over the study period indicates only about half of the panels presented awards based solely on wine quality. (JEL Classification: Q13, Q19)
