From the sidelines: Judging the wine judges II

September 9, 2009

The day I received the results from the 2009 Decanter World Wine Awards, I also received an e-mail from Karl Storchmann of the American Association of Wine Economists announcing the latest issue of the Journal of Wine Economics (Vol. 4, No. 1).

The lead article in this issue of the journal is about wine judges and their reliability in judging wines. It made for perfect contrasting reading next to the long list of award winners from Decanter. Once again, Robert T. Hodgson has summarized further research on the topic.

The title of the article is “An Analysis of the Concordance among 13 U.S. Wine Competitions”. Hodgson followed over 4,000 wines entered in these competitions, of which 2,440 wines were entered in more than three competitions. 47% of these wines received a Gold medal in at least one show, but 84% of these same wines also received no award at all in another show. These results indicate that winning a Gold medal is greatly influenced by chance alone. Or to put it another way: wines deemed of extraordinary quality by some wine judges were assessed as very ordinary by others.
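For readers who like to see the arithmetic, here is a minimal Monte Carlo sketch of what “chance alone” would look like: awards are drawn independently of the wine, and a large share of the chance-made Gold winners still end up with no award in another show. The per-show award rates and the number of competitions per wine are hypothetical round numbers of my own, not figures from the paper.

```python
# A minimal sketch (mine, not Hodgson's) of how stochastic independence
# between competitions can produce the pattern he reports. Award rates and
# the number of shows per wine are assumed round numbers, not study data.
import random

random.seed(1)

N_WINES = 2440              # wines entered in several competitions
N_COMPETITIONS = 4          # assumed number of shows per wine
AWARD_RATES = [("gold", 0.10), ("silver", 0.20), ("bronze", 0.25), ("none", 0.45)]

def random_award():
    """Draw an award at random, independent of the wine and of other shows."""
    r, cumulative = random.random(), 0.0
    for medal, p in AWARD_RATES:
        cumulative += p
        if r < cumulative:
            return medal
    return "none"

gold_somewhere = 0
gold_but_shut_out_elsewhere = 0
for _ in range(N_WINES):
    medals = [random_award() for _ in range(N_COMPETITIONS)]
    if "gold" in medals:
        gold_somewhere += 1
        if "none" in medals:            # no award at all in some other show
            gold_but_shut_out_elsewhere += 1

print(f"Gold in at least one show: {gold_somewhere / N_WINES:.0%} of wines")
print(f"...of which also got no award in another show: "
      f"{gold_but_shut_out_elsewhere / gold_somewhere:.0%}")
```

With these made-up rates the second figure lands in the low eighties of percent, which is simply what independence between shows implies; nothing about wine quality is needed to get there.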

In come the Decanter World Wine Awards 2009 results. What does the study say about these awards? Nothing, of course. My skepticism about wine shows, however, is rather stronger than before.

One may ask: are Hodgson’s findings about wine shows in the U.S. applicable to shows such as Decanter? Can we safely conclude that some of the Decanter award winners would be seen as ordinary by other wine judges at other shows, and that some of the non-winners could easily win a Gold somewhere else? Or could the Decanter wine judges replicate the tasting with identical findings? And what would their “margin of error”, their inconsistency, be?

What Hodgson’s study also tells us is that wine judges are just human beings, inconsistent and unpredictable. We might rightly assume that they are trying to do the right thing but fail at it from time to time. The judges at the surveyed competitions, though, do seem to share an idea of which wines they do not like and which therefore win no awards at all. That’s at least something.

A few wine blogs have taken up the issue as well. Alder Yarrow, for instance, writes on Vinography about the AAWE paper and calls for a halt to the proliferation of state wine shows. He admits, however, that the shows surveyed were, and I quote, “essentially the largest and most prestigious wine competitions in America”.

What does this tell us wine consumers? Well, wine quality and medals won at wine shows appear to be largely “orthogonal”: the presence of one does not imply the presence of the other. Therefore, trust your own taste and buy what you like, preferably after tasting, and let the wine shows be wine shows.

How about the vintner? Well, it is very nice for any wine producer to be recognized for outstanding quality and performance. So it is nice to win a medal at a wine show or to get a five-star winery rating from James Halliday, for instance. But it is not the end of the world if such recognition does not come along. Most important is that ordinary people like to drink and buy your wine; the rest takes care of itself.


Cheers folks, trust your own judgment and taste as many wines as you can.


Judging the wine judges

February 21, 2009


The Royal Sydney Wine Show completed its annual awards tasting a couple of days ago, and the results can be downloaded from the web. Browsing through the long and impressive list of award winners, I cannot help contemplating how fickle the fate of a wine is.

I cannot help but think of the recent article by Prof. Robert T. Hodgson, published on the American Association of Wine Economists website, about the performance of wine judges. The title of his paper is “An Examination of Judge Reliability at a Major U.S. Wine Competition”.

My burning question is: are these findings on the reliability of wine judges in the USA also applicable to our Australian wine judges, or to the Europeans? I am afraid that the most likely answer would be a “yes”.


And what were the main findings of Prof. Hodgson’s experiment? Well, we can conclude that wine judges were shown to be quite unreliable. Only about 10% of the judges could judge wines consistently in the experiments, meaning that they were able to replicate their score for the same wine within a single medal group. To put it another way: 90% of the judges were not able to come up with the same rating for the same wine.

The implications of this are clear. The winner of a Bronze medal might receive a different award (or indeed no award at all) from the same judges in the same session, had the wine been presented moments earlier or later. This is of course great news. If the cost of participating in wine competitions is low for you, just submit. Otherwise, do not bother with these kinds of quality assessments. Just go with your clients: if they are happy and buy your wine, why should you care what wine judges say about your product?
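To make the replicate test concrete, here is a rough sketch under assumptions of my own (a 100-point scale, hypothetical medal bands, and an assumed per-tasting scoring noise of a few points); none of these numbers come from Hodgson. The point it illustrates is that even an honest judge whose blind scores wander slightly will only rarely keep three pours of the same bottle inside one medal group.

```python
# A rough sketch of the triplicate test under my own assumptions (scale,
# medal bands, noise level); it is not Hodgson's model or data. One judge
# scores the same wine three times; we ask how often all three scores land
# in the same medal band.
import random

random.seed(7)

def medal(score):
    """Hypothetical medal bands on a 100-point scale."""
    if score >= 92: return "gold"
    if score >= 88: return "silver"
    if score >= 84: return "bronze"
    return "none"

def judge_replicates(true_quality, noise_sd):
    """Three blind scores of the same bottle: true quality plus Gaussian noise."""
    return [random.gauss(true_quality, noise_sd) for _ in range(3)]

N_JUDGES = 70           # roughly the number of judges tested per year
NOISE_SD = 3.0          # assumed per-tasting scoring noise, in points

consistent = 0
for _ in range(N_JUDGES):
    scores = judge_replicates(true_quality=89.0, noise_sd=NOISE_SD)
    if len({medal(s) for s in scores}) == 1:   # all three in one medal group
        consistent += 1

print(f"Judges whose three scores stayed in one medal group: "
      f"{consistent / N_JUDGES:.0%}")
```

With a silver-quality wine and a scoring noise of only three points, the share of “consistent” judges comes out in the neighbourhood of the 10% the study reports; the exact figure depends entirely on the assumed bands and noise.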


An Examination of Judge Reliability at a Major U.S. Wine Competition
by
Robert T. Hodgson
Journal of Wine Economics, Vol. 3, No. 2, pp. 105–113

Abstract

Wine-judge performance at a major wine competition has been analyzed from 2005 to 2008 using replicate samples. Each panel of four expert judges received a flight of 30 wines imbedded with triplicate samples poured from the same bottle. Between 65 and 70 judges were tested each year. About 10 percent of the judges were able to replicate their score within a single medal group. Another 10 percent, on occasion, scored the same wine Bronze to Gold. Judges tend to be more consistent in what they don’t like than what they do. An analysis of variance covering every panel over the study period indicates only about half of the panels presented awards based solely on wine quality. (JEL Classification: Q13, Q19)