I’ve always had a bit of a mixed reaction to the reports published by the Journal of Wine Economics. On the one hand, I love the fact that serious statistical attention is being given to topics like wine awards, in the hopes that scientific examination will help reveal more about how wine and consumers interact. BUT… I’ve also had to deconstruct their lead articles to highlight conclusions drawn from their analysis that I felt weren’t adequately supported by their data.
Well, now it seems that the American Association of Wine Economists has gone off the deep end.
The latest issue of the JWE (Volume 4, Issue 1, Spring 2009) contains a lead-off article by Robert T. Hodgson titled “An Analysis of the Concordance Among 13 U.S. Wine Competitions.” After reading the nine-page analysis, I’d go so far as to say that the AAWE’s release borders on totally irresponsible. In my opinion, the science of how the statistics are applied is, at best, specious, and at worst might be downright deceitful.
Heady criticism, right? Let’s get deconstructin’!
The report examines data from 13 U.S. wine competitions in 2003. Here’s an excerpt from the article abstract (emphasis is mine):
“An analysis of the number of Gold medals received in multiple competitions indicates that the probability of winning a Gold medal at one competition is stochastically independent of the probability of receiving a Gold at another competition, indicating that winning a Gold medal is greatly influenced by chance alone.”
Stochastic independence is simply another way of saying that the events are not related. For example, rolling a 5 on a die tells you nothing about whether you’ll roll a 5 on your next roll; the two events are independent. In other words, a wine winning a medal in one competition doesn’t impact what it will or won’t win in another competition. Which is exactly what you’d expect from a different competition, with different judges, and competing against different wines. The problem is that none of those other conditions are detailed in the JWE report.
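For the stats-minded, here’s a minimal sketch of what independence means in practice. The numbers are purely hypothetical (a made-up 9% gold rate, not anything from the paper): if two competitions award golds independently, the chance of a wine winning gold at both is just the product of the individual chances.

```python
# Minimal sketch of stochastic independence (all numbers made up for illustration).
# If two competitions award golds independently, then
# P(gold at A and gold at B) ~= P(gold at A) * P(gold at B).
import numpy as np

rng = np.random.default_rng(42)
n_wines = 100_000
p_gold = 0.09  # hypothetical per-competition gold rate

# Simulate the same wines entering two unrelated competitions
gold_a = rng.random(n_wines) < p_gold
gold_b = rng.random(n_wines) < p_gold

p_a = gold_a.mean()
p_b = gold_b.mean()
p_both = (gold_a & gold_b).mean()

print(f"P(gold at A)                = {p_a:.4f}")
print(f"P(gold at B)                = {p_b:.4f}")
print(f"P(gold at both)             = {p_both:.4f}")
print(f"P(gold at A) * P(gold at B) = {p_a * p_b:.4f}  # roughly equal => independent")
```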
Even setting aside the fact that 13 competitions might not be a statistically representative sample, not detailing the other factors that would certainly impact the outcome of those competitions is a seriously glaring omission.
Things get worse…
Where the AAWE report drops the cork is when it makes the leap (based on analysis of partial data) to a conclusion that inappropriately challenges the validity of the wine competitions:
“An examination of the results of 13 U.S. wine competitions shows that (1) there is almost no consensus among the 13 wine competitions regarding wine quality, (2) for wines receiving a Gold medal in one or more competitions, it is very likely that the same wine received no award at another, (3) the likelihood of receiving a Gold medal can be statistically explained by chance alone.”
The report reaches this conclusion by analyzing data on gold medals awarded to wines at a small number of competitions held in one year in one country, without revealing any details on key elements that could significantly impact the outcome of those competitions:
- who the judges were
- what different wines were entered in one competition vs. another
- how many wines were tasted by each judge at each competition…
But they do graph the results against a binomial distribution, which of course sounds and looks official to anyone who didn’t pay attention in university Stats class.
“Examining the form of the distribution of Gold medals received by a particular wine entered in various competitions suggests a simple binomial probability distribution. This distribution mirrors what might be expected should a Gold medal be awarded by chance alone.”
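For a sense of what that comparison amounts to, here’s a quick sketch (with completely made-up numbers, not the study’s data) of the gold-medal counts you’d expect if medals really were handed out by chance alone:

```python
# Rough sketch: if every wine wins gold independently with the same probability at each
# of 13 competitions ("chance alone"), gold counts per wine follow a binomial distribution.
# All numbers here are hypothetical, not Hodgson's data.
import math
import numpy as np

rng = np.random.default_rng(1)
n_comps = 13      # number of competitions
p_gold = 0.09     # made-up per-competition gold probability
n_wines = 5_000   # made-up number of wines entered in all 13

# "Chance alone": each wine's gold count is the sum of 13 independent coin flips
golds_per_wine = rng.binomial(n_comps, p_gold, size=n_wines)
observed = np.bincount(golds_per_wine, minlength=n_comps + 1)

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k golds in n competitions under pure chance."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

for k in range(5):
    expected = n_wines * binom_pmf(k, n_comps, p_gold)
    print(f"{k} golds: simulated {observed[k]:5d}, chance-alone expectation {expected:7.1f}")
```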
The paper’s graph looks compelling, but there’s one problem: the conclusion is probably total bullsh*t.
The problem with this pseudo-scientific view is that it’s a bit like saying that I am always going to be stronger than my friend Bob, because in 13 attempts I jumped an average of fifteen feet into the air, while my friend Bob jumped only 4 feet. Therefore, we can conclude statistically that I am stronger than my buddy Bob. Oh, but we left out little tidbits that might influence our conclusion – like the fact that I jumped from a trampoline on the surface of the moon, while poor Bob jumped from a standstill on paved road in Iowa, while nursing a sprained left ankle.
You get the idea.
I’m not defending gold medal awards at wine competitions. Personally, I don’t pay any attention to them and I certainly don’t use them for recommending wines to others. The competitions may, in fact, be total crap, and the judging in those competitions may in fact border on random. But the latest AAWE report shouldn’t be used as a compass for navigating that kind of judgment.
The data is probably totally legit, but the analysis (as presented in the AAWE report, anyway) ignores far too many factors for the conclusions to be even close to scientific.
Cheers!
(images: 1WineDude, wine-economics.org)
I like your first graph.
That one is based on impeccable science, I can assure you!
I think that comment may have been mine. Have you ever been to one of these competitions? I went to one which was supposed to be for Southern California wines. I wanted to meet a couple of winemakers from Paso Robles, specifically Caliza, which is a new winery and makes some interesting stuff. Aside from Paso Robles being considered Southern California, which is weird to start with, I noticed very quickly that it was basically a Temecula wine party. If you're only competing against other below-average wineries, how are consumers supposed to get a realistic review of the wine quality? There wasn't one wine there that I would have included in my wine clubs, and certainly nothing I would have considered for export.
Good example of why those details cannot be dismissed in analyzing the outcome – at least, not if that analysis is going to be scientific.
Thanks for this, Joe. You certainly know your experimental science. What interests me is a point someone raised in the comments of Alder's blog. It was something along the lines of many great wineries choosing means other than competitions to promote themselves, which brings me to this thought: you cannot count on all of the best wines competing in the same place. A gold could end up meaning "best of the worst," depending on who enters a given competition. As Tom Wark might say, there's no wine Olympics at the moment. And even then, you have to strongly question whether a universal wine competition would be an effective guide to the best.
Totally agree – these competitions should be seen for what they are, which essentially is entertainment. Who doesn't love a good competition? But let's not put so much emphasis on the outcomes.
Now, that's different to me than saying that the results correlate to random chance – which I obviously don't believe for even a second :).
Thanks for the analysis of the study. I saw this yesterday on Dr. Vino, but perhaps I should have looked closer. You know what Mark Twain says…"There are three types of lies: Lies, Damn Lies and Statistics" ;-)
I think that Tyler and Alder at vinography.com are making valid points about wine competitions, sparked by the release of this study. I think that's great stuff. BUT… we need to see the study for what it is, and be clear about what it isn't. And science it ain't!
Your chutzpah is staggering (I now see why you use the word Dude in your name). I intend to wade through your critique, but for you to dismiss the work of Prof. Bob Hodgson, who taught statistics at the university level, as pseudo-scientific is pretty damn insulting. Why not just engage the merits or demerits of the argument without the putdowns?
taught for 35 years, that is.
Tom, it's not the stats or the data that are the problem, it's the interpretation that's the problem.
Those findings, as described in the report, do not support the conclusions. In some respects, I feel like the AAWE is pulling a fast one on us here – and that's insulting to their readers!
The state-run alcohol distro. monopolies do the same thing with their stats – they omit conditions, which manipulates the outcome, but present it under the guise of objectivity. That's not science – it's more like propaganda.
I should note that I'm happy for them to chime in here and prove me wrong – but I suspect the report would need to be amended for that to happen.
Joe, you should really stop talking about the "AAWE report." They issued no such report. They sponsor an academic journal with an editorial board which publishes papers by members and others. A "them" won't chime in. Bob Hodgson wrote the paper. It would be like saying the American Economics Association issued a report when an economist had a paper published in the American Economic Review. Your seeming inability to make this fundamental distinction would seem to undermine the credibility of your general critique.
Thanks, Tom – I do appreciate you pointing out the distinction.
Not sure how it undermines my argument (aside from me being stupid, that is! ;-).
I thought he taught geology (or geography?)
Oceanography was his main field. But as with many profs in the Cal State system, he was called on to teach core courses like statistics.
You're right Joe, there really is no reason to expect the same results when you evaluate a wine in front of different judges at different times in different places with different wines in different weather etc. I don't see how that makes the results invalid *or* valid. It just means they are what they are! (Did I mention I got a Double Gold in the SF Intl Wine Comp?:)
Speaking of double golds: apparently that meant that all the judges on the panel unanimously voted to give a wine a gold. Does that have statistical merit? It's hard to imagine 6 people all saying a truly crap wine was good; that being said, depending on the conditions it *could* happen. (Sorry, stole your use of the "*")
Steal away, my man!
The point is that the analysis is way, way too simple, and while it's tough but necessary in this case to reduce wine competition judging down to a result that can be analyzed statistically, this study goes too far in that direction.
I have not read the JWE article so, unfortunately, I don't feel I have much to chime in on Joe's article specifically. I can, however, imagine 6 people saying a truly crap wine is good. Hell, I can imagine with the greatest of ease, 600 people saying a truly crap wine is good! This comment in no way means to imply anything negative about yours or anybody's wine in particular. I am just saying.
One question that I have which I think is relevant is this: Are these competitions relative judgments? In other words, are the wines judged compared to the other wines entered or are they judged on their own merits alone? In the case of the relative judgments, the best wine is only better than the other wines entered. The trophy, or gold, is only as credible as the competition. Even that is dependent on who the judges are and their tastes or expertise. So many variables.
It's exactly that sort of thing that isn't touched on in the report, but I'd argue that simply ignoring it isn't viable (and that seems to be what happened in the report)…
joe:
it seems like your only argument here is that conditions vary in every wine competition. if that's the case then (as you say) the result is exactly what you might expect. that argument supports the overall conclusion: that wine competition results are closely related to chance! it's chance that you get a set of grumpy judges or you have bottle variation or that somebody farts while the judges are smelling your wine. that's the point!
i read the paper this morning, and while i also have problems with it, your "conditions" argument isn't one of them.
My argument is that the report treats competitions with varying conditions as if they were controlled and were similar. They're not.
The report's conclusions may be spot on. But their data fundamentally does NOT support those conclusions. If they wanted to do that, they would actually try to hold an experiment that *did* offer that kind of control.
What I'm saying is that I think the AAWE report is offering a shiny veneer that looks like true scientific analysis to the layperson. And it's not.
In that respect, it's not science, it's manipulation. And you should have a problem with it.
nowhere in the paper does it imply that conditions are similar and/or controlled. indeed, the fact that conditions (judges, palates, wines entered) vary so much is likely the *cause* of the chance finding.
YOU are jumping to conclusions that are not supported by the paper, and complaining that they're not supported! namely, that by not considering factors that are largely intangible and uncontrollable, the conclusion is bogus. In fact, the conclusion in the report has nothing to do with the circumstances of the individual competitions. It IMPLIES that perhaps these circumstances lead to inconsistencies.
the article says nothing about the consistency of individual judges (though that has been called into question in other works). furthermore, the experiment you refer to would be nearly impossible to set up and would shed light on a different problem, whether under the same judges, circumstances, etc. that wines get different results. this is NOT what the report claims.
this work was peer-reviewed by two referees (likely professors themselves) and the editor. i'm sure they would have made sure the science was right before publishing a piece that would be so heavily talked about.
as for your trampoline analogy, you are right in that conditions are always different. but in this case, what the article claims is that the height that you jump will vary based on the conditions. those conditions are so variable that any strength difference between you and bob could never be assessed fairly.
basically, the noise outweighs the signal. That's why the binomial distribution is used, since randomness is noise.
Great point about the binomial distro., Tom.
Having said that, a larger sample size would lend more credence to the conclusions, precisely because there are so many uncontrollable factors involved. In other words, just because the distribution of medals appears random in their analysis of that sample does not mean that the likelihood of a gold medal being awarded is random, any more than a statistical analysis of the grass in my backyard could be used to draw conclusions about grass worldwide (there's a quick sketch of this point below).
The consistency of the judges isn't even a factor unless they were the same judges – and it assumes (incorrectly) that all judges are of equal ability and all 13 competitions are of equal quality in terms of how they are executed, which *can't* be correct.
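Here's the sketch I mentioned, with every number invented purely for illustration: give the wines real quality differences, bury them under heavy judging noise, and across just 13 competitions the gold counts still come out looking roughly like the chance-alone binomial curve. The shape of that curve, on its own, can't tell you whether the process is pure chance or a real signal swamped by noise and by everything that went uncontrolled.

```python
# Sketch (all numbers invented): wines DO differ in quality, but judging noise dominates.
# Across 13 competitions the gold-medal counts still look roughly binomial, so the
# shape of that distribution alone doesn't settle the "chance alone" question.
import math
import numpy as np

rng = np.random.default_rng(7)
n_wines, n_comps, gold_frac = 4_000, 13, 0.10  # all made-up values

quality = rng.normal(0.0, 1.0, n_wines)   # real, persistent quality differences
golds = np.zeros(n_wines, dtype=int)

for _ in range(n_comps):
    # Each competition: panel score = true quality + a lot of judging noise
    scores = quality + rng.normal(0.0, 4.0, n_wines)
    cutoff = np.quantile(scores, 1.0 - gold_frac)  # top 10% of entries take gold
    golds += (scores >= cutoff).astype(int)

counts = np.bincount(golds, minlength=n_comps + 1)
for k in range(5):
    expected = n_wines * math.comb(n_comps, k) * gold_frac**k * (1 - gold_frac) ** (n_comps - k)
    print(f"{k} golds: simulated {counts[k]:5d}, chance-alone expectation {expected:7.1f}")
```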
Just read Joe's point, and I intended to post essentially the exact point that Tom states articulately above. Joe, you've got to be able to concede a point here: Your conclusion about randomness actually supports the "report." Tom is right. The very fact that the judges and conditions vary so wildly only increases the chance component. I have no idea how you would seek to control those factors, because it will never happen, and I also can't figure out why you can't see how your points only support the report that you're maligning.
Hey, was wondering where you've been! :-)
The key difference for me is that the report doesn't logically make that point. I might be supporting a conclusion that you and Tom made from that report, but not one that was made in the report itself.
The hang up for me is that some of what is being referred to as the "noise" being filtered out of the data is not noise – it's essential context for adequately analyzing the results.
If it's causing variation to that degree, then it is noise from a statistical standpoint. None of it is relevant to the point of the paper: From a relatively normal external perspective, once a wine rises to the level of "decent", awards are essentially random.
i like this guy ^^
:-)
That be it… {8^D
Implicit in Hodgson's critique is the premise that major wine competitions should produce the same results in a high percentage of cases. But because the variables are truly different, as you note, the awarding of gold medals is left to the vagaries ("chance") of the make-up of a small panel. Which is why wineries spend the big bucks to enter a number of competitions, hoping that the roulette mode of judging results in one or two golds. The paper puts the lie to the position held by Arthur and Clark Smith and the American Appellation crowd that those serving as judges across different competitions can identify some relatively objective standard for assessing how close a particular wine gets to some platonic ideal of a wine from a particular place. Hell, judges can't even be consistent in their own evaluations, giving different marks to the same wine entered two or four times in the same flight!!
I'm with you on this, Tom. The assumption that the medal distribution should be similar just because the same wine was entered into different competitions is not valid, unless those competitions had the same wines competing against one another.
It's refreshing to see (as a scientist in my day job) just how righteously you rip apart this statistically scientifical heap of crap. Thanks for reminding me why I ignore this group, Joe!
Just trying to help the little guy – 'cause let's face it, I'm a little guy! :)
I like your trampoline reference. LOL.
I should note that I've never actually done that… :)
Do people really expect wine judging to be consistent? Given bottle, serving temp and palate variations I would imagine you would always have sizeable variations in subjective quality assessments. That's the beauty of these competitions, you just keep submitting samples to them until you win a gold medal. Then your tasting room staff can pour it and say "This one also won a gold"…trouble is that they've heard that line at the last five tasting rooms they've been to.
I think what you're hitting on here is that wine competitions are too plentiful and lack standards, so their awards are diluted. For a consumer, the awarding of a medal to a wine is growing increasingly meaningless. And I'd agree with you!
Hey Joe!
You got quoted/cited by Jerry Hirsch in the LA Times!
http://www.latimes.com/business/la-fi-wine4-2009s…
Good for you!
Thanks, bro. I'm glad that the article got across some of the main points I had… I was kind of worried that he was looking for soundbites and that my points weren't quite being heard, 'cause that guy is a FAST talker on the phone! But it looks like he totally got it.
Just to set the record straight – the article states that I'm a Certified Wine Educator. I'm not – that's a different certification from the Certified Specialist of Wine (though from the same organization).
CWE, CSW, what's the difference? Maybe we should talk about blogger accreditation again…..
Nooooooooooooooooooooooooooooooooo
From my side, I don't care about the stats behind it (though my degree in research psych would indicate differently). The inconsistency is what's important, no matter what the underlying cause is. Competitions (and keep in mind I don't enter them) are incredibly inconsistent based on any number of factors, but the biggest one is the judges. My issue is, what are we saying to the consumer?
Exactly. This is what people (including 1winedude) aren't getting. The *reason* for the inconsistency is irrelevant at the moment. The fact that it's so inconsistent in the first place is the issue. It makes a mockery of awards as a judge of quality. They're a waste of time.
Once people realize that and decide it's a problem, only then does the reason become important.
I wouldn't say that I don't get it at all… :-) I would say that I have big issues with how the conclusions in the report are being reached.
I would also say that highlighting the fact that these gold medals are kind of useless and spurring the dialog about that is a GREAT thing!
Specious and heady…two of my favourite words in the English language. Great post…as usual.
Cheers!
I much prefer the single reviewer. You can get to know his or her palate, learn whether or not you agree with them, and then decide to follow them. That's what makes bloggers powerful. The sheer number of them allows you to tune into someone you like. When I see good scores from guys like Meadows, Heimoff or Tanzer, I know I'm going to like the wine; I feel we have similar perspectives. While other reviewers can give a wine a 95 and it still means nothing to me. It's like when I used to watch Siskel & Ebert. If Gene Siskel liked a movie, I had high hopes, while if Roger Ebert did, it meant nothing to me.
Even in large publications inconsistencies can occur. Anyone ever notice how Washington & Oregon wines in the WS are usually 3-4 points higher than CA wines? Are they really that much better, or is it the effect of the reviewer?
Great points – one of the biggest issues here (and this is where I'm totally in agreement with Tyler and Alder) is that the message being sent to the consumer is meaningless.
Good post. LOL, that means I agree. But the statistics in the article under discussion are pretty damning. My suspicion is that wines can be reliably grouped into poor, so-so, OK, and good, and that any further distinctions are so judge-dependent as to make them suspect. I think my wife and I (we are natives) are fairly good at determining which Washington wines are good. WA wines may be a few points better than CA wines simply because at this point we have very few bulk producers. And $ for $ our wines are quite a bit cheaper for the same quality.
Do these competitions (and the philosophical nature of competitions themselves) not rest on the presumption that their goal is to identify the wines of the highest quality? How is it that a zinfandel from XYZ winery, submitted into a field of 100 other zins as part of the San Francisco International Wine Competition, can get a double gold (one of the top 10 wines) and, in a field of 80 zinfandels in the San Francisco Chronicle Wine Competition, not get a medal at all, when 75% of the wines in that category got medals? This is the kind of bizarre discordance that the paper is pointing out, and while yes, there are undoubtedly variations in the methodology, expertise and mood of the judges, as well as the field of entrants, they are not enough to explain the complete disconnect between the results.
Hi Alder – I think the answer to your question is "maybe."
Part of the issue is that there's no way for consumers to know if a competition is worth anything – who is judging, how it's being held, what wines are being pitted against one another (or even *if* they're in competition against each other or some other standard)…
Consumers and wineries both need to look at these competitions for what most of them are – fun. I love competition and sporting events, but no NFL team would herald a pre-season victory as being as important as the Superbowl…
When I read the report this past June, I knew it was a matter of time before it was used as fodder to criticize the wine competition process. I am surprised that it took so long. The visceral responses nearing "death to the judges" on other blogs are a bit short-sighted for my taste. There are simply too many variables when judging wine at these events to make an argument that there will ever be perfect consistency. Those who think they could do a better job should prove it before damning the process.
What I have seen in most competitions is that rarely does a "bad" wine receive a medal. We may disagree on a wine being awarded a gold or double gold medal but this is a subjective argument. To the panel of judges at that time and place, the wine deserved the award. Wineries have a choice as to whether they submit their wines and which competitions they submit them to. This is as it should be in a free country. If those who do sell more wine when their wines win medals, that's great.
Wine judges that I know take their responsibility very seriously. They want to find the best wines and award them accordingly. Wines are evaluated for faults, balance and varietal characteristics, but again, what makes a great zinfandel to one judge may not be the same thing to another, regardless of both having identical training and tasting skills. This is where the study falls short. It only looks at the statistics from the side of numbers. I commend the work Prof. Hodgson put into the study and think it can be used as a starting point to go deeper into ways to make competitions better, but that is all. It is neither damning nor validating of the process. It confirms that wine judges are humans with varying tastes, not necessarily varying tasting ability.
I am all in favor of working to make these competitions more valuable by doing more research. I just don't believe that these two studies are the end all to the discussion. Whether we admit it or not, what we do in blogs and journalism affects the wine industry and we need to keep the snobbery out of the discussion and look for solutions to improve the process. It is too easy to shout traditions down just because we were not invited to judge.
Thanks for a different perspective on the subject.
Eloquently stated!
Especially considering it was stated on a Friday night before a holiday weekend. Go drink some wine already! :-)
Cheers!
I would like to mention (again) that there is a form of "competition" where admittedly a small number of wines consistently receive "gold medals" (90+) from virtually every "judge". I have no involvement with CellarTracker, but doesn't this site, through its member notes, demonstrate that consistency is possible across time and place with a relatively large "panel"? Of course these wines would never be submitted to a competition: their reputation is secure, e.g., the cults and wines like Ridge Zins, Copain and Pisoni Pinots, Alban Syrah Estate – http://bit.ly/lcDwL. If they were entered, they would most likely pick up golds in virtually every fair or newspaper competition listed by the Wine Institute. Other less vaunted wines, for instance the current release of Babcock Chardonnay, have a smaller number of reviews, but score above 90. The same is true among critics. There is concurrence on certain wines.
I think the point is not that concordance can't be achieved across different competitions, but that grade inflation has resulted in far too many gold medals being given out to meet marketing goals.
Excellent point and I think it underscores the notion that wine competition awards have largely become devalued and therefore pretty much useless for the consumer.
i have to disagree about cellartracker, simply because when you search for a wine on CT you SEE THE SCORES that others have been giving it! this leads to a HUGE amount of bias, e.g.,"wow i thought this wine was an 80 but everybody else is in the 90s. maybe it wasn't that bad…"
Very true. On one hand, it helps calibrate the responses, on the other hand to be sure that you aren't influenced by others' scores you'd need to be diligent enough to enter your review *before* looking at others' reviews of the same wine. In the end, there's no way to tell who took which approach on CT…
We mustn't let the perfect be the enemy of the good. Yes, the system is flawed in this and other ways. But I think Joe is right that the members maintain their integrity. They are largely using the site as the name implies: to keep a database of their inventory of wine. So offsetting the desire to follow the majority opinion is the need for accuracy, so the service performs its primary function. Also, you can look at various wines randomly and see real divergence. It's only the superwines that garner consistently high scores–the 1 to 3% of wines in the database. Those wines in the middle of the bell curve, like most wines submitted to wine competitions, generate notes and scores in the mid 80s, as you would expect. Finally, the members are mostly wine geeks; they are bound to be collecting fine wines, a number of which deserve high marks.
I think one reason that generally great wines are not entered into competition is that they would frequently be outshined by so called lesser wines. I buy expensive WA wines from time to time, and much of the time I cannot honestly say that they are better than another wine which is much cheaper. But they are always very good.
It's also worth pointing out that it's common opinion (and my personal experience) that the CT members are *tough* reviewers. So when I see a higher score there, I'm more apt to think that the wine in question is at least pretty decent…
have you READ cellartracker reviews?! everybody who buys a wine on wine.woot goes and gives it a 90+ and that's that.
Reference please?
i would give screaming eagle a 97 too if i paid $3,000 for the bottle. i see your point but i think it's off the mark to think that there are certain wines that everyone will like. that's why judges can agree on what's bad (faulty wines) but not what's good (which varies for every individual)
It's probably true that there is considerable variation among "experts", but there are certain wines that enjoy broader appeal (not "everyone"), particularly among consumers. That's what I find among the members of my wine tasting society. To mix metaphors, we will continue to look for the Holy Grail of Sweet Spot wines.
Remember too that Bob Hodgson found that most gold medal wines failed to win a medal in another competition. He didn't seem to analyze the more interesting question: were there wines that won in more of the Big 13 competitions?
As others have said, Joe's analysis, rather comically, ends up supporting Hodgson's point: that the results of wine competitions are essentially random.
Joe shouts that Hodgson doesn't take into account that there are a thousand variables that make each competition unique. Well, uh, yes. Which is exactly what makes the results, when looked at across the breadth of highly regarded wine competitions, meaningless (and which, again, is Hodgson's point).
Joe even offers an analogy that *perfectly* captures why wine competition results are so inconsistent. He writes:
The problem with this pseudo-scientific view is that it’s a bit like saying that I am always going to be stronger than my friend Bob, because in 13 attempts I jumped an average of fifteen feet into the air, while my friend Bob jumped only 4 feet. Therefore, we can conclude statistically that I am stronger than my buddy Bob. Oh, but we left out little tidbits that might influence our conclusion – like the fact that I jumped from a trampoline on the surface of the moon, while poor Bob jumped from a standstill on paved road in Iowa, while nursing a sprained left ankle.
Joe's point that comparing two tests done under wildly different circumstances (jumping on the moon while healthy vs. jumping on a paved road in Iowa with a broken ankle) tells you nothing useful is precisely in accord with Hodgson's paper. Indeed, it's those who defend wine competitions and suggest their results are meaningful who are doing the equivalent of concluding that Joe is stronger than Bob based on their jumping tests. In fact, to extend the analogy, wine competition defenders are conducting the test on several different planets, with different gravitational pulls, and still saying the results are meaningful!
Pretty hilarious stuff.
nice analysis!
I've also said that making a conclusion based on 13 competitions in one area of one country is like analyzing my backyard and making conclusions about how grass grows globally…
I don't know about that, Joe. I don't think you can say that without knowing what percentage of the total entries nationwide are represented in those 13 competitions.
I guess that's technically true, Chris, but with so many competitions / state fairs / etc. it's probably not an absurd assumption that 13 is a small percentage of wine competitions globally.
Joe,
I don't think the sample size is the issue; I think it comes back to the "cause system". There's no consistency in the methodology that produces the final outcome – in this case, the medal awarding: the qualifications of the judges, their expectations of a wine (do they prefer big oaky, buttery chards or lean unoaked ones), the wines before and after a given wine. Not to mention some competitions allow the raters to change their rating after all other ratings are submitted, which means ANY score can be influenced by the other judges.
“Stochastic independence is simply another way of saying that the events are not related. For example, rolling a 5 on a die tells you nothing about whether you’ll roll a 5 on your next roll; the two events are independent. In other words, a wine winning a medal in one competition doesn’t impact what it will or won’t win in another competition. Which is exactly what you’d expect from a different competition, with different judges, and competing against different wines.”
This is crazy. It only makes sense under the assumption that, by definition, a wine itself can have no role in how it fares in a judging. The outcome must be entirely a matter of variables other than the wine. Unfortunately, this is what Hodgson's empirical findings suggest, but there is no reason to suggest that logic alone demands this outcome.
Thank you – stated rather eloquently!
Exactly. Geetus made the point that your analysis and Hodgson's analysis are nearly identical and you called Hodgson's analysis crap. Then you tell Geetus that he stated that point eloquently. Dude. Rolling on Floor Laughing with Tears in eyes and pain in side!
Your analysis assumes that "contestants" in a wine judging contest are judged only against each other and not against any reliable standard of quality, which is actually the point of Hodgson's report: there is no reliable or standardized measurement of wine quality. Yeah, the math language is outside of most people's vocabulary, so I can understand why you didn't understand it and made such a ridiculous posting. Your point about the trampoline is laughably nearly identical to the Hodgson analysis, in that there is no particular standardization in the process of judging wines either. Dude, you brought tears to my eyes. Thanks for the best laugh I have had in ages. You can't essentially agree with an article and call it BS, not without making an educated person ROFL.
Sorry you feel that way.