As of last week, the results of the 2013 California State Fair Commercial Wine Competition have been fully revealed, and July 4th seemed an auspicious time to recap the (all-American) Best of Show winners from the comp. (itself a bit of an American institution, having been established in the 1800s), and to share my thoughts on my fave wine of the competition from the judge’s seat.
And now that I’ve completed my tour of the International wine judging circuit for 2013 (having lent my palate to the 2013 Argentina Wine Awards, the 2013 Wines of Portugal Challenge, the 2013 CA State Fair Commercial Wine Competition and the 2013 Critics Challenge), this also seems like a good opportunity to confirm or bust up several wine competition myths, since wine comps. in general are once again under attack in the media as “junk science” (can anyone, anywhere, name one single soul who has ever proffered wine competition judging as an actual scientific endeavor? Because I’d like to be first in line to kick that person in the gluteus max).
First, let’s tackle the wine comp. myths, because that will go a long way in explaining why some of the wines that won Best of Show in the newly-revamped CA State Fair comp. (now headed up by my friends and long-time wine writers Mike Dunne and Rick Kushman, both of whom have done yeoman’s work in bringing new levels of both fun and professionalism to the event) won what they did…
Warning… 1800+ word screed ahead… you have been warned!…
Wine Competition Myth Number 1: Wines compete as the best-of-the-best
Verdict: BUSTED
Wine competitions are competitions between the wines that get entered. Period. Anyone who tells you differently is trying to sell you something (hint: that something is probably wine, in a tasting room). Generally speaking, the more prestigious the competition (due to the judges involved, the history of the event, the quality of the wines typically entered, etc.), the more likely it is that better wines and top-tier producers will enter it. The best of the best don’t need to enter comps., because they can sell their wines without medals. For many a wine brand, the medals offer a means of differentiating their wares from those that don’t have medals hanging around their bottle necks, for whatever reason.
Wine Competition Myth Number 2: Bronze Medals in American wine comps. are basically meaningless.
Verdict: CONFIRMED
One of my biggest pet peeves with wine competitions in the Americas (both North and South) versus those in Europe is that Bronze medals are awarded to wines that are not flawed but otherwise aren’t showing much on the day they’re judged. Usually, this is because those wines suffer from an affliction that’s almost worse than being bad: they’re boring as hell. They’re the “C” grade wines in my (stupid) wine reviewing system. They’re wines that induce lassitude both from the effects of their alcohol content and their one-sided dullness. As a result, bronze medal winning wines in most American wine competitions can be safely ignored by the American public, but usually aren’t, because winning any medal fashioned after Olympic-style accolades is viewed as achieving something more than just showing up and not being bad. Thankfully, the Critics Challenge comp. doesn’t even bother with Bronze medals, I think for that very reason. I wish more American comps. would follow suit.
Wine Competition Myth Number 3: All wine competition judging is inherently bullsh*t.
Verdict: Part One – PARTIALLY-BUSTED
Every so often (which is to say, every twelve hours or so), wine rating and wine competition judging are attacked in the media as being the equivalent of bovine turd-iness. The latest attack comes via an article in The Guardian, in which David Derbyshire cites an experiment that retired professor turned Humboldt County wine proprietor Robert Hodgson conducted on judges from the CA State Fair wine comp.
The experiment, which dates back to 2005, is one that has been performed on me several times in blind tasting sensory evaluation panels, and in summary consists of serving the same wine blind a few times within the same tasting session and seeing how consistently the judges/evaluators score that same wine. According to the article, Hodgson found the results disturbing:
“Only about 10% of judges are consistent and those judges who were consistent one year were ordinary the next year. Chance has a great deal to do with the awards that wines win… They say I’m full of bullshit but that’s OK. I’m proud of what I do. It’s part of my academic background to find the truth.”
I respect what Robert Hodgson is doing here (even if his website proclaims that his winery is “recognized for producing medal winning wines in both national and international wine competitions for over 30 years” – I’m guessing these are the same comps. he’s now trying to debunk; they picked up a Silver this year, by the way). And I deeply respect the fact that whatever reverts to mean over time must, by mathematical definition, be random in its result. This is true for actively managed mutual funds (seriously – the data on that are unequivocal, and if you have an adviser who has recommended any such funds to you, you should fire that person immediately). It’s also true for athletic performance, but only a fool would assume that the reasons for random results would be identical in those two cases.
And so it also stands to reason, I think, that Hodgson’s wine comp. study results are random, but not for the same reasons that athletes’ winning records and mutual fund returns are random. I’d posit that wine comps. have random results because humans are involved, and neither they nor the wines they are judging are ever static, with a cherry-on-top reminder that, unlike index funds in the mutual fund business, no superior alternative to human judging at wine comps. yet exists.
In other words, Derbyshire’s assessment that “Over the years [Hodgson] has shown again and again that even trained, professional palates are terrible at judging wine” is crap. Hodgson’s work has only shown – so far – that a wine’s performance in medal awarding by expert judges is inconsistent.
Well… no f*cking duh, dude.
Verdict: Part Two – PARTIALLY-CONFIRMED?
I’ve no idea how inconsistent I was at the CA State Fair comp. (and it’s not clear to me from the article if I was one of the guinea pigs or not). However, for the past year or so I’ve been involved in a sort-of-secret sensory evaluation group that has met periodically in the Finger Lakes. I’ve been told personally by the organizers of that group (not connected with the comps. I’ve mentioned in any way) that the data I provide in my tastings for them are consistent enough statistically to be used as “good” data for their purposes, and I’ve been invited back to every session that they’ve had to date (I’d tell you more on this, but I can’t, as I’m under an NDA). Which suggests some consistency on my part, and which bothers the hell out of me.
It bothers me because fine wine is totally inconsistent. Fine wine should be changing in the bottle and in the glass. The wine I taste one minute should be different than the one I taste several minutes later, if the wine is any good. Wine changes, our tasting of it changes, and we’d probably need chaos theory levels of math to incorporate the vast number of variables influencing the outcomes of quality assessment. In fact, it could be argued that if I – or any judge – were really paying that much attention to the wines, then our results ought to be inconsistent.
The trouble with drawing meaningful conclusions here, even from good data, is that what gets labeled as random noise in most other studies is actually an essential, part-and-parcel cause-and-effect driver of the results when it comes to wine. Put another way, do you know what could change a wine from a gold medal winner in one competition to a loser in another, even among the same judges? Anything. The barometric pressure, whether or not I had an argument with somebody, needed to take a dump, had a great song stuck in my head, ate a good breakfast, saw too much of the color red on billboard ads on the way into the judging hall that day, or got a pour into a glass that got polished with the wrong towel… You get the idea.
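For the stats-curious, here’s a quick back-of-the-napkin sketch of that idea – entirely hypothetical, not Hodgson’s actual methodology or data, and with medal cutoffs and a noise level I made up purely for illustration – showing how an honest judge tasting a changeable wine can look “inconsistent” on paper:

# Hypothetical illustration (not Hodgson's study): an honest judge re-scores
# the same wine blind several times; even modest pour-to-pour noise can
# scatter the same wine across different medal bands.
import random

random.seed(42)

def medal(score):
    """Map a 100-point-style score to a medal band (cutoffs invented for illustration)."""
    if score >= 92:
        return "Gold"
    if score >= 87:
        return "Silver"
    if score >= 82:
        return "Bronze"
    return "No medal"

def judge_wine(true_quality, noise_sd=2.5):
    """One blind pour: the wine's 'true' quality plus everything else going on that day."""
    return true_quality + random.gauss(0, noise_sd)

true_quality = 90  # an objectively "Silver-ish" wine, if such a thing existed
pours = [judge_wine(true_quality) for _ in range(4)]  # same wine, poured blind 4 times
medals = [medal(score) for score in pours]

print([round(score, 1) for score in pours])
print(medals)
print("Consistent across pours?", len(set(medals)) == 1)

Run that a few times and watch the same nominally Silver-level wine bounce between Gold, Silver, and no medal at all – with nobody doing anything wrong.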
I’m not faulting Hodgson’s results, but I am faulting the Guardian’s conclusions. It’s not that all competition judges suck at what they do, it’s that their task is handicapped into an artificial situation from the start. And if a competition requires judges to spend far more time sampling wine than any normal human ought to spend doing, well, anything, then we need to start asking ourselves if it’s our assumptions that are off before we start throwing stones at hard-working humans because they fail to act like machines.
To be fair, both Derbyshire and Hodgson hint at this in the Guardian article, particularly when Hodgson is quoted as saying “I think there are individual expert tasters with exceptional abilities sitting alone who have a good sense, but when you sit 100 wines in front of them the task is beyond human ability.” No disagreement there, provided that we add “to remain consistent in the face of such myriad ways in which it can get entirely F-ed up, even at the most well-run competitions” to the end of that quote (it’s implied, right?).
Whether the inconsistent results are down to the people, the wines, the environment, or all three, the moral of the story is this: if your assumption is that a wine should win the same medal every time in any given competition, then you’re just as much a fool as the high-fee, active mutual fund buyer. But if you also think that you shouldn’t tout a gold medal result (if you’re fortunate enough to win one) to help you market your wine, then you’re also a fool. The system of quickly evaluating a wine isn’t natural, isn’t perfect, and isn’t simple, and so if our assumptions are wrong (e.g., humans have robot-like quality assessment ability, wine is static, etc.) then our conclusions based on the results are bound to be off, too.
So here’s to off conclusions, as we salute the Best of Show winners:
Best of Show Dessert
2012 Navarro Vineyards Gewurztraminer Cluster Select Late Harvest (Mendocino, $28)
Best of Show White
NV Korbel Blanc de Noirs Méthode Champenoise (California, $12)
Best Value
NV Barefoot Cellars Moscato (California, $6)
And, finally, my personal fave of the wines I tasted at the comp.:
Best of Show Red:
2010 Imagery Estate Winery Cabernet Franc (Sonoma Valley, $39)
Not my favorite style of Cab Franc, but I had a crap-ton of respect for this wine. It combines the lush, sweet, extracted fruit, silky tannins, and generous nature of the CA red style with the herbaceous, spicy, and vibrant nature of Cab Franc from the East Coast and Europe. Chinon or Virginia it’s not, and the marriage of stylistic components isn’t quite perfect, but the wine delivers big-time on the fronts of pleasure, complexity and intellectual curiosity. For many Left Coasters, this could be the Cab Franc gateway drug; for Right Coasters, it could be the CA red gateway drug; for Midwesterners… it will definitely go with steak, so you’re covered, too.
Cheers – and have a safe & happy Independence Day!
Brilliant, thanks for being the voice of reason and writing in such a delightful way! P.S. I award you the Best Wine Blog Post of the Month. ;)
Alana – thanks for that, on both counts. Happy 4th!
Short response: yup.
Slightly longer response: I too was a judge at the CA State Fair, my first ever experience as a "proper" judge. I can tell you my judging quality changed during the two days – it took me some time to get used to the format. I brought plenty of personal inconsistency to the process.
Also, I agree 100% about how the wine was presented. Wine does change in the glass over time. I've had plenty of wines that were nice at opening, 10 minutes later were dull and boring, 30 minutes later OMFG! Where we hit each glass in that wine's timeline has to make a huge difference. In this way a competition is a sort of high-stakes crapshoot.
(I suspect your consistency at the "secret" tasting has a lot to do with the wines being presented in some consistent manner in a consistent place. Just by the way. Cheers!)
Thanks, Jefe – great seeing you in (stay weird!) Sacramento. I guess another way of saying this is "well, it is *humans* that are involved, after all!"
Well done Joe. Nice to hear someone explain the nuance of the situation, rather than scream and yell to get attention.
While I haven't been a judge in a wine competition, I have been involved in several blind tasting scenarios, including a winemaking class where we would add components to wine (acid, sugar, vinegar) in varying levels, mix them up, and try to put them back in order. There is no question that this was an objective, and not subjective, exercise. The results were also pretty clear: some people had better palates than others, some people tasted certain flaws but not others, and palate fatigue started to set in towards the end of the class (around 30 glasses of wine).
As you said, there are a million factors that might affect your interaction with a certain wine. But there are also some very real, quantifiable, and scientifically measurable ways to assess someone's ability to taste wine. As for wine competitions, they are probably somewhere in the middle. Professional critics are probably more consistent, although still fallible. And you could always scientifically measure a wine for pH, VA, reduction, and a hundred other aspects. The only thing it won't tell you is if the wine tastes good.
Gabe – exactly, brother! I guess it boils down to wineries understanding what they're in for when submitting wines to these comps. I.e., understand that the results cannot be consistent, just as their wine cannot be if it has any complexity to it, and don't get too worked up either way if it wins a Gold or doesn't.
Well, I can tell you that most wineries get pretty excited when they win. And God bless them for that. While I think it is easy for a good wine to slip through the cracks in those competitions, it is still an incredible compliment for a panel of wine drinkers, regardless of their level of expertise, to deem your wine to be the best of the bunch.
gabe – And they should get excited. Just not too excited. And not expecting that they will win like that every time, as the data suggests strongly that might not happen. In other words, not winning a Gold is not the same as your wine sucking! :)
I still get excited when random strangers tell me they like our wines. I get excited when I see our wine on a restaurant wine list. I honestly feel like getting a good score or a gold medal is a more official version of that same experience. I've spent years of my life cooped up in a 60 degree cellar trying to make wines that taste interesting and delicious. If somebody likes the thing that I made, that makes me happy. If they didn't taste my wine in a controlled scientific environment, I couldn't possibly care less.
gabe – I understand (as closely as I can without making wine, anyway, but as someone who puts grit & sweat & soul into something and then puts it out for public "consumption"). And AMEN to that approach.
Of course, the flipside to the happiness is that some selling is required, and I recognize that critical ratings and medals can help sell.
not the first time we've stumbled upon the similarities between making wine and writing a wine blog. i'm sure we could extend this analogy to include wine blog awards.
We don't submit to wine competitions, but scores have helped our winery, so I have to appreciate what you said about that. I guess I'm lucky being an Assistant Winemaker…I don't have to worry about sales or scores. I just make wine, and if it tastes good, I'm happy.
Thanks for writing that great blog post, and sparking the most interesting discussion on a topic that has been all over the web lately. I give this entry 92 points! ;-)
Thanks, gabe. Does that mean I got Gold, or Silver?? ;)
Joe,
Goddamit, I just wrote a piece about this same non-issue that I was going to publish Monday on HoseMaster of Wine. I'm still going to publish it, mostly because I already wrote it and blogs devour content whether it sucks or not, but now it's going to seem stupider than usual. Crap.
We make many of the same points, but I'm concerned that wine judges come off a bit too much like wine critics defending the 100 Point Scale. It's, in some sense, an indefensible system. And, truthfully, most wine judges don't do it out of some kind of duty to wine or an idealistic sense of guiding consumers to fine wines, but because it's flattering to be asked, to be thought of by some yahoo as a "wine expert," and it's a blast to hang out for a few days with a bunch of fellow alcoholics in a hotel.
As you know, I also judged at the CA State Fair, though not on your panel. I'm sure our panel was "tested," we even discussed it and pointed to wines we thought were "reruns," but, in fact, as Emerson famously wrote, "A foolish consistency is the hobgoblin of little minds…" Who cares if a judge is "consistent?" Either a judge is qualified, or isn't. After that, an honest impression is all one can ask.
So don't bother to read my blog on Monday (not that anyone does). I've judged more wines in more competitions than I'd like to admit, and I've learned a lot from the whole process. That's why I do it. To taste wines with people more knowledgeable than I, and learn to be more open-minded about what qualifies as great wine. Nothing wrong with a Bronze Medal, by the way, it signifies that a wine is sound, and, as you know, there are hundreds entered in competitions that are not. What competitions should do, but never will (for obvious financial reasons), is list wines that were entered but didn't receive medals. That would give a Bronze Medal its legitimate value.
So thanks for scooping me, 1WineDoody, and GET OUT OF MY HEAD!
Ron – a pleasure hanging with you in Sacramento. I've two responses for you:
1) There's plenty of room for both of our interpretations in Poodleville, and you *know* I'll be reading on Monday, and
2) Basically, I agree with you (particularly about publishing non-medalling wines). My point, if there is one, is not to become an apologist, but to reiterate that tearing down wine judging in the way that many of these articles do is simply showing that they don't understand how it works, or how *any* subjective judging works. It's like saying that beauty pageants are crap because the same woman doesn't win Mrs. Universe every year. I think I should reiterate that I don't disagree that the results revert to mean and are therefore essentially random. I just think it's stupid *not* to expect an outcome like that from wine comps, when we societally don't seem to expect it from any other similar types of competition experience.
p.s. – It's kind of scary being in your head… why do you dream about dwarfs so often?!???
Joe,
You think it's scary in here, man, it's worse the deeper you go.
Oh, dwarfs–those are just my personality traits brought to life. Especially Grumpy, not so much Bashful.
And when it comes to competitions, let's not forget that many of them help support county fairs with the money raised (wine judges are paid a very small honorarium for their work), as well as contributing recognition and prestige to lesser-known wine regions. What was the Paris Tasting of 1976 if not a wine competition? No one seems to question those results from those judges–and they were all French, for crying out loud.
Ron – Questioning the `76 tasting would be like questioning the Miracle On Ice, which would mean we'd as Americans be giving up bragging rights over an event that we cannot help but keep bringing up to foreigners. It will never happen!
i couldn't imagine the hosemaster not weighing in on this subject.
"It bothers me because fine wine is totally inconsistent. Fine wine should be changing in the bottle and in the glass. The wine I taste one minute should be different than the one I taste several minutes later, if the wine is any good."
That, of course, is the Money Quote of this piece. Wine is a moving target, and it follows from that, at least for me, that the value of these competitions is questionable.
@chicagopinot – :) I would say it's less a matter of questionable and more a matter of "proceed with caution." I mean, some of these comps., like this one and also the Critics Challenge (which I'm highlighting next week here on 1WD), get talented, hardworking people who have proven that they know what they're doing when it comes to tasting wine. And the results are *still* inconsistent. So… it follows that we should question whether or not wine comps. will ever have a scientifically viable level of consistency when wines are judged for medals at a certain point in time against other wines (we already know it can happen when tasting for relative measures that are more objective, like perceived sweetness, acid levels, and to some extent quality level). Cheers!
On a similar note, Joe, have you seen the movie yet? The one where the four dudes are competing in the Miss America of Wine Pageant? I definitely respect and applaud their determination and hard work.
But once again if your quote above is correct, it forces us to ask some hard questions. Apparently, as a Serious Wine Professional, I am supposed to know how to break down one of Mother Nature's (God's?) creations thoroughly, using no more than four minutes and ten seconds. I am sorry, but I have a little too much respect for the winemaker's art and God's grace in giving us the Land to work on in the first place, to want to attempt that.
Sorry to go off topic, but these competitions you write about and this apparently very popular movie are depressing to me and I wonder if I'm the only one feeling this way.
@chicagopinot – I have seen SOMM, and reviewed it over at wine.answers.com (generally, I liked it: http://wine.answers.com/learn-about-wine/wine-mov… ).
I wouldn't get too disheartened. If you need to evaluate a wine quickly for a given exam, then fine, do it for the exam, kick its ass, and move on. No one is saying that has to be the way that we evaluate wine in "real" life outside of the largely un-life-like constructs of comps. and exams.
Great post again, Joe. I've been an outspoken critic of wine competitions for years, and I think you're right on the money with this (although I think I lean more positively towards Hodgson's conclusions than you do.)
I guess my real beef with competitions is not so much what they do for wineries or wines, but what their existence does for consumers. (And I guess any judging can be taken this way.) Consumers–with an extreme lack of knowledge about a terribly confusing subject matter–tend to forget to take these competitions with the huge grain of salt that should accompany them. They are not objective, they are not science. It's just what some influential (consistent or inconsistent) people thought of a given wine on a given day.
What gets under my skin is when customers take this at face value and look at points and scores as indicators of what really matters (which is, will I enjoy this wine?)
So, in light of all that, I'll make a formal call for you to advance yesterday's blog post even further by dropping your grading system (but maybe keep your cool badges.) What do you say?
Carl – thanks. "What gets under my skin is when customers take this at face value and look at points and scores as indicators of what really matters (which is, will I enjoy this wine?)" – cannot tell you how much I agree with this! As I've often said (and will reiterate in an upcoming post this month), no ratings (including mine) are worth a hill of beans to you if you don't already know what you like (all I can tell you with a grade is how the wine stacks up against the worst and the best that I've tasted – and fortunately at this point I've tasted enough of the worst and the best worldwide that my opinion on the matter could be argued as being educated). But if you don't like the style of an A- wine, then what good will that rating do for you? Not much.
As for the grades… I know, I know… periodically I poll on FB and Twitter about them (more like once every blue moon, but at least I do get around to it! :). And each time, the yeas slightly outweigh the nays, and so I have kept them. BUT… I'm leaning towards not using them in the features, and only using them in the Twitter/short-form reviews. Ease them out, so to speak. Happy to debate that one, but I need to balance between the weaning off of the crutches, and the realization of what people (say that they) want.
Good post Joe. I did a blog post about wine competitions a while back and came to similar conclusions. The conditions in which "experts" taste more than 20 wines in a day and expect their palate to still be reliable are not realistic. Plus, what I found in working for a medium to large sized winery and entering over 20 wine competitions each year was:
1. Wine competitions started out as non-profit organizations with the clear intention of providing unbiased opinions.
2. Over the years, costs have risen and they now lean towards getting as many wineries as possible to participate to cover their costs (pay their salaries).
3. Most wineries that enter "wine competitions" do so because of the lack of a 90+ point score from a reputable wine magazine or writer. Once they have this, why waste your time in a competition where anything less than a Gold would be considered a loss?
4. These days they keep adding new award values like Double Gold, Platinum, and Double Platinum, so that even a "regular" Gold seems kind of worthless.
Just my two cents. Happy Fourth.
@spiritandwine – Thanks. And I think a good part of it also has to do with who judges, in terms of who wants to enter. For example, at the Critics Challenge this year, we had some pretty well-known judges, and some of the wines entered didn't "need" medals when you factor in their track records with critics. What I wish is that someone would try "ranking" the different wine comps. based on history, perception, and reputation of the judges. That would help consumers better understand which comp.'s medals might "mean" more than another comp.'s medals. But I've been wishing for that for years :) – there's a lot of work involved in that for potentially little payoff!
You come across as more than a little defensive, which is unsurprising given the consistent, surprising results obtained from experiments. To say that good wine should taste different minute to minute, and that this explains judging inconsistency is a wussy cop-out.
Research shows that most people can't distinguish Lafite from Chateau Pissoir: http://www.guardian.co.uk/science/2011/apr/14/exp…
It also shows that expensive wine only tastes better than plonk when people know the price: http://ageconsearch.umn.edu/handle/37328
It also shows that people can't distinguish red from white unless they see the color: http://www.daysyn.com/Morrot.pdf
I can't begin to understand any of this shit. I know that I prefer a $55 Mollydooker Blue Eyed Boy to a $13 Rosemount Shiraz – would I still prefer it if I didn't know what I was drinking?
Aloha – I understand where you’re coming from on this. I’ve also previously written about most of the evidence that you cited in your comment. I sound defensive because I’m being defensive; somebody has to defend this stuff, because it’s under attack, and in large part by journalists who don’t fully understand the topic. I don’t really know what else to say on the matter, aside from pointing out that to reject criticism of anything with subjective elements is, logically, to reject all criticism of those things: wines, food, movies, books, anything. Wine is an easy target because it’s a moving one; I wish I could just say that wine was easy and immutable, but it’s not. It changes, it’s complex, it’s a total pain in the ass to judge. Assuming it’s otherwise is kind of like saying you can judge a banana at any level of ripeness and compare the results equally. Yes, you can do it, but you’d better not be surprised at inconsistent results if you do.
Joe et al.:
For Warren Buffett’s personal investment advice (disdaining stock pickers), see:
http://money.usnews.com/money/blogs/on-retirement…
On the subject of county fair wine judging competitions, see Caltech lecturer Leonard Mlodinow’s essay:
From The Wall Street Journal “Weekend” Section
(November 20, 2009, Page W6):
“A Hint of Hype, A Taste of Illusion;
They pour, sip and, with passion and snobbery, glorify or doom wines. But studies say the wine-rating system is badly flawed. How the experts fare against a coin toss.”
Link: http://online.wsj.com/article/SB10001424052748703…
(As for his science bona fides, he has co-authored two books with the esteemed Stephen Hawking.)
A salient quote from that essay:
“I [Mlodinow] did email Mr. Parker, and was amazed when he responded that he, too, did not find [Fieldbrook Winery winemaker, scientist and retired statistics professor] Mr. [Robert] Hodgson’s results [on California State Fair wine judgings and medal awards] surprising. ‘I generally stay within a three-point deviation,’ he [Parker] wrote. And though he didn’t agree to [Falcon Nest vintner] Mr. [Francesco] Grande’s challenge [to submit to a controlled blind tasting], he sent me the results of a blind tasting in which he did participate.
“The tasting was at Executive Wine Seminars in New York, and consisted of three flights of five wines each. The participants knew they were 2005 Bordeaux wines that Mr. Parker had previously rated for an issue of The Wine Advocate. Though they didn’t know which wine was which, they were provided with a list of the 15 wines, with Mr. Parker’s prior ratings, according to Executive Wine Seminars’ managing partner Howard Kaplan. The wines were chosen, Mr. Kaplan says, because they were 15 of Mr. Parker’s highest-rated from that vintage.
“Mr. Parker pointed out that, except in three cases, his second rating for each wine fell ‘within a 2-3 point deviation’ of his first. That’s less variation than Mr. Hodgson found. One possible reason: Mr. Parker’s first rating of all the wines fell between 95 and 100 — not a large spread.”
Joe,
"Fine wine should be changing in the bottle and in the glass. The wine I taste one minute should be different than the one I taste several minutes later, if the wine is any good. . . ."
Fellow wine blogger W. Blake Gray did a masterful job of explaining, in part, what is happening in the glass.
~~ Bob
Excerpt from the Los Angeles Times “Food” Section
(May 6, 2009, Page E1ff):
“[Decanting;] Call It Aroma Therapy for Wine”
Link: http://www.latimes.com/features/food/la-fo-wineai…
By W. Blake Gray
Special to The Times
Air is one of the most talked about but most misunderstood elements in wine.
We say a wine needs to "breathe" as if it just needs a few minutes to freshen itself up, releasing its seductive perfume. In fact, most wines have been waiting years just to cast off a little gas.
In the end, the result is the same: To be appreciated, a wine needs to smell its best. To do that, it needs more air, faster, than you might think — but not for the reasons you might have heard.
People talk about a wine being "closed," . . .
But poetry aside, to wine researchers, "closed" means nothing. It's just another metaphor, like saying a wine is "cheeky."
"The word 'closed' does not have a physical meaning for sensory testing," says Andrew Waterhouse, chairman of the Department of Viticulture and Enology at UC Davis.
Further, Waterhouse says the implication that a "closed" wine is missing something is a misdiagnosis. In fact, rather than withholding scents, the wine is actually giving you something extra: sulfur compounds that are potent enough even in tiny amounts to cover up the fresh fruit aromas you want to smell.
Sulfur occurs naturally in both grapes and the yeasts that turn grapes into wine. Sulfur forms more than 100 compounds called mercaptans. These sulfuric compounds form differently and unpredictably in every bottle of wine.
When exposed to air, they eventually re-form into something less annoying, but they need a few minutes to do so. We call it "breathing," but it's really a seething sea of recombining elements.
"I think of wine as a tier of about 100 different compounds that are either taking on oxygen or passing it on to something else," says Kenneth Fugelsang, associate professor of enology at Cal State Fresno. "When that process is finished, the wine is ready to drink."
. . .
Bob, it's a complex beast to be sure. Theoretically, though, we should be able to consistently get a handle on the quality level of a complex wine, but that study suggests that isn't always the case.