Maybe this is a bit early, but the judging lineup for the 12th annual Critics Challenge International Wine Competition, held in Stay Classy San Diego, has been finalized, and once again I’ll have the honor and pleasure of being one of said critics putting the submitted wines through a GLASS CASE OF EMOTION!!!
Previously, the CC has been the origin of many an entertaining and surprising wine find for me, several of which have been reported on these virtual pages. Back in 2013, I was lucky enough to be paired up with the irrepressible Leslie Sbrocco, which is kind of like pouring combustible liquid onto a raging fire (let’s not forget that Leslie and I managed to work words such as “Godzilla” and “nipples” into our tasting notes during that incarnation of the CC). Last year, my partner in crime was the elegant and talented Deb Parker Wong, who went ga-ga with me over a wine that eventually made it onto the 2014 MIW list here.
So, the expectations I’ve got for CC 2015 are waaaaay high, and I am pretty stoked about being a part of the action.
For more details (or to submit wines), check out criticschallenge.com. For a run-down of how some of the various entrants have fared in previous versions of the Critics Challenge, you can check out the details on their dedicated Wine-Searcher.com page.
Cheers!
Dude,
Can you lift the veil of mystery on how point scores (maximum 100) were assigned to wines judged at last year’s Critics Challenge International Wine Competition?
Was each “component” of the wine (e.g., appearance, color, aroma, bouquet, acidity, body weight, alcohol level, flavors, overall perceived quality) assigned a maximum point score that summed up to 100?
By way of example, the “Wine of the Year” (2014) was:
Dutton-Goldfield 2012 Chardonnay
Rued Vineyard, Russian River Valley, $50
Awarded a “Critics Platinum Medal” and “100 points”:
2012 Chardonnay, Dutton Ranch, Rued Vineyard, Green Valley of Russian River Valley, $50.00, 100 Points
I can find no 100 point scoring scale referenced by the Competition.
~~ Bob
Bob, I don’t think it’s a mystery; you can probably just ask Robert for specifics. It’s not done by aggregation; it’s an overall assessment. I think the point of CC is gathering veteran critics and judges who can be trusted to give quality assessments. I give a score range to match the medal I think the wine deserves, so I’m more focused on ensuring that the wine does or doesn’t get an award; I don’t really care about the final number per se.
I did ask Robert, through an exchange of e-mail queries and opaque replies.
Wines are scored on a “100 point scale,” but no written scale (or guidelines on how it is used) was proffered.
Hence my takeaway that it is all whim and caprice.
(I will send you via private e-mail the thread.)
Bob, having read the email exchange, I’m not sure this is anything more than a tempest in a teapot. Why would the act of giving a score to a wine in order to assign it a medal in a competition have to be either totally structured or total whim? Aren’t there degrees in between those two extremes?
The UC Davis 20-point scale is based on assigning points to each technical and quality measure of a wine.
If the wine under analysis falls short on any measure, it is downgraded.
Robert Parker, on his 50-to-100-point scale, likewise embraces assigning points to each technical and quality measure of a wine.
Quoting a 1989 interview with Wine Times magazine (later to become Wine Enthusiast magazine):
“WINE TIMES: But how do you split the hairs between an 81 and an 83?
“PARKER: It’s a fairly methodical system. The wine gets up to 5 points on color, up to 15 on bouquet and aroma, and up to 20 points on flavor, harmony and length. And that gets you 40 points right there. And then the [balance of] 10 points are … simply awarded to wines that have the ability to improve in the bottle. This is sort of ARBITRARY and gets me into trouble.
“WINE TIMES: You mean when you are in the cellars of Burgundy, you look at a wine and say this is a 4 for color, a 14 for bouquet, and so on [?]
“PARKER: Yes, most of the times. What happens is that I’ve done so many wines by now that I know virtually right away that it’s, say, upper 80s, and you sort of start working backwards. . . .”
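To make the arithmetic in that exchange concrete, here is a minimal sketch in Python of the component breakdown Parker describes. The 50-point base and the per-component caps come from the interview above; the function itself, and the sample numbers that complete the Wine Times example, are my own hypothetical illustrations.

def parker_score(color, bouquet_aroma, flavor_harmony_length, aging_potential):
    """Sum the stated components of Parker's 50-100 point scale (illustrative only)."""
    assert 0 <= color <= 5                   # "up to 5 points on color"
    assert 0 <= bouquet_aroma <= 15          # "up to 15 on bouquet and aroma"
    assert 0 <= flavor_harmony_length <= 20  # "up to 20 points on flavor, harmony and length"
    assert 0 <= aging_potential <= 10        # the "sort of ARBITRARY" aging points
    return 50 + color + bouquet_aroma + flavor_harmony_length + aging_potential

# Wine Times supplies "a 4 for color, a 14 for bouquet"; the last two
# numbers below are invented to complete the illustration.
print(parker_score(color=4, bouquet_aroma=14,
                   flavor_harmony_length=17, aging_potential=6))  # prints 91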
But Parker also got himself into trouble by stating that some varietal wines and grape varieties don’t merit a score above 90 points because they don’t improve with time in the bottle.
And he got himself into trouble with this 2002 comment in The Wine Advocate:
“. . . Readers often wonder what a 100-point score means, and the best answer is that it is PURE EMOTION that makes me give a wine 100 instead of 96, 97, 98 or 99.”
(Parker acknowledges his whim and caprice.)
So what makes a wine merit such an exactingly assigned “98 point” score — as was awarded to the 2010 Rocca “Grigsby Vineyard” Cabernet Sauvignon at Robert’s sixth annual “Sommelier Challenge” — and not “100 points”? “99 points”? “97 points”?
Where did the wine fall short of “perfection” by two points?
A scale with upwards of 100 discrete scores intrinsically denotes precision: that there is a substantive difference between a wine awarded “98 points” and one awarded “100 points.”
If not, then the scoring scale is capricious and exaggerates its value in discriminating between wines.
A sentiment found in Caltech professor Leonard Mlodinow’s guest essay titled “A Hint of Hype, A Taste of Illusion” in The Wall Street Journal a few years ago:
Link: http://online.wsj.com/article/SB10001424052748703683804574533840282653628.html
Bob, I don’t disagree about the concerns, but as I stated I’m not at all focused on point assignments in the comp, I’m focused on whether or not a wine should be medalled, and if so what medal category I think it should be awarded. The points are, for me, in a range and simply a means to an end that I’m forced to provide during that gig. So I can’t really answer for the other judges.
When I served as a wine competition judge, I used a UC Davis-inspired scale to award “gold” and “silver” and “bronze” medals to the grape varieties I was assigned.
Each wine was methodically assessed on every discrete “component.”
No whim or caprice.
If the wine didn’t adhere to its varietal “norm” (“typicity”) or was out of balance, it was downgraded.
By way of example, the red wine scoring scale components:
appearance — up to 5 points;
color — up to 5 points;
off-odors — a deduction ranging from negative 10 points to zero points, depending on severity;
total acidity — up to 5 points;
sweetness — up to 5 points (e.g., a purportedly “dry” wine that has residual sugar gets zero points);
bitterness — up to 5 points;
aromas — up to 20 points;
bouquet — up to 10 points;
body weight — up to 5 points;
astringency (tannic acid) — up to 10 points;
flavors — up to 15 points; and
overall quality — up to 15 points.
(The scale changes for white wines, which in practice have no astringency.)
My upward revision of the UC Davis 20-point scale to a 100-point scale puts a greater emphasis on the hedonic aspects of a wine . . . aroma, bouquet, astringency, flavor . . . and on its overall quality judged against its peer group for that varietal.
Add up each component score and you have an aggregate score on the 100-point scale (a rough code sketch of this aggregation follows the medal bands below).
Wines scoring 90 points and above [an “A”-letter grade in school]: “gold” medal.
Wines scoring 80 points to 89 points [a “B” to “B+”-letter grade in school]: “silver” medal.
Wines scoring 70 points to 79 points [a “C” to “C+”-letter grade in school]: “bronze” medal.
Wines scoring below 70 points [a “D”-letter grade in school]: no medal.
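For what it’s worth, the whole procedure can be captured in a few lines of Python. This is a minimal sketch only: the component caps, the off-odor deduction, and the medal bands come straight from the lists above, while the names and structure of the code are my own.

# A sketch of the component aggregation described above; illustrative only.
RED_WINE_CAPS = {
    "appearance": 5, "color": 5, "total_acidity": 5, "sweetness": 5,
    "bitterness": 5, "aromas": 20, "bouquet": 10, "body_weight": 5,
    "astringency": 10, "flavors": 15, "overall_quality": 15,
}  # the caps sum to 100; off-odors are handled as a separate deduction

def aggregate_score(components, off_odor_deduction=0):
    """Sum the component scores into an aggregate on the 100-point scale."""
    assert -10 <= off_odor_deduction <= 0  # deduct depending on severity
    for name, points in components.items():
        assert 0 <= points <= RED_WINE_CAPS[name], f"{name} exceeds its cap"
    return sum(components.values()) + off_odor_deduction

def medal(score):
    """Map an aggregate score to a medal band."""
    if score >= 90:
        return "gold"      # an "A" letter grade
    if score >= 80:
        return "silver"    # a "B" to "B+" letter grade
    if score >= 70:
        return "bronze"    # a "C" to "C+" letter grade
    return "no medal"

# Hypothetical example: full marks on every component except aromas (15 of 20).
scores = dict(RED_WINE_CAPS, aromas=15)
total = aggregate_score(scores)
print(total, medal(total))  # prints: 95 gold

No whim or caprice: change a component score and the aggregate, and possibly the medal, changes with it.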
Bob, for what it’s worth, I basically use the same scale I employ for wine reviews, with B-level wines being silver and A-level wines being gold. I don’t want to speak for Robert or Mary, but part of the idea behind CC is that the judges are already well versed in reviewing wines. Interestingly, despite not using a gnat’s-ass level of detail in composing a score/rating/medal award, the number of times that my partners and I disagreed on our wines during my tenure at CC was quite small. So, I guess what you’re saying is that the critics, writers, MWs, etc. at CC use only whim and caprice to evaluate wine in a critical context. I don’t know whether to laugh at that sentiment or feel insulted, since it puts years of effort and study on my part and on the parts of the other CC judges into an extremely narrow categorization, without respect to the reputations of their individual careers and work.
Quoting an excerpt from Parker’s 1989 interview with Wine Times magazine:
“WINE TIMES: You mean when you are in the cellars of Burgundy, you look at a wine and say this is a 4 for color, a 14 for bouquet, and so on [?]
“PARKER: Yes, most of the times. What happens is that I’ve done so many wines by now that I know virtually right away that it’s, say, upper 80s, and you sort of start working backwards. . . .”
Based on his years of experience, he “knows virtually right away” what a wine’s score range is.
No different from your CC colleagues assessing a wine and placing it into a medal category.
And that’s fine.
Where I part company is awarding a medal AND assigning a numerical score with such seeming single-integer precision as “This is 100 points and this is 99 points and this is 98 points . . .”
That’s whim and caprice. A feat that cannot be replicated. (And that was the criticism leveled by that Caltech professor in his Wall Street Journal essay.)
It’s the same problem I have with other wine judging competitions, as well as with the Wine Spectator.
Quoting from the Wine Spectator’s March 15, 1994 issue (“Letters” section, page 90):
“Grading Procedure”
In Wine Spectator, wines are always rated on a scale of 100. I assume you assign values to certain properties [read: “components”] of the wines (aftertaste, tannins for reds, acidity for whites, etc.), and combined they form a total score of 100. An article in Wine Spectator describing your tasting and scoring procedure would be helpful to all of us.
(Signed)
Thierry Marc Carriou
Morgantown, N.Y.
Editor’s note: In brief, our editors do not assign specific values to certain properties of a wine when we score it. We grade it for overall quality as a professor grades an essay test. We look, smell and taste for many different attributes and flaws, then we assign a score based on how much we like the wine overall.
So I guess my long reply to your comment . . .
“I don’t know whether to laugh at that sentiment, or feel insulted”
. . . is: laugh!
You assign medals and schoolhouse letter grades “A” through whatever, but seemingly not numerical scores.
Case in point:
https://www.1winedude.com/wine-reviews-weekly-mini-round-up-for-march-2-2015/
Bob, yeah, I don’t do the numbers. Having said that, I wouldn’t draw such a hard line on the numerical scores. In the context of the competition, they are a means to an end (a medal).
I have no problem awarding medals — I’ve done it myself.
But to declare at the end of the competition that one wine is assigned a 100-point score, a second wine a 99-point score, a third wine a 98-point score, and so on conveys a level of precision that is simply unsupportable.
As Robert Parker has stated in The Wine Advocate on this subject:
“The 1990 Le Pin [red Bordeaux, rated 98 points] is a point or two superior to the 1989 [Le Pin, rated 96 points], but at this level of quality comparisons are indeed tedious. Both are exceptional vintages, and the scores could easily be reversed at other tastings.”
[Source: The Wine Advocate, issue 109, dated 6-27-97]
Good weekend to you.