Depending on who you ask, Wine Trials author Robin Goldstein is either the wine world's Satan or the wine consumer's Savior.
Whether you feel that Goldstein’s powers are being used for good or evil, you can’t say that he harbors a fear of shaking things up. Goldstein became a polarizing figure in the wine world in 2008, when he ruffled the feathers of Wine Spectator by creating a fictitious restaurant whose wine list included some of their lowest-scoring Italian wines in the past two decades, and subsequently won their restaurant Award of Excellence. The aftermath caused one of the most heated debates of the year in the wine world.
Goldstein also coauthored The Wine Trials, the first edition of which is the bestselling wine guide (for inexpensive wines, anyway) in the world. The premise of The Wine Trials was simple: compare everyday wines to more expensive equivalents in blind tastings, and see which ones the average person preferred. As it turns out, most wine consumers – to a statistically significant degree – enjoy the less expensive options; more feathers ruffled!
Goldstein has a new website, BlindTaste.com, and the 2010 edition of the Wine Trials has recently been released. I tore through my review copy of The Wine Trials, and I found the first 50 pages (which describe the approach and science behind the book, and hint at its future implications for the wine industry) to be some of the most profound reading on wine appreciation that I have ever come across. The Wine Trials doesn't just poke at wine's sacred cows – it skewers them, grills them, and serves them up with an inexpensive Spanish red (Lan Rioja Crianza in this case, which took the Wine of the Year honors in the 2010 Wine Trials). A similar take on beer, The Beer Trials, is set to be released this spring.
Robin kindly agreed to answer a few questions for our readers. I'll warn you: be prepared for a quick and opinionated mind – and you might want to pad the walls of your wine world, because that world is about to get turned squarely on its ear…
1) Summarize for our readers what you think The Wine Trials is all about. What do you want readers of The Wine Trials to come away with?
This is a book that encourages you to learn your own palate better through blind tasting, to take wine magazines’ 100-point ratings with a (large) grain of salt, and not to assume that you’re going to like an expensive wine more than a cheap wine if you cover up the labels.
Probably the most immediately useful part of the book is the guide to 150 of the best wines under $15 that are widely available around America, with a photo of each bottle and simple, unpretentious wine descriptions. So I hope the book will quickly pay for itself by narrowing your inexpensive wine selections in the wine store or supermarket. But I hope that readers will also take the time to consider the arguments that I set forth in the first half of the book, which discusses the theory behind blind tasting and the scientific research that’s increasingly demonstrating the staggering power of expectation and bias on our most fundamental taste experiences.
2) What led you to create The Wine Trials? You seem to be passionate about blind tasting as a means for consumers to find their own, individual palates and wine preferences…
First of all, it was a book I wanted to use myself. I had grown tired of the increasing opacity of the way inexpensive wines were being branded – who knows what to make of a shelf full of wines branded with labels full of animals or cartoons of scantily clad women? And I thought wine drinkers needed a guide like this. I also thought the under-$15 market – the wines most people really drink – was being underserved by wine magazines, which tend to be overly focused on expensive wines.
As for blind tasting, that was a passion of mine for years before The Wine Trials existed. In fact, it goes back to my college days, when I studied philosophy and neuroscience and wrote a senior thesis related to the perception of sensory experience. My college roommates and I even used to blind taste at our bar, trying to figure out whether people preferred Grey Goose or Smirnoff…but more recently, I've become interested in the impact that the marketing of conspicuous consumption has on the economy.
3) In The Wine Trials 2010, you make an extended reference to your experiment with the Wine Spectator Restaurant Excellence Award, which caused quite a stir, especially on-line. What did you learn from that experience? Did the results and feedback from that experience change your views or perspective on wine ratings and wine magazines? Did it impact how you approached the 2010 Wine Trials?
I think the experiment demonstrates that the Wine Spectator Awards of Excellence are a hoax on the readers: the awards are represented as the result of a rigorous judgment by the magazine's wine experts, yet they're really just a form of advertising. My experiment showed that you can get the award even if your restaurant doesn't exist and your "reserve wine list" is full of wines deemed undrinkable by Wine Spectator itself.
4) Who do you think would win in a boxing match: you, or James Suckling?
Although I did once dominate King Hippo in Punch-Out, I’d bet on Suckling, because my upper body strength these days is quite poor. On the other hand, maybe Suckling would take it easy on me because he wouldn’t want the blood spewing from my face to sully his Ferragamo trunks…
5) Do you think that receiving a free review copy of The Wine Trials is enough compensation for the grief I endured in the Wine Spectator forums trying to get their editors to comment on your restaurant awards experiment? I mean, I think you owe me at least one beer as well for that…
How about a copy of The Beer Trials? It’s coming out this spring, co-authored with Seamus Campbell, and I’m very excited about it. The book will review 250 of the world’s most popular beers. And it travels better than a beer.
[Editor’s note: It’s a deal.]
Tom Matthews responded to the initial exposé on the WS website with a personal attack, accusing me of "malicious duplicity." I think, however, that WS readers would have been more interested in hearing an apology for the magazine's misrepresentation of an advertising scheme as an awards program, an explanation of how the awards process allowed this to happen, or at least a promise to change the standards or the process. WS's refusal to do so, I think, has further devalued the award in the eyes of the public.
6) Some of the conclusions that could be drawn from the tasting science and experiments cited in The Wine Trials 2010 have potentially profound implications for the wine review world as it's currently structured. Notably, you state that wine magazines' ratings can inflate wine prices, while wine magazine reviewers are more apt to give higher ratings to wines with higher prices (and that in some cases those price categories can be inferred by reviewers even though the tastings are blind).
It's a short leap from there to conclude that wine magazine reviews are potentially part of a self-feeding cycle driving wine prices higher and higher, while the wines they are reviewing have less and less appeal to the average wine consumer. What do you think wine mags need to do to address the disparity between them and the average consumer – to break the cycle?
Three things:
One, stop accepting advertisements from wine producers whose wines are being rated. This is an unacceptable conflict of interest, and recent research (which I discussed on Blind Taste, my blog — http://blindtaste.com/2009/12/10/new-study-suggests-that-wine-spectator-advertisers-get-higher-ratings/ ) has shown a correlation between advertisements and ratings. I write in the blog: “We should be skeptical of criticism whose publication is financially supported by the producers of the products being criticized. Wine critics should not accept advertisements from wineries. Period.” You’d think this would be an obvious point, but apparently it’s not.
Two, start tasting blind – and I don’t mean tasting blind in the way that WS and some other mags claim to do, which means knowing the appellation and vintage (and therefore the price range), just not the producer. I mean tasting blind without any knowledge of the price of the wine or reputation of the region or grape. The critics at Robert Parker’s Wine Advocate, one of the few wine publications that doesn’t accept ads, are guilty of often tasting completely non-blind, at the winery itself. However great these critics are, they aren’t immune from bias. They’re human beings, and the research that I cite in The Wine Trials indicates that all human beings are vulnerable to the placebo effect. In fact, wine critics, with all the information about wine floating through their brains, might even be more vulnerable than most.
Three, get rid of the 100-point rating scale. It implies a level of resolution that the human palate simply doesn't have. And worse still, the way the scale is almost universally used – even when it's based on blind tasting – it rewards a specific big, complex (and expensive) style and penalizes a wine for being simple, light, and refreshing. Something like a dry rosé from Provence, a Vinho Verde from Portugal, or a young table wine that makes great sangria almost never breaks 90 points. The clear message is that light, refreshing wines can be decent, but a wine must be big and heavy to be truly great. I don't agree with that message. Would a food critic be taken seriously if he used a 100-point scale in which every single restaurant in the high 90s was a steakhouse, and a fish restaurant couldn't score above 88? Of course not. Yet that's the exact equivalent of what the wine magazines are doing. In a particular situation: you're at the seaside in summer, let's say, and you're eating a lunch of grilled shellfish – and there is nothing else in the world that you would rather be drinking than a clean, dry rosé. What did that rosé do wrong such that it deserves to be in the bottom half of the magazine ratings?
7) What role (if any) do you see wine blogs playing now? Do they represent a chance to close the potentially widening gap between wine mags and the wine consumer?
Yes. I think the emergence of wine blogs is one of the best things that's happened to the wine world in the past decade. They are a much-needed force against the abuse of power by the mainstream wine media elite. When I revealed my Wine Spectator exposé, I got an incredible outpouring of support from wine bloggers who, like me, were tired of the way this sort of abuse proliferated and happy to see that somebody had exposed it. It was a great counterbalancing force against the censorship that happened on Wine Spectator's own website, which was worthy of the Chinese government. I posted a long, polite response to Matthews' personal attack on their site, responding to his points one by one, and the WS censors deleted it immediately. Bloggers don't tend to have that attitude; I've almost never seen a blogger delete a negative but thoughtful comment. Blogs are meant to encourage debate and disagreement. That's not to say that every wine blog is good, but the good ones are more interesting to read, I think, than almost any of the mainstream magazines. I've started my own blog, blindtaste.com (which now contains the entirety of the WS exposé), although I don't post as often as I'd like. I'm awed by how prolific some wine bloggers are; that's the toughest part of it, coming up with something interesting to say almost every day.
8) You failed to mention the influence of the music of Canadian rock icons RUSH in your acknowledgements? Was that conscious, or just an oversight?
No, it was conscious; I actually tried to avoid listening to the music of RUSH during the wine review process. This was for good reason: as I’ve argued in The Wine Trials, external factors can bias our taste experiences. Had I been listening to RUSH, I would have been in such a good mood the whole time that I would have liked all the wines, and thus lost my ability to be critical.
[Editor’s note: this sounds like a cop-out to me, but hey, he knows a lot more about the science than I do, so…]
9) In The Wine Trials, you state that it's a misconception that bargain-priced wine brands don't undergo vintage variation, citing the fact that more recent vintages from 2009 winners didn't make the cut. I'd argue that bargain wine brands still suffer from less vintage variation than most of their more expensive counterparts, since many of them are more expensive because they're sourced from smaller lots, where weather plays a proportionally larger role. Would you say that we're both right, or do The Wine Trials results support a different conclusion for those smaller-production, expensive wines?
I think it would be inappropriate to make a categorical statement comparing the variability of inexpensive wine vs. expensive wine (however you define those categories). First of all, some expensive wines are so manipulated with aggressive oak, micro-oxygenation, and over-concentration that they tend to be similar from one year to the next. More importantly, though, variation from one year's release to the next doesn't happen only in the vineyard. Inexpensive wine brands often vary the style of winemaking from one year to the next – one striking example in The Wine Trials 2010 was the less oaky style of many inexpensive California Chardonnays as compared to their releases from a year or two before. Compare this to classic higher-priced regions like Burgundy, where they're not making many stylistic changes in the vinification from one year to the next.
Keep in mind that many expensive wines come from AOC or DOC regions where there are very strict standards governing blending, vinification, aging, etc. This is a pressure against variability. You’d almost never see a particular producer’s white Burgundy go from oaky to not oaky from one vintage to the next, yet you see it with Fetzer. On the other hand, Burgundy is one of the regions where variations in the climatic conditions change the wine most from one vintage to the next. So what changes more from one year to the next? Fetzer Chardonnay or a Chassagne-Montrachet 1er Cru? The answer’s not clear. In short, I think you have to address the question one wine at a time. But I maintain the claim that it is a complete misconception that inexpensive wine doesn’t vary much from year to year.
10) What’s next for The Wine Trials? Any changes planned for The Wine Trials 2011?
With every new edition, aside from tasting all new wine releases, the other editors and I want to include an ever-broader selection of wines – but to do so without lowering our standards, which means tasting more and more wines each year. We’ve been lucky that the book has sold well enough that we’re able to update it annually, which I think is absolutely crucial.
And then, of course, The Beer Trials is on the way, which will take a different approach than The Wine Trials, but one that I hope will be equally useful to readers.
Cheers!
(images: blindtaste.com, amazon.com)
Joe – You write: "he ruffled the feathers of Wine Spectator by creating a fictitious restaurant whose wine list included some of their lowest-scoring Italian wines in the past two decades, and subsequently won their restaurant Award of Excellence. "
That's pretty inaccurate, don't you think? It's at the very least badly misleading. He ruffled the mag's feathers by submitting a wine list that had plenty of high-scoring wines – plenty enough to get their basic award – and then misled his readers by publishing a small sample of the list which included mainly the low-scoring wines. In other words, he wanted people to THINK his list was filled with point-score duds, but in reality it was a pretty normal list. Some high scores, a few low scores, and a straightforward award-winning kind of list.
I think you're drinking his kool-aid with the way you described it…
You might be right – I wasn't aware of that aspect of the WS restaurant awards piece that he did. Of course, Robin is welcome to respond here to help clear it up…
I went back and looked up Thomas Matthews' response, which included the following details (and this is from a thread in which you posted numerous times, Joe):
"On his blog, Goldstein posted a small selection of the wines on this list, along with their poor ratings from Wine Spectator. This was his effort to prove that the list – even if real – did not deserve an award.
However, this selection was not representative of the quality of the complete list that he submitted to our program. Goldstein posted reviews for 15 wines. But the submitted list contained a total of 256 wines. Only 15 wines scored below 80 points.
Fifty-three wines earned ratings of 90 points or higher (outstanding on Wine Spectator’s 100-point scale) and a total of 102 earned ratings of 80 points (good) or better. (139 wines were not rated.) Overall, the wines came from many of Italy’s top producers, in a clear, accurate presentation."
I admit that I haven't followed this closely in the months that followed. Perhaps what Matthews says here was a lie? But if not, did Goldstein ever explain his decision to only publish the partial list, when most of the list was a standard award-winning list?
(And by the way, let's not re-hash the whole merits of the awards program. I think it's a bit silly, probably overly ambitious, and I'd ditch the bottom-level award.)
Joe – Not to belabor this too much, but I've been going back to check out the stories as they broke. Alder Yarrow posted about the initial incident by slamming WS, but then he backtracked when he got the full story. He actually said that Goldstein is the one who looked bad. On that Vinography thread you then commented:
"Goldstein is too shadowy in his presentation of this 'study.'"
Obviously I like your blog, because I read it all the time and I talk to plenty of people who do, too. But I'm curious as to how you forgot that you thought Goldstein was "shadowy"? I'd have liked to see you ask him about it.
Excellent interview. Perfect balance between wit, insight and journalism. I score it an absolute 94 (you would have scored higher had you compared the revolutionary book with the brilliance of the Beatles).
IMO wine magazines have their place in the same respect that Consumer Reports has their place. The difference being, CR ratings are usually based on analytical fact and data that can actually be measured scientifically. I don't think I've ever seen them score something lower because they didn't like the color.
I love the fact that he validated wine bloggers. The $15 market is what new wine consumers are interested in. The trouble with the lower price range is that you get a greater variance in quality. It seems that once you cross a certain threshold, the trustworthiness of a bottle greatly increases. Our only hope is that WS and WA do NOT review these lower priced wines, because once they do, they'll no longer be lower priced wines. ;)
This is definitely a book that I will purchase. Thanks Joe. Thanks Robin!
Josh @nectarwine
Thanks, Josh!
The potential implications of the scientific research cited in the book are fascinating. What the book doesn't go into is why we consumers so often slavishly follow wine scores as *if* they were scientific, even when those giving the scores make no such claim. So a word of caution: the book seems to assume that the consumer view is the valid one, and that is correct – but ONLY on a consumer-by-consumer basis!
Hey man – I linked to that forum thread at the top of this post, but I'd forgotten about that detail. I did post numerous times, but that was 1) one post to see if WS editors would offer any detail on whether or not anything about the awards process would change, 2) about 100 posts fighting with WS forum members :-).
I did a bit of searching and Goldstein did address that portion of Matthews' response:
"Yes, this experiment was mischievous. Deception was required to expose what I saw as a wrong against the readers and the public. But the deception was hardly elaborate. Everything I did took only a couple of hours."
Full detailed response is at http://blindtaste.com/2008/08/31/the-truth-behind…
Would be interested to know whether or not you think Robin's response is adequate.
Cheers!
Joe – Thanks for the follow-up. To me, it was absolutely unforgivable for Goldstein not to include the full list in his initial report. That smacks of manipulation, and there is no justification whatsoever for withholding the full list. In other words, if he were confident that his list was truly garbage, he'd have published the full list. But the effective way to get a splash is to lead people to believe that WS stamped a wine list that had nothing but disasters.
Also, for what it's worth, I'd be stoked to see a Soldera on a wine list. Any Soldera. And that's one of his "bad wines." Soldera is a controversial name but is regarded as one of the best by many wine lovers.
He's very good at manipulating statistics. I don't care that half his list consisted of unrated wines. And I agree that the average diner does not know that an "Award of Excellence" is only the lowest level award, achievable with banal wine selections. But Goldstein should never have misled people about the full list. I've yet to hear any good explanation for his deception on that point.
One thing's for sure – seeing Soldera on a wine list would rock.
Hell yes.
Don't misunderstand my point. I think Goldstein had a point in poking at the awards program. My understanding is that it started out much smaller, and it had grown by large measures. It was so unwieldy that the base-level award should probably be scrapped or re-worked. I've eaten at plenty of establishments where the award seems pointless. And certainly restaurants use the base-level award to frame themselves as something special, which typically they're not.
But I don't see it as nefarious and I see Goldstein as a guy who is very good at manipulating stats. A full disclosure from the outset from him would have helped his credibility with me.
Or in my case, 150% more enjoyment! :)
It's all good man – keep it coming!
Like Alder, my views changed as well. And while I'm not Robin's biggest fan when it comes to the piece on the WS awards, I'm not an investigative journalist, and I wanted to keep the focus of the interview on the Wine Trials book, because there are elements in the first 50 pages of that book with (in my opinion) some profound implications for how we should be viewing criticism in general.
The WS awards piece is a very, very small element of The Wine Trials, and I'm happy for discussion about it here, but I don't want people thinking that TWT and the WS awards piece are synonymous – they're not.
Evan — there's a misunderstanding here. In the original blog entry (quoted below), I was perfectly transparent about the fact that the list I submitted consisted of mostly normal wines. The point you're missing is that those 15 terrible wines were not simply a random selection from the list. Rather, they constituted the complete "reserve list," listed as "Vini Rossi Riserva della Casa," which consisted almost *entirely* of wines judged to be terrible or undrinkable by WS, and which were priced on my list in the hundreds of euros. Quoting from my original 2008 blog entry:
"The main wine list that I submitted was a perfectly decent selection from around Italy that met the magazine’s basic criteria (about 250 wines, including whites, reds, and sparkling wines–some of which scored well in WS). However, Osteria L’Intrepido’s high-priced 'reserve wine list' was largely chosen from among some of the lowest-scoring Italian wines in Wine Spectator over the past few decades."
You can read the entirety of the post here:
http://blindtaste.com/2008/08/15/what-does-it-tak…
I hope that clears up the misunderstanding.
I'd like to note that in 2009, Wine Spectator reviewed more than 3,000 wines that retailed for $15 or less — and all in blind tastings by experienced wine critics.
I'd also like to thank Evan Dawson for clarifying some important details about the hoax on Wine Spectator's Restaurant Awards program.
Thomas Matthews
Executive editor
Wine Spectator
What percentage of total reviews does that 3,000 comprise? What percentage of the average issue of WS is devoted to wines at $15 or under – for example, cover stories, exposés, etc.? (I am not being accusatory; I am genuinely curious, as I am not what you would call an avid reader.)
Wine Spectator reviewed about 17,000 newly-released wines in 2009, so the "$15 and under" group represented about 18 percent of all reviews. In addition, three cover stories (Jan-Feb; April and Oct. 15 issues) were devoted to value wines.
And on our Web site, WineSpectator.com, we post a review of a wine under $15 every day, free for all users.
Thomas Matthews
Executive editor
Wine Spectator
As I've explained above in a response to the first thread, Evan's account was inaccurate; as you know, the 15 or so wines that I posted in my article were not a random selection of Osteria L'Intrepido's entire list, but rather the entirety of the high-priced "reserve list." It is generally understood in the industry that a "reserve wine list" is meant to showcase the very best and most expensive wines from a restaurant's cellar.
Every wine on my reserve list was priced above US$100, every one of them was rated by Wine Spectator, and the average score of the wines on that list, in your magazine's own judgment, was 70.6 points. Of the 35,498 Italian wines rated in WS's database, only 239 of them–less than 1%–scored 71 or below. That would put the average wine on Osteria L'Intrepido's "reserve list" safely in the bottom 1% of all Italian wines rated by Wine Spectator.
Is that your magazine's definition of "excellence"?
Robin Goldstein
Thanks, Tom!
A colleague of mine was excited about this book, and lent it to me. Will see where it takes me…
I think it's trendy to speak against the "old-boys-club", to "stick it to the man", and to promote $15 wines. In this economy, those are the wines that sell the most.
The author made this point: "In a particular situation: you’re at the seaside in summer, let’s say, and you’re eating a lunch of grilled shellfish – and there is nothing else in the world that you would rather be drinking than a clean, dry rosé. What did that rosé do wrong such that it deserves to be in the bottom half of the magazine ratings?"
I disagree with that point – of course, the rosé may be a perfect match, but let's not compare In-N-Out Burger with The French Laundry, shall we? I love both. But just because something is delicious and appropriate does not make it great. Simple things can be wonderful, but they do not move us. I think great things *move* us.
Also, I am a big believer in bias – for me personally, it's a good thing when it comes to art, beauty, taste. I just wrote about that in my post on the 1984 Chateau Margaux. http://www.chevsky.com/2010/01/1984-chateau-marga…
I appreciate blind tastings for the fun and learning factors, but when it comes to enjoying, I don't like them!
As much as I hate to admit it, someone like Gary Vee has a much more balanced, practical and honest perspective than I am hearing from this author. But perhaps I speak too soon – the book is on my shelf – let me go read!
Best regards,
Iron Chevsky.
Do you taste wines blind before you decide what bottles to buy? While I see that it removes some of the fake marketing influences in our judgments of wine, I have another question about the practice (for a consumer) and it has nothing to do with the practical difficulty of arranging blind tastings of wines before buying.
If the label, the image we have of where a wine comes from, the philosophy of the winemaker, and whether the winery is run 'sustainably' are all so-called outside influences that get removed in a blind tasting, is that really what you want to achieve?
Here's what I mean. When I drink a wine at home, I'm not drinking it blind then. I see the label. I know stuff about it. If it's from a part of the world I'd like to visit, I like to think about being there. This all adds to my pleasure of drinking the wine. Choosing wines based on blind tastings misses that on purpose and calls it a good thing. But these biases actually increase my enjoyment of the wine. So why try to eliminate them?
Malcolm Gladwell made a similar argument in his book Blink. (I think he was writing about the marketing and packaging of ice cream). The packaging, the image, the brand – these are all part of the enjoyment experience. Blind tasting may seem scientific, but trying to make something that's subjective appear objective is a bias too.
What do you think? Would you – if it were practically feasible – want to drink all your wines at home blind? Think about it.
Interesting points – obviously few people would opt to drink a wine blind with dinner. I suppose a formal review of a wine is another matter.
I would agree to a point. The big thing, of course, is not to fall into the New Coke trap, but if you are trying to evaluate a wine (or soft drink) for quality, not for how well it is going to sell, I don't see a problem with blind tasting – particularly the way Wine Spectator does it, scoring the wine blind and then adding some context by revealing the wine before submitting the completed note.
One comment on the advertiser question is that I think there are pretty serious liberties taken if he's using that study from last month, which basically found a 1-point ratings difference with the control group being ratings from The Wine Advocate. The guy who wrote the study said it himself at the end (and good for him for not going for the big headline): he has no idea what is causing this difference. He has a bunch of guesses, of course, but the biggest problem in the study is the lack of a control. Now I know The Wine Advocate is supposed to be the control, but because we are talking about subjective viewpoints of the same topic, having The Wine Advocate be the control group is inherently problematic. The correlation between the ratings of the two magazines is less than 0.5, which to me means you have less than a 50-50 shot that the two publications are going to agree on a wine regardless of ads.
The real test, of course, is: do advertisers get higher scores in Wine Spectator than non-advertisers? The answer depends on how you slice the data: US-only, a difference of 0.3 in favor of advertisers; the entire world, 0.25 in favor of advertisers; or his own Panel C construction (weighted by production volume), 0.21 in favor of non-advertisers. The bottom line, to me, is an entirely unmeaningful difference of less than half a point – once you consider that Wine Spectator doesn't deal in anything smaller than whole numbers, the ratings are almost identical. In fact, I would bet that the difference between being an advertiser and a non-advertiser is smaller than the sheer amount of error in the 100-point scale.
I don't like the 100 point scale and I think there are a lot of things Spectator could do better. But Goldstein's take on this strikes me as way too close to his "sting" operation with the Restaurant Program. As always, there are lies, damn lies, and statistics.
If memory serves me correctly, the study you're referring to actually concluded that the evidence best supported a conclusion that the advertising did **not** result in higher ratings.
Some detail on the methodology is here – http://www.wine-economics.org/workingpapers/AAWE_… – though it doesn't answer the questions that you've posed, I think.
One thing I should note is that Goldstein in TWT states that we should take the blind tasting results "with a grain of salt" for some of the same reasons that you're mentioning.
Phil,
Thanks for clarifying the results of this study about Wine Spectator ratings for advertisers' wines versus wines from non-advertisers. In fact, the author, Jonathan Reuter, concludes that there basically is no bias. Here is an excerpt from his blog post on the subject:
"As I clearly state in the abstract, "I find that advertisers earn just less than one point higher Wine Spectator ratings than non-advertisers when I use Wine Advocate ratings to adjust for differences in quality. However, I find only weak evidence that the selective retasting of advertisers’ wines contributes to the higher ratings. Moreover, conditional on published ratings, Wine Spectator is no more likely to bestow awards upon advertisers. I conclude that while advertising may influence ratings on the margin, Wine Spectator appears largely to insulate reviewers from the influence of advertisers." In other words, despite the statistical evidence of a difference in ratings, based on Wine Spectator's use of blind tastings, and the preponderance of my empirical evidence, I conclude that the level of pro-advertiser bias is small to none. "
Our loyalty is to our readers, not the wine industry, and our goal is to give every wine a fair and equal chance to show its best, in a methodology that prevents bias, conscious or not, from affecting the taster's judgment, so that we can deliver credible, reliable wine reviews to consumers.
Thomas Matthews
Executive editor
Wine Spectator
Thanks for the link; you're right about the lack of information. One easy scenario is pairing a $75 Barolo with a $20 California Chardonnay, and you get someone who doesn't like Barolo, or the Barolo is too young, or conversely you get someone who doesn't like heavily oaked Chardonnay. Or if wine #2 was a big Zinfandel and wine #3 was an expensive Burgundy, etc. It does sound like they didn't overload any individual tasters with too much tasting, so that's good. But since a rigorous statistical analysis that underpins the entire argument is based on these results, it's a bit discouraging to be told to take them with a grain of salt.
As a side note, I despise the use of OK/Fair as the lesser negative in the "good" scale. Find me one person who thinks "Okay" is as negative as "Good" is positive. Sorry about the OT; that is perhaps my biggest research pet peeve.
Good point – personally, I don't feel that 'okay' is negative. And now I will notice that on EVERY survey I ever take from this day on! :-)
Thanks, Tom – And in today's under $15 picks, you've got one of my fave value wines (the HOGUE) – http://www.winespectator.com/dailypicks/category/…
Reuter's article was not discussed in The Wine Trials — it came out after the book. However, given the data, I felt that Reuter's conclusion was too soft, as I argue in my blog entry:
"Even if selective retastings explain only half of the one-point bias, that’s still pretty damning; it means that if you advertise in Wine Spectator, you might well get the benefit of a selective retasting that gets you, on average, an additional half-point. Translation: advertising influences ratings. With respect to the other half-point, if there are indeed 'consistent differences in how the two publications rate quality, which leads to predictable differences in advertising,' then you should try leafing through a copy of Wine Spectator and seeing if you’d trust critics who favor the types of wines that tend to advertise in the magazine. I think the roster of advertisers speaks for itself."
http://blindtaste.com/2009/12/10/new-study-sugges…
I also don't agree that a one-point effect is minor. Keep in mind that the WS scale operates largely on a 20-point scale (80 to 100)–so one point represents something on the order of five percentile points, not one. The difference between an 89 and a 90 can have a huge impact in the marketplace.
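The arithmetic behind that percentile claim, as a trivial sketch (assuming, as stated above, that the 80-100 band is the effective scale):

```python
# Published scores effectively live between 80 and 100, so the scale
# in practice has about 20 usable points, not 100.
effective_range = 100 - 80            # points actually in play
one_point_share = 1 / effective_range
print(f"one point = {one_point_share:.0%} of the effective scale")  # 5%
```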
The problem is that the whole process leading to this "one point" difference is flawed, because the Wine Advocate is not a proper control group. A control group, as I'm sure you know, behaves the same as the test group except for the thing being tested. But Wine Spectator tastes blind, mostly in its office, while the Wine Advocate does not; and wine is an inherently subjective subject that produces disagreement no matter what the variables are (less than 0.5 correlation between the two publications' scores). About all you can say is that both magazines score wines on a 100-point scale and publish the results. Once you throw out the efficacy of the Wine Advocate as a control group, you're left with the differentials between advertisers and non-advertisers within Wine Spectator itself, and that difference is less than half a point.
Just to be clear where I am coming from, I don't like the 100 point scale at all, and I would argue that a comprehensive study of the error involved in using it would uncover more than any study on advertisers. I work for a magazine that is not a direct competitor of Wine Spectator, but in the same arena nonetheless. So I have nothing to gain and arguably more to lose by defending them. But I dislike unfair fights, and I dislike the selective use of facts even less.
I have not read The Wine Trials but am skeptical based on reviews and news articles I have read. I do have many questions about the methodology of the blind tastings and wine choices. I note that the Wine Trials website does not provide a description of that methodology. There is a link to a working paper of a study on blind tastings, but it is not complete and raises more questions in my mind. For example, that study only included 506 participants and 523 wines, which I do not consider to be a large study.
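For what it's worth, whether ~500 paired comparisons counts as "large" depends on the size of the preference you are trying to detect. A rough sketch (hypothetical numbers, standard library only, not the study's actual design) of what a sample that size can distinguish from a 50/50 coin flip:

```python
# Rough power check: with ~500 paired tastings, how lopsided does the
# preference share need to be before an exact binomial test rejects
# "tasters are just flipping coins" at p < 0.05?
import math

def binom_p_two_sided(k, n):
    """Exact two-sided binomial test p-value for k successes out of n,
    null hypothesis p0 = 0.5 (sums all outcomes as extreme as k)."""
    p_of = lambda i: math.comb(n, i) * 0.5**n
    observed = p_of(k)
    return sum(p_of(i) for i in range(n + 1) if p_of(i) <= observed + 1e-12)

n = 500
for share in (0.52, 0.55, 0.58):
    k = round(n * share)
    print(f"preference share {share:.0%}: p = {binom_p_two_sided(k, n):.4f}")
```

Under these assumptions, a 52% preference is indistinguishable from chance at this sample size, while a 55% or stronger preference is detectable, so ~500 tastings is adequate for moderate effects but not subtle ones.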
It also appears the book is best directed to the consumer with little or no wine knowledge. Those with some wine training/knowledge may see their recommended wines differently. Which would mean that most wine bloggers are probably not the intended audience of the book. It seems the book is directing consumers to more mass produced, commercial wines, rather than any small production, artisan wines.
By directing consumers to such commercial wines, are we doing them a disservice? Wouldn't it be better to expose them to less common wines, but which also are value wines?
Thanks, Richard. You're right about the target audience, I think. I'm not opposed to exposing consumers to commercial wines – there are some very good wines to be had in that bunch. I suppose the price range does exclude smaller producers for the most part. I guess the follow-on question is: is there any chance of seeing low-priced but less common / artisan wines included in the blind tastings?
I am reading the Wine Trials 2010 now, and am not impressed. My skepticism is increasing and I find a lack of transparency. And despite Robin's answer to Question #7, the book does indict wine bloggers for perpetuating a love of expensive wines.
Interesting – I didn't get the indictment of wine bloggers at all from the book; I certainly think that Robin has largely positive things to say regarding bloggers in the interview above (at least in terms of the quality of writing).
Cheers!
Any answers about the methodology? How was the blind tasting conducted?
Check this out:
http://passionatefoodie.blogspot.com/2010/01/wine…
Thomas Matthews
Thanks Thomas for providing the link. Tyce, an Associate editor, provided some partial answers to the methodology, but failed to answer some questions, and some of his answers only raised more questions.
A very enjoyable read and many good points made through the interview. Too many to go into detail but a couple that stick with me here:
a) The wine magazine industry encourages high-priced wines and elitism. I agree; it seems to almost be an old boys club between established brands, the mags and the advertising agencies.
b) Listening to Rush while drinking wine will probably add 15% more enjoyment for the average wine fan.
Unfortunately you will soon notice that it is rampant. It's pretty much the accepted standard for that scale.
Thanks – it is worth the read.
I don't want to imply here that I am in 100% agreement with everything that Robin states in the book (hopefully that isn't being inferred from the mere fact that I have interviewed him); the purpose today is to get the book in front of the readers and hear Robin's take on what drove him to put the Trials together.
The book should stir thought and discussion. I am sure it's meant to be provocative, and it is provocative – and we don't have to agree with it in order to discuss it.
I don't think TWT invalidates or obviates the type of tasting / reviews performed by the likes of WS, Wine Enthusiast, etc. I think the deeper implications are about how we consumers react to those things, and how those publications in turn react to the consumer trends.
Cheers!
Curious as to your take on the methodology used in the book–I know you're a bit of a stathead and so are aware of a)how easily statistics can be manipulated and b)how important methodology is in interpreting the result.
The main counter-argument against the kind of blind tasting Goldstein is promoting is that you are completely sacrificing what the wine is at the altar of impartiality (or supposed impartiality). What I mean is that maybe you think this is a wonderful wine, a nice meaty red with robust tannic structure that would pair nicely with beef. And then I tell you it's a Pinot Noir. Now, the wine still tastes the same, so maybe the argument is we need to stop worrying about whether a wine tastes "the way it's supposed to", but that doesn't help the person who is expecting Pinot and gets Syrah. And, to use the favorite example of non-blind arguers, you of course have no idea if this is a wine with a great track record of ageing gracefully, etc. You also have problems with stronger, bigger wines dominating blind lineups, in particular if you have people who aren't used to blind tasting. And finally you have the issue of when the wine is designed to be consumed: I'm thinking about something like Barolo here, which is notoriously difficult when young.
All of these flaws are incredibly magnified if the tasting lineups are crazily diverse, with high RS wines in with high acid wines, reds with whites, big tannins with no tannins.
So I guess that's all a big windup to ask, does the methodology in the book take all of this into account? Or are the best "scoring" wines the young drinking, sweet/bold, low acidity wines? Blind tasting is hard and there are a lot of traps people can easily fall into.
I enjoyed your article, Joe. I was particularly interested in the book and the blind taste tests showing that the more expensive wines aren't necessarily the best wines. The vast majority of wines I buy are around $15, and I have been pretty happy with them, as a whole.
Thanks Shannon – you will certainly find those in TWT! :)
I've had experience with about 1/4 of the wines that fared well in TWT blind tasting, and I would consider them solid values. I'd say there is a small percentage of wines in the book that I was like "are you kidding me? that wine sucks!" – but I'm just one opinion :).
Cheers!
"If memory serves me correctly, the study you're referring to actually concluded that the evidence best supported a conclusion that the advertising did **not** result in higher ratings."
Yes, that was the conclusion of the paper. It was not promoted that way and many people (Goldstein apparently included) did not treat it that way.
Totally agree with the utility of tasting blind. At least with respect to not judging wines for personal, in-home consumption based on ratings and price, etc. I have discovered several wonderful everyday drinking wines I never would have looked twice at if I had not tried them blind. And, by the same token, I have been persuaded to buy, and to think I even liked, wines that came highly rated and got lots and lots of positive press. Then after tasting them, and thinking I liked them, asking myself, well, do I like the taste of cough syrup? Hmmm, then why did I think I liked this 90+ point well-regarded wine? Hah, I don't! But I had talked myself into thinking I did because I was predisposed to liking it based on good press, and a high rating. I'm pretty susceptible to that kind of thing, unfortunately. There's a sucker born every minute. ; )
It's all about trusting your palate and drinking what you like. That might be expensive 95-pointers, and that's OK, or it might be value wines that never got rated at all, and that's OK too.
One thing that I'm finding interesting is the question of whether or not we're born predisposed to that kind of influence and marketing, or if the influence and marketing come after we've been exposed to it day after day after day after day…
Nature vs. nurture.
I'd wager it's a bit of both…
Cheers!
Wine Spectator would never refuse an award to a restaurant wine list simply because it included a few wines with low scores. We understand that different palates have different tastes. We look for a range of wines from reputable producers, regions and vintages that offer diverse styles and match well with the cuisine, along with a clear and accurate presentation.
Thomas Matthews
Executive editor
I would also be interested to know how a restaurant awards program that takes in more than $1 million annually (not including additional advertisements taken by the restaurants in the awards issue) cannot afford to do even the basic due diligence necessary to ascertain that a restaurant actually exists, especially given that Wine Spectator has full-time staff in Italy. As the program stands, even if a restaurant does exist, the wine list submitted to WS need not have any correlation with the actual contents of its cellar. This is one of the many unaddressed issues I raised in my detailed response to your post, which was censored from the WS website.
The entire purpose of an awards program administered by a panel of purportedly independent experts is to help consumers see through the puffery of a restaurant's self-promotion. Any wine lover can go online and see if a wine list has "a range of wines" in "diverse styles" with "clear and accurate presentation." Readers might hope that your magazine, with millions of dollars at its disposal for the purpose of arbitrating "excellence," would offer a judgment a bit more sophisticated than the equivalent of scanning a restaurant's list online.
Readers might expect, at a minimum, that a restaurant's high-priced reserve list would get extra scrutiny, or, at least, that the high-priced reserve list would be judged against your magazine's own ratings and eliminated from contention if the average rating fell in the bottom 1% of all Italian wines ever judged by your own magazine.
Just as troubling is the extortion of a $250 fee from any restaurant that wants its wine program to be considered for an award. Many of America's top wine restaurants, including Alinea, refuse to pay this advertisement fee on principle, thus eliminating them from consideration. How can a list of top wine programs that excludes Alinea be taken seriously? This $250 fee violates the most basic ethical standards in the world of journalism. If the New York Times or the Michelin guide charged a restaurant $250 to be considered for a review, would those reviews be taken seriously?
If, beyond this, not even a basic review of the WS scores of a high-priced reserve list is being undertaken by your staff (whether wine critics or interns), and if the vast majority of applicants win the award, then it is not clear to me what information, if any–other than which restaurants are willing to pay $250–is being conveyed to consumers by your awards program.
Robin – Sorry, but I still have serious problems with the way you handled the initial release of information. You failed to release the entire list, and your lack of disclosure fueled a misunderstood story. And guess what? You profited off of that misunderstanding. Here's a perfect example:
http://www.vinography.com/archives/2008/08/wine_s…
Alder Yarrow, who is regularly hailed as one of the sharpest and most thoughtful wine bloggers, published the above post in response to "news" about your "study." The headline smeared the magazine. But of course Yarrow didn't have the story correct – here's what he wrote:
"In summary:
1. Researcher invents fake restaurant in Italy.
2. Researcher builds web site for fake restaurant.
3. Researcher constructs wine list of the lowest scoring Italian wines from Wine Spectator in the last decade.
4. Researcher enters Wine Spectator Restaurant Awards.
5. Fake restaurant wins Wine Spectator Award of Excellence.
I haven't laughed so hard at a piece of wine news in years. It's truly unbelievable."
Well, yes, it was unbelievable. Because it wasn't true. Your wine list obviously was not composed the way Alder thought it was. Now, if you would like to blame Alder Yarrow for this, go ahead. But I promise you that he was hardly the only one. In fact, given his place in the wine blogging world, I wonder: Were you aware that he published this post? If so, why didn't you correct him? I can only conclude that your plan had worked perfectly. The full story is much more watered down, but you juiced it for maximum impact. Very slick.
Now, having said that, it's important that you understand something: I think the Award of Excellence is essentially meaningless. Worse, the average diner might assume it's more like their Grand Award. The restaurants will employ it for that effect. So calling attention to the flaws of this program, however well intentioned, is fair. But you misled us from the get-go, and it's hard to take someone seriously when they begin by leaving out major details — and then they allow the misunderstanding to spread far and wide.
I hardly think you can say he "misled us from the get-go" when he has shown that the original post very clearly stated that "The main wine list that I submitted was a perfectly decent selection from around Italy that met the magazine’s basic criteria". It doesn't get much clearer than that. The fact that everyone focused on the most scandalous interpretation of what was posted doesn't make Goldstein's original information misleading in any way; it only indicates that the story might have been blown somewhat out of proportion in the subsequent retelling by others.
As to releasing the entire list: to what end? Nobody has tried to argue (who understood what Goldstein originally said that is) that it was anything other than a "perfectly respectable list", though as Goldstein points out on his blog the average of their scores as judged by Wine Spectator was somewhat less than the average score given out by them historically.
Thanks, Robin – there's no doubt that a point difference can be significant, most especially in the 89/90 difference as you point out.
I would argue though that a 92/93 or similar difference wouldn't have as much impact, however (but there might be data out there to suggest otherwise).
I see it as an indictment when the authors state that the bloggers' passion conflicts with their results, and with the results of scientists. I agree with your #1 statement, and I do think the book supports it as well: we are all allegedly susceptible to outside influences such as the price of a wine. The authors obviously want wine reviews to all be done blind, something most bloggers do not do. So that too would be an indictment of bloggers who don't taste blind.
And I also agree with your #2 statement, though I would say that most bloggers would already fall into the author's "wine expert" category.
It would be as absurd to categorize all wine bloggers, as a group, as "good" or "bad" as it would be to categorize all print wine journalists as such. In the book, as in the interview, I criticize anyone–journalist, blogger, magazine critic–who tastes non-blind, exposes him/herself to the placebo effect, and winds up being impressed by expensive wines that they wouldn't be as impressed by under controlled blind conditions. In general, however, I see the explosion of wine blogs as a good thing–first, because it's allowed a lot of great previously unheard voices to rise to prominence, and second, because it encourages spirited debates like this one.
Thanks for recognizing the spirited discussion aspect – that's the thing that keeps me going when it comes to writing this blog.
Well, that and an urge to drink! :-)
Check p.11, paragraph 2, where bloggers' "passionate enjoyment of expensive wine" is in conflict with their results.
Depends on what you mean by indict; the same chapter goes on to say that bloggers' enjoyment of and passion about high-priced wines is real, though it may be impacted by knowledge of the high price of the wine (which the studies cited in the book suggest increases the real pleasure experienced by the taster). Doesn't sound like the book is finding fault with bloggers in that respect – actually, two conclusions can be drawn from that (these are NOT stated in the book, they're just my thoughts on it):
1) wine bloggers are just as susceptible as anyone else to the influences described in the book, &
2) wine bloggers may be working their way into the "pro" category of taster described in the book, who actually do prefer more expensive wines on the whole by a slight statistical majority.
I'd agree with both of those conclusions.
Blind tasting is a lazy critic's crutch. No one on this planet drinks a wine blind for enjoyment. No one.
Quality of a wine and the experience of drinking it are based on too many factors, and a wine tasted blind may taste great blind and terrible under normal conditions. When I judge at competitions I often rate wines that I enjoy every day in my home very low, and wines I can't stand very high. Why?
Because the context changes, and the conditions where I am tasting change. I'm enjoying the wine with food, or friends, or it's a special wine with some meaning behind it. I judge a wine based on how I consume it and not on a ridiculous non-reality based artificial situation.
Books like this need to die a quick death. Teach people to drink wine, any wine, no matter what. This book is just recreating everything that Robin himself hates. It's having someone else tell you how to enjoy wine. And telling you what it should taste like. Phooey.
Let's really change the wine world: quit writing books about what wines taste like, and write more books about the histories, cultures, and foods that we drink with them.
Hey Ryan – always great to hear from you!
I agree that covering context is needed (and I think I practice what I preach there among the virtual pages of 1WineDude.com). But I also think that books like TWT and the blind tasting approach have their place as well – if anything, they get us talking about the 'hows' (like we're doing here :-).
Cheers!
That makes little sense. That's like saying drowning puppies in a river will at least get us talking about it! It has its place in population control, and should be considered at least. Just as silly as the idea that we should explore both sides of an issue with only one side.
The truth is, this kind of "definitive" BS, without any methodology laid out or real benefit for the consumer, just leads to making wine yet again difficult to understand. And to convincing consumers that they can't understand wine without someone telling them whether a wine, expensive or cheap, is any good.
Sorry, I can't see how this book is of any use to anyone.
In terms of making the methodology more transparent, I *do* agree.
In terms of the book being harmful to consumers – I disagree totally (but I still love you! :-). I think in a way you're saying I'm not giving the consumer enough credit, but I think that I am; I'm a consumer and perfectly capable of deciding for myself if this book has any value to me even if the methodology isn't explicitly laid out, or if the approach has flaws, etc. – and I expect my readers can do the same, because they're all smart people.
Hi all – Ken Payton did a great (and I think very balanced) review of TWT earlier this month, well worth checking out:
http://reignofterroir.com/2010/01/05/the-wine-tri…
Interesting how you are making money off of promoting the book; you have an Amazon Associates account. Does that make promoting it a conflict of interest for you? :) Thanks for the article.
I don't think it does, but my readers are of course welcome to decide that for themselves individually.
I can tell you the EXACT total of ALL Amazon.com affiliate sales that I have received EVER.
Ready?
$10.41
The decimal place is in the correct spot, by the way.
So… decide the level of conflict of interest on your own, my friend! I can find more than that in my wife's home office desk drawer! :-)
That may just cover our domain name registration :) I enjoy your blog and read it regularly. Keep up the good work.
Thanks! I enjoy readers like you because you demonstrate why (and how) the interaction of a community can keep people like me honest, which is awesome. Cheers!