Being an automotive journalist is a dream job, but it does come with a few sacrifices. Your inbox will quickly become completely unusable, because it’ll be inundated with nonsense studies and irrelevant cold-pitched media releases that nobody with even half a brain would spin into content. Case in point? Here’s a study claiming that the Mazda CX-5 is the “most disappointing” car to drive. Beg your pardon?
I like the way the Mazda CX-5 drives, and so does everyone I’ve recommended it to. They like the taut body control, the well-weighted steering, and the normalcy of a conventional automatic transmission, even if they aren’t car enthusiasts. So what gives? Do people just want cars that aren’t great to drive? I mean, that would explain why many Subarus sell well, but no. It turns out that the methodology used in this study is incredibly bad.
Let’s start with what Compare The Market AU was looking for with this study, which turns out to be surprisingly vague. Instead of individual negative experiences pertaining to a vehicle’s powertrain, steering, suspension, or braking, it just looked for a bunch of keywords. As per Compare The Market:
We analysed user reviews across a range of models, looking for negative mentions of “difficult”, “hard”, “confusing”, “uncomfortable”, and “disappoint”.
Notice how there’s no mention of the negative keywords having to pertain directly to the driving experience, a decision that calls the study’s entire premise into question. In addition, counting keywords rather than individual negative reviews could skew the data, as it wouldn’t be unusual for a single negative review to feature more than one keyword.
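To see the problem in miniature, here’s a minimal sketch of that failure mode, using three made-up reviews and the naive keyword matching the methodology describes:

# A naive keyword counter in the spirit of the study's methodology.
# All three "reviews" below are invented for illustration.
KEYWORDS = ["difficult", "hard", "confusing", "uncomfortable", "disappoint"]

reviews = [
    "The infotainment menus are confusing and the manual is hard to follow.",  # negative, but not about driving
    "The rear seats are uncomfortable, hard, and honestly disappointing.",     # one review, three keyword hits
    "The handling does not disappoint. Hard to beat at this price.",           # praise, yet two keyword hits
]

total_hits = sum(
    review.lower().count(keyword)
    for review in reviews
    for keyword in KEYWORDS
)
print(total_hits)  # 7 "negative mentions" from three reviews

Seven “negative mentions” from three reviews, none of which criticizes how the car drives, and one of which is outright praise.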
Also, all of the user reviews analyzed were posted on Edmunds, which is limiting in a few ways. Firstly, Edmunds is only one platform. Secondly, online reviews suffer from several forms of bias. As researchers at the Stevens Institute of Technology, Temple University, and the University of Texas found in a 2017 paper:
…two self-selection biases, acquisition bias (mostly consumers with a favorable predisposition acquire a product and hence write a product review) and underreporting bias (consumers with extreme, either positive or negative, ratings are more likely to write reviews than consumers with moderate product ratings), render the mean rating a biased estimator of product quality, and they result in the well-known J-shaped (positively skewed, asymmetric, bimodal) distribution of online product reviews.
The bottom line? From espresso machines (seriously, will I be happy with a modded De’Longhi ECP3630?) to cars, everything subject to online user reviews will have a bunch of exceptionally positive reviews and a handful of extremely negative reviews. Any non-polarized consumers are essentially disengaged, which means input data from online user reviews alone will be flawed.
Oh, and don’t think the flaws in this study stop there. The terrible data was then used in one of the most egregious, moronic normalization methods I’ve ever seen. To quote:
Once the data for the factors was collected, the factors were normalised, to provide each factor with a score of between 0 and 1. If data was not available, a score of 0 was given. The normalised values were then summed and multiplied by 20, to give each of the cars a total score out of 100. The cars were then ranked from highest to lowest, based on their total scores.
I beg your pardon? Look, this isn’t a good way of using the data, because it doesn’t account for scale. Negative keyword counts, or better yet negative reviews, would need to be expressed as a ratio against the number of positive (or total) reviews to normalize for production numbers and reporting frequency.
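Here’s a back-of-the-envelope illustration, with invented numbers, of what min-max normalizing raw counts does to a high-volume model versus a low-volume one:

# Hypothetical raw data: negative keyword hits and total review counts per model.
data = {
    "popular CUV":   {"neg_hits": 120, "reviews": 3000},  # 4% of its reviews are negative
    "obscure sedan": {"neg_hits": 30,  "reviews": 200},   # 15% of its reviews are negative
}

hits = [d["neg_hits"] for d in data.values()]
lo, hi = min(hits), max(hits)

for name, d in data.items():
    study_score = (d["neg_hits"] - lo) / (hi - lo)  # min-max on raw counts, as the study describes
    neg_rate = d["neg_hits"] / d["reviews"]         # a scale-aware alternative
    print(f"{name}: study score {study_score:.2f}, negative-review rate {neg_rate:.1%}")

# popular CUV: study score 1.00, negative-review rate 4.0%
# obscure sedan: study score 0.00, negative-review rate 15.0%

Under the study’s approach, the car that bothers 4 percent of its reviewers scores as maximally bad simply because it sells in volume, while the car that bothers 15 percent of its reviewers looks flawless.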
So, the sample size and diversity aren’t great, the keyword methodology is flawed, and the normalization method pumps the numbers to preposterously inflated levels. This is a bad study, and it’s one that some outlets, in their desperation for content, will probably still use. Anyone who takes it seriously should be embarrassed, particularly when there are experts and entire analytics firms keeping a much closer, substantially more careful eye on the automotive industry.
For example, Consumer Reports surveys actual vehicle owners, collecting extensive primary data, normalizing it, and using it to build a well-earned reputation as an expert source.
Cox Automotive collects data on the state of the car market, combining a high degree of expertly sourced primary data with some expertly sourced secondary data from other studies to build insightful car market reports. S&P Global Mobility is arguably the leading global institution for data on the automotive industry, collecting and interpreting an immense amount of primary and secondary data. Expertise and data sourcing matter.
So, don’t be fooled by bullshit studies. Look for a trusted source, look for good methodology, and understand that a massive number of studies out there are complete and utter horseshit.
(Photo credits: Mazda, Compare The Market AU)
I imagine some of those reviews might as well read: “difficult” to beat, “hard” to ignore, hardly “confusing”, “uncomfortable” for the competition, or does not “disappoint”. If you only look for single words, you’re in for a surprise, and your data sucks.
Surveys in general are just such a messy thing to follow. You end up with a general perception that one car is junk (Chrysler, for example, at rock bottom on reliability, when the most common issue is a TV system in the van that is tricky to set up) and another car is great (Consumer Reports saying a 2020 Kia Sportage is one of the best vehicles you can buy, while thousands of Theta-engined Sportages run around like ticking time bombs).
They’re just such a sloppy way to get data at best, and at worst, a tool to stack the deck in favor of one party or another.
When I sell a new vehicle, Stellantis pays me $150 for every “passing survey” the customer fills out.
99.99 percent of salespeople in this industry are coaching their customers to fill out everything perfectly so they get their money, regardless of whether or not the job was properly done.
I will sell a car at Affiliate pricing (under invoice), have it spitshined, warmed up, full of fuel, radio presets set, a bottle of water in the cupholder, paperwork done perfectly, and the customer in and out of the store in 30 minutes total. Then I’ll spend a half hour showing every function on the car and give the customer my personal cell to reach me with any questions or problems. It’s a Saturday, and I tell them I will be calling Monday to follow up and check on them.
Then the customer fills out the survey and clicks “No” when asked if the salesperson “followed up within 24 hours of purchase.”
BOOM, FAIL, no money for you, you lousy salesperson!
My parents have had a CX-5 for years. They are planning to replace it because my very short mother finds it too tall for her; getting in and out of it is a bit of a challenge at her age. She’s under 5′ tall and finds my 2005 Pontiac GTO about perfect in terms of height off the ground. So that’s why it’s disappointing to her.
I’ve driven that car a lot. The seats aren’t great, especially for long distances. It handles well, and it’s overall a very good vehicle.
As a GenX-er with a science degree, I was taught in college that a study should only contain empirical data that was supervised by the researcher.
In the 2000s, the concept of the “meta-study”, where data is pulled from other studies in order to save money, became popular, and now we find ourselves relying on this crappy perversion of science to run our lives.
Without control over the data-gathering methodology, and with data mixed from different methodologies, you get crappy data, and crappy data means you get crappy conclusions.
If the CX-5 is truly disappointing to drive, it’s probably because buyers’ expectations are raised by its looks inside and out, but it’s ultimately an ordinary vehicle with a few quirks that aren’t to everyone’s taste. The non-turbo is only typical-CUV quick, and it has neither the space, visibility, soft and easy driving nature, nor the big fancy touchscreens that CUV intenders are used to. Hence they’re happier in vehicles like wheezy but roomy Subarus, which more easily meet their expectations.
My spouse owns a CX-5 and loves it. Although I find it great to drive around town, I don’t consider it a good road trip vehicle. The seats are too hard for long-distance comfort, the engine lacks the torque to handle even moderate inclines without straining, and the firm steering gets rather exhausting. Still a looker, though.
I bought my CX-5 almost ten years ago because it is best in class for… driving enjoyment, given that it is an SUV/CUV/whatever. I’ve continued to compare it to CR-Vs and RAV4s et al., and it isn’t even close. I’ll likely eventually replace it with another CX-5 and not the CX-50 because of that (and because the CX-5 is still made in Japan).
Maybe I’m a weirdo, but I don’t care what other people enjoy about their vehicles. I care about what I enjoy about what I’m driving.
Wait, is this the Australian insurance website run by meerkats with Russian accents?
I’m shocked! Next you’ll be telling me they can’t even speak Russian, or even drive a car.