Oh, I don't think they are particularly biased. Most of their errors are due to cluelessness, stupidity, and methodological intransigence.
If all you care about in automotive reviews (or reviews of other products) is customer satisfaction ratings, then by all means, ignore everything else. Personally, I want to know other details. Even if a car has the best customer satisfaction rating ever, I want to know about its crashworthiness, about how it accelerates and stops, about its fuel economy, about the reliability of the model, and so on. As I said earlier, I think that CR over-emphasizes their reliability measure in their overall ratings, but that's something I can adjust for myself when I read CR, and factor it into my purchase decision. Note that I did buy a Model 3, despite CR's giving it a 2/5 reliability rating, and an overall score that was reduced because of that reliability rating.
When you generate a score that makes it seem that a car that is the most loved by customers is not acceptable, then you have to start questioning your scoring mechanism.
That's not how I read CR's rating of the Model 3. CR currently gives the Model 3 an overall score of 65 (the range among "luxury compact cars" being 39 to 80). This score is based on a road test score of 82 (the range being 56 to 88), reliability of 2/5, owner satisfaction of 5/5, and crash-test and safety features (which are mostly excellent). For a while, the Model 3 earned CR's "Recommended" label, but that was based on an early reliability score of 3/5 and was removed after more reliability data came in that dropped the reliability score to 2/5. CR has an official "Not Acceptable" designation that's reserved for really awful products -- mostly those with bad safety problems. The Model 3 doesn't come close to that level in CR's testing.
IMHO, you're mis-characterizing CR's rating of the Model 3. If you don't like the way they weight various factors, fine; you can ignore the overall score and instead focus on the individual factors, which CR presents. An overall score in the way CR does it will inevitably be based on weightings that are somewhat arbitrary, and it's fine if you'd prefer to weight things differently. Calling it "cluelessness, stupidity, and methodological intransigence" is in error, though.
Tesla has been roundly criticized for a number of business practices that can be considered lapses of ethics, ranging from poor customer service to poor treatment of employees. Whether you believe such accusations and whether they're important to you are for you to decide, but discussing them is perfectly reasonable.
Pretty much all lies. Reasonable to discuss only if there is evidence of wrongdoing or ethical lapses. Not "people are saying" BS.
ROTFL. Tesla routinely lays off employees on a moment's notice; there are many credible reports here of poor customer service; and so on. Lies? I'm sure some charges against Tesla are that, but I'm skeptical that "pretty much all" are.
As a CR subscriber, what I find most perplexing about their overall evaluation of the Model 3 is that the chart that represents the problem areas seems to have zero connection to the commentary about them.
13 out of 17 areas are shown as "much better than average", with 2 (paint/trim and in-car electronics) "better than average" and 1 (body hardware) "average".
How all of that adds up to an overall 2 out of 5 rating for reliability is a complete mystery.
This is, essentially, an effect of statistical power and the fact that cars are so reliable on average. CR has 17 reliability areas, and they typically get reliability reports from a few hundred owners. Particularly in new model years, that gives them enough statistical power to know how particular models fare compared to each other overall, but with (on average) 1/17th the number of failures per trouble area, there aren't enough failures in most areas to make those individual areas look much worse than average. As cars age and begin to produce more problems, that changes, so you begin to see more in the way of obviously negative scores in individual trouble spots.

This is somewhat analogous to resolution in a digital photo. At low resolution, you might be able to make out that a photo is of a person, but you wouldn't be able to identify that person, describe what they're wearing, etc. As resolution improves (or the number of problems increases), you can make more determinations about the person in the photo (or identify the specific areas where problems occur). The fact that any specific reliability area drops below a 5/5 score in a first-year model likely indicates problems overall.
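The power argument above can be sketched with some quick arithmetic. All the numbers here are hypothetical illustrations, not CR's actual survey sizes, problem rates, or methodology: assume 300 owner reports, 17 trouble areas, and a car whose per-area problem rate is double the average.

```python
import math

# Hypothetical numbers -- none of these come from CR's actual methodology.
OWNERS = 300          # assumed survey responses for one model
AREAS = 17            # reliability trouble areas
RATE_AVG = 0.02       # assumed per-area problem rate for an average car
RATE_BAD = 0.04       # assumed rate for a car twice as trouble-prone

def detectable(p1, p2, n):
    """Rough two-proportion check: is the gap wider than ~2 standard errors?"""
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    return (p2 - p1) > 2 * se

# Per area: expected problem counts are tiny (300 * 0.02 = 6 vs. 12),
# so the difference is buried in sampling noise.
per_area = detectable(RATE_AVG, RATE_BAD, OWNERS)

# Overall: pooling all 17 areas gives 17x as many problem opportunities,
# so the same underlying difference becomes clearly visible.
overall = detectable(RATE_AVG, RATE_BAD, OWNERS * AREAS)

print(f"per-area difference detectable: {per_area}")   # False
print(f"overall difference detectable:  {overall}")    # True
```

Under these made-up rates, a car that is twice as problem-prone as average looks statistically indistinguishable in any single trouble area, yet clearly worse once all areas are pooled -- which is the claimed explanation for "13 areas much better than average" coexisting with a low overall reliability score.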
This is a misunderstanding that comes up again and again in public discussions like this one. Although CR does explain it somewhere (I couldn't find a link, but I know I've read it on their site and/or in the pages of their magazine), they don't make the explanation prominent enough, IMHO.