Coronavirus

Never side with liars, just because they say things you want to hear. Just ask the coal miners.

You hear this a lot and I've always been quite amused by it. It is like Trump is the first person in politics to lie. Last time I checked, the majority of history is littered with failed political promises. It is part of the job description and it is why the vast majority of our politicians are attorneys.

Now maybe if people said they don't like the manner in which Trump lies, that would be a different story. So many like the lies when they are delivered with a "please" and a "thank you", like a persuasive attorney would. Trump is not an attorney.

So let's break this down:

Obvious, obnoxious lie that most don't really believe or care about = BAD

Slick, lubricated lie that fools the sheep into believing = GOOD

I find it to be an interesting observation that is applicable to many other situations.
 
Obviously the points about the specificity of the test raised earlier here are totally valid. You can't use a serology test like this with such wide uncertainty on how specific it actually is.
The manufacturer claims 99.5% specificity based on scoring 369/371 on pre-COVID samples. That works for me.

IMHO self-selection bias is the big issue here. I'm not impressed by their adjustments, either (see above).
 
The manufacturer claims 99.5% specificity based on scoring 369/371 on pre-COVID samples. That works for me.
Statistically it doesn't work, though. That's why the 95% confidence interval is 98.3-99.9%: they didn't test enough negative samples. I also question whether the negative samples these companies are using are representative of the antibodies in the populations that these tests are being used on. The Stanford researchers could have double-checked all those positive samples with a more accurate test! I'm baffled why they didn't do that.
But yeah they should have also tried to get a more representative sample.
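
For anyone who wants to check that interval, here's a quick sketch in Python (using scipy, which I'm assuming you have) of the exact Clopper-Pearson CI for 369/371:

```python
# Exact (Clopper-Pearson) 95% CI for specificity, from the 369/371
# pre-COVID validation samples quoted above. scipy is assumed.
from scipy.stats import beta

k, n = 369, 371                       # correct negatives / total negatives
point = k / n                         # ~0.995, the claimed specificity

lower = beta.ppf(0.025, k, n - k + 1)
upper = beta.ppf(0.975, k + 1, n - k)
print(f"specificity {point:.4f}, 95% CI ({lower:.4f}, {upper:.4f})")
# Prints roughly (0.980, 0.999) -- essentially the 98.3-99.9% quoted here
# (the exact endpoints depend on which interval method you pick).
```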
 
It's extremely dubious to apply huge scaling factors to bins with a single positive result. I understand a few zip codes were over-represented, but I'm very skeptical about these adjustments. Even worse, they did not adjust for self-selection which to me is a much bigger factor.
Nice comments.

My daughter works for a political polling company. I asked her the other day how they correct for self-selection in agreeing to fill out a survey. I was told "we don't," but she mentioned that they are probably lucky in that people who tend to agree to fill out surveys are those that get up and vote.

Her point could be translated to: is the willingness to undergo antibody testing a confounding factor? I'll guess yes, that it is weighted towards people who have either tested Ag-positive already and want to know if they are immune, or who had COVID-consistent symptoms in the past but were never Ag-tested.

All of these volunteer antibody surveys will suffer the same bias.
 
My IFR guess remains 0.5%-2.0%.

Well, a lower limit of 0.24% would not affect your guess.

That Stanford study used a test with a 99.5% specificity (95% CI 98.3-99.9%) and got 1.5% positive results. All 1.5% being false positives is within the confidence interval! Yet somehow, when they calculated the confidence interval of their infection rate, that uncertainty is not included. o_O There's an innumeracy pandemic going on right now too.
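
Back-of-the-envelope with those numbers (a sketch, not their analysis):

```python
# Could ALL 50 positives (1.5% of 3330) be false? Numbers are from the
# Stanford study as discussed above.
n_tested, n_positive = 3330, 50

for spec in (0.995, 0.983):           # point estimate, lower 95% CI bound
    expected_fp = n_tested * (1 - spec)
    print(f"specificity {spec:.1%}: ~{expected_fp:.0f} expected false positives")
# 99.5% -> ~17 false positives; 98.3% -> ~57, i.e. more than the 50
# observed. Zero true prevalence is inside the test's uncertainty band.
```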

I hope we don't need another vaccine for that before we get the real thing. "Peer reviewed" would be a good thing. ;)
 
You hear this a lot and I've always been quite amused by it. It is like Trump is the first person in politics to lie. Last time I checked, the majority of history is littered with failed political promises. It is part of the job description and it is why the vast majority of our politicians are attorneys.

Now maybe if people said they don't like the manner in which Trump lies, that would be a different story. So many like the lies when they are delivered with a "please" and a "thank you", like a persuasive attorney would. Trump is not an attorney.

So let's break this down:

Obvious, obnoxious lie that most don't really believe or care about = BAD

Slick, lubricated lie that fools the sheep into believing = GOOD

I find it to be an interesting observation that is applicable to many other situations.

Yeah, let's just say I don't agree with your "facts".
 
Where is WSJ getting the number of 4591 new US deaths yesterday?

Reported U.S. Coronavirus Deaths Reach Record 4,591 in 24 Hours

I can't find any corroborating data from either Johns Hopkins or worldometers. Nor does it appear to be based on the upward correction that NY published yesterday.

Any ideas?

The CDC did change the death reporting criteria to include suspected cases as well as proven positives. A lot of people got this, but could never prove it.
 
Statistically it doesn't work, though. That's why the 95% confidence interval is 98.3-99.9%: they didn't test enough negative samples. I also question whether the negative samples these companies are using are representative of the antibodies in the populations that these tests are being used on. The Stanford researchers could have double-checked all those positive samples with a more accurate test! I'm baffled why they didn't do that.
But yeah they should have also tried to get a more representative sample.
Yeah, the specificity confidence interval is wide enough to create a little doubt. We need one of these serology studies in a hot zone with 1%++ confirmed cases. Then we'll see how these 50-100x ratios hold up.

I originally thought the bizarre adjustments were just a clumsy attempt to correct for non-representative sampling. But I now see the study was led by Jay Bhattacharya and Eran Bendavid, charter members of the "just a flu" choir. So bias is the more likely explanation.
 
Stanford tested 3330 people and found 50 positives. That's 1.5%. Their 2.5-4.2% range comes from various "adjustments".

Their biggest adjustment was Zip Code/Sex/Race (ZSR) weighting. They created ~450 ZSR "bins". Obviously most bins had 0 positives and a lot of bins had 1 positive. A few bins had 2-3 positives (exact data not disclosed). They weighted each ZSR bin according to demographic data. A positive from a Hispanic male in a poorly represented zip code might be counted as 10 or more, while 2 white women from Los Altos might count as only one positive.

It's extremely dubious to apply huge scaling factors to bins with a single positive result. I understand a few zip codes were over-represented, but I'm very skeptical about these adjustments. Even worse, they did not adjust for self-selection which to me is a much bigger factor.

Well said, yes, extremely dubious. And once again I hate tooting my own horn or advertising my product, but it looks like the researchers need an application of Idiot Spray themselves. It looks like a cooked result to me, given the number of assumptions they made that supported a wild scaling-up of an initially rather modest ratio.
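
To make the "extremely dubious" part concrete, here's a toy sketch of that post-stratification weighting. The bins and counts below are hypothetical (the study didn't disclose per-bin data):

```python
# Toy zip/sex/race post-stratification: each bin's positives are scaled by
# (population share) / (sample share). Bins and counts are invented for
# illustration only.
bins = [
    # (label, sampled, positives, population share)
    ("Hispanic male, under-sampled zip", 5,   1, 0.015),
    ("white female, Los Altos",          200, 2, 0.010),
]
n_sampled = 3330

for label, sampled, pos, pop_share in bins:
    weight = pop_share / (sampled / n_sampled)
    print(f"{label}: raw={pos}, weighted={pos * weight:.2f} ({weight:.1f}x)")
# The single positive in the under-sampled bin gets scaled ~10x; the two
# Los Altos positives shrink to ~0.3. One noisy sample moving in or out of
# a tiny bin swings the weighted estimate enormously.
```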
 
The manufacturer claims 99.5% specificity based on scoring 369/371 on pre-COVID samples. That works for me.

IMHO self-selection bias is the big issue here. I'm not impressed by their adjustments, either (see above).

Agreed. It's like they went into their statistical toolbag to see how they could massage the data to make the number look bigger.

I'm going with the raw number of 1.5% for the "overall area" in aggregate. Everything else is just too small to bin out statistically.
 
Well, a lower limit of 0.24% would not affect your guess.
I hope we don't need another vaccine for that before we get the real thing. "Peer reviewed" would be a good thing. ;)
Might as well start here. Roughly speaking, the 95% confidence interval means there is a 5% chance that the procedure produced a range that does not contain the real prevalence.

It is true, however, that the lower the prevalence in the population, the larger the fraction of positive tests attributable to false positives.
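
That prevalence effect is just Bayes' rule. A quick sketch (the 80% sensitivity is an assumed round number for illustration):

```python
# Positive predictive value vs. prevalence for a 99.5%-specific test.
# Sensitivity of 80% is an assumption, not a measured figure.
sensitivity, specificity = 0.80, 0.995

def ppv(prevalence):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

for prev in (0.005, 0.015, 0.05, 0.20):
    print(f"prevalence {prev:.1%}: PPV {ppv(prev):.1%}")
# At 0.5% prevalence, under half the positives are real (~45% PPV); at the
# study's raw 1.5% it's ~71%; at 20% (a hot zone) it's ~98%. Low prevalence
# is exactly where false positives bite.
```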
 
You hear this a lot and I've always been quite amused by it. It is like Trump is the first person in politics to lie. Last time I checked, the majority of history is littered with failed political promises. It is part of the job description and it is why the vast majority of our politicians are attorneys.

Now maybe if people said they don't like the manner in which Trump lies, that would be a different story. So many like the lies when they are delivered with a "please" and a "thank you", like a persuasive attorney would. Trump is not an attorney.

So let's break this down:

Obvious, obnoxious lie that most don't really believe or care about = BAD

Slick, lubricated lie that fools the sheep into believing = GOOD

I find it to be an interesting observation that is applicable to many other situations.

NO ONE in American politics has EVER lied with the frequency and audacity of Trump. Your comment is like trying to excuse a serial killer by saying everyone hurts someone.
 
The manufacturer claims 99.5% specificity based on scoring 369/371 on pre-COVID samples. That works for me.

As @Daniel in SD has pointed out, it doesn't really work for a population with such low prevalence, when the uncertainty on the specificity is relatively high. They had 2 false positives out of 371 control samples! And then they went ahead and used that to test 3330 people and got 50 positive results (directly scaling, it seems fairly likely that ~20 of those were false positives). And it seems entirely plausible that 5-50 of those results could have been false positives, given the wide uncertainty on the false positive rate.

So how do we know that we didn't have 0 or 10 actual positives, rather than 50? That doesn't seem THAT improbable. This is the reason you have to quote confidence intervals (as I know you know).

Of course, there are also false negatives to account for (which may be significant) as well. There are a lot of uncertainties!
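
One way to put numbers on all those uncertainties at once is a quick Monte Carlo. This is a sketch with a flat prior on specificity, not the study's method:

```python
# Rough Monte Carlo: of the 50 observed positives, how many are real?
# Specificity uncertainty comes from the 369/371 validation result via a
# Beta posterior (uniform prior). Sketch only, not the study's analysis.
import numpy as np

rng = np.random.default_rng(0)
n_tested, n_observed = 3330, 50

spec = rng.beta(369 + 1, 2 + 1, size=100_000)        # specificity posterior
false_pos = rng.binomial(n_tested, 1 - spec)         # simulated false positives
true_pos = np.clip(n_observed - false_pos, 0, None)  # implied real positives

print("implied true positives (5th/50th/95th pct):",
      np.percentile(true_pos, [5, 50, 95]).round())
# Spans from roughly 0 at the low end to ~40 at the high end: the data are
# consistent with anywhere from "almost none real" to "most real", which is
# exactly the point made above.
```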

Really very surprising they didn't verify their positive samples with a second, more accurate test.

And agreed, their corrections for selection criteria and scaling of small bins leave additional cause for concern. It's really not a random sample - there are a lot of people eager to determine their status. They do mention this in the paper, but it's not like it's headlined.

Yeah, the specificity confidence interval is wide enough to create a little doubt. We need one of these serology studies in a hot zone with 1%++ confirmed cases. Then we'll see how these 50-100x ratios hold up.

Yes. Agreed. But once that's done, they also have to make clear (would be nice to headline it in the initial publication) that New York City is a hot zone, and there will likely be much lower prevalence elsewhere. Would also be nice to follow up with ELISA testing on their positive samples!

Overall not surprised with the Santa Clara results. It seems perfectly reasonable that there is a prevalence of ~1% (which implies an IFR of about 1%, after all the deaths have taken place), and it seems like that's about what they measured, in spite of being biased towards finding positive results in both their selection criteria and the performance of their test.

But something seems to have gotten lost in the messaging to claim 50-85x the measured case load - that seems way outside of what their results actually suggest.
 
The IHME model was updated:
IHME | COVID-19 Projections

The update is based on April 15 data, and the estimated deaths/day for April 16 and April 17 are more than 10% too low. That, however, might be caused by the recent upward corrections in the data.

Their previous model, updated on April 12th, had estimates of 1888 and 1837 for April 16th and 17th.*

COVID Projections Tracker

* I don't know which logging site to compare to when looking at actual deaths. And now we have the previously anticipated complexity of all the uncounted deaths coming in. That makes sense, since the death numbers have been way lower than expected for a while (and still are): assuming we have 6-9 million infections nationwide right now and an IFR of about 1%, we're still lagging significantly behind the expected ~40k deaths at this point. Obviously all these numbers are super speculative; I have no evidence other than that's what epidemiologists estimate for the nationwide number of cases.
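
For what it's worth, the arithmetic behind that expectation (infection counts, IFR, and the lag are all assumptions, as stated):

```python
# Eventual deaths implied by the assumed infection counts and IFR.
ifr = 0.01                           # ~1% infection fatality rate (assumed)
for infections in (6e6, 9e6):        # assumed current US infections
    print(f"{infections/1e6:.0f}M x {ifr:.0%} = {infections * ifr:,.0f} eventual deaths")
# 60,000-90,000 eventual deaths. Deaths lag infection by a few weeks, so
# only part of that should have shown up yet -- presumably how you get an
# expected ~40k to date rather than the full 60-90k.
```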
 
Thanks to @Buckminster, as I think it was his post that pointed me to the Swiss report -

Facts about Covid-19

Sorry for those who don't like reading or can't find the time to read the report on Denmark, Germany, Iceland, Sweden, South Korea and other details.

Yes, this is a duplicate. Thanks again @Buckminster

Yeah, always good to repost some craziness. I found this website; it might be useful :rolleyes::

Facts about Covid-19

Seriously, why don't you go and read some of the papers on COVID-19? It might help clear up some of your confusion, and then you can go and refute (or understand the context of) each of these crazy points, one by one.

Or just click on the FIRST link on this page, which links to a BI article from March 5th. On South Korea. Which had 35 deaths at the time (to support the 0.6% CFR blabbed about in the article) and now has 230 deaths with just a small increase (6k -> 10.6k) in cases - now 2.2% CFR. (And to be clear, this was predictable with high confidence at the time the March 5th article was written.)
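
The arithmetic, using the numbers from the article and from now:

```python
# South Korea CFR at the two points quoted above: deaths lag cases, so a
# snapshot CFR in a growing outbreak understates the eventual ratio.
snapshots = [
    ("March 5 (BI article)", 35, 6_000),
    ("now",                  230, 10_600),
]
for label, deaths, cases in snapshots:
    print(f"{label}: {deaths}/{cases:,} = {deaths / cases:.1%} CFR")
# ~0.6% then, ~2.2% now -- with most of the shift coming from deaths
# catching up, not from new cases.
```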

This is not that hard.
 