Welcome to Tesla Motors Club
Discuss Tesla's Model S, Model 3, Model X, Model Y, Cybertruck, Roadster and More.

Coronavirus

I'm not here to debate or argue. If you think the experts in the field you disagree with (like Dr. Mina) are wrong, that's your prerogative.

I look forward to this thread being more about data and information, less argument. Right now, it's pretty useless from my perspective.

You still don't get it. The currency of a scientific discussion is data sets and the interpretation of those data sets. It's not a news feed. If you want a news feed, I suppose that's another, potentially just as legitimate, use of the forum. But if you want a serious scientific debate that might have at least some validity, the currency is data sets and the various ways those data sets can be interpreted.

For sure there are always alternative interpretations for just about every data set you can collect, and also for sure, the data sets on COVID-19 are incomplete in just about every major scientific domain, including but not limited to epidemiology, biology, therapeutics, and lethality (case fatality rate, infection fatality rate).

The problem is that you support, and only seem interested in, a kind of dialogue that never leads to resolution, because it never gets to the arbiters of scientific truth, which again are the data sets and their best interpretation.

So, I reiterate my question: do you have a data set that supports your extraordinary claim that the case rate in the United States is 1/30 to 1/100 of the actual infection rate? If so, we'd love to see it; it would be very informative. If you do not, why not simply admit it? If you can't admit it, well, that's another problem.

PS - just another tip. In a scientific debate, the argument from authority (that someone with a degree has said such and such) carries no weight. People outside the sciences are often surprised to hear this; they think it's the degree that gives an argument credibility. It's not. I'm not sure you understand this, because you're clearly not in the sciences, but there is no argument from authority in science. A person with no degrees who can stitch together the most cohesive explanation of the available data wins the scientific argument, at least over time, even if their lack of degrees means people might not take them seriously at first. So the fact that a doctor somewhere has put their name to the unsubstantiated claim that the actual infection rate is 50 to 100 times higher than documented infections means nothing.
 
Ugh. Again - 5 positives out of 147 tests . . . is within the false positive range of this kind of test.

It would be far more appropriate to list it as "ZERO to 5 people tested positive".


As an old professor once beat into my head - NEVER trust a dataset that doesn't include error bars.
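The "ZERO to 5" point can be made concrete with a quick calculation. This is a minimal sketch using the numbers from the post (147 tests, 5 positives) and a few hypothetical specificity values; the specificity range is an assumption for illustration, not a spec of the actual test used.

```python
# How many of 5 positives out of 147 tests could be pure noise?
from math import comb

N, positives = 147, 5

def p_at_least(k, n, p):
    """Exact binomial tail: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for specificity in (0.97, 0.99, 0.999):
    fp_rate = 1 - specificity
    expected_fp = fp_rate * N
    # Probability of seeing >= 5 positives even if NOBODY in the sample
    # was actually infected:
    p = p_at_least(positives, N, fp_rate)
    print(f"specificity {specificity:.1%}: expect {expected_fp:.2f} false "
          f"positives; P(>=5 positives, zero true infections) = {p:.3f}")
```

At 97% specificity the expected number of false positives alone is about 4.4 out of 147, so 5 observed positives is entirely consistent with zero true infections; at 99.9% it is not.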

There are at least half a dozen Covid-19 antibody tests that are rated 100% specific (probably more). I haven’t seen any papers validating those numbers but I don’t think we can assume they have 3% false positives. But I also agree we can’t assume too much from these preliminary studies without more information — it’s the Wild West out there right now.
 
There are at least half a dozen Covid-19 antibody tests that are rated 100% specific (probably more). I haven’t seen any papers validating those numbers but I don’t think we can assume they have 3% false positives. But I agree we can’t assume too much from these preliminary studies without more information — it’s the Wild West out there right now.

Links please. I've never once seen an antibody test that was 100% specific with a 0% false positive rate.

That would be the "perfect test", and those simply don't exist.


EDIT - as a point of reference, that antibody test I said I went and got on Monday . . . SD County just closed that place down. Another user here went and got the info on the test from them . . . and it wasn't that good.

North County COVID-19 Testing Site Ordered to Shut Down
 
Look, I don't want anyone to die. I wish everyone could live forever. Ok? (I'm actually a futurist and I believe in Ray Kurzweil's idea that someday we will be able to live forever.)
But, here we are. And I just think it's not fair to run up an insane medical bill that future generations will have to pay for. I think that's reasonable.
What if another, more virulent virus arises in 5 years? We should be saving our bullets for the big kahuna, not spending them all on this.

Wonder what your definition of a big kahuna is. My definition is “any disease that overwhelms the hospital system for an extended period of time.”

Because anyone of any age, who requires hospitalization, would be SOL. They’d have a high likelihood of dying because no beds would be available.

The reason this hasn’t happened is because of the suppression strategies you seem to advocate removing. Stopping suppressive actions “for the sake of the economy” is basically saying that critical medical care isn’t a necessity.

Most people could live with that... until they can’t.
 
There are at least half a dozen Covid-19 antibody tests that are rated 100% specific (probably more). I haven’t seen any papers validating those numbers but I don’t think we can assume they have 3% false positives. But I agree we can’t assume too much from these preliminary studies without more information — it’s the Wild West out there right now.
FWIW, virtually nothing in testing reaches 100%, in any field. That only works for things with absolute on/off switches. Medical testing has inherent errors, which rigorous protocols try to minimize. There are always Type I errors (finding something that is not there, i.e. false positives) and Type II errors (missing the presence of something, i.e. false negatives). Almost zero exceptions to that rule; 'almost' because I have never seen one, but one might exist. Those errors end up requiring direct observation or other definitive methods to know anything 100%.
 
Links please. I've never once seen an antibody test that was 100% specific with a 0% false positive rate.

That would be the "perfect test", and those simply don't exist.


EDIT - as a point of reference, that antibody test I said I went and got on Monday . . . SD County just closed that place down. Another user here went and got the info on the test from them . . . and it wasn't that good.

North County COVID-19 Testing Site Ordered to Shut Down

This Johns Hopkins site lists 7 tests rated at 100% specificity, and this only captures a fraction of the tests that are out there (many academic labs have also developed tests). Global Progress on COVID-19 Serology-Based Testing

Without data backing up those ratings I think it's fair to be skeptical of the 100% specificity claims, but I don't think we can just assume there is a 3% error rate. Stanford says its test will be the gold standard but I haven't seen any data yet on sensitivity or specificity.
 
Links please. I've never once seen an antibody test that was 100% specific with a 0% false positive rate.
There are ELISA tests that claim to have 100% specificity (though with an absurdly small sample size, they only tested 54 negative samples).
https://static1.squarespace.com/sta...15/1586808988334/COVID-19+Presentation_V9.pdf
Should have some New York blood donor data soon. I think we can all agree that antibody tests are accurate enough for that application.
Unprecedented nationwide blood studies seek to track U.S. coronavirus spread | Science | AAAS
 
FWIW, virtually nothing in testing reaches 100%, in any field. That only works for things with absolute on/off switches. Medical testing has inherent errors, which rigorous protocols try to minimize. There are always Type I errors (finding something that is not there, i.e. false positives) and Type II errors (missing the presence of something, i.e. false negatives). Almost zero exceptions to that rule; 'almost' because I have never seen one, but one might exist. Those errors end up requiring direct observation or other definitive methods to know anything 100%.

That's fair. On the other hand, there is a world of difference between 99.9% and 97% specificity, especially if your goal is to characterize the infection rate in populations that have a relatively low incidence of infection.

Many of these tests are rated at 100% specificity but I haven't seen data to back up those ratings.
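The "world of difference" between 99.9% and 97% specificity at low incidence can be shown with Bayes' rule. A minimal sketch, with illustrative assumptions (3% true prevalence, 90% sensitivity; neither number is from the thread):

```python
# Why specificity dominates at low prevalence: positive predictive value (PPV).

def apparent_vs_true(prevalence, sensitivity, specificity):
    """Return (fraction of population testing positive, PPV)."""
    true_pos = prevalence * sensitivity            # infected AND detected
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy but flagged
    apparent = true_pos + false_pos
    ppv = true_pos / apparent                      # P(infected | positive test)
    return apparent, ppv

for spec in (0.97, 0.999):
    apparent, ppv = apparent_vs_true(prevalence=0.03, sensitivity=0.90,
                                     specificity=spec)
    print(f"specificity {spec:.1%}: {apparent:.1%} test positive, "
          f"PPV = {ppv:.1%}")
```

Under these assumptions, at 97% specificity roughly half of all positive results are false, while at 99.9% the PPV exceeds 95%, which is exactly why the rating matters so much for prevalence surveys.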
 
This Johns Hopkins site lists 7 tests rated at 100% specificity, and this only captures a fraction of the tests that are out there (many academic labs have also developed tests). Global Progress on COVID-19 Serology-Based Testing

Without data backing up those ratings I think it's fair to be skeptical of the 100% specificity claims, but I don't think we can just assume there is a 3% error rate. Stanford says its test will be the gold standard but I haven't seen any data yet on sensitivity or specificity.

You are confusing specificity and sensitivity.

Understanding medical tests: sensitivity, specificity, and positive predictive value

https://academic.oup.com/bjaed/article/8/6/221/406440

Specificity:
The specificity of a clinical test refers to the ability of the test to correctly identify those patients WITHOUT the disease.

Sensitivity:
The sensitivity of a clinical test refers to the ability of the test to correctly identify those patients WITH the disease.
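The two definitions above can be sketched numerically from a confusion matrix. All counts here are made-up for illustration:

```python
# Sensitivity and specificity from a hypothetical confusion matrix.
tp, fn = 90, 10     # 100 people WITH the disease: 90 detected, 10 missed
tn, fp = 196, 4     # 200 people WITHOUT the disease: 196 cleared, 4 flagged

sensitivity = tp / (tp + fn)   # correct calls among those WITH the disease
specificity = tn / (tn + fp)   # correct calls among those WITHOUT the disease

print(f"sensitivity = {sensitivity:.0%}")   # 90%
print(f"specificity = {specificity:.0%}")   # 98%
```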
 
You are confusing specificity and sensitivity.

Understanding medical tests: sensitivity, specificity, and positive predictive value

https://academic.oup.com/bjaed/article/8/6/221/406440

Specificity:
The specificity of a clinical test refers to the ability of the test to correctly identify those patients WITHOUT the disease.

Sensitivity:
The sensitivity of a clinical test refers to the ability of the test to correctly identify those patients WITH the disease.

No, I'm not. Search for the term "specificity" in the document.

There are 7 tests listed with 100% specificity (0% false positives). Again, it's not clear they have the data to back up those ratings, but that's the rating.
 
US is about to have 10x fatalities compared to China.

March 30 - 3,100
April 16 - 32,588

So, 17 days to increase 10x. Still, several large cities have not yet experienced large outbreaks.

To put it in perspective: one 9/11-sized death toll daily.
Not to suggest that our numbers are not grim and regrettable, but I simply don't believe China's numbers. I think they've clamped down on the infection rate pretty well, but I don't believe what's coming out of China right now. The problem is they have a history of motivated misreporting in the service of the regime's preservation and the protection of its grandiose self-image. Once you lie a single time, all your subsequent communications are suspect. So I think we simply don't know what the real story in China is, or how far it might be from what they've stated. There's plenty of reason to believe they cooked the fatality numbers by suppressing reporting from funeral homes around Wuhan.
 
Should have some New York blood donor data soon. I think we can all agree that antibody tests are accurate enough for that application.
Unprecedented nationwide blood studies seek to track U.S. coronavirus spread | Science | AAAS

Well yes, but the sample size and other qualities of the study need to be in order. Just using antibodies doesn't make a good study. Currently there seems to be a tsunami of urgency-justifies-everything reports, papers, and opinions.
 
That's fair. On the other hand, there is a world of difference between 99.9% and 97% specificity, especially if your goal is to characterize the infection rate in populations that have a relatively low incidence of infection.

Many of these tests are rated at 100% specificity but I haven't seen data to back up those ratings.
Acceptable tolerances vary tremendously depending on the intended goals.
Diagnostic tests generally have high thresholds for both Type I and Type II errors, when possible. If they do not, other forms of diagnostic support are required. Still, it's true that the lower the error rates, the better.

Now think about vaccines rather than diagnostics. Generally speaking, the error of failing to protect can be largish, meaning that vaccines rarely come close to your 97%. The opposite error in this case is causing a disease that was not there before. That is a gigantic problem, commonly encountered in vaccine development testing. In that case 99.5% is generally considered unacceptably low.

Our problem here is that we are all considering the possible treatment and prevention of something that is 'novel'.
Nobody knows. Much of the world seems to have strong opinions. Both precision and accuracy are needed to responsibly discuss these topics.

FWIW, one project I worked on was the post-accident analysis of the MD-10 rear cargo door. There the allowable rate of missed failures had to be driven several digits beyond 99.9% reliability. The testing skipped a few critical issues. People died. The risk evaluation design was wrong.
In medicine Type II errors can kill people, and do. Type I errors can kill people, and do. It is imprudent to generalize about statistical accuracy without knowing the exact test design.

Rushing diagnostic tests and treatments kills people too. Conservative approaches win.
That is why clinical trials are so arduous and exacting to design, and why they so often are inconclusive or find unacceptable errors. Quickie guesses such as the ones that have happened here end up killing and injuring people unnecessarily, where careful trial design for a known drug would have shown that unintended consequences can kill people.

Of course the foregoing is only my opinion.

Of course the foregoing is only my opinion.
 
You are saying that the most basic things we know about the spread of infectious diseases are wrong. Here is the data from New York City: it spreads exponentially before stay-at-home (SAH), and after SAH it stops spreading exponentially.
View attachment 532943
First off, the more it gets out in the media, the more people will present; it's partly psychological. Also, the more testing, the more cases. There could have been many cases prior to testing that we just didn't know about.
Also, the school closings happened on March 18... and yes, after that we do see a bigger acceleration.
 
Sadly that 3% rate is about the same as the false positive rate for the antibody test. Not many conclusions can be drawn from that data.

Yeah, it's research in progress. I didn't dive deep enough into the matter to know what's true and what's not.

After reading the 3%, my impression was that on this side of the pond, going for herd immunity through natural infection isn't a feasible strategy. It seems to me too much suffering would be involved.

If a large percentage of this 3% turns out to be false positives, implying that something like 99% of the population has not been exposed to COVID-19 yet, then a strategy of herd immunity through natural infection should not be considered in my country, unless far better treatment options become available over time to reduce human suffering. I trust my politicians to aim for the right choices with the data available at the time.

People say a human life has an economic price or value. Maybe true for economics, but living may come at a cost to the economy too. In the end we build the economy to support living, not the other way around. We can rebuild an economy, put tulips on the menu when needed, and still have fun. I don't think we are advanced at rebuilding a lost life.
 
Acceptable tolerances vary tremendously depending on the intended goals.
Diagnostic tests generally have high thresholds for both Type I and Type II errors, when possible. If they do not, other forms of diagnostic support are required. Still, it's true that the lower the error rates, the better.

Now think about vaccines rather than diagnostics. Generally speaking, the error of failing to protect can be largish, meaning that vaccines rarely come close to your 97%. The opposite error in this case is causing a disease that was not there before. That is a gigantic problem, commonly encountered in vaccine development testing. In that case 99.5% is generally considered unacceptably low.

Our problem here is that we are all considering the possible treatment and prevention of something that is 'novel'.
Nobody knows. Much of the world seems to have strong opinions. Both precision and accuracy are needed to responsibly discuss these topics.

FWIW, one project I worked on was the post-accident analysis of the MD-10 rear cargo door. There the allowable rate of missed failures had to be driven several digits beyond 99.9% reliability. The testing skipped a few critical issues. People died. The risk evaluation design was wrong.
In medicine Type II errors can kill people, and do. Type I errors can kill people, and do. It is imprudent to generalize about statistical accuracy without knowing the exact test design.

Rushing diagnostic tests and treatments kills people too. Conservative approaches win.
That is why clinical trials are so arduous and exacting to design, and why they so often are inconclusive or find unacceptable errors. Quickie guesses such as the ones that have happened here end up killing and injuring people unnecessarily, where careful trial design for a known drug would have shown that unintended consequences can kill people.

Of course the foregoing is only my opinion.

Of course the foregoing is only my opinion.

I generally agree with all that. Gold-plated clinical trials may take too long to be of immediate use for treatment, and I understand why some doctors choose treatments based on fairly flimsy evidence, but hopefully that won't be an obstacle to developing a strong antibody test (Stanford thinks it has one already, for example).

Incidentally, there is some reason to believe that the novelty and structural uniqueness of COVID-19 that give our immune systems such trouble may also make it an attractive target for highly specific antibody tests.

There is a good discussion here:

"There is a lot hanging on the uniqueness of the spike protein. In terms of the specificity of serological tests in which it is used, the more unique it is, the lower the odds of cross-reactivity with other coronaviruses (false positives resulting from immunity to other coronaviruses). The most similar of these is severe acute respiratory syndrome coronavirus (SARS-CoV), which led to the SARS outbreak of 2002. But another four coronaviruses cause the common cold, and ensuring there is no cross-reactivity to these is essential. “If you line up the amino acids of the spike proteins of SARS and the COVID-19 virus, there’s a 75% identity”, says Lewis. Hibberd reckons the overall figure for common cold-causing coronaviruses is probably about 50–60%, but the potential for cross-reactivity really depends on whether the new tests select sections of the spike protein that are particularly distinct across coronaviruses. Even though SARS cases were recorded in only a handful of countries, many antibody test developers—Euroimmun, Koopmans, and Wang among them—are working to demonstrate the absence of cross-reactivity of the new tests with SARS-CoV or other coronaviruses." https://www.thelancet.com/action/showPdf?pii=S0140-6736(20)30788-1
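The "75% identity" figure quoted above is just the fraction of matching positions in an alignment of two protein sequences. A toy sketch of that notion; the sequences below are made-up stand-ins, not real spike proteins:

```python
# Percent identity over an ungapped alignment of two equal-length sequences.

def percent_identity(a: str, b: str) -> float:
    """Fraction of positions where the two aligned sequences agree."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return matches / len(a)

seq1 = "MFVFLVLLPLVSSQCVNL"   # hypothetical aligned fragments
seq2 = "MFIFLLFLTLTSGSDLDR"
print(f"{percent_identity(seq1, seq2):.0%} identity")
```

Real comparisons use gapped alignments of full sequences, but the principle is the same: the lower the identity in the regions a test targets, the lower the risk of cross-reactivity.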
 
Abbott Labs is developing new coronavirus test for mass screening as US tries to reopen the economy; shares rise
Ford said the company is also working on a fourth diagnostic test for the coronavirus: a “lateral flow” blood test that can provide a diagnosis even more quickly.

“This will allow us to scale up to numbers much more significant,” he said. “This falls in our ability to look at mass testing for the general population. They are on time now and they are almost there.”

Anyone have analysis of Abbott's discussion of mass screening and the "lateral flow" blood test?
 