Welcome to Tesla Motors Club

Coronavirus

It's interesting that these extrapolations rise as we come closer to the date for which the IHME model expects us to reach most of the eventual total deaths. (60,510 by May 2, 67,053 by May 23, 67,641 eventually).

EDIT: The IHME model predicted a few thousand fewer than that a few days ago.
It would appear that IHME has underestimated future deaths. It often happens that elaborate models have no better predictive accuracy than much simpler models. Elaborate models often require lots of assumptions for some set of humans to supply, and these assumptions are not always updated in a timely manner as new data come to light. Simpler models can work efficiently with the data as it comes in and avoid tossing in a bunch of extra assumptions (that are probably wrong, but believed to be sensible at the time).

Another way to look at this is in terms of the objectives of the models. Elaborate models are often trying to get a causal description correct at a detailed level. That's useful for scientists testing causal theories. Simpler models may be focused more on efficient predictive inference. Causal models often do a poor job at predictive inference, while predictive models make all kinds of simplifications of causal mechanisms. Causal models tend to overfit the data to accommodate subtle causal effects that are not forecastable or significant, while predictive models eschew overfitting because it destroys predictive accuracy. Both approaches have their place, and they serve different purposes.
 
  • Like
Reactions: Norbert
Not sure if this has been posted yet. A third party tested a bunch of these lateral flow antibody tests. It looks like the one used in the USC study (Premier) had a specificity of 97.2% (95%CI 92.10%-99.42%). Hopefully they incorporate this information into their study instead of relying on information provided by the manufacturer of a test that was later banned from export by the Chinese government.
Dropbox - SARS-CoV-2_Serology_Manuscript.pdf - Simplify your life
If you have questions about USC COVID-19 antibody testing in L.A. County, he has answers
COVID-19 Testing Project
So, even on IgG none can claim above 97% specificity with 95% CI, because of limited test samples. That means as much as 3% prevalence could be purely false positives. That Santa Clara study started with 1.7% positives before weighting for county demographics ...
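For anyone who wants to check these intervals themselves, an exact (Clopper-Pearson) binomial interval needs nothing beyond the binomial tail. A minimal sketch, using bisection rather than a stats library; the 105-correct-of-108 split below is a hypothetical count that happens to reproduce roughly the 97.2% (92.10%-99.42%) figures quoted above, so treat it as an illustration, not the manuscript's actual data:

```python
from math import comb

def binom_ge(n, x, p):
    """P(X >= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x, n + 1))

def binom_le(n, x, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(0, x + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact two-sided (1 - alpha) CI for a binomial proportion x/n,
    found by bisection on the binomial tail probabilities."""
    lo, hi = 0.0, 1.0
    if x > 0:
        a, b = 0.0, 1.0
        for _ in range(60):  # largest p with P(X >= x | p) <= alpha/2
            m = (a + b) / 2
            if binom_ge(n, x, m) < alpha / 2:
                a = m
            else:
                b = m
        lo = (a + b) / 2
    if x < n:
        a, b = 0.0, 1.0
        for _ in range(60):  # smallest p with P(X <= x | p) <= alpha/2
            m = (a + b) / 2
            if binom_le(n, x, m) > alpha / 2:
                a = m
            else:
                b = m
        hi = (a + b) / 2
    return lo, hi

# Hypothetical: 105 of 108 negative samples correctly read negative (97.2%)
print(clopper_pearson(105, 108))  # roughly (0.921, 0.994)
```

Note how wide the interval stays even with ~100 negative samples; that is the whole point about limited validation sets.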
 
  • Like
Reactions: Doggydogworld
Please stop trying to compare my model to R. It only leads to confusion and misunderstanding. Rather, use the terms that the model is actually built on.

The model is simply tracking trends in the death growth rate (not R). In those terms, all you are really saying is that you believe death growth rates will remain high enough that 100,000 deaths are unavoidable in the first wave. You could express that as a linear function

dlogN = a + b logN

This line passes through two points: dlogN = 3.9% now, at logN = ln(50,236), and dlogN = 0 at logN = ln(100,000). Thus, your hypothesis is that

dlogN = 0.6522 - 0.0566*ln(N) for N > 50k

I would point out that a fit of this curve on the last 14-days has this estimate

dlogN = 0.5857 - 0.0501*ln(N)

So you are hypothesizing a slope that declines a little faster than the 14-day fit. That is of course a reasonable belief, because the experience of the last 14 days cannot really reject your hypothesis.

What I am trying to illustrate here is that it is far simpler to look at the recent death growth rate and make reasonable projections than it is to speculate about some R value that you cannot immediately relate to observed phenomena.

I would point out that this is largely a philosophical gap between statisticians (like myself) and mathematical modelers. Statisticians prefer to work with simpler constructs that are readily estimated from observable data. Mathematical modelers are comfortable working with lots of latent processes and parameters they can't actually observe.

At the end of the day, what matters is how fast the death growth rate declines.
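For concreteness, the arithmetic above can be reproduced in a few lines: posit (or fit) the line dlogN = a + b*ln(N), and the implied first-wave ceiling is wherever the growth rate hits zero, i.e. N = exp(-a/b). This is just a sketch of the algebra in this post, not the actual fitting code behind the 14-day estimate:

```python
import math

def line_through(x1, y1, x2, y2):
    """Coefficients (a, b) of the line y = a + b*x through two points."""
    b = (y2 - y1) / (x2 - x1)
    return y1 - b * x1, b

def implied_ceiling(a, b):
    """Deaths level N at which the growth rate a + b*ln(N) reaches zero."""
    return math.exp(-a / b)

# The hypothesis: 3.9% daily growth at 50,236 deaths, zero growth at 100,000
a, b = line_through(math.log(50236), 0.039, math.log(100000), 0.0)
print(round(a, 4), round(b, 4))  # 0.6522 -0.0566

# The 14-day least-squares fit quoted above implies a somewhat higher ceiling
print(round(implied_ceiling(0.5857, -0.0501)))  # ~119,000 deaths
```

So the two fitted coefficients pin down the whole first-wave trajectory; arguing about the ceiling is really arguing about the slope b.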

All that just proves that I've completely forgotten whatever mathematical knowledge I once knew. Put in more self-deprecating terms, I like the r subscript because I can still understand it. Your complex logarithmic analysis on the other hand . . . . Not so much!
 
So, even on IgG none can claim above 97% specificity with 95% CI, because of limited test samples. That means as much as 3% prevalence could be purely false positives. That Santa Clara study started with 1.7% positives before weighting for county demographics ...
Yep. Even the one that was 100% specific (Surescreen) in this study had false positives in the manufacturer's test. You really have to run a lot of negative samples to get high specificity with 95% confidence.

https://www.oilybits.com/downloads/Surescreen_COVID-19_IFU_Product_Insert.pdf
 
  • Helpful
Reactions: EnergyMax
Not sure if this has been posted yet. A third party tested a bunch of these lateral flow antibody tests. It looks like the one used in the USC study (Premier) had a specificity of 97.2% (95%CI 92.10%-99.42%). Hopefully they incorporate this information into their study instead of relying on information provided by the manufacturer of a test that was later banned from export by the Chinese government.
Dropbox - SARS-CoV-2_Serology_Manuscript.pdf - Simplify your life
If you have questions about USC COVID-19 antibody testing in L.A. County, he has answers
COVID-19 Testing Project

Biomedomics, by the way, is the test they are using to predict 6% prevalence in Miami-Dade County (they have a 95% CI of 4.x% to 7.x%). I'm really interested to see how they managed to come up with that number. To me it looks like they assumed a fixed specificity and then calculated the CI from their sample size. So sad.

With those numbers, it's possible they are simply measuring the approximate specificity of the test (assuming approximately zero actual positive samples in their sample set)! (94% specificity, 95% CI of 4.x% to 7.x%.)

In other words, a positive test result would be likely to be correct about 15-20% of the time.
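The 15-20% figure falls out of Bayes' rule. A minimal sketch, with illustrative numbers only: a true prevalence around 1%, a sensitivity around 90% (my assumption, not from the study), and the 94% specificity mentioned above:

```python
def ppv(prevalence, sensitivity, specificity):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative only: 1.2% true prevalence, 90% sensitivity, 94% specificity
print(round(ppv(0.012, 0.90, 0.94), 2))  # 0.15, i.e. a positive is right ~15% of the time
```

At low prevalence the false positives from the 6% of uninfected people swamp the true positives, which is exactly why specificity dominates these surveys.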
 
All that just proves that I've completely forgotten whatever mathematical knowledge I once knew. Put in more self-deprecating terms, I like the r subscript because I can still understand it. Your complex logarithmic analysis on the other hand . . . . Not so much!
Oh, how I resemble that remark!

I think @jhm is saying that he is producing a best fit for the curve at hand and extrapolating into the future, whereas any R factor would only apply for a point in time, pushed forward through whatever the chosen function might be.
 
Last edited:
  • Informative
  • Like
Reactions: dfwatt and HG Wells
Biomedomics, by the way, is the test they are using to predict 6% prevalence in Miami-Dade County (they have a 95% CI of 4.x% to 7.x%). I'm really interested to see how they managed to come up with that number. To me it looks like they assumed a fixed specificity and then calculated the CI from their sample size. So sad.

With those numbers, it's possible they are simply measuring the approximate specificity of the test (assuming approximately zero actual positive samples in their sample set)! (94% specificity, 95% CI of 4.x% to 7.x%.)

In other words, a positive test result would be likely to be correct about 15-20% of the time.
I'm getting the impression that many "researchers" don't actually understand statistics, they only know how to plug a couple numbers into a confidence interval calculator webpage.
A confidence interval for the confidence interval. ;)
Nah, I'm pretty much talking about the confidence interval itself. To be 97.5% confident that a test with ZERO false positives is 99% specific you would have to run about 400 samples (Epitools - Calculate confidence limits for a sample prop ...).
The confidence interval on the confidence interval would take into account that manufacturers are probably modifying these tests over time, that validation uses pre-covid blood samples from sources that don't match the populations the tests are being used on, and that manufacturing consistency varies from batch to batch. :p
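The "about 400 samples" figure two posts up is easy to reproduce: with zero false positives out of n known-negative samples, you can rule out a specificity below the target at one-sided confidence c once target**n drops below 1 - c. A quick sketch of that arithmetic (the exact number comes out at 368, hence "about 400"):

```python
import math

def negatives_needed(target_spec, confidence=0.975):
    """Smallest number n of known-negative samples such that observing ZERO
    false positives supports specificity >= target_spec at the given
    one-sided confidence, i.e. the smallest n with
    target_spec**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(target_spec))

print(negatives_needed(0.99))  # 368
```

And that is the best case of zero false positives; a single false positive in the validation set pushes the required sample size considerably higher.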
 
I'm getting the impression that many "researchers" don't actually understand statistics, they only know how to plug a couple numbers into a confidence interval calculator webpage.
Not that bad, thankfully. It is more often a case of learning a toolkit of biomedically appropriate tests and then using a stats package.
Medical researchers understand the 2x2 table of disease vs. test result, and the better ones understand likelihood ratios. They are conversant with p-values and confidence intervals. To varying degrees, they know when to use tests appropriate for underlying normal distributions and when not to.

But it is also true that any study of importance will have a stats guy along for the ride, and peer review in the top journals includes a good statistical looking-over. It is also pretty typical (in my experience, anyway) for study design to get a stats guy's input on questions about power and confidence intervals.
 
  • Like
Reactions: bkp_duke
Как COVID-19 разрушает органы и убивает человека

Translated:

How COVID-19 destroys organs and kills humans.

Many doctors, and after them ordinary people, now say that it is not the coronavirus itself that kills a person: death occurs due to the failure of vital organs such as the heart or kidneys. It has already been shown that COVID-19 affects not only the lungs (even in asymptomatic cases of the disease) but also the brain, the intestines, and even the blood vessels. However, it is worth remembering that all of these organs could well have kept functioning were it not for a new infection spreading around the world. This means that the deaths of 177 thousand people (as of 04/22/2020) came prematurely.

Pathologists and doctors around the world are collecting data on how each organ responds to a massive attack of viral particles. Recall that SARS-CoV-2 enters the body through the mucous membranes, invading cells there. It then rebuilds the machinery of the captured cell so that it begins to produce copies of the virus. (This is how a virus differs from a bacterium: it simply cannot reproduce without a host cell.) After a cell spends its resources creating a large number of copies of the virus, it usually dies.

Some time ago, individual scientific groups, analyzing a small number of patients, showed how infection with SARS-CoV-2 thus provokes the destruction of brain tissue. Doctors also recorded neurological disorders, such as loss of smell and taste. In addition, it has been suggested that a new type of coronavirus can invade testicular tissue and cause infertility in men.

Now, experts at the journal Science have compiled the summary data known to scientists today, describing how the new type of coronavirus leads to the death of a patient. However, this information should still be considered preliminary, because knowledge continues to accumulate.

Start of infection

When a sick person coughs or sneezes, or touches surfaces with dirty hands, they leave a large number of viral particles in the surrounding area. The pathogen, carried by droplets of mucus hanging in the air or by dirty hands, reaches the mucous membranes of a healthy person (the nose, mouth, or eyes).

As we already said, then it penetrates into the cells and initiates the production of a large number of copies, which subsequently massively infect other cells.

For the first week, an infected person may not show any symptoms of COVID-19, but at the same time, his body already produces an avalanche of new pathogens that can invisibly infect dozens of new victims. However, some patients almost immediately begin to experience symptoms such as dry cough, fever, body aches, headaches and sore throats, loss of smell and taste.

The main line of defense is the lungs

If the immune system does not cope with the primary invasion of the coronavirus, the pathogen begins to descend into the lungs, where it can reach the alveoli, and a pitched battle unfolds. The virus infects the cells lining the alveoli. White blood cells pulled to the lesion eject molecules that, like a horn calling for battle, signal the other cells of the immune system. All of them begin to destroy the lung cells infected with the virus. As a result, fluid, mucus, dead lung tissue cells, and white blood cells accumulate on the battlefield: a typical picture of pneumonia.

It gradually becomes difficult for the sick person to breathe; then they begin to suffocate as the amount of oxygen in the blood rapidly decreases. X-ray images and computed tomography of the patient's lungs show the "ground glass" pattern. Some patients get through this condition; others require mechanical ventilation (using those same ventilators). Many die.

Welcome to the cytokine storm

In some cases, the reaction of the immune system turns out to be too active, and doctors register the so-called cytokine storm in the blood of patients (scientists have discovered this phenomenon before with many viral infections).

Let us explain that the production of cytokines determines the normal response of the immune system to pathogen invasion. However, in some cases, too many of these signaling molecules are generated. As a result, cells of the immune system begin to attack healthy (not infected) tissues.

This hyperreaction causes blood vessels to "leak", blood pressure to drop, and blood clots to form, which leads to the failure of many organs and, ultimately, to the death of the patient.

However, there is still no consensus in the camp of scientists whether it is necessary to reduce the number of cytokines in the blood of patients with COVID-19. After all, if you go too far, the body will not be able to respond normally to the reproduction of the virus. A number of clinical trials of drugs are currently underway, including those that suppress the development of the cytokine storm. Upon completion, it will become clear whether doctors should work in this direction to save the lives of patients with COVID-19.

Problems with heart

In Italy, doctors recorded a strange case: a 53-year-old resident of Brescia was hospitalized with all the symptoms of a heart attack. However, the examination showed that her coronary arteries were not blocked; she was simply suffering from COVID-19.

Scientists do not yet know how SARS-CoV-2 affects the heart (perhaps the cytokine storm plays an important role). However, in various studies, lesions of this organ, which led, for example, to the occurrence of arrhythmias, were recorded in 25-40% of patients admitted to hospitals. Most of these victims were among patients who were in intensive care.

What is wrong with blood?

The circulatory system also behaves very strangely with COVID-19. In many cases, clots form in the patients' blood, which can block not only the major vessels of the lungs (pulmonary embolism) and the heart (heart attacks) but also those of the brain (strokes), as well as smaller vessels (patients complain of pain in the fingers and toes).

Apparently, this is why so many of the seriously ill are people who already had vascular problems: diabetics, hypertensive patients, the obese. At the same time, asthmatics, who it would seem should suffer more from a respiratory disease, end up in intensive care less often than doctors expected.

Scientists cannot yet name the root cause of this picture, nor explain why complications sometimes develop rapidly and why some people find it so difficult to recover. But the damage to the cardiovascular system is probably due to the virus attacking the cells lining the heart and blood vessels (these cells carry as many of the "door handles" the virus needs as the cells of the nose and alveoli do).

Another possible explanation: lack of oxygen due to poor alveolar function leads to vascular damage.

Battlefield - Kidneys

Many today are concerned about the lack of ventilators (mechanical ventilation). In the light of the above, it becomes clear why for many patients they are sometimes the only chance of salvation.

However, many doctors note that even patients who did not need ventilators often had to be rescued by hemodialysis. The kidneys also have many of the same "door handles" through which the virus enters cells. This is probably why doctors recorded kidney failure in a very large number of critical patients.

Viral particles were found in the kidneys of patients after their death. In addition, the kidneys suffer from a cytokine storm and the effects of certain antiviral drugs. Mechanical ventilation also increases the risk of kidney damage. An additional burden for some patients is diabetes.

Brain for lunch

Between 5 and 10 percent of coronavirus patients in hospitals show problems with the brain and the central nervous system as a whole (for example, they may briefly lose consciousness). However, according to experts, in fact, there may be more such victims, since many people are under anesthesia or are connected to ventilators.

According to one view, the invasion of the virus causes the brain stem to stop sensing a lack of oxygen (which is why patients do not feel that they are suffocating, even when the level of oxygen in their blood is very low). Normally, it is this part of the brain that is responsible for this innate reflex.

How the virus enters the brain is a big question. One possible way: through the nose and olfactory bulbs. However, scientists have yet to figure out whether this is actually the case or there are other "workarounds."

Intestines for dinner

In some patients, the disease was accompanied by problems with the gastrointestinal tract: diarrhea, vomiting, and abdominal pain.

Earlier, Vesti.Nauka (nauka.vesti.ru) has already reported that coronavirus RNA is found in the feces of infected people long before the onset of the first symptoms. Some scientists believe that the new virus, like its counterparts, multiplies in the cells of the tissues of the lower gastrointestinal tract.

However, so far only RNA and individual SARS-CoV-2 proteins were found in excrement, and not the particles themselves. This means that the risk of catching the infection by the fecal-oral route is small.

In conclusion, we note once again that all the data collected by Science experts is an attempt to outline the general picture of life-threatening events.

How exactly the disease unfolds in a particular case, which subtypes of the coronavirus attack the body, and which medications to use for treatment: all of these important questions are still on the agenda of many medical and scientific centers around the world. It may take specialists years to complete this "debriefing".

Meanwhile, humanity needs to devote resources and effort to such research, because far more dangerous pathogens may lie ahead of us: "guests" from the animal kingdom that are more infectious and more deadly. And the larger the planet's population becomes, the closer together we live and the more we encroach on nature, the more often the world will have to deal with such invasions from outside.

-----

Seems some of the above was pulled from this:
How does coronavirus kill? Clinicians trace a ferocious rampage through the body, from brain to toes | Science | AAAS
 
Last edited:
Nah, I'm pretty much talking about the confidence interval itself. To be 97.5% confident that a test with ZERO false positives is 99% specific you would have to run about 400 samples.

I thought these 400 samples are used to establish the specificity of the test with a certain confidence interval.
And that the specificity of the test is then used to estimate the confidence interval of an actual study using that test (a study with maybe 1,000 samples as an example).
Is that not the case?
 
Biomedomics, by the way, is the test they are using to predict 6% prevalence in Miami-Dade County (they have a 95 CI of 4.x% to 7.x%). I'm really interested to see how they managed to come up with that number. To me it looks like they assumed a fixed specificity and then calculated CI from their sample size. So sad.
Yes, that's what they are all doing. Hopefully the reviewers point this out and make them change. Definitely Santa Clara study will undergo revisions since it has been commented upon by some academic heavyweights (apart from stats profs).
 
I believe the probables include home deaths, as long as COVID is listed on the death certificate.

I don't know for sure, but in the video of Cuomo presenting the test results, which was posted here, he said they are not included with the numbers coming from nursing homes. If that is different for the NYC numbers on that website, for example, I wouldn't know, but that is the best info I have at this point. My impression was that the home deaths are pretty much unknown at this point, at least in his mind, and require additional analysis that hasn't been done yet.
 
I thought these 400 samples are used to establish the specificity of the test with a certain confidence interval.
And that the specificity of the test is then used to estimate the confidence interval of an actual study using that test (a study with maybe 1,000 samples as an example).
Is that not the case?
No,

As N increases for the test performance study the range of test performance for a given confidence narrows.
So e.g., presuming that you want 95% confidence in your test performance interval, if N = 100 then an interval might be [0.90 - 1.00], while N = 1,000 would narrow the range, say to [0.93 - 0.97].

Then,
say the performance characteristic is specificity, which governs the false positive rate.
If you test 100 people with a test whose specificity you know (with 95% confidence) to be in [0.90 - 1.00], then you expect between 0 and 10 false positives with 95% confidence. If you had a tighter specificity interval of [0.93 - 0.97], then you would expect between 3 and 7 false positives, again with 95% confidence.

I'll let someone else comment on the central limit theorem as it applies here because I learned it many moons ago and I do not want to mis-speak.
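The arithmetic in the middle paragraph is just the interval endpoints multiplied by the number of people tested; a trivial sketch of exactly that (it deliberately ignores the additional binomial noise in the survey itself):

```python
def expected_fp_range(n_tested, spec_low, spec_high):
    """Expected false positives among n_tested truly-negative people,
    evaluated at the endpoints of a specificity interval. Ignores the
    binomial sampling noise of the survey itself."""
    return (round(n_tested * (1 - spec_high), 6),
            round(n_tested * (1 - spec_low), 6))

print(expected_fp_range(100, 0.90, 1.00))  # (0.0, 10.0)
print(expected_fp_range(100, 0.93, 0.97))  # (3.0, 7.0)
```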
 
Last edited:
No,

As N increases for the test performance study the range of test performance for a given confidence narrows.
So e.g., presuming that you want 95% confidence in your test performance interval, if N = 100 then an interval might be [0.90 - 1.00], while N = 1,000 would narrow the range, say to [0.93 - 0.97].

Then,
say the performance characteristic is specificity, which governs the false positive rate.
If you test 100 people with a test whose specificity you know (with 95% confidence) to be in [0.90 - 1.00], then you expect between 0 and 10 false positives with 95% confidence. If you had a tighter specificity interval of [0.93 - 0.97], then you would expect between 3 and 7 false positives, again with 95% confidence.

I see. I'm thinking of the fact that a test with 100 people must have more randomness than a test with 1,000 people, because a test with 100 people is less likely to be representative. Like in voter polls, where there is not that kind of uncertainty about the test itself. Perhaps you would think of that as a separate matter. But there are two uncertainties, with one being at a higher level, so to speak, than the other.

EDIT: Or three uncertainties? One resulting from the quality of the test, one from the sample size measuring the quality of the test, and one from the sample size of the eventual study using the test.
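Those layers can actually be stacked in a simulation: draw a plausible false-positive rate from the uncertainty left by the finite validation set, then add the survey's own sampling noise on top. A rough sketch, with all numbers hypothetical; a Beta posterior on the false-positive rate stands in for the "sample size measuring the quality of the test" uncertainty:

```python
import random

def raw_positive_rate_spread(n_validation, fp_in_validation, true_prevalence,
                             sensitivity, n_survey, trials=1000, seed=1):
    """Spread of the raw positive rate a survey would report, combining:
    (1) uncertainty about the test's false-positive rate, known only through
        a finite validation set (Beta(fp+1, n-fp+1) posterior), and
    (2) binomial sampling noise from the survey's own sample size."""
    rng = random.Random(seed)
    rates = []
    for _ in range(trials):
        # layer 1: a plausible false-positive rate given the validation data
        fp_rate = rng.betavariate(fp_in_validation + 1,
                                  n_validation - fp_in_validation + 1)
        p_pos = sensitivity * true_prevalence + fp_rate * (1 - true_prevalence)
        # layer 2: binomial noise from the survey sample size
        positives = sum(rng.random() < p_pos for _ in range(n_survey))
        rates.append(positives / n_survey)
    return min(rates), max(rates)

# Hypothetical: test validated on 100 negatives with 3 false positives,
# then used to survey 1,000 people in a population with 1% true prevalence
lo, hi = raw_positive_rate_spread(100, 3, 0.01, 0.90, 1000)
print(lo, hi)  # raw positive rates several times the 1% true prevalence are common
```

The point of the sketch is that plugging a fixed specificity into a CI calculator captures only layer (2) and silently drops layer (1), which here is the bigger of the two.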