MATH FOLKS -- explain the flaw in this reasoning (Doomsday Hypothesis)

I sent that to my sister, who has a PhD in Econ and loves stats.

I'll let you know later if I loled.

Cool, can you let me know too, so I know whether I loled? At this stage, the probability that I loled is 0.5
 
No hidden pun, just referring to xkcd's awesome take. There is a website that gives a really good introductory explanation.

Haha ya I just meant the comic itself.

Thanks for the link!

I'll let you know later if I loled.

Pretty much this :icon_lol:


Alright, from Qu's article (didn't get a LOT out of it but I'll post some concluding remarks from it):

All your prior information about a phenomenon's total duration is incorporated in the prior density w(T). Often you can improve your predictions of future longevity by studying a phenomenon as it progresses, gathering information about its particular history. In the absence of gathering additional information, however, all predictions about future longevity must arise from the prior density. That Gott's rule, as it comes from the delta-t argument, is independent of the prior density is a dead give-away that it has no predictive power. Since any prior density can be embedded in a Copernican ensemble, it is clear that the Copernican principle does not restrict the prior density in any way and thus is irrelevant to predicting future longevity.

The sort of conclusion I'm taking from this is that the principle uses only probabilistic factors to make its estimate, rather than factors that would account for the developmental history of the phenomenon and, by extension, its future. So it seems the situations it would be applicable to are uncommon and unrealistic, perhaps similar to the firework example above.

The examples in the author's conclusion were particularly helpful.
 
I don't get how the two are even comparable. What if Eve had carried out the calculation comparing the probability of the population reaching 4 vs 60 billion?
 
I don't get how the two are even comparable. What if Eve had carried out the calculation comparing the probability of the population reaching 4 vs 60 billion?

Do you think the chances were better that there would have been 60 billion humans than 4?

I mean, given that the 4 aren't being shielded by an omnipotent being, of course.
 
Cool, can you let me know too, so I know whether I loled? At this stage, the probability that I loled is 0.5

I need the desired p-value before I can make a definitive conclusion.
 
The sort of conclusion I'm taking from this is that the principle uses only probabilistic factors to make its estimate, rather than factors that would account for the developmental history of the phenomenon and, by extension, its future. So it seems the situations it would be applicable to are uncommon and unrealistic, perhaps similar to the firework example above.

The examples in the author's conclusion were particularly helpful.

If you walked in on an old lady's birthday and found out she was turning 102 you wouldn't say, "Golly gee, this isn't any special time so there's a 50% chance she's more than half done with her life and a 50% chance she isn't."

Here's something from another paper by the same author:

What about the focus of Gott’s Nature article, the longevity of the human species? A species’s survival depends on its ability to adapt to short- and long-term environmental changes produced by other species in its ecosystem and by climatological and geological processes. The adaptations are made possible by existing genetic variability in the gene pool and by random mutation. How homo sapiens fits into this picture is a complicated question, certainly not amenable to a universal statistical rule. As Ferris [1] puts it, “ . . . in my experience most people either think we’re going to hell in a handbasket or assume that we’re going to be around for a very long time.” Both views are a reflection of advancing technology. The first comes from alarm at technology’s increasing impact—changes might be so rapid that we (and certainly other) species could not adapt. The second comes from a belief that technology can save us—by controlling the environment or by making possible remarkable adaptations such as escaping our earthly environment or changing our genetic constitution.

Gott dismisses all such thinking as the illusions of those who don’t appreciate the power of the Copernican principle. He contends that everything relevant to assessing our future prospects is contained in the statement that we are not at a special time. This article shows that the Copernican principle is irrelevant to considerations of the longevity of our species.


Source

As he states, "It should only be used when the phenomenon in question has no identifiable time scales." Most people would contend that there are easily identifiable time scales related to species longevity.
 
I am going to write a breakdown of the joke that a real statistician would probably cringe at, and it will destroy the humor in it, so I will spoiler tag it. But check out the link I posted with an example on mammograms/breast cancer; it is a pretty good visual explanation.
So the frequentist does your typical hypothesis test: the machine has an error rate of 1/36, which is smaller than the typical p value used in science, 1/20. The p value is the threshold at which you would reject your null hypothesis, in this case that the world hasn't ended. Therefore, given all of these facts, he rejects the null and concludes there is support for the hypothesis that the world has ended. The frequentist basically takes what the machine says at face value, or in a way assumes that getting an answer which happens less than 5% of the time is probably right.

What the Bayesian guy isn't sharing is that, in testing the same hypothesis, he sets it up with the prior probability: the probability, before any information comes out of this machine, that the world will end. Since supernovas are extremely rare, the prior is very low, and the information from the machine, taken into account in this framework, doesn't greatly increase the odds that the world has ended. Therefore, he has very strong support that the world hasn't ended, and makes the 50 dollar bet. Clearly this answer is intuitive: if a machine tells you a super rare event has occurred, and the machine lies only somewhat rarely, then it is probably giving you one of those somewhat rare lies, despite the fact that they are rare, because the event it is reporting on is even more rare.

Then, you have the absurdity of people who exist in the world comparing the odds that the world has ended.
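If it helps, the Bayesian's arithmetic can be sketched in a few lines of Python. The 1/36 error rate is the comic's (the detector lies only when both dice come up six); the prior here is a made-up number just for illustration:

```python
# Sketch of the Bayesian side of the joke.
prior = 1e-8                 # assumed prior probability the sun just went nova
p_lie = 1 / 36               # probability the detector reports a false "yes"

# P(nova | detector says "yes"), by Bayes' rule
posterior = (1 - p_lie) * prior / ((1 - p_lie) * prior + p_lie * (1 - prior))
print(posterior)             # still tiny: the rare lie beats the much rarer nova
```

Even though a false "yes" is rare, a real nova is vastly rarer, so the posterior stays tiny, which is exactly why the Bayesian takes the bet.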
 
The premise that there is a "maximum" number of human beings that could ever live is unfounded.

If mankind dies off, the total number of human beings existent at the moment of extinction and during the whole history of mankind will be a random number. There is no point at which God comes along and says "well, that's enough!" and ends the sequence.

Also, there is no mathematical relation between the number of humans that have already lived and the theoretical number of humans that will have lived throughout history once humanity is gone, because this second number does not exist right now. One cannot have a ratio between a known number and an unknown number; it could be anything.

I'm not sure I follow this objection either. I don't think the maximum number is completely random, but rather an increasing (or however fluctuating) number resulting from interactions between the previous members of the set and their environment. Although the factors may be too complex to integrate in a meaningful way, it's not simply a shot in the dark.

The fact that the maximum number is subject to variation is the whole point of the exercise. Given that it could be whatever number within a ridiculous range, we consider the probability of being where we are if it's going to end up on the higher end or the lower end. If there are going to be 100 trillion humans, it's strange (or improbable, rather) that we find ourselves amongst the first 60 billion-ish. More so than if there were only ever going to be 100 billion.

But that may be putting the cart before the horse. Ugh, headache.
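For what it's worth, the comparison in that paragraph is just a ratio, and can be sketched like this (the 60-billion rank and both totals are the thread's hypothetical figures, not real demographic estimates):

```python
# If your birth rank is uniform over everyone who will ever live,
# how likely is a rank inside the first 60 billion?
rank = 60e9                       # humans born so far (the thread's rough figure)

for total in (100e9, 100e12):     # 100 billion vs 100 trillion ever
    p = min(rank / total, 1.0)    # P(birth rank <= 60 billion | total)
    print(f"total {total:.0e}: P = {p:g}")
```

Under the 100-trillion hypothesis, a rank inside the first 60 billion comes out a thousand times less probable than under the 100-billion one, which is the "strange" feeling the post describes.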
 
I am going to write a breakdown of the joke that a real statistician would probably cringe at, and it will destroy the humor in it, so I will spoiler tag it. But check out the link I posted with an example on mammograms/breast cancer; it is a pretty good visual explanation.
So the frequentist does your typical hypothesis test: the machine has an error rate of 1/36, which is smaller than the typical p value used in science, 1/20. The p value is the threshold at which you would reject your null hypothesis, in this case that the world hasn't ended. Therefore, given all of these facts, he rejects the null and concludes there is support for the hypothesis that the world has ended. The frequentist basically takes what the machine says at face value, or in a way assumes that getting an answer which happens less than 5% of the time is probably right.

What the Bayesian guy isn't sharing is that, in testing the same hypothesis, he sets it up with the prior probability: the probability, before any information comes out of this machine, that the world will end. Since supernovas are extremely rare, the prior is very low, and the information from the machine, taken into account in this framework, doesn't greatly increase the odds that the world has ended. Therefore, he has very strong support that the world hasn't ended, and makes the 50 dollar bet. Clearly this answer is intuitive: if a machine tells you a super rare event has occurred, and the machine lies only somewhat rarely, then it is probably giving you one of those somewhat rare lies, despite the fact that they are rare, because the event it is reporting on is even more rare.

Then, you have the absurdity of people who exist in the world comparing the odds that the world has ended.

Give me your lunch money
 
Do you think the chances were better that there would have been 60 billion humans than 4?

I mean, given that the 4 aren't being shielded by an omnipotent being, of course.

This is a really good question! I don't think we know well enough to say regarding humans - this is one of the parameters in the Drake equation - how often do advanced beings develop civilization?

But for other animals, say rabbits: you put two on an island that meets all their needs and then place odds on how many there will be in total. The Doomsday approach holds that you might end up with very few, but from what we know of ecology, the rabbits will most likely reproduce exponentially until they reach carrying capacity, barring catastrophe early on.

So, a small population of humans living in a good area - what are the odds? A lot of other branches of humans died out (which may have been due to interactions with the successful ones), so it might not be that good. But I think going back to the original post, it may be naive to calculate without taking into account some of these factors.
 
This is a really good question! I don't think we know well enough to say regarding humans - this is one of the parameters in the Drake equation - how often do advanced beings develop civilization?

But for other animals, say rabbits: you put two on an island that meets all their needs and then place odds on how many there will be in total. The Doomsday approach holds that you might end up with very few, but from what we know of ecology, the rabbits will most likely reproduce exponentially until they reach carrying capacity, barring catastrophe early on.

So, a small population of humans living in a good area - what are the odds? A lot of other branches of humans died out (which may have been due to interactions with the successful ones), so it might not be that good. But I think going back to the original post, it may be naive to calculate without taking into account some of these factors.

Fuck, when I wrote that I swear to God I was thinking the exact same thing about the rabbits. Two on a perfect island. Unreal :icon_chee

Anyways, I think the point of this example is similar to Qu's article and the old lady. We can't just make these kinds of predictions solely based on probability. The rabbits' environment and genetic make-up might actually totally favor their procreation; there being more rabbits isn't enough to bring the odds down. Similarly, just because the lady has already lived 102 years, we can't go ahead and say she's likely to live that much longer, given the natural lifespan of a human being.

Cool. Thanks everyone.
 
If you walked in on an old lady's birthday and found out she was turning 102 you wouldn't say, "Golly gee, this isn't any special time so there's a 50% chance she's more than half done with her life and a 50% chance she isn't."

Here's something from another paper by the same author:

What about the focus of Gott’s Nature article, the longevity of the human species? A species’s survival depends on its ability to adapt to short- and long-term environmental changes produced by other species in its ecosystem and by climatological and geological processes. The adaptations are made possible by existing genetic variability in the gene pool and by random mutation. How homo sapiens fits into this picture is a complicated question, certainly not amenable to a universal statistical rule. As Ferris [1] puts it, “ . . . in my experience most people either think we’re going to hell in a handbasket or assume that we’re going to be around for a very long time.” Both views are a reflection of advancing technology. The first comes from alarm at technology’s increasing impact—changes might be so rapid that we (and certainly other) species could not adapt. The second comes from a belief that technology can save us—by controlling the environment or by making possible remarkable adaptations such as escaping our earthly environment or changing our genetic constitution.

Gott dismisses all such thinking as the illusions of those who don’t appreciate the power of the Copernican principle. He contends that everything relevant to assessing our future prospects is contained in the statement that we are not at a special time. This article shows that the Copernican principle is irrelevant to considerations of the longevity of our species.


Source

I'm having trouble seeing how the logic holds.

Let's use this example.

You are infected with a bacteria; let's say n = 10. After a few hours n = 100, and we say this is the random number. Now, using this logic, it seems more probable that the bacteria will only reach n = 1000 and not n = 10^6.

I just don't see how the logic holds, barring outside knowledge of the person's immune system or antibiotic usage.

Every species starts off with a small n, and that means the probability will always favor n staying small. But since there are already 7 billion (?) people, doesn't that negate the logic of using this probability premise?

Edit* actually I should say 60 billion people, I was thinking of people alive currently.
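To put numbers on the objection, here's the doomsday-style bound applied to the bacteria count. This is a sketch assuming the usual form of the rule: if the current count n is a uniformly random fraction of the eventual total N, then with confidence c, N < n / (1 - c):

```python
# Doomsday-style upper bound on the eventual bacteria total,
# given a current count of n = 100.
n = 100
for c in (0.5, 0.95):
    n_max = n / (1 - c)          # upper bound on the eventual total N
    print(f"{c:.0%} confidence: N < {n_max:.0f}")
```

So the rule caps the infection at a couple of thousand cells with 95% confidence, while an untreated infection can blow straight past that, which is the point of the example.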
 
Is that true, though?

Think of the rabbits.

I was saying that in the context of the premise presented, it is true. I'm arguing against it, because from what we see in nature (e.g. rabbits, bacteria), most populations end up growing exponentially barring outside factors, which we are not supposed to take into consideration.
 
Is that true, though?

Think of the rabbits.

Well, if you were picking random microbes that enter your body, you would find a whole lot that fizzle out after just a few, and a rare few that bloom and colonize.

Again, it seems funny having this convo while intentionally ignoring all of the factors. Those factors would typically go into your prior probability when setting these odds.
 
If you walked in on an old lady's birthday and found out she was turning 102 you wouldn't say, "Golly gee, this isn't any special time so there's a 50% chance she's more than half done with her life and a 50% chance she isn't."

Here's something from another paper by the same author:

What about the focus of Gott
 
I've gotta get out of here before the probability of my head exploding gets too much higher.

Here's a link to Bostrom's primer, mentioned in what I originally quoted: http://www.anthropic-principle.com/?q=anthropic_principle/doomsday_argument

Well, if you were picking random microbes that enter your body, you would find a whole lot that fizzle out after just a few, and a rare few that bloom and colonize.

Again, it seems funny having this convo while intentionally ignoring all of the factors. Those factors would typically go into your prior probability when setting these odds.

I don't know if it's intentionally ignoring so much as realizing that there are so many factors to take into account that the range of possibilities is enormous. Whether or not THAT is true is a whole other thing, but fuck, we made it past 2012 didn't we? Carrying capacities are for n00b species.

At least this perspective is refreshingly optimistic :p
 
The reason I think it's kind of bullshit is that, no matter what number your sample gives you, the total n is assumed to be closer to that given lower bound than further away from it, which can be wildly untrue and can vary a lot.

So in a situation like the doomsday scenario, whether the human population is 10, 1000, 1 billion, etc., that probabilistic frequency approach always makes it out that we are closer to doomsday than far from it, solely because the denominator for doomsoon is smaller than the one for doomlate. That seems like a pretty bad way of estimating a sample size or a total population, IMO.
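That asymmetry can be made explicit: with a uniform birth rank, the likelihood of observing rank r given an eventual total N is 1/N, so the smaller total always fits better no matter what r is. The totals below are just the usual hypotheticals:

```python
# Likelihood of the observed birth rank under two hypothetical totals.
r = 60e9
doomsoon, doomlate = 100e9, 100e12          # hypothetical eventual totals

lik_soon = 1 / doomsoon if r <= doomsoon else 0.0
lik_late = 1 / doomlate if r <= doomlate else 0.0
print(lik_soon / lik_late)                  # 1000x in favor of doomsoon
```

Any rank consistent with both totals gives the same ratio, so the "doomsoon" hypothesis wins on likelihood regardless of where we actually sit.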
 
Skimmed the OP's article. Reminds me of Richard Gott's theory that we have roughly between 5,100 and 7.8 million years before extinction, based on the premise that we only occupy one planet.

The survival rate of humanity increases exponentially with each new self-sufficient outpost outside of Earth.

Been a few years since I've looked at the math, interesting topic though.
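For reference, the delta-t arithmetic behind Gott's figure is easy to check, assuming the common ~200,000-year age for Homo sapiens: the 95% rule puts the future duration between t/39 and 39t.

```python
# Gott's 95% delta-t interval for the future lifetime of the species,
# given a past lifetime of roughly 200,000 years.
t_past = 200_000
low, high = t_past / 39, t_past * 39
print(f"{low:,.0f} to {high:,.0f} years")
```

That works out to roughly 5,100 to 7.8 million years, the interval Gott reported.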
 