"In terms of this particular problem? Because I told you so."
"Why did I tell you so? Because I was trying to pump the intuition that - assuming 0.1, 0.2, 0.4 equally prior-probable within the 'less than 0.5' bucket - that bucket was then less 'likely' to yield Text Queen Queen Text Text than the 0.5 bucket, even though 0.4 was in that bucket. I was trying to pump that intuition by showing that, if the whole biased bucket and the whole unbiased bucket started out equally probable, then, after seeing Text Queen Queen Text Text, we'd think the unbiased bucket had become more probable and the biased bucket less probable. So what we saw must have a lower 'likelihood' in the biased bucket that starts out with 0.1, 0.2, and 0.4 having equal prior-probabilities."
"I mean, we could argue about how that would go in real life, instead of a thought experiment. You could say that all four hypotheses are equally simple to describe out loud and should therefore be around equally probable. I could then counterargue that if we're talking about an actual coin, then in real life, most coins are probably pretty close to being fair random-number generators when spun - though I ought to actually verify that here, before I bet anything important on it. So it should actually take hundreds of observations before we start believing the coin is 40% biased towards Queen, I would argue; five coinflips is nowhere near enough. Therefore, I'd conclude, 'the coin is biased 40% Queen' is a lot less likely than 'the coin is an unbiased random generator'."
"But it would be better if arguments like that didn't have to appear in our 'published-experimental-reports'. Which is one angle towards 'grokking' an underlying central reason why 'published-experimental-reports' ought to summarize likelihoods for hypotheses that are more like 'observational likelihood if this coin has 0.4 Queen propensity', and less like 'observational likelihood if this coin has a less than 50% Queen propensity'."
"If you just summarize for the reader 'What is the likelihood of my data, in the world where the coin comes up 40% Queen? The world of 50% Queen? The world of 10% Queen?' then you don't have to confront the question of whether 40% Queen was 1/3 as prior-probable or equally prior-probable with 50% Queen."
"Oh, and, uh, to make it explicit:"
Examples of Baseline terms for 'prior', 'likelihood', 'posterior':
'Prior' coin is 40%-Queens:
P( Q=0.4 ) = 1/6
'Prior' coin is 50%-Queens:
P( Q=0.5 ) = 1/2
'Likelihood' of TQQTT, if coin is 40%-Queens:
P( TQQTT ◁ Q=0.4 ) = 0.03456
'Likelihood' of TQQTT, if coin is 50%-Queens (fair):
P( TQQTT ◁ Q=0.5 ) = 0.03125
'Posterior' coin is 40%-Queens, after seeing TQQTT:
P( Q=0.4 ◁ TQQTT ) = 3456 / (729 + 2048 + 3456 + 3125*3) = 3456/15608 ~ 0.221, which is more than 1/5 and less than 1/4
'Prior' coin is <50%-Queens:
P( Q<0.5 ) = P( Q=0.1 ) + P( Q=0.2 ) + P( Q=0.4 ) = 1/6 + 1/6 + 1/6 = 1/2
'Likelihood' of TQQTT if <50% Q:
P( TQQTT ◁ Q<0.5 ) = 1/3 * .00729 + 1/3 * .02048 + 1/3 * .03456 ~ 0.0208
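(The arithmetic in the example above can be checked mechanically. The sketch below is not part of the lesson's Baseline notation; it just recomputes the same priors, likelihoods, and posterior in Python, under the setup already stated: four Queen-propensity hypotheses 0.1, 0.2, 0.4, 0.5 with priors 1/6, 1/6, 1/6, 1/2, and the observed sequence TQQTT.)

```python
from fractions import Fraction

# Priors over the coin's Queen-propensity, as given in the text:
# 0.1, 0.2, 0.4 each at 1/6; the fair coin (0.5) at 1/2.
priors = {
    Fraction(1, 10): Fraction(1, 6),
    Fraction(2, 10): Fraction(1, 6),
    Fraction(4, 10): Fraction(1, 6),
    Fraction(5, 10): Fraction(1, 2),
}

def likelihood(q, observed="TQQTT"):
    """P(observed sequence | coin's Queen-propensity is q)."""
    p = Fraction(1)
    for flip in observed:
        p *= q if flip == "Q" else (1 - q)
    return p

# Likelihoods quoted in the example.
lik_04 = likelihood(Fraction(4, 10))   # 0.4^2 * 0.6^3 = 0.03456
lik_05 = likelihood(Fraction(5, 10))   # 0.5^5       = 0.03125

# Posterior P(Q=0.4 | TQQTT) by Bayes' rule: prior * likelihood,
# normalized by the total probability of seeing TQQTT.
evidence = sum(pr * likelihood(q) for q, pr in priors.items())
post_04 = priors[Fraction(4, 10)] * lik_04 / evidence

# Likelihood under the lumped hypothesis Q < 0.5: the three
# sub-hypotheses weighted equally within the bucket.
bucket = [Fraction(1, 10), Fraction(2, 10), Fraction(4, 10)]
lik_lt_half = sum(likelihood(q) for q in bucket) / 3

print(float(lik_04))        # 0.03456
print(float(lik_05))        # 0.03125
print(float(post_04))       # ~0.221, between 1/5 and 1/4
print(float(lik_lt_half))   # ~0.0208
```

Using exact fractions makes the normalizing denominator come out as the same 729 + 2048 + 3456 + 3125*3 (over a common power of ten) that appears in the example.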
"How am I doing, Korva Tallandria?"