This post has the following content warnings:
happy days increasing the universe-conquering capabilities of Lawful Evil
Permalink

...is what Keltham would like to say, but yeah, realistically, he knows they're going to stress out about it.  He apologizes for that part.  Last time the researchers ended up not knowing exactly what they were being tested on, and that... was not really a better way of doing things, he doesn't think.

There'll be a predictable two-hour recovery period after lunch, before they start the afternoon's research.

 

The whiteboard now reads:

Wielding the Law of Succession, that after N lefts and M rights, the odds of LEFT vs. RIGHT are (N+1) to (M+1):

- Figure out how you would analyze whether Source 1 and Source 2 of LEFTs and RIGHTs had the same propensity p for left-vs-right, or different propensities p1 and p2.  Assume it's not obvious at a glance.

- Say what an experimentalist out of Civilization would say or not say in their summary of their data.  (It won't be, "I decided these two sources were the same" or "I'm betting these two sources are different.")

- Talk, if you have the time, about a general symptom that might appear - not just in analyses using the Rule of Succession - when two different experimenters end up performing an importantly different experiment-on-reality and end up with two datasets generated by importantly different sources, as might turn up when someone out of Civilization was analyzing their datasets or experimental summaries side-by-side.

- You are welcome to say more, if you have more to say.

Permalink

PL-timestamp:  Day 25 (21) / Morning Test of Doom

Permalink

A stupid thing you could try - it might not work, but it seems like a place to start - is take the first set of outcomes as your starting guess for the second set of outcomes, and then do the math they learned already to adjust the first set for each additional trial, and then you'd end up with your new estimate. ...would that produce the same result if you did it in either direction? It should. So the difference in estimates from each individual estimate to the combined estimate should be equal, if they were tests that ran the same length, which wasn't specified......

 

 

...is this really better than eternal torture....

Permalink
Willa's Story

Someone reading Willa's thoughts might be surprised to find out that she isn't afraid, she's excited. Willa's good at tests, the tricky 2-4-6 thing from earlier this week notwithstanding.

And this one isn't like that, it's pretty well defined. If anything's going to get in her way of being special, it won't be a test like this. She keeps her boosts back for now, she can at least see how to start already...

If the first data set has N Lefts and M Rights, then that would inform a set of relative Posteriors:

p^N*(1-p)^M

For some probability p that each ball goes Left.

Then you would want to normalize those to sum to 1 if possible, you could use the result Keltham already got for that because it's a pretty hard problem on its own; she looks in her notes and finds that it's M!*N!/(M+N+1)!. So for the first data set the Hypotheses of left probability have their own probabilities:

P1(p) = [p^N*(1-p)^M]/[M!*N!/(M+N+1)!]

Similarly, if we say the second data set has L lefts and R rights, we can deduce the same probability function there:

P2(p) = [p^L*(1-p)^R]/[R!*L!/(R+L+1)!]

Willa feels sure that the right solution involves using these two functions together somehow. She could instead just use the first one and then feed the other data into it, but why should one data set be treated differently than the other? The situation is symmetric, so the Law should treat them symmetrically.

So what's the important thing here? It's tempting to say that the functions want to be the same, but that's wrong. Both functions could have no data at all and both would be the same flat line, and she'd know nothing at all about whether they were the same.

To KNOW FOR SURE the ps are the same, or different, you would have to know p exactly. The function would have to be a lone spike of probability somewhere in each case. Like if P1 was a spike at p=0.5, and P2 was a spike at p=0.6, then you have a 0% chance they're the same. Similarly, if they're both spikes at p=0.5, there's a 100% chance (as long as the model is right in the first place...)

But how do you actually process the P functions to get those 100% or 0% or anything in between chances? Well, what do you always do with probabilities? You multiply them. So then... you'd have to multiply these functions together. With an integral?

Willa's been feverishly learning calculus since she saw it used to such powerful effect, she thinks you'd integrate the two of them multiplied together, and it would be a definite integral. You'd be integrating over the little probability, the p, so dp. The bounds would have to be from p=0 to p=1, the set of possible outcomes.

Would you have to normalize? Scary, she isn't sure. She'll think more about that part later.

INT([p^N*(1-p)^M]/[M!*N!/(M+N+1)!]*[p^L*(1-p)^R]/[R!*L!/(R+L+1)!],p,0,1)

What a mess. But a lot of this doesn't even have p in it, it can seamlessly escape the integral. Goodbye denominators! To Abaddon with you!

INT([p^N*(1-p)^M]*[p^L*(1-p)^R],p,0,1)/[M!*N!*R!*L!/(M+N+1)!(R+L+1)!]

Rearrange a little bit, combine like terms...

INT(p^(N+L)*(1-p)^(M+R),p,0,1)/[M!*N!*R!*L!/(M+N+1)!(R+L+1)!]

And now it's the same form Keltham had again! She can use exactly what she used to normalize them earlier! She doesn't even have to do any work! She feels like cackling. She doesn't of course, but she'll remember this later and cackle.

[(N+L)!(M+R)!/(N+L+M+R+1)!] / [M!*N!*R!*L!/(M+N+1)!(R+L+1)!]

Clean this up...

[(N+L)!(M+R)!(M+N+1)!(R+L+1)!] / [(N+L+M+R+1)!*M!*N!*R!*L!]

So this could be an answer. But calm down. Don't get overexcited.

OK, that's impossible.

But she still has to be a little careful here. First, was she supposed to normalize? She thinks how the flat probability distributions would've looked. P(p) = 1 from 0 to 1 would be the flat one, that normalizes properly, she knows. If she integrated that times itself, obviously she'd get 1. Concerning. So she has some normalizing work to do still then.

What if it was P(p) = 2 from 0 to 0.5, for each? Then she'd get 2, from 2^2=4, then 4*0.5=2. Makes sense, they're twice as much like each other. So the idea is at least relatively correct, good, good. Think of her answer as a Rating for now, rather than true probability.

Rating = [(N+L)!(M+R)!(M+N+1)!(R+L+1)!] / [(N+L+M+R+1)!*M!*N!*R!*L!]
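Willa's closed form is checkable by brute arithmetic. As a sketch (Python for concreteness; the function names are invented for illustration, not anything from the text), the factorial formula can be compared against directly integrating the product of the two normalized posteriors:

```python
from math import factorial

def rating(N, M, L, R):
    """Willa's closed form: likelihood ratio of 'Same' vs 'Not Same'."""
    num = (factorial(N + L) * factorial(M + R)
           * factorial(M + N + 1) * factorial(R + L + 1))
    den = (factorial(N + L + M + R + 1)
           * factorial(M) * factorial(N) * factorial(R) * factorial(L))
    return num / den

def rating_numeric(N, M, L, R, steps=200_000):
    """The same quantity by midpoint-rule integration of P1(p)*P2(p)."""
    z1 = factorial(M) * factorial(N) / factorial(M + N + 1)  # normalizer of P1
    z2 = factorial(R) * factorial(L) / factorial(R + L + 1)  # normalizer of P2
    total = 0.0
    for i in range(steps):
        p = (i + 0.5) / steps
        total += (p**N * (1 - p)**M / z1) * (p**L * (1 - p)**R / z2)
    return total / steps

print(rating(0, 0, 0, 0))                     # two flat no-data posteriors: 1.0
print(round(rating(4, 1, 4, 1), 4),
      round(rating_numeric(4, 1, 4, 1), 4))   # identical data sets: both 1.8182
```

The two routes agree, which is some comfort that the denominators really did escape the integral unharmed.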

It occurs to her now that if you assume each true probability can be anything between 0 and 1, the chances they line up exactly should be 0. In a way, it's nonsense to say they can be "the same", at least when working in this framework.

But they can still be nearer together or farther apart. Maybe what she's looking for is the expected difference in probability, or something like that. The half-flat ones were twice as good. And clearly, the half-flat ones are twice as near together. So distance apart is inversely dependent on rating, almost surely.

What's the average distance apart for rating 1 then? That's the key to all this, she can work from that to get everything. But there's something tricky here, she feels a tinge of suspicion.

Owl's Wisdom.

And she realizes she's at least a little wrong. The 0.5 and 0.6 spikes she thought about before would have rating 0, and they're very definitely 0.1 apart. Darn. Is this the end of the road for the two-function method? But this sort of thing would have to happen no matter how she does it, wouldn't it? If she's working from an initial prior that the true p1 can be anything between 0 and 1, and the true p2 can be as well, then the odds of them ever being the same must always be zero.

She's so tempted to ask Keltham what the heck this problem is even supposed to be then, but she forces herself not to. This must be part of the test.

So let's think about this a little more, in a new and strange direction. Imagine she had a prior not about p1 or p2 individually, but about them being the same. If her prior was 1/2 say, that's sorta like saying p1 or p2 can each be one of two values with equal chance, and they might line up or might not. And 1/3 chance of being the same would be three different values, and so on.

So maybe the chance p1=p2 is something you have to get both from the prior probability "Q" that p1=p2 and from the data itself. Maybe her rating is useful after all? Let's think of some cases.

0.5 spike and 0.6 spike. Rating zero. Chance they're the same, always zero. Full-flat and Full-flat. Rating 1. Chance they're the same? Well, the full-flats are like having no data at all, which means the chance must remain Q, the prior you started with. Half-flat and Half-flat. Rating 2. Well, you definitely don't multiply, since Q might be more than 0.5, and 2Q would then be more than 1. Bad. But the chance is definitely more than Q. 0.5 spike and 0.5 spike. Rating... rating infinity. Has to be, infinitely squished together so infinitely big rating. Probability has to be 1.

So what sorta function looks like that? Multiplying is dumb, what about an exponent? But Q is less than 1 and big Rating is good, so maybe...

Updated Probability of Same = Q^(1/Rating) ???

It's a wild guess, but it gets points for being a simple guess, at least. So this would mean half-flats with rating 2 would give you SQRT(Q). 0.5 upgrading to 0.707. It seems plausible? It's at least something to use as a backup plan, it's not a terrible try.

Can she work it out from first principles now that she has a better idea what she's doing? Or maybe find that it's wrong and see something better? Owl's Wisdom runs out, and she decides to cool off for a little, do some sanity checks on her Rating to make sure it even makes sense. It gets better and better as both cluster to the same side, and worse and worse as they cluster to opposite sides. OK, good.

She's pretty sure now that the problem needs a Q. The way it's framed doesn't make any sense without it. If there were buckets, you could make guesses about bucket priors and it might be doable without a Q, but there are no buckets, and buckets are mean and nasty anyway. They went over that. And if you take a totally random 0 to 1 as the prior for both, then the answer to the question is just zero, and it's boring. You need a Q.

But how do you go from Q to anything useful? As necessary as it is, it's kind of an obnoxious object. She thinks about it hard, doesn't get anywhere, and then decides it's time for Fox's Cunning.

Let's go back to those half-flat functions: 2 from p=0 to p=0.5, 0 elsewhere. Imagine I'm given Q=0.5, and that function for each data set. What do I conclude?

It's difficult because the probability weight of the functions together is sort of fundamentally a line-shaped thing, and the functions apart is an area-shaped thing. But she knows it isn't an infinite update in favor of them being not the same, that'd be silly. The two full-flats make for no update at all. Maybe she can use that? With the full flats, the Rating is 1, and Q isn't updated at all. That means Same and Not-Same had the same likelihood there. For the half flats, the Rating is 2, so in a sense the likelihood of Same has doubled. The likelihood of Not-Same... maybe that can't really change? The total probability area can't really be affected by the little probability line.

So imagine a Rating of 2 is a 2:1 update. That feels right, in a comforting way. Her ratings are just likelihoods, basically. So...

Updated Probability = 0.5*2/(0.5*2+0.5) = 2/3

Great, that makes sense. Or generally...

Updated Probability of Same = Q*Rating/(Q*Rating+(1-Q)*1)

Yes, yes. She's going to register a second instance of cackling to be saved for later now. So in summary...

Prior Probability of Same = Q
Set 1: N Lefts, M Rights
Set 2: L Lefts, R Rights

                (N+L)!(M+R)!(M+N+1)!(R+L+1)!
Rating = -------------------------------------------- = Likelihood Ratio of 'Same' vs 'Not Same'
                      (N+L+M+R+1)!M!N!R!L!

Updated Probability of Same = Q*Rating/(Q*Rating+1-Q)

In Civilization, they wouldn't even give a prior Q as a guess in most cases, just the "Rating", aka the Likelihood ratio. But she figures it's good to be very clear about how one should handle their Q, if they had one and wanted to do something with it.

Do her old sanity checks still work? Rating of zero gives update to zero. Rating of infinity gives update to one. Rating of one still gives "update" to Q. And this is more like the real language of probability than her first guess, it's actually justified, at least sort of. She thinks that for now this is as good as she can get with the main part of the problem.
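Her final recipe is small enough to run end to end — a sketch (Python for concreteness; the function name is invented for illustration):

```python
def updated_probability_same(Q, rating_value):
    # Bayes' rule: prior odds Q : (1-Q), multiplied by the likelihood
    # ratio 'rating_value' for Same vs Not-Same, then renormalized.
    return Q * rating_value / (Q * rating_value + (1 - Q))

print(updated_probability_same(0.5, 2))   # the half-flat case: 2/3
print(updated_probability_same(0.5, 1))   # flat vs flat: stays at the prior, 0.5
print(updated_probability_same(0.5, 0))   # spikes at different p: 0.0
```

The three prints are exactly Willa's sanity checks: Rating zero drives the answer to zero, Rating one leaves Q alone, and larger Ratings push toward certainty of Same.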

But this is so so so important. She gets her second Fox's Cunning from staff, and then spends the first minute of it looking over everything she's done again, just in case. Nothing else catches her eye, so she starts thinking about the rest in truth now, writing paragraphs interspersed with notes after all the equations:

"Now imagine that the p you're looking for really is the same, and you have some mediocre Prior Q of that to start with. Let's say p=0.5. But one experiment is leaning right a little, and one left a little, so their true probabilities are p1=0.45 and p2=0.55."

"With small amounts of data in both experiments, the probability functions won't be very sharp. They'll just be soft hills, taller in the middle, and the Rating you get by combining them will still be higher than 1, and update Q in the correct direction, towards 'Same'. But as you take more and more data, the functions get sharper and sharper, like her imaginary spikes. Eventually you'll have spike-like functions at 0.45 and 0.55, and they'll barely even intersect! The rating would be terrible, much less than 1, and update Q in the wrong direction, towards 'Not Same.'"

"So at first, if the experiments just had a little data, comparing them would suggest the right result. But as they got more and more, the problem would get magnified, and it would eventually start suggesting the wrong result instead. It almost seems like a paradox: more data should be making things better, but it's making things worse."

"It's because even if the data is real data, the probability functions from it are sort of lies, if you think they're referencing the true behavior and not just the data from the experiment. p1 and p2 aren't really quite the same thing, they're both just nearby shadows of the same p. So really, the Law isn't lying to you. It's saying p1 and p2 are probably different, and they are! But that's just about the experiments, not about p itself."

"So when you look at an experiment, you should maybe limit how spiky you let your probability function get for the true p. You need to force it to be at least as wide as the inherent errors in the experiment are big. Exactly how wide you force it to be, and how to do that with Law instead of with handwaving, is probably the subject of another problem."

It seems like a big difficult problem, and she's fresh out of Foxes and Owls. So that's that with that for now.

But as lunch approaches and she looks her paper over, she feels confident. Maybe some of this is wrong, it's possible, but she doesn't think it could be wrong enough to sink her.

Permalink
Korva's Story

Korva misbudgeted her time last night; her requested books about the history of wizard education had come through, and she, idiot that she is, had spent time reading them, hoping that she could make up the five hours it would probably take her to fully understand the Rule of Succession early tomorrow. If not for the instruction to prep specific spells, she might have tried to pull an all-nighter understanding the math (which she did do earlier this week, before their ostensible day off). But even though she can't actually prep Fox's Cunning or Owl's Wisdom, she didn't want to be the only person who couldn't prep anything, couldn't even prestidigitate her name tag. She managed to cram a little work in during breakfast, after spending more effort than usual prepping her spells, but if anything it made her feel more confused.

She should have realized that this was going to be a test day. Stupid.

She has her notes, but there's no time to do the necessary work to fully understand them. She could try copying the exact math that Pilar and Asmodia did, without understanding which parts are different in this problem, but Keltham will see through that immediately, and - honestly she's not sure she can do the math, actually, she looks at it and tries to read the problem and there's just - nothing, no understanding left in her brain. She tries copying the literal symbols that she wrote down in her notes, in case that jogs something loose, but it doesn't seem to; she keeps staring at it without comprehending.

She's going to die.

No, worse than die; she's going to be rendered completely useless for the entirety of her existence. Maybe she should take some comfort in the fact that this cruelty to her must be making Keltham more evil than he already is, but she doesn't think that's the sort of contribution to evil that gets you a duchy, or a spot in the library of oaths, or a few more years of mortal existence. She hates Keltham, violently and passionately, for stealing her perfectly good life from her and buying her like a slave, without so much as checking beforehand whether he would want to use her or cast her aside like so much refuse. He is able to do so, and so he did, and it doesn't really make any sense to be angry about people using the power they have to hurt you however they want, but she hates him for it anyway. She wants to punch him, or maybe cut his head off, or maybe rip his eyes out, or maybe just scream at all of the gods who are so obsessed with him that he's a self-important teenage idiot, that he's nothing special among his people and only very coincidentally of any interest to theirs, that he's a cleric (a fourth circle cleric!) of Abadar, who is nonetheless too careless to realize that he owns his trading partners like chattel, and that any talk of paying them is farcical, and that in general he deserves to be in the same stupid, awful position as the rest of them, and not only doesn't have the basic residual humanity to feel embarrassed about his wholly undeserved position, he doesn't even notice that it's undeserved. 
Hasn't even noticed that he's an incredibly shitty teacher, even though he's been trying to cram months of lessons into them in a matter of weeks, because he ASSUMES that adults can learn things that much faster than children, even though they CAN'T, there's NO reason to think they could, they can't do it with languages and they PROBABLY CAN'T DO IT WITH MATH EITHER, unless they are GENIUSES who are doing WAY MORE WORK THAN THEY SHOULD HAVE TO DO to make up for his SHITTY ASS TECHNIQUE.

Except that some people can. The other people in this class can.

The only person she hates more than Keltham is herself. Her hatred for herself is blinding, searing, like being shot through with a piece of the sun. She thinks that she might feel a small piece of what Asmodeus feels about her, some tiny percentage of the full weight of the agony of being human, and a not even very impressive human at that.

 

The only thing she can think to do at all is to write something. Not math; she doesn't have any math in her. She has words, ugly and shifting and imprecise and disgusting and human, but Keltham prefers wrong answers to no answers, so she will at least try to give him the stunningly wrong answer that she has.

There are two tables, each of which produces sets of RIGHT and LEFT. She wants to know whether they output the same or different frequencies of LEFTs and RIGHTs. The first thing to note is that, per previous lessons, you can't know; a Civilization experimentalist would, what - they would have multiple possible specific underlying frequencies of LEFT and RIGHT for each source, to be hypotheses, and then calculate the likelihood of getting each specific dataset in each of the worlds where a given possible frequency was the true frequency. And then they would - what - you need to compare them, but you're not supposed to use hypotheses that rely on comparisons between multiple data sets -

That also doesn't use the Rule of Succession, obviously, because she's only dimly aware of what the Rule of Succession even is. It's something to do with breaking the space of possibilities into infinite pieces, which you'd think would allow you to identify the specific hypothesis that is most likely, maybe, for each of the datasets. And then - how would you present it. The most likely frequency, plus how likely this dataset is if you have the true frequency - maybe, although that has as much to do with the size of the dataset as with whether you're right - 

And then given that information, you - you - well, you can at least say how likely it is that the data sets have the same frequency. And then you'd have to... calculate it. Somehow.

 

Her paper is wet.

 

She looks at the calculations in her notes, one last time, and her brain just slides off of them. Might as well be written in Celestial or ancient Azlanti.

She writes her stupid, probably-incoherent verbal description of how a Civilization experimentalist might structure their analysis, doesn't do any actual calculations, buries her head in her arms on her desk, and waits for someone who thinks that the garbage never cries in Taldor to kill her and replace her with someone less distraught. One of the other students will target her later anyway, even if security doesn't.

Permalink
Alexandre's Story

So the simple solution is that he lists a set of possible starting theories, doing his limited best to carve up all available regions of idea-space visible to him, then describes which sequences he would consider evidence for each theory, then describes how to compare the relative ratios; again, this is all simple things Keltham taught them. He'll also want to come up with a way to mathematically describe 'all theories have unexpectedly low predictive power'; that ought to be very simple given that Keltham already told them what it looks like...

... The obvious way to note an error is just them getting very different results. In particular, getting results that form very different curves; if it looks like slightly different concentrations along a curve when you combine them, or one of them being thrown by one odd result, those are checkable things, but if, in general, it looks like the thing they are measuring has a different shape, probably the thing they are measuring has a very different shape; if they can’t predict each other, a very plausible outcome is that they aren’t the each other they’re trying to predict, he can put that into math...

And Alexandre will quietly enjoy himself, producing not really a general solution for the problem, because given that individual people will start with their own individual beliefs, many of which they are not explicitly writing down anywhere, he does not think he can do that, but something that, given a little work to turn his neat little descriptions into pseudocode, would work quite well as a computer program for interpreting results and adjusting your priors into posteriors based on them, if any computers happened to exist in Golarion that could run it.

Permalink
Dath ilan commentary

...Alexandre's methodology would work great if Golarion had a hypercomputer that could check an infinite variety of infinitary metahypotheses, yes.

It's getting anything done with a finite amount of computation that makes SCIENCE!analysis be nontrivial.


But it sure beats not even knowing how to solve the problem using a hypercomputer, and frankly, not all of the researcher-candidates are getting anywhere near that far.

Permalink
Carissa's Story

Carissa should have put these students through more stressful things so they wouldn't be so stressed out by tests. That's Chelish education having failed to give students something to be more scared of than math -

- unless the problem is that the students are scared of Hell, and therefore not solvable that way? But they're not even sending the rejects to Hell! 

She entertains the temptation to order everyone to believe that they're not going to be executed for math inadequacy, just so that when this situation inevitably happens anyway they'll know it's their own fault for disobeying orders. Somehow, though, it doesn't seem like that would actually fix the problem.

 

Carissa has a headband that amounts to Fox's Cunning all the time, and has taken to preparing three or four Owl's Wisdoms a day; it's an unfair advantage but life, and death, and everything else, are unfair.

Keltham wants a rule for calculating a likelihood ratio between the 'same' theory and the 'not same' theory. 

If they're in fact the same, then she can add up the lefts from both experiments and the rights from both experiments and get odds of left:right of (N1 + N2 + 1):(M1 + M2 +1). If they're different, then the odds for the first experiment are (N1 + 1):(M1 + 1) and the odds for the second experiment are (N2 + 1):(M2 + 1).  

Then to get the propensity from the left:right ratio you have to do that deeply unpleasant thing with the factorials, which is going to make everything else messy. She digs it out of her notes. 

p: M!*N!/(M+N+1)!

p1: M1!N1!/(M1 + N1 + 1)!

p2: M2!N2!/(M2 + N2 + 1)!

pcombo: (M1 + M2)!(N1 + N2)!/(M1 + M2 + N1 + N2 + 1)!

 

Right, okay, so the odds ratio between the SAME and NOT-SAME theories is just pcombo/p1p2. Which looks hideous, (N1+N2)!(M1+M2)!(M1+N1+1)!(M2+N2+1)!/ (N1+N2+M1+M2+1)!M1!N1!M2!N2!, but conceptually it's not all that hideous. 
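Carissa's pcombo/p1p2 ratio can be verified with exact fractions — a sketch (Python for concreteness; the helper names are invented), using mirrored 4:1 and 1:4 data sets as a worked case:

```python
from fractions import Fraction
from math import factorial

def score(M, N):
    # Rule-of-Succession score for one data set: M!*N!/(M+N+1)!
    return Fraction(factorial(M) * factorial(N), factorial(M + N + 1))

def odds_same(N1, M1, N2, M2):
    # pcombo / (p1 * p2): the SAME vs NOT-SAME likelihood ratio
    return score(M1 + M2, N1 + N2) / (score(M1, N1) * score(M2, N2))

print(odds_same(4, 1, 1, 4))   # mirrored data: 25/77, favoring NOT-SAME
print(odds_same(4, 1, 4, 1))   # identical data: 20/11, favoring SAME
```

Conceptually not all that hideous, as she says: one score for the pooled data, divided by the two separate scores.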

 

Presumably if you'd done this experiment you'd want to report the actual data, and the counts of 'left' and 'right', and also your likelihood functions on the actual propensities, and then your likelihood function on SAME: DIFFERENT. 

She feels like the third question is getting at something particular, not just the general principle that if your two data sources are different then you wouldn't want to combine them and treat them as more observations from the same process, but she's not actually sure what in particular. Presumably you could just notice that the dense bits of your likelihood function for each look very different from the dense bits of your likelihood function for the combination, and that if you ever notice that, it's a bad sign.  - are they supposed to formalize that? It seems surprisingly hard to formalize, but she does have almost two hours to go....

 

Some of the new students look miserable. To be fair she's not at all sure that without a headband she wouldn't be just as miserable. She's going to - not try to formalize that, she thinks, and instead fold up her notes and hand them in to Keltham and then tilt her head at him invitingly and head to the library.

 

 

Permalink
A median dath ilani's Story

The median dath ilani has - by Golarion's standards - some power that Golarion knows not, or an aptitude of which it knows very little.  It isn't easily captured by the INT score that is measured by Detect Thoughts.  Even if you add in whatever "Wisdom" is measured by Detect Anxieties, there's some residual that isn't measured still.

It's not just about the training the dath ilani undergo in childhood.  No, really, it's not.  They have been doing a lot of mate-selection, and not just since there've been prediction markets about what kind of children would result.

 

If somehow you took the median dath ilani, and exposed them to only a few fragments of Law such as Keltham has already taught, a few stories about cognitive science such as Keltham has already told, if they'd had only that little true education by the time they'd reached adulthood -

- the median dath ilani would still be told of the Law of Succession, and think spontaneously, without any outside prompting, of how you could use that to guess whether two sources were the same or different.  Just look at the scores if you use the Law of Succession twice, separately, versus using the Law of Succession on both of them together.  The ratio between those scores is the likelihood ratio for different versus same.

If you get 4 LEFT and 1 RIGHT off one source, and 4 RIGHT and 1 LEFT off another source, then analyzing them separately gives you scores for each of 4!*1!/6! = 1!*4!/6! = 24/720 = obviously 1/30 just look at the factorials.  Analyzing both datasets together gives you 5!*5!/11!, which you can see intuitively is going to end up a fair bit under 1/1024, for the product being 1/2 * 1/3 * 2/4 * 2/5 * 3/6 * 3/7... instead of just 1/2^(10).  (It's actually 1/2772 if you bother to calculate - not as nightmarish as it looks, you can cancel a lot of factors.)

Point is, you've got a set of two likelihood functions for two separate datasets, versus one likelihood function on the combined dataset, where, if you started with a uniform prior on three propensity spaces, and multiplied by all those likelihood functions, the two separate functions would destroy all but 1/30 of the probability on each of the two separate spaces, and the combined likelihood function would destroy all but 1/2772 of the uniform probability on its own parameter space. 

It's not to the point where if you literally pulled a coin and flipped it 5 times twice, you'd conclude shenanigans from seeing 4 LEFT and 1 RIGHT the first time, versus 4 RIGHT and 1 LEFT the second time.  That's just a likelihood ratio of 1/900 vs. 1/1024.

But if the problem is more mysterious than that?  If you are less certain at the start that your data is coming from a single source across both cases?  Then you'd be looking at an update of more like 2772:900 ~ 3:1 that they were two separate sources, after that.  If it wasn't pretty plausible to start, it's not plausible yet now, after so little data.  If you were already pretty suspicious, you're now quite noticeably more suspicious, though.
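The arithmetic above checks out exactly — a quick sketch in Python (exact rational arithmetic; the function name is invented for illustration):

```python
from fractions import Fraction
from math import factorial

def succession_score(left, right):
    # Rule-of-Succession score for a dataset: left! * right! / (left+right+1)!
    return Fraction(factorial(left) * factorial(right),
                    factorial(left + right + 1))

separate = succession_score(4, 1) * succession_score(1, 4)  # analyzed apart
combined = succession_score(5, 5)                           # analyzed together
print(separate, combined)    # 1/900 1/2772
print(combined / separate)   # 25/77, i.e. roughly 3:1 in favor of 'different'
```

Canceling factors in the fractions, rather than computing the full factorials, is exactly why it's "not as nightmarish as it looks."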

 

The median dath ilani - even given only such education as Keltham has already provided - fewer hints than that, even - would spontaneously generalize the principle of taking alarm if two experiments seemed to have nonoverlapping likelihood functions.

Suppose the likelihood functions are over a simple hypothesis space - such that likelihood functions form clouds naturally visible in that space - such that there is a natural way to informally see boundaries around narrow subvolumes of the clouds.

You can't say it in an absolute way, apart from some prior and arbitrary concept of how to draw boundaries like that and divide up the space.  You could always throw some random points into an otherwise compact cloud and say that you thought they should be in there.

But informally, it's natural enough to see 376 LEFTs and 624 RIGHTs, look at the likelihood function P(data|propensity=p) = p^376*(1-p)^624, and say, "That cloud has 90% of its density between p=35% and p=40%."

If you widen to the amount between p=30% and p=45%, that's 99.9999% of the likelihood density.

And then let's say that you run a different experiment, and it turns up 602 LEFTs and 398 RIGHTs.

Informally - for there is no way to say it formally, without introducing an arbitrary note of subjectivity; we are looking for cues that the data gives us to look outside our hypothesis space, and there is a limit to how much you can ever formalize that, without invoking enough Law to create a mortal from scratch - informally, you look at that and say "No way in superheated toilet paper are those the same two data sources".  The overlap of the two clouds in likelihood-space is virtually zero.  They have each eliminated practically all of the probability from any hypothesis that could non-stupidly account for the other, and those two different stories cannot exist in the same world.

You do not "combine the data from the two experiments to estimate the parameter".  No, not even in sane worlds where all you do to combine the analyses from two experiments is just multiply their two likelihood functions together.  Sure, you can multiply p^376*(1-p)^624 by p^602*(1-p)^398 and get p^978*(1-p)^1022, but you obviously shouldn't do that.  The first trial concentrates 99.9999% of its survivability between propensities of 0.30 and 0.45, and the second concentrates 99.9995% of its survivability between 0.53 and 0.67.  There's no overlap between what the two datasets say are the livable regions of the parameter space.
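One informal way to put a number on "virtually zero overlap" is to integrate the pointwise minimum of the two normalized likelihoods: 1.0 for identical clouds, near 0.0 for clouds that barely touch. This particular overlap measure is my own choice of illustration, not anything the text specifies.

```python
import math

def beta_log_pdf(p, a, b):
    # log density of a Beta(a, b) distribution at p
    return ((a - 1) * math.log(p) + (b - 1) * math.log(1 - p)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def cloud_overlap(a1, b1, a2, b2, steps=20000):
    # integral over p of min(density1, density2); 1.0 means the two
    # likelihood clouds coincide, ~0.0 means they barely touch
    total = 0.0
    dx = 1.0 / steps
    for i in range(steps):
        p = (i + 0.5) * dx
        d1 = math.exp(beta_log_pdf(p, a1, b1))
        d2 = math.exp(beta_log_pdf(p, a2, b2))
        total += min(d1, d2) * dx
    return total

# experiment 1: 376 LEFTs / 624 RIGHTs -> Beta(377, 625)
# experiment 2: 602 LEFTs / 398 RIGHTs -> Beta(603, 399)
ov = cloud_overlap(377, 625, 603, 399)
```

For these two datasets the overlap is vanishingly small, which is the numerical face of "those two different stories cannot exist in the same world".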

These two experiments were not conducted with the same world feeding them their answers, though, ultimately, they were conducted inside the same greater Reality.  Something is wrong in one place or both.

It's just one of those things that, like, is incredibly important to doing real Science! and jumps directly out at you, once you start thinking about experimental reports in terms of likelihoods, which, in fact, the median dath ilani will do even if nobody explicitly tells them so and even if their entire world tries to tell them otherwise.

Permalink
Asmodia's Story

Did Asmodia always have that strange thing in her, that the median dath ilani possesses?  Did she have it before Otolmens touched her?  Did she have it before her hour-long epiphany wearing an artifact headband?

Whatever that strange quality is, Asmodia has it now, somehow, outside of dath ilan.

And if she is not a dath ilani yet - not any kind of ilani, for being so untrained - she is now recognizably an untrained ilani.

Point being, Asmodia already thought through all of Keltham's questions earlier, during the Law of Succession lecture.  He'd certainly hinted heavily enough that he was going to ask questions in those directions.  Practically spelled it out, even.

Asmodia writes down all the obvious stuff.  Thinks a bit more.

Writes down some speculations about, well, maybe you could also update the Rule of Succession on the first data-bunch, ask how well it expected to score, and then further-update the Rule on the second data-bunch to see if it scored around that well; and also do the reverse; and if in either case the further-updated Rule scored much more poorly than it expected to score after the first update, that might indicate a problem.
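Asmodia's proposed check could be sketched like this, assuming the data arrive as sequences of 'L'/'R' characters; the function name and the comparison datasets below are hypothetical, picked only to show the shape of the idea.

```python
import math

def rule_of_succession_score(first, second):
    # Update the Rule of Succession on `first`, then sequentially score
    # its predictions on `second`, continuing to update as we go.
    # A score far below what the first dataset led us to expect is
    # Asmodia's warning sign that the two sources may differ.
    n_l = sum(1 for x in first if x == 'L')
    n_r = len(first) - n_l
    log_score = 0.0
    for x in second:
        p_left = (n_l + 1) / (n_l + n_r + 2)  # Law of Succession
        log_score += math.log(p_left if x == 'L' else 1 - p_left)
        if x == 'L':
            n_l += 1
        else:
            n_r += 1
    return log_score

# a second bunch with a similar LEFT-propensity scores much better
# than one with a flipped propensity
same = rule_of_succession_score('LLLR' * 50, 'LLLR' * 50)
diff = rule_of_succession_score('LLLR' * 50, 'RRRL' * 50)
```

As she notes, running the update in either order pools to the same combined counts, so the symmetry of the check is built in.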

But it's not really a three-hour question?  Is she missing something?  If she's not missing something, Asmodia is sort of worried about how much Keltham is softballing the new researchers, here.  This seems like really basic and obvious Law of Probability.  What's the catch?

She spends a while staring at her paper, but when Carissa Sevar hands in her own version, Asmodia sort of quietly sighs to herself, and hands in hers as Keltham walks out with Sevar.

Permalink

...Security notifies the Chosen of Asmodeus that Korva Tallandria seems to have broken down in tears in the middle of the classroom, thankfully after the Chosen left with Keltham.  Is this happening inside alterCheliax?  If it happened in alterCheliax they'd - probably tell Keltham, right?  And if it's not happening in alterCheliax, everybody in the classroom needs to be told it hasn't.

Permalink

- huh. 

 

It doesn't seem inherently implausible that a student in Taldor would start crying. Some of the kidnapped Taldane students cry. However, Keltham will probably go talk to Tallandria, if he learns this, and that seems like a situation where a slip-up might happen. Presumably she's being mind-read? Is she in any state to talk with Keltham if he shows up wanting to talk to her?

Permalink

Tallandria is currently experiencing an amount of self-hatred that seems appropriate for somebody as pathetic as herself?  She's currently imagining being shot through with pieces of the sun until only the tiny fraction of her that isn't pathetic remains, and feeling out whether she could learn to be okay with being a paving stone.

Given that she's crying in the first place, in front of the other students, and that this is incredibly incredibly bad for her self-interest, Security is concerned that Tallandria wouldn't be able to run Bluff on Keltham even though her soul depends on it.

Dominate Person?  Toss her to Subirachs for attempted reforging?  They also haven't tried flogging Tallandria until her morale improves, which would be the first resort anywhere sensible if they're not pretending to be fucking Taldor.

Permalink

How about they don't tell Keltham about this, and tell the other students it's not the case in alter-Cheliax, and pull her out an hour before time's up to see if she can be put back into condition to Bluff Keltham.

 

Also they should loop in Asmodia, who she thinks liked Tallandria.

Permalink

 

...should Asmodia possibly be going back in and telling Tallandria that she's at least got a place working on Asmodia's Wall, if Keltham doesn't want her?  Tallandria was pretty helpful about analyzing Nobility Equilibria after she was called in.

Asmodia is mostly worried because she doesn't know if she'd be sabotaging Tallandria even further by telling her that, which is why Asmodia hasn't said it already.  Does Sevar know how people work?

Permalink

Well, it seems hard to sabotage the girl any harder, plus even if she pulls it together and figures out the math they probably don't want to allow someone with composure problems to have Keltham-contact anyway, so Asmodia might as well go in and say it.

Permalink

Right.  Asmodia will go write down "If Keltham doesn't hire you, I'm planning to assign you to work on my Wall" on a piece of paper and hand that to Korva.  She's not sure if Korva is in shape to hear a Message that's spoken once and can't be reread.

Permalink

Korva lifts her head up and reads the paper.

 

It makes sense. She's shit at math, evidently, but she dimly remembers that she also gave herself even odds of failing out last night, and she was planning to perfectly calmly angle for a support position of some kind - maybe writing out Keltham's lessons in a form that somewhat dimmer people have a prayer of understanding, once she's put more hours in and figured out what's going on with them herself. But the shadows that real things cast on other pieces of reality - she's good at that. And she'll learn things, which is the sort of thing that might make her soul non-worthless again.

The biggest immediate problem now, then, is the crying. Which - well, from a wall perspective it's not a problem? She knows, on about two seconds' thought, exactly why her alter-self is crying, and she could probably bluff Keltham dead, about this particular thing. Which is convenient, because she still feels much nearer dead than she'd like.

Permalink

...Security will relay to the Chosen that Tallandria seems much more together immediately after reading Asmodia's note.

(Possibly the newbies still don't, like, particularly believe Sevar about some things?  Maybe it's time to start using mind-control on them, Security does not see how Sevar could have made herself any clearer.)

Tallandria is thinking particularly that she could bluff Keltham about why she's crying.  Tallandria's thoughts went immediately to particular stories that seem reasonable to Security, and about how her general composure will seem more consistent later with her having had a breakdown now.  Possibly relevant if they want to tell Keltham now that Tallandria's broken down, and... tell him one less lie, he supposes?

(This Security has never had an easy time intuiting the exact outlines of Sevar's 'minimize lies' policy, and is just throwing everything to her.)

Permalink

Conspiracies are probably more likely to cover up crying breakdowns which they could easily cover up than to not do that, so it earns them some points, if Tallandria's bluff is really good enough, especially in light of the fact that the Taldor girls do sometimes have crying breakdowns. 

 

Hit her with a splendor, just in case, and then tell Keltham.

 

...she should ask Subirachs about the idea of using Suggestion to make all her underlings believe she's not going to have them permanently reduced to rubble for no reason. It's very appealing but maybe there's some reason this isn't standard which isn't just the cost of the Suggestions.

Permalink

Among the few things that dath ilan has in common with Cheliax, is that it takes a fair amount to make an adult dath ilani break down in tears in public.  Math tests won't do it, even math tests with their future careers at stake.

When Keltham allowed himself to cry about how a hundred and fifty million people weren't going to die for his having entered their world, he sent Carissa out of the room first.  Not from thinking that he was doing something bad that must be hidden - Keltham did not try to hide from her afterwards that it had happened.  But their relationship being so new, he wasn't sure how the alien might emotionally brain-update about him if she witnessed it directly; and that anxiety played into a different social convention out of dath ilan, that he and Carissa hadn't yet had the conversation that you have before you cry in front of someone, or show other signs of great emotional distress.

Dath ilani do not break down and weep in front of strangers.  It's not that they hide weakness.  It's that you don't lay that on someone who hasn't agreed to do emotional labor about you.

If it happens anyways, something is really wrong, wrong on a scale that transcends bothering all of the people around you.

Permalink

Keltham will listen to the Security notification with a deer-in-glarelamp expression, and then turn to ask Carissa for advice.  In particular, should they possibly go get Maillol.  Maillol seems like he might know how to handle this situation.

Sure, as Founder, this is his responsibility, possibly even his fault.  It doesn't mean Keltham actually knows how to handle this situation worth a noodle.

Permalink

" - I don't know that you have to handle it? Maillol has experience commanding teenagers, so I guess he'd be the person to ask if you want to ask somebody, but the emotional wellbeing of every person here isn't your responsibility. In dath ilani companies if an employee of yours started crying would that be your job to solve?"

Permalink

He's the Founder.  Everything is his job to, at the very least, make sure somebody is solving.  If there's no one person whose job it is, then it's Keltham's job.

Permalink

" - all right. Let's go ask Maillol. Emotional support and guidance is normally a priest sort of job anyway."
