Tanya von Degurechaff in Wrath of the Righteous
Total: 811
Permalink

This is actually excellent progress!  Usually abstract moral calculation and theology aren't the right way to talk people through atonement, but Agnew thinks she is making progress!

"It is possible to try to rationally calculate what courses of action minimize suffering or maximize some form of fulfillment.  That isn't how most people relate to Goodness, but some relate to it that way.  Could you imagine some threshold or floor of harm and suffering to other people, past which you would trade off self-interest and deference to social consensus in order to prevent it?  As an absurd extreme example... if you landed on some planet with a system even more bizarre and arbitrary than Pharasma's, and for the price of getting dust in your eyes and incurring mild social disapproval you could save someone from an eternity of torment, would you?"

Okay, maybe jumping to an extreme example is a mistake, especially if Von Degurechaff says no, but it is one of her default reflexes when contemplating theology abstractly...

Permalink

Tanya really isn't used to abstruse moral philosophy thought experiments! If she tries to answer what she would actually do in such a scenario, she - has absolutely no idea, because it's so completely underspecified! She's aware you're not supposed to reject thought experiments by saying they're omitting all the really important considerations, because they do that on purpose to highlight some isolated element, but she really isn't sure what it is in this case?

"So - my instinct is to say yes. Because I'm the person I am, raised in the society I was, and I inevitably keep following most of its rules even when I find myself elsewhere unless I spend time and effort to - naturalize, so to speak. So yes, of course I'd save someone from torture at a negligible price even if it's not the locally done thing. Just as I wouldn't, say, murder someone for their property just because it's locally unremarkable. But I'm not sure that's the answer you're looking for." 

Permalink

"That actually is the sort of thing I'm trying to get at.  So... can you imagine a threshold of trade-off such that you would have either avoided ending up in the situation you had with Arene or made very different decisions once in that situation?"

Permalink

"I'm still not clear if what matters in the Arene example is me personally not being responsible or actually changing the outcome. But assuming it's the former, then - if I had been following different rules to begin with, including a different trade-off between harming others and following laws and orders - the tradeoff is, itself, a rule - then I would have made sure not to end up in that situation, as we discussed earlier. Because it would have been predictable that I would likely need to harm civilians during my military duties, even if the specific case could not be foreseen. If the tradeoff extended to not harming enemies, then I would have done my best to avoid combat service and I believe I would have succeeded."

"I don't think it's consistent to predictably end up in Arene, or another situation like it, and then to refuse to follow orders. There's no reason to end up doing that, no matter what my goals or rules or tradeoffs are. Not everything in life can be foreseen and planned for, but everyone knows military operations sometimes end up harming civilians."

"I don't understand what you mean by asking if I can imagine a threshold that would make it so. I can imagine any possible threshold, from total disregard for civilian casualties to complete pacifism, even if I'm not sure how to justify some of them. That I can imagine things doesn't seem to be material."

Permalink

"Your actions would matter even if they ultimately didn't help Arene as a whole.  I suppose I should try to explain deontological moral frameworks at some point if you want to understand why.  For purposes of hitting Lawful Neutral, we need to find a threshold that would definitely change your actions at Arene, or at least somewhat leading up to it.  And, I suppose, for purposes of being a threshold you are willing to commit to (even with the possibility of ending up outside Pharasma's system), we would want to figure out the minimum such threshold that still sufficiently changes your actions at Arene or in analogous situations, and leading up to such situations."

Permalink

This finally feels like something Tanya can engage with: a proposed decision rule which might end up leading to absurd conclusions, but is at least something more than nebulous 'good'!

"If being good means helping others at cost to oneself, even absent any rational motivations, then - I'm still not sure how to operationalize this. Does it matter which people I help, or is it more like a quota of people helped? Do the standard considerations apply -" the tongues-spell doesn't want to translate 'utility function' - "uh, of effective costs to myself rather than absolute costs, for example measuring monetary costs as a percentage of my wealth with time-discounting applied, and not as some fixed amount - I assume it has to be something like that, because there's enough suffering in the world that at any fixed rate of exchange you'd spend all your wealth alleviating it no matter how wealthy you are, but that leaves many degrees of freedom and I don't know what specific model (*) you use. How do you measure other people's suffering or well-being in order to know how much to pay? Everyone is incentivized to claim their suffering is greatest if that brings them greater rewards; do you measure the harm and hold all recipients equal, or - never mind, obviously you'd do it with thought reading. Am I supposed to invest in seeking out cases where I'd then be obligated to help? To what degree am I liable for not going out of my way to find them? ...I have many more questions, but I assume you have a more detailed framework and I should stop asking and let you explain."

Is it sometimes appropriate to conclude that a given case is someone else's responsibility, such that Tanya needn't help, or does she have to help everyone equally, ignoring how much they're already being helped by other people? Is she allowed and/or obligated to convince or incentivize other people to help, additionally or in her stead? This is too many questions, and Tanya needs a textbook.

(*) Tanya really wishes she was speaking the kind of language where 'CRRA intertemporal utility' is just three words, but Tongues has its limits and for that matter so does 1930-era Germanian.

Permalink

"Well, there isn't so much one definitive detailed framework as many volumes of philosophical and theological speculation and research, and many different frameworks people have developed over time.  We kind of know empirically what sorts of frameworks are enough to almost always get people into Heaven or Axis.  For your purposes we could find a framework close to how you already reason and act, to make it easier for you to adopt.  Hmmm... there might be something in an Abadaran text on exactly how to stay Lawful Neutral and out of Lawful Evil in various situations?  I don't know of exactly the right book off the top of my head, but you are well resourced enough that we could pay for someone to go to Absalom and search for a book meeting that specification, and I'm almost certain something along those lines should exist."

Permalink

Bast it all. Of course it couldn't be easy.

...no, actually, it literally couldn't. Torture afterlives are obviously meant as incentives; the claim (as far as the locals know...) that aliens want to torture humans for their own unrelated reasons, and not at all to affect human behavior, doesn't pass the laugh test. They wouldn't fulfill their function if it were clear and well-known how to avoid the torture, since everyone would just do that; nobody is so stupid that they wouldn't try to avoid infinite torture. (Unless the god happens to decide that inescapable human urges like coveting your neighbor are mortal sins. Being X tried that line on her, but as far as she's concerned it's at about the same level of moral sophistication as saying everyone is born damned because of a creation myth. The only rational response to being told you're damned regardless is the same as to being told the penalty for being late is the same as for coveting. In other words, if the gods want to torture her she can't really stop them, but she's not going to dignify that with labels like 'morals' or 'philosophy'.)

The gods might keep their rules a moving target that needs to be empirically rediscovered every time significant new technology is invented (how's that for discouraging progress?). Or it could be the outcome of a sufficiently complex system that deliberately avoids clear rules and allows each case to be argued. It doesn't matter how exactly it works, because it's clear that the gods running the show are themselves incentivized to keep people guessing and occasionally surprise them with unfavorable judgements. After all, the threat of Hell is only useful if you can actually make threats.

Or they could be doing whatever they want in every individual case with no consistency whatsoever, but Tanya chooses to remain optimistic. Or rather, she has no choice but to do so, just like everyone else kept in this cage-world. It's not surprising there's no sure-fire way to avoid being judged evil, but many contradictory frameworks each requiring many abstruse volumes of moral philosophy to master and accessible only to the well-resourced who can travel to a distant city - that much she has no trouble believing the gods allow. 

Of course, a story that starts with 'and then she journeyed far in search of the wisdom of foreign sages to avoid Hell' obviously ends in disappointment, either the foreign sages know no better than the local ones or you are enlightened and realize you were asking the wrong question all along, but Tanya is a rational man who does not use story tropes to predict the real world.

Permalink

Tanya has more questions!

Do these different 'frameworks' lead to materially different decisions in practice, or are they more like different models of a complex reality which notice when a different approach would be objectively better instead of being dogmatic? For example, do practitioners sometimes say 'the theory is clearly wrong here because I have an objective way of judging at least some results'? Is the main difference between them that different people are better at utilizing different frameworks or that different moral questions are better addressed by different approaches? (Does that mean she'll have to learn several?) How do they interact with the idea she should follow Lastwall's rules of engagement? (Are those rules written in a specific framework?)

How does choosing a framework affect her ability to later choose a different one (or reinterpret it or something)? How is this meant to interact with her committing to things?

Also, she'd like to point out she doesn't have a goal of being judged lawful neutral rather than lawful good, let alone of maintaining the minimal standard for lawful neutral with no safety buffer. It's just that atoning to lawful neutral was considered the most expedient short-term solution. 

Permalink

Philosophies derived from Iomedae's and Aroden's teachings focus on noticing when a different approach would be objectively better (along one or more objective metrics).  Abadaran philosophies typically notice when things would at least be financially better.  Occasionally you get an Abadaran trying to quantify non-monetary things in monetary terms... with mixed results, but Lastwall finds the better part of that philosophizing pretty useful, actually!  Sarenite and Shelynite philosophies... are a bit less practical or objectively quantifiable.  They tend to focus more on core philosophical principles, sometimes ignoring the pragmatic answer or not having a way of objectively comparing answers.  Irorian philosophies... Agnew is less sure about those; her impression is that they are a mix of a few core principles focused on self-reliance and a few objective standards they measure things against.

The philosophical underpinnings of Lastwall's rules of engagement are derived from Iomedae's teachings and methods.  They are refined and simplified enough to be something an ordinary soldier could reliably follow; they aren't complete philosophical frameworks, at least with how Agnew thinks of "complete" and "frameworks".

Von Degurechaff could shift from a more general framework to a more specific one that is a subset of it without problems.  She could specifically indicate in advance, during the atonement, conditions under which she would make a bigger switch in framework.  That might make the case for Iomedae certifying a bit trickier, but if both frameworks are sufficiently Lawful Neutral it might be fine.  If Von Degurechaff makes a commitment to Iomedae to follow a certain framework, with no exceptions planned for in advance, and then doesn't follow it, she would probably lose her Law.  That would mean a lot of people would in general trust her less.  For a practical example, she would probably get worse loan rates from the Abadarans if she wasn't Lawful than if she was.  If she deviates from the framework in an Evil direction, she might end up Evil again.  Atonements usually don't go through a second time if someone doesn't maintain their alignment, so she will have used up the easiest path she has to a non-Evil alignment.

Lawful Neutral is generally an easier alignment to maintain than Lawful Good, so yes, it continues to be the most expedient short-term solution.  The most expedient mid-term to long-term solution is probably instead to just donate a lot of money and volunteer a lot of service to Good causes.  The amount of money Terendelev owes Tanya might actually be enough that donating a large portion of it would by itself push Tanya up to Lawful Neutral.  The Abadarans in Osirion have a long-term project to try to quantify how much money it takes to balance out certain Evil acts, so they could try a guess...  But linear adding of Good and Evil to get a balance is a crude approximation that likely fails in several key ways, so if Von Degurechaff does want a guess in that direction Agnew will want to qualify it very carefully.

Permalink

It sounds like an Iomedaean framework would be best, then? Since Tanya might also want to commit to following the actual Lastwall rules, at least for war operations (and sharing weapon technology), and since she needs to follow Lastwall civil laws anyway - or Mendevian ones; Anevia gave her a kind of bad impression of those, but presumably people from Lastwall manage to operate in Mendev, and it is also an Iomedaean country. Everyone here (including Jon) must be used to Iomedaean frameworks, they presumably have the relevant books and experts and don't have to get them from Absalom, they say they can consistently avoid evil using them... Is there any reason to use a framework other than those?

Learning several frameworks plus a rule for which one to use when sounds great in theory but her main constraint right now is time; is there a framework or other approach they can recommend to start with, with sufficiently broad applicability that it won't be a disaster if she ends up committing to always use it, and once she masters it (which could take a few days or much longer than that, Tanya really isn't sure what to expect here) they can reevaluate?

She already asked Jon that, if she has to go into medical stasis and the curse can't be removed, the money owed her by Terendelev, plus any loans they can get against her future potential, be used to try to keep her out of Hell. If donating the money to Good causes would work, then that's an obvious way to do it. (In fact she'd donate all the money right now if she knew for sure it would be enough.)
