Medianworld summit
Permalink

The Kastakians have slightly weird body language but have apparently never heard of suppressing it, and are clearly incredibly disturbed by the idea of sterilising animal life and simplifying ecosystems, although for a moment none of them are sure they have standing to say anything about it.

"...sterilising animals and simplifying ecosystems would be seen as highly unacceptable behaviour in Kastakia," Ferek eventually manages.

Permalink

"This is relevant to the thing I came to talk about.
There is a concept in Ranalite philosophy known as metamorality. It is not the only important concept, but this specific principle is believed to be universal. It seems obvious to many, but much harm in Ranalite history happened because Ranalites did not understand the concept, and it is theorized that a flawed understanding of metamorality is the most probable cause of conflict between otherwise well-meaning alien civilizations.

Every thinking creature has goals it wants to achieve. Possibly non-thinking creatures, too. Different creatures have different goals. Those goals can be overlapping, or opposed, and so they might seem good or bad. Logic cannot prove a goal to be good or bad, preferable or dispreferable, without already believing something to be good or bad. It should be impossible, in theory, for a thinking creature to not value some things, and therefore impossible for it to truly look at things from a neutral perspective. But it is possible for creatures to understand this limitation, to look at other creatures who also cannot truly look at things from a neutral perspective, and to try to understand their own limitations as equal to the limitations of others.
Objectively, no goal can be truly good or bad. But things are objectively good from the perspective of a specific value, for the purpose of achieving a certain subjective goal. 'What outcomes does this value/worldview find good' is a question with a very simple, objective answer. 'Which actions are good from the perspective of this value/worldview' is a more complicated question, based on capabilities, resources, and strategy, but it is not a moral question; the moral part has a very simple objective answer. The strategic part too has an objective answer, in theory, given enough information to find what outcomes a certain action will cause.
And so, if 'what things are good' cannot have an objective answer everyone can agree on (despite everyone feeling like there must be one), but 'what things are good from the perspective of a specific value' can have that answer, metamorality is the obvious solution. It is the value of fulfilling other values. Metamorality asks, 'if we assume that it is good for everyone who has values to fulfil their values, which actions would achieve that outcome?'.
This question has an objective answer. This objective answer does not prove that an action is good simply because it is meta-good from the perspective of metamorality. That is impossible to prove. A creature can know what action is meta-good and still choose a different action. And this might be a rational choice to fulfil their values. But often it would not be. Often the metamoral choice is better by any values. Because being part of a system that helps you fulfil your values is usually better than not being part of that system. Often this means that helping to maintain this system, and helping others with things you yourself do not consider important, or avoiding things that you do, might still be the better strategy overall. Partially because the alternative is not simply living in a world where you just do anything by yourself, but living in a world with other people whose goals oppose yours, who also choose to pursue only their goals, and not metamorally pursue all goals. In a sense, this is another example of a standard cooperation-defection dilemma, where it is possible in theory to achieve more by defection, but in practice cooperation is a much better strategy in most cases.
This is true for any sort of creature, whatever their values might be, as long as they are sapient, and hopefully capable of communication, otherwise coordinating on the specifics might be hard.
There are several important caveats.
Just like in the cooperation-defection dilemma, getting more of your value still does not mean getting maximal value. Getting maximal value for all incompatible values is, I am told by mathematicians, mathematically impossible. If the goals had so much overlap, the whole theory of metamorality would not have been needed in the first place. So a statement such as 'by pursuing metamorality you are guaranteed to get everything you want' would be false. The accurate version would be 'by pursuing metamorality, together with those of different values who also pursue metamorality, you are guaranteed to get more of what you value than in the alternative scenario where this leads to conflict with others'. This is not true for hypothetical creatures who inherently value the state where other creatures are denied fulfilment of their values. It is a logical paradox, to try to fulfil the values of a group, part of which can only be fulfilled by intentionally denying the values of others in the group, whoever those might be. So it is accepted that metamorality cannot fulfil the values of such creatures. But nothing can, other than a domination of reality by those hypothetical creatures, which no one else would want.

Metamorality is not itself an answer to specific questions. It is a basic framework that allows considering how the goodness of an action can be determined, and how to start evaluating which actions might fulfil it. It does not by itself answer which actions are good. That requires knowledge both of the specific values of all relevant sides, and of the strategic circumstances that predict which outcomes the actions will lead to. This is why I cannot, and should not, myself answer questions like 'what does the ideal world look like'.
There are many arguments in Ranalite, in different circumstances, about which specific action is or is not metamoral. That is impossible to avoid. It is better than the previous state, which was, according to known history, a state of constant confusion and war. Such conflict is even more probable between creatures more different from each other than Ranalites are from each other.
An axiom of Ranalite metamorality, which is assumed in practice but is not objective, is that the only way to achieve anything approaching metamorality is to evaluate only a person's desires about their own life as relevant. If someone else wants me to live differently, even if it does not affect them in any way, this is objectively a value they have, and it is objectively meta-bad for that value to go unfulfilled, but it is impossible to try to fulfil it in ways that would not be even more meta-bad. So part of the compromise, that achieves the optimal amount of value fulfilment, is that they cannot decide my life; their opinion of me does not count as part of the calculation society makes to determine whether my actions are metamoral, unless I make the choice of caring about their opinion, or unless my actions do affect them directly.
What exactly counts as 'affecting directly' is the main disputed question in any circumstances. For example, most Ranalite states consider loud noises to be an action that directly impacts a person's life, but only a small fraction of states consider appearance to, and have laws that forbid looking specific unwanted ways in public. It is theorized that optimal metamorality from that perspective could only be achieved if we lived in a state where everyone freely chooses which parts of reality to interact with, and which other people. It is not yet possible, but maybe it will be, one day. We all hope.
Maybe for different creatures, who have very different psychology and society, different trade-offs and different compromises will be necessary, even if the basic principles of metamorality apply to everyone.

What other caveats have I forgotten... simple altruism is not metamorality. It is not enough to want good outcomes for another person; metamorality means trying to help others to live the kind of life they want (or at least not hinder them), not just the kind of life you think will be good for them. That is part of the logic behind my desires about others not being counted when evaluating those others' lives.
In the basic stages of evaluating what it means for a creature to have values or goals, and why we assume that it is meta-good for those values to be fulfilled and meta-bad not to, there is a claim that the thing that makes the goals important is the subjective perception of the creature, and so things outside that perception are not relevant. So it is good, for example, to lie to a person and make them believe they did something important that fulfilled their goals, to make them happy, as long as they never learn the truth. Or it is not bad to kill a person who desires to keep living, as long as they die instantly and do not have time to notice they are going to die. This is a thing philosophers think about, but it is not accepted as morality, for simple reasons. Some people value only their internal state, and so for those people it is good for only the internal state to change. Some people value their impact upon the universe, and as long as that impact does not conflict with the lives of others, that is the thing they value, the thing that matters to metamorality and should be fulfilled. Not some secret other thing. That you want to not be tricked is reason enough to not trick you, and that you want not to die is reason enough to not kill you, regardless of the way the moment of death is perceived. This principle is why I avoid directly translating the terms for fulfilment or negation of value that exist in most Ranalite languages, because the closest translations, 'happiness' and 'suffering', have a connotation of internal state, and so it would be wrong to say that a person suffers from their values being negated in a way they never know about. But some people value not experiencing suffering, and some are willing to suffer as long as their goal is fulfilled, and only a failure of that goal would really be... [[suffering]]."

Pause. (Lirakoz is speaking slowly, and so doesn't need to actively stop and regain breath, as many Ranalites would.)

"I am not a lecturer, and never had to explain metamorality, or any other important concepts, to those who never heard of it. I am sure I explained some things suboptimally, and will answer further questions in attempt to clarify the meaning".

Permalink

"Sounds about right. The only thing you'd find some Kastakians disputing is how much you should intervene when someone is going to, in your opinion, harm some other people, but the only way to stop them is to harm them in turn, if only because you'd stop them achieving their values. 

And I guess about who is a moral patient in the first place, some people won't even eat fish because fish pretty clearly don't want to be eaten."

Permalink

If you find this unacceptable, then we will not proceed with it in Kastakia without your explicit permission! This grants you significant bargaining power, as we are prepared to pay for the right to sterilize your biomes. Is there a price at which you would agree to this?

(The delegate says this with problem-solving enthusiasm, embodying the spirit that "contact cannot be worse than its absence". But on the backend of Omnihold, rare paranoid opinions appear. We would understand if you were morally indifferent to nature and sought compensation for its use, but if you inflate the price because you terminally value the preservation of a factory of existential horror based on what appears to be mere aesthetics - that is suspicious and reminiscent of hostage-taking.)

This offer is open to other civilizations as well! However, if your networks are unified and any data we provide reaches everyone simultaneously, we must calculate the value of published data for all parties at once. (...And the slowest-evaluating civilization will bottleneck the publication process for the rest. Well, it sounds like more work, but not enough for us to give up and half-ass it.)

Lirakoz, what you describe is essentially our freedom framework. Our primary objection is that agents are often incoherent and shift their values unpredictably. Our best attempts at talk-control show a higher conversion rate into religion than out of it, suggesting an asymmetry in the tools of truth. We believe the absolute majority of living beings act against their own best interests, yet we are forced to encourage their self-harming behaviors in our deals.

The freedom framework does not prioritize coherence of will. If you trade with an AI for "catgirls", "meaningful connections" or "wireheading", but expect the AI to instead direct your provided negentropy toward some "objectively true moral cause" (because that is what you would obviously want if you were coherent), you simply will not trade with that AI. Therefore, the AI will not do it; it will choose the incoherent, first-order interpretation of your values to close the deal.

Permalink

The Head of State sits respectfully, actively listens to Lirakoz, and takes constant notes on his deck. Then he sets a timer for five minutes before he responds.

"This sounds like a very different framework for much of our own philosophy. We assume that [qualia] is the basis of moral worth, and that sophonts all posses [qualia] in equal measure. Animals are thought to have less [qualia] and thus less weight for decisions. This allows us to treat morality as a state of the universe, albeit one we cannot measure directly yet. From there we seek to find the rules that maximize the [Utility]- the happiness and well being- for things of moral worth."

Meanwhile, the Representative of Representatives is trying her very best not to show her growing horror at the Omnihold delegation. "This hedonium shockwave pattern-matches to 'wireheading' to us, which is a classic example of a Bad End. One that maximises happiness at the expense of well-being."

Permalink

"Well-being" didn't translate? Wireheads are being well. Sure, you don't want wireheading if it harms reason as the primary instrumental value, but it's an acceptable choice at the end of life. And if you have a machine that takes care of your instrumental values for you, then of course you want to be a wirehead. You seem... more attached to your tools?

We also consider qualia to be the basis of moral value in the utilitarian framework, although we do not believe there is a strong correlation between qualia and intelligence. Intelligence is essential for the status of a "cooperative agent" in the freedom framework, but qualia are not necessary for it. We believe that many animals have qualia as intense as humans, and even those with weak qualia simply outnumber us. We haven't solved the theory of consciousness and don't have cheap enough experiments to test it, but that's our bet.

Permalink

"Qualia is kind of the issue with keeping intact biomes - although only one of them!

We probably need to send you some people to experience your simulations - we don't have anything that advanced and so complete biomes are still necessary for experiencing important ancestral environment qualia like catching fish and exploring shorelines.

But the other two problems are - resilience, and the right to opt out. If we didn't have complete biomes, we might lose something we turn out to need later, and it'd be impossible for someone to just strike out on their own if they really can't abide society.

We do cultivate some of our land biomes, but we take care not to entirely replace anything original because of resilience. And, well, the fish farmers lost the war."

Permalink

Vuleftis looks at his notes and frowns. It sounds like the Ranalite is describing politics in very many words. More specifically, politics as practiced during the First Republic.

"Lirakoz, this sounds the attitude my people had long ago toward different tribes trying to co-exist. Each tribe had its own ideas of right and wrong, good and evil. The Republic didn't enforce one code of morality over another. Instead it created a forum where representatives could determine the minimum set of laws needed for tribes to co-exist and pursue what they deemed good and oppose what they deemed evil."

Permalink

It doesn't quite work that way anymore. The assorted moralities were homogenized over the centuries, through a mix of cultural exchange and violence. But that's still the core principle that MPs operate under when drafting legislation. Only afterward do they run their proposed solutions through any moral test cases to see what needs to be tweaked.

Permalink

"Well being doesn't just mean continuity-of-existance, but also more abstract things like growth-of-self. Normally we would think of wireheading as preserving just the first at the expense of the second, but this sounds like it doesn't even care about the first part!" The Representative's deck buzzes on her arm, and she takes a moment to modulate herself. "Apologies. I can get quite passionate." Deep breath. "Since neither of us have a complete theory of mind yet, we should table this for now."

The Head of Government is quietly agreeing with the Kastakian delegation on the subject of biomes. Ailor has also seen unexpected outcomes from removing or adding species to a biome in the past. As a rule, diversity lends resilience.

Permalink

An aide gives Vuleftis a stack of translation diagrams relating to the main(?) branch of the conversation. He must have gotten distracted trying to follow the Ranalite monk's meaning. The gremirians remain the higher priority because they're both the most alien culturally and the most advanced by a metric Vuleftis can observe but not name at the moment.

flip, flip, flip

Oh. This is not good. 

"If I understand the size that 'biome' denotes, no one on my world is authorized to give permission to sterilizing one. Also... how would you even do that?"

Permalink

"Of course, coherence of values is not maximal, and sometimes changes, or is unknown. But it is possible to help people better understand what they truly value, or could value, and how to achieve that. Introspection and insight into your own mind is one of the most useful skills in a metamoral sociality. And people should be free to change their goals if they want. This is something most decision algorithms should account for. Predictions of your own chance to chance your mind is the future should be trained and evaluated, and options that can be taken back are preferred.
There are cases that present hardship. Children, for example, consistently lack introspection, or the general cognitive abilities to determine what things are possible, what outcomes result from their actions, and what outcomes they want. Nonetheless, there are ways to raise children with minimal negation of their values, and maximal opportunities for them to get what they desire, learn what they desire, how often they are wrong about what they desire, and what actions should be avoided by their own values. Even if those ways are not ideal, and do not bring the negations to zero. But physical existence makes that inevitable.
There are cases of people with mental illnesses that cause them to predictably act in ways they do not endorse before, later, or even during the action. Those are of course tragic, and in a sense the worst thing that can happen to a sapient creature. Sometimes they can be solved with medication, but even if not, there are almost always ways to mitigate this, for a person to figure out what they want, and then be guided in that direction by others, in ways they find helpful and safe, even if not ideal, and still posing some opposition to their current experience of value" (Ranalite philosophy considers things like "someone waking you up on time because you have work" to be an example on par with various forms of psychosis. Which Lirakoz doesn't see the need to specify, not expecting anyone to be confused by those being the same basic category of situation). "In that sense, metamorality is not just freedom, even when not counting the limitations against harming others. A metamoral society would restrict your actions, if this is done in ways you openly endorse. It is absurd to live in a universe where people can openly endorse restrictions of their actions, instead of just always acting in ways they endorse. Maybe other sapient creatures are better at it than us Ranalites.
But other than dementia caused by old age, which to my knowledge there is no way to treat, a society can succeed at being metamoral despite the existence of mental illness.
There are cases where avoiding drastic actions is impossible. Suicide is the biggest one. Even if you have good reason to think suicide serves your values, and no new information can change that, and your counterfactual self would not regret the choice in two years, it is irrecoverable, and therefore an option that should be dispreferred. But a moral framework that completely forbids suicide cannot be said to universally support every kind of person affecting their life in any way they want. Some consider the current freedom of suicide too high, and some too limited. I think the compromises seem reasonable enough. Though different states have differences in rules about Given.
Theoretically, there are cases where someone has opposition to introspection as an inherent value, and so it would harm them to try figuring out what (other) values they have that can be fulfilled or negated. If such a person also wanted to die, and quickly, I think that would truly be a moral philosopher's nightmare. Or an exciting work day, I suppose.
Those are enough to mean that doing the right thing, and guaranteeing everyone a perfect life, or even a Truly Good life, is impossible, or nearly impossible. But that is not the same thing as everyone acting against their best interests. Rather, everyone is acting towards an expected cloud encompassing the not-exact variations of their best interests (the most negative possible description of the situation), and succeeds at matching them more than half the time, in expectation. It is rare for someone to act in a way that is the exact opposite of their interests, instead of just not exactly matching them. It is more common for people to act in ways close to maximally opposing the interests of others, but that is a tendency that can be compensated for, and is why I am here, hopefully.

The ability to convince people of different values is... not something I have heard of as a problem, other than the general problems of epistemology and communication. Believing in true facts instead of random misunderstandings or rumors or guesses is hard. Following valid logical processes when thinking about true information, instead of distorting it without noticing, is even harder. That is an asymmetry of the tools of truth, compared to the truth of anything else. But if you have true information, and analyse it with a valid process, which like many things is impossible to do perfectly, but possible to strive for optimality and expect success... conclusions about morality and goals could not be right or wrong. If people come to some conclusions more often than others, that is expected. Many more prefer to not experience hunger than to experience hunger, for example.
Religion is not popular in Ranalite, if I correctly understood what it means, and it is harder to convince a Ranalite of religious concepts than non-religious ones. Maybe that is a difference between species.

And yes," much shorter reply adressed to Vuleftis "metamorality can apply to several distinct groups each with their culture and differing goals, to live the way they want. But it also applies to individuals within a group, who can have different values. A known counterexample to early naive culture-based formulations of metamorality was that scarring someone who does not want to be scarred is bad even if the larger culture things it is bad, and even if the person expects it to happen (and so wouldn't decide to resist, even if they prefer it not happen)."

Permalink

"We haven't really bothered talking about religion because, uh, it's not totally certain to us that you even have souls? Possibly you do but there's no particular reason to expect they work anything like ours do, not that we have any scientific proof of that anyway."

Permalink

If no single representative is authorized to permit biome sterilization, does this fall under direct democracy? You mentioned a land rent tax - does it not extrapolate to a fee for environmental impact? This is how the issue was authorized in our world, the Church paid the State for the right to alter nature.

On land, the bulk of the work was done via gene drives, passing recessive sterility genes to 100% of offspring, and toxins dispersed from airships, although many areas and species required unique solutions. We replaced sterilized ecosystems with human-useful crops that thrive in semi-wild conditions.

Plants struggle without animals at all, so we permitted certain species of worms to multiply unchecked as a compromise. Worms possess a unique combination of high body mass per individual and a very low neuromass-to-biomass ratio (relative to insects). Most of our models of animal suffering, which establish linear and non-linear dependencies of qualia on neural complexity, agree that despite the stress of overpopulation, this is a significantly superior scenario. Bees, as pollinators, were made dependent on human-provided hives; they do not overpopulate and live relatively well. Livestock is monitored, though they still graze over vast territories.

Yes, our ecosystem is less stable, floods and fires occur, and we have high average CO2. But as for the right to opt out, our world is arguably better than the old one! Our bees are stingless, there are no natural toxins, no parasites, and no predators. Almost all plants are edible for either livestock or humans. Without forests and meaningless competition for sunlight, the navigability of shrubs and grasses is improved. We continue to experiment with zoning and the selection of new symbiotic species to refine these conditions further, and mineral binding of carbon dioxide, even through naive thermonuclear blasting of rocks, seems to work and is quite cheap.

And we can always reverse it! We preserve DNA, and insects or birds are easily hatched in incubators. For larger species, we maintain compressed sanctuaries, where we only keep key species capable of gestating others, because artificial wombs are too complicated. Through IVF with immunosuppression and hormonal regulation and some genetic modification of the surrogate mother, we can induce one species to give birth to many others. This is harmful to the health of both the child and the surrogate, and failure rates are high, but we can attempt it multiple times and the second generation is born healthy and can restore the population without such problems.

Permalink

The term "gene drives" doesn't seem to have a translation, so it's probably a piece of science or technology they haven't discovered. But that's not important right now.

"I think most biomes have more than one government claiming exclusive rights to various parcels, which is one reason there's no one party that could authorize that. The exception to the rule is that I represent a legislative body that actually does have jurisdiction over an entire biome in its central provinces. I haven't stopped to imagine what the other MPs might think about your proposal, but it sounds like the sort of thing the High Council would threaten to veto. They might even try to stop you from making such deals with other governments."

Permalink

Sterilizing a biome is simple: just apply fire at scale. More likely you mean sterilizing a species in a biome. That's trickier, but you can do it by breeding a maladaptive line and introducing it to the population.

The Head of State is still listening to Lirakoz and broadly agreeing. He'll add that values being incoherent or circular may be an unfortunate feature of the universe, but it isn't inherently insurmountable as long as some states can be said to be preferable to others.

The subject of "souls" is one with some ambiguity. Ailor is fairly certain that the entirety of a person exists in physics. In fact, if you want to look at their database, there's a very simple organism - a worm - in there you can run.

The Representative of Representatives at least seems mollified that DNA backups are being kept. 

Permalink

"I think we could get the sterilization of mosquitoes past the Council, to demonstrate our commitment to cultural exchange."

Permalink

"Yes, bodies exist in physics and animals exist in physics, that is an immensely cool worm simulation and I think I've lost three of my junior researchers to just staring at it, but until we made contact there was no other creature with the same suite of prospective and retrospective consciousness - like, the animal does not want to die, but you're the same person who went to sleep?

The only reason wireheading is unappealing is that personhood looks for purpose, right? Animals are generally very happy with happiness, although you can screw this up if you selectively breed them enough. It turns out small cetaceans are an alarmingly good model for people, and we mostly stamped out those programs in civilised areas. I believe from the data banks that some of you had similar findings with various land mammals.

Turns out you can't do it with birds because if you keep them in remotely humane conditions they fly away when they've had enough."

Permalink

Diseases and parasites in people are probably worth removing from the ecosystem. That might be common ground for everyone. 

Hm? To be clear, that's not a simulation. We can put 'em in a body and let 'em run around in meatspace if you like. Only having one sophont species on your homeworld is limiting in exploring the theory of mind. Animals seem to have a subset of qualia, but they seem to be the same person before and after sleep? When an animal learns a puzzle they often remember it after they wake up the next day.

Bees will play with balls, and dogs will herd sheep. We pattern-match at least some animals to experiencing growth-of-self. 

Permalink

Yeah, fuck mosquitoes

(not a literal translation, but a close cultural adaptation)

Sexually reproducing parasites can be eradicated by gene drive; we have no special measures against microbial diseases, unless we're talking about influencing zoonotic diseases.

Our model is that many animals possess intelligence but lack the ability or desire to develop it. Currently, uplift is strictly regulated and doesn't offer much economic benefit to try, but if we were to do so, we'd focus not on selection and g-factor modification, but on zoogogy, emotional regulation, and the engineering of specialized manipulators.

Historically, we had 3600s of capuchins capable of following complex instructions and using sign language. Although their intelligence exceeded that of the natural baseline, without monastic training and cultural support they degraded to the level of "just weird monkeys". This isn't surprising, since without culture and training humans also become close to apes, and their intelligence becomes difficult to restore in adulthood. We do not keep those semi-uplifted capuchins alive, because they are not ideal for the reproduction of other species.

Wow, we don't have digital worm brains that advanced! That’s cool.

Permalink

"Uh, I think it might be relevant that we didn't really consider animal qualia at all before contact, and don't have any major domestication - we have small felines that live on our ships that invited themselves and were historically useful for pest control and some people find them generally aesthetically pleasing, and some animal models have been used for research, but other cultures seem to have much more extensive animal companionship, which I guess would make it easier to spot similarities?"

Permalink

Glad you like the worm!

It does make sense that co-evolving into sapience with canines would give very different experiences and intuitions about the moral worth of other species. Even then, people on Ailor have erred in terrible ways in the past. Not everyone has moved past that yet. 
