This post has the following content warnings:
The Clow Cards continue to trouble Terry and Sadde

They do, in fact, meet again the following day after school, at the park, then at Sadde's place.

And the next day Sadde's at the park once more, waiting for Terry. She's done something with her hair, and is wearing makeup, and there's an undefinable... something about her appearance. Not as extreme as Change, for sure, but definitely a big step in the same direction.


Terry is, somewhat whimsically, sitting in the crook of a well-known, easily climbable tree, meditating. He drops out of it when the mote of light that is Sadde gets close.

"Huh, you weren't kidding that your own magic is almost as good as Change."


"I wasn't! But I still prefer Change for completeness' sake," she says. "Am I correct in presuming you're more okay with kissing me in public when I'm thus shaped?"


"Yes. Though - using the stupid immature baseball metaphor - let's stay on first in public."


"Of course. We wouldn't want to traumatize anyone."

And: kissing!


Kissing!

 


Which goes on for a while!

But eventually: "Were you meditating up there?"


"Actually, yeah. I'm still not sure how to get 'better at it' but it's getting a little easier to think while meditating and not immediately drop out."


"That's really cool. And it was kinda hot to see you dropping from up there."


He raises an eyebrow. "It's just like seven feet. Oh, and no cards seem to have woken up yet. Though I would have told you if one had, just for crystal clarification."


"It's hot because it's such a boyish thing to do and it's cute. Not to mention the brief peek under your shirt I got," she says, grinning. "Anyway, yeah, I'd expected you would've. You know, for a catastrophe of terrible proportions I was expecting more urgency."


"Shh, don't ask for trouble. Lack of trouble is great!"

"...Yeah, I get it though. It's like, waiting for the other shoe to drop."


"A bit. Or waiting for us to have access to a lot of magical power with which to take over the world."


"Who wants to take over the entire world? That seems like a lot of hassle. Maybe just take over Microsoft and General Electric and a dozen other big companies, they own like half the interesting things in the world and are in a decent position to improve it anyway."


"I totally want to take over the entire world. I have serious objections to how it's being run."


"I have no objection to taking over, like, the Congo and stopping the constant civil war, but democracy is kind of an important idea?"


"Well, I don't want to rule the world, that'd be very hard work. I... don't actually know what I'd do if I did have the world, but I'd probably try to use that power to at least prevent obvious evils like that. I'd... try to do that by creating very strong positive incentives in the direction of progress and social welfare, instead of negative ones in the opposite direction."


"That is a good attitude for a prospective ruler of the world to have. As long as you make sure to keep those good intentions, think carefully, and don't stop listening to other people, you're probably better than the alternative, at minimum."


"Yes, I at least won't fall into the obvious trap of thinking that what I think is good is in fact good for everyone, and unless the magic also gives me infinite computational resources, I'll go with a rule-utilitarian moral framework instead of trying to directly maximize social welfare."


"You may have to explain these words, I haven't actually studied philosophy and ethics as much as computer science."


She plants a peck on his lips. "Utilitarianism frames morality as the maximization of some numerical quantity that represents social welfare. Each individual is thought of as having a utility function, which maps world-states to numbers, with higher numbers for preferred world-states; how to combine individual utilities into a social welfare function is up in the air. Lacking the ability to actually calculate this directly, rule utilitarianism says you should create rules that, if enforced and followed by most people, would approximate something that maximizes social welfare."
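
An illustrative aside, not from the exchange itself: Sadde's framing can be put in symbols. The notation is assumed for the sketch only, with $\mathcal{S}$ the set of world-states, $u_i$ individual $i$'s utility function, and $f$ the deliberately unspecified aggregation rule.

$$u_i : \mathcal{S} \to \mathbb{R}, \qquad u_i(s) > u_i(s') \text{ whenever individual } i \text{ prefers } s \text{ to } s'$$

$$W(s) = f\big(u_1(s), \dots, u_n(s)\big), \qquad f \text{ left open: a sum, a minimum, something else}$$

On this sketch, rule utilitarianism amounts to choosing rules whose general adoption and enforcement approximately maximizes $W$, rather than trying to compute the maximizing world-state directly.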


"Hm, is there a kind of rule utilitarianism that accounts for imperfect power? As in, the fact that if you make these rules they might not be entirely enforceable? Or does that just mean you need to rethink your rules and enforcement strategy?"


"Yeah, pretty much that. You're supposed to think up rules and enforcement strategies that complement each other. If you don't have any political power it reduces to regular rules of good behavior."


"I'm thinking of all sorts of conundrums with this strict interpretation, but I'm not sure how to order or express them. Like, if a hundred million civilians would sleep slightly better at night when one innocent man was imprisoned as a scapegoat, is it just? Though I'm not exactly about to offer an alternative that doesn't have potentially objectionable implications."


"That's the part that falls under 'how to combine individual utilities into a social welfare function.' I have no idea how to answer that question, and as far as I know there is no consensus."


"Maybe it'd be better to stick to... fuzzy logic, I guess I'd call it? Make what seems like a decent guess, predict whether it will improve things; if yes, try it out, verify that it improved things, study the knock-on effects, and revert it if they're a net loss; repeat."

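Another illustrative aside: the guess, predict, try, verify, revert loop Sadde describes could be sketched in code. Everything below is a hypothetical toy, assuming a single number standing in for the state of the world, a made-up welfare measure, random guesses as policies, and noisy knock-on effects; none of it comes from the conversation itself.

import random

def measure_welfare(state):
    # Toy stand-in for social welfare: higher is better, peaks at 100.
    return -abs(state - 100.0)

def propose_policy():
    # "Make what seems like a decent guess": here, a small random adjustment.
    return random.uniform(-10.0, 10.0)

def enact(state, policy):
    # Enacting a policy has noisy knock-on effects the prediction cannot see.
    return state + policy + random.gauss(0.0, 3.0)

def iterate(rounds=1000):
    state = 0.0
    for _ in range(rounds):
        policy = propose_policy()
        # Predict whether it will improve things; skip the guess if not.
        if measure_welfare(state + policy) <= measure_welfare(state):
            continue
        before = measure_welfare(state)
        trial = enact(state, policy)          # try it out
        if measure_welfare(trial) >= before:  # verify it improved things
            state = trial
        # Otherwise revert, i.e. keep the old state, and repeat.
    return state

print(iterate())

The point of the toy is only the shape of the loop: predicted improvements get tried, realized outcomes get checked, and anything that turns out to be a net loss gets rolled back.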