Sadde and Bell in Worm

"Breaking down is not the danger. Making choices is the danger. My smartest bot is not a person, but it's not a huge gap to clear. My stuff makes choices and it doesn't have to be broken in some fixable way to do it. If the robots could all reprogram each other in the way I would do it, they'd have to be as sophisticated in figuring out what they wanted as I am, which makes the problem worse, not better."

"Oh. Hm." Pause. Think. "Why was it you can't limit them meaningfully?"

"If they don't have choices, they don't work. I limped along with a non-tinker calculator program until I figured out how to make a custom one settle for deciding whether to read the answers aloud, print them, or expect further inputs to the function and not display an answer yet at any given stage. The more it needs to do, the more choices I have to allow it. If I am going to make a bot that can fly around and tranq people, it has to actually be able to do that and I need to rely on it not wanting to tranq anybody I don't want tranqed."

"Your power is terrifying."

"My bots have all been really well behaved! I trust them! But if I died or disappeared they are currently programmed to, basically, grieve, because if they went around without me for six months I don't know what bad habits they'd pick up."

"Yes they've been well behaved, but the thought of tinker tech that makes actual choices—and furthermore needs to—failing is very scary. I mean I guess I could have noticed this before but I think I didn't have a clear enough picture of it then.

"Can't you program them to self destruct if you disappear for long enough? ...or to want to self destruct, or something."

"They might think of that on their own if they do the grieving thing long enough. They also listen to my dad, though, and I'd like him to have the use of them before they break down. And can't fix each other because that's not something I programmed in."

"Right."

"So, I might do a tree structure with me maintaining a few things that maintain many things that maintain a ton of things. But I'm probably not going to do the self-repairing swarm. Until and unless I have a longer-term sense of my main bot's stable personality."

"Yeah. It still sounds like there should be a way to cheat but your bots are way closer to being people than I'd thought."

"I will not be too surprised if the main one wakes up properly one day. I'm trying to avoid it, the world isn't fit to bring children into, but it will not floor me."

"As far as children go, it will probably be much better equipped to handle the world than most. And as far as parents go it could certainly do far far worse."

"That's why I'm not deliberately operating with nothing more than calculator apps."

"Does its code change dynamically or something?"

"I don't have any useful intuitions about how different non-sapient programs are from sapient ones. 'Waking up' sounds like something out of science fiction, but then again," and there's a miniature of the Leviathan turret in their right hand now.

"Like... you can hold a conversation with my bot. It uses my voice and my writing style when it's pretending to be me, it has one of its own when it's talking to me. It can pass the Turing test if you don't know you're administering a Turing test. It does not claim to have subjective experience and has not materialized any desires or behaviors that don't make sense in light of its initial programming and its inputs, and its memory use hasn't jumped, and it might never, but it might."

"But its faculties have expanded in personward directions?"

"It learns, I said."

"There are lots of ways it could learn in not-a-person directions but I suppose it wouldn't have been worth mentioning if that had been the case."

"It does not have the US state capitals memorized, because it knows how to use Wikipedia."

"Mmhm."

"I didn't mean to scare you. You don't need to worry about the bots."

"I'm not really scared. It's more of a... an intellectual wariness? Nothing so visceral and what I know of you, however little, kinda nullifies a lot of the terror I'd feel if it was some random tinker."
