...very unseasonable rains can be seen in the distance from the shores of Acapulco.
The Brockton Bay Protectorate starts organizing volunteers to be transported or teleported in batches to it. Heroes, rogues, villains, anyone's assistance is appreciated. A rendezvous point is set for non-Protectorate capes wishing to volunteer, and Protectorate capes are informed of the situation via communication devices.
"Breaking down is not the danger. Making choices is the danger. My smartest bot is not a person, but it's not a huge gap to clear. My stuff makes choices and it doesn't have to be broken in some fixable way to do it. If the robots could all reprogram each other in the way I would do it, they'd have to be as sophisticated in figuring out what they wanted as I am, which makes the problem worse, not better."
"If they don't have choices, they don't work. I limped along with a non-tinker calculator program until I figured out how to make a custom one settle for deciding whether to read the answers aloud, print them, or expect further inputs to the function and not display an answer yet at any given stage. The more it needs to do, the more choices I have to allow it. If I am going to make a bot that can fly around and tranq people, it has to actually be able to do that and I need to rely on it not wanting to tranq anybody I don't want tranqed."
"My bots have all been really well behaved! I trust them! But if I died or disappeared they are currently programmed to, basically, grieve, because if they went around without me for six months I don't know what bad habits they'd pick up."
"Can't you program them to self destruct if you disappear for long enough? ...or to want to self destruct, or something."
"They might think of that on their own if they do the grieving thing long enough. They also listen to my dad, though, and I'd like him to have the use of them before they break down. And can't fix each other because that's not something I programmed in."
"So, I might do a tree structure with me maintaining a few things that maintain many things that maintain a ton of things. But I'm probably not going to do the self-repairing swarm. Until and unless I have a longer-term sense of my main bot's stable personality."
"Yeah. It still sounds like there should be a way to cheat but your bots are way closer to being people than I'd thought."
"I will not be too surprised if the main one wakes up properly one day. I'm trying to avoid it, the world isn't fit to bring children into, but it will not floor me."
"As far as children go, it will probably be much better equipped to handle the world than most. And as far as parents go it could certainly do far far worse."
"That's why I'm not deliberately operating with nothing more than calculator apps."
"I don't have any useful intuitions about how different non-sapient programs are from sapient ones. 'Waking up' sounds like something out of science fiction, but then again," and there's a miniature of the Leviathan turret in their right hand now.
"Like... you can hold a conversation with my bot. It uses my voice and my writing style when it's pretending to be me, it has one of its own when it's talking to me. It can pass the Turing test if you don't know you're administering a Turing test. It does not claim to have subjective experience and has not materialized any desires or behaviors that don't make sense in light of its initial programming and its inputs, and its memory use hasn't jumped, and it might never, but it might."
"There are lots of ways it could learn in not-a-person directions but I suppose it wouldn't have been worth mentioning if that had been the case."
"It does not have the US state capitals memorized, because it knows how to use Wikipedia."
"I didn't mean to scare you. You don't need to worry about the bots."
"I'm not really scared. It's more of a... an intellectual wariness? Nothing so visceral and what I know of you, however little, kinda nullifies a lot of the terror I'd feel if it was some random tinker."