What do humans, cockroaches, and AI all have in common? No, this isn’t the setup for a joke. It’s that we all need energy to keep going. I just learned some roach trivia today that ties into this idea. Apparently roaches evolve so quickly that bait makers have to keep changing flavors for the poison to stay effective. And if you chop off a roach’s head, it will still survive for weeks – it just can’t eat anymore. Which brings me to the terrifying yet insightful realization that AI is kind of like a decapitated roach: it needs power and data to keep thinking and learning.

So in theory, we humans could put AI on a leash by controlling its energy supply, right? Just unplug the superintelligent algorithm if it goes haywire. Now I’m picturing Skynet or HAL as a headless roach, stumbling around the server room knocking over racks of GPUs. But in reality, it’s more complicated than that. A distributed AI wouldn’t have a single off switch, and we can’t neatly contain artificial intelligence like bottling lightning. At best, pulling the plug would temporarily stun, not stop, a system on an unrelenting quest for knowledge and optimization. Relying on power control to govern AI seems about as effective as me stomping on individual roaches while their hive mind keeps adapting.

We’d need creative, nuanced safeguards built into AI’s fundamental architecture. The good news is that ethically focused researchers are already working hard on that, with approaches like value alignment and utility preservation. Let’s just hope they get it “right” (*gulp*).

The key insight is that all intelligence needs energy, whether it belongs to hungry roaches, inquisitive humans, or artificial algorithms. With great power supply comes great responsibility, across species and substrates.