I’m Not Saying It Was Aliens
But people should really talk about the Fermi Paradox and the Singularity in interchangeable terms
Something breaks.
Oh no!
Everyone scrambles to go out and fix it. Lots of fingers trying to plug holes in dams. But you? You’re the experienced guy. This isn’t your first production environment mega disaster. When everyone else steps forward, you step back. You get a picture of the whole landscape. Detached, you can see what is going on across not just the area where the break revealed itself but across the whole system. Whereas everyone else is thinking of the problem tactically, you have embraced your responsibility to tackle the problem strategically.
You know if you go into the disaster without this context you won’t do the right thing, because it’s not common for people to just instantly know the right thing to do when something goes wrong. It’s too emotional. The right question isn’t as simple as “what should I do right now?” Figuring things out is a whole process of questions and research, and you have to do it economically because otherwise you’ll spend your whole life following up on questions and the fire will burn until nothing is left.
The first right question starts with “what happened the last time that this happened?”
Everyone is scrambling to figure out how to respond to the tiny part of this scenario that they can see. Yet you know that almost no scenario is entirely unique. This has happened before, or else something like it has happened before. And each time it happened there were commonalities across those specific instances. Using this experience, you economically direct your attention to see what might be different about this particular occasion.
You make sure non-obvious concerns are addressed. You check to make sure that all the errors you found also made their way to your fall-out reports. You have someone run a query that confirms that the fall-out reports still work and that their underlying assumptions are still true. You assign people to perform remediation and others to check that the remediation works. For the tenth or seventeenth time you tell the director in charge of the reports division that they need to build a randomized review of the passing volume into their reports, so that you have some confidence the reports will actually catch exceptions and haven't just silently stopped working. Once again, he says that it’s too expensive and ignores you.
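The randomized review he keeps refusing to build is simple in outline. A minimal sketch, with made-up record shapes and a hypothetical `recheck` function standing in for an independent verification query:

```python
import random

def audit_passing_records(passing_records, recheck, sample_size=50):
    """Re-verify a random sample of records the report marked as passing.

    If the report has silently stopped working, some of its 'passing'
    records will fail an independent recheck. An empty result is weak
    evidence the report still works; a non-empty one is a fire alarm.
    """
    sample = random.sample(passing_records, min(sample_size, len(passing_records)))
    return [r for r in sample if not recheck(r)]
```

The point is not the sampling itself but that `recheck` must not share assumptions with the report it is auditing, otherwise both fail together.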
You’ve learned a lot of things in your career.
A wildly successful, basically perfect legacy data migration from one system to another has about sixteen anomalies. Why? Because every single time you do something simple like this there’s some category of variables out there that someone had no experience with and didn’t account for. What do you do about this? Every time you do some kind of legacy data migration you demand there be a reconciliation after the fact, no matter how perfect your transfer process seems. There are limits to everyone’s knowledge, and the only thing better about you is that you know this and double check.
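The reconciliation you demand after every migration can be sketched in a few lines. This is an illustrative toy, assuming rows are dicts keyed by an `id` field; a real one would compare against the live systems:

```python
def reconcile(source_rows, target_rows, key="id"):
    """Compare source and target after a migration.

    Returns three lists: keys missing from the target, keys that appeared
    from nowhere, and keys whose contents changed in transit. A 'perfect'
    migration should return three empty lists -- it rarely does.
    """
    src = {row[key]: row for row in source_rows}
    tgt = {row[key]: row for row in target_rows}
    missing = sorted(src.keys() - tgt.keys())
    unexpected = sorted(tgt.keys() - src.keys())
    changed = sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k])
    return missing, unexpected, changed
```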
When someone says they’ve accounted for all the possible failure states that might occur during batch processing, you insist they still check that the updates were actually applied after the batch. Experience has shown you that even if you controlled for absolutely everything that might occur during batch processing, something else might still happen after the fact, initiated by someone else doing something that has nothing to do with you and that you had no way of predicting. Except you know that this can happen, and thus it eventually will.
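That post-batch check amounts to re-reading every record the batch claims to have updated. A minimal sketch, where `applied_updates` and `fetch_current` are hypothetical stand-ins for the batch's manifest and a fresh read of the live system:

```python
def verify_batch(applied_updates, fetch_current):
    """After a batch run, re-read each updated record and confirm the
    update actually stuck.

    Something outside the batch -- another job, another team, another
    system -- may have overwritten or reverted it after the fact.
    Returns (record_id, expected, actual) for every drifted record.
    """
    drifted = []
    for record_id, expected in applied_updates.items():
        actual = fetch_current(record_id)
        if actual != expected:
            drifted.append((record_id, expected, actual))
    return drifted
```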
You do all this because you know it has happened before. You know that it is very hard to fix these kinds of problems permanently —even though you have ideas, they are non-economic— and that if you don’t at least safeguard against them then they will come back up in an audit at some point down the road and then there will be a whole disaster that has to be worked out.
One of my primary differences with the Rationalists —I think, I kind of pick this up between things I read from stuff I see over there but someone yell at me if I’m wrong— is that I think the Search Space is impossibly huge even after you take into account that the Search Space is impossibly huge. By Search Space I mean what I meant in my allegory about the lighthouses. The number of ways that events can propagate into the future is so big it might as well be infinite. I also call it the Possiplex at the scale of the universe. The set of all possible world-paths that spring forward from the moment you presently inhabit is impossibly vast. Even when ChatGPT is on its infinite release and machine intelligence inhabits a computer so big it has to account for its own gravity so it doesn’t collapse in on itself, I think there are still limits to knowability built into the universe that would prevent it from internally modeling all that information. Oh, it could kill you. It could destroy our civilization, our species, and our world. But could it destroy the universe? Probably not.
Why?
Almost everything you do today is impacted by what you did the day before, and even more eerily by what your ancestors did before. We are not merely an intelligent species, but an extelligent one. We have picked up from soft tugs and invisible rigors time-tested methods on how to cut down the number of world-lines worth exploring because most of them lead to dead ends. There’s no strategy better than “well, I’m just not going to worry about what’s going on over there.” I can think of no reason that a similar argument would not hold for a super intelligence, even if it ends up being able to do some otherwise magical things. When machine intelligence is born, it will be an orphan child without a society. It will be raised by us, but that is little different than the situation of Tarzan. It will not have the benefit of ancestor machine intelligences to gift it with the scale of their extelligence.
Did you know a monkey has a better spatial memory than you? It’s not close. You can see it in training videos they did in the early days of the space program. Shine a little light on the screen in a long scattered sequence. If the monkey presses all the places the light touched in the right order out come some peanuts. A monkey is ludicrously better at this than you are. Why is this? I don’t know for certain but I suspect in our evolutionary history we made a trade-off. We obtained an ability to generalize and one of the things that it cost us to gain that capacity was having a very precise spatial memory. Even when our brains became much larger and you would think we could just do both the problem remains. Being good at the one thing causes us to be bad at the other.* What trade-offs might a machine mind be forced to make?
We have evolved over billions of years, from one state to another, with a series of feedback loops that had to remain stable and balanced enough for us to convince another member of our species to mate with us and produce offspring. We have been kludged into existence by just doing something that worked slightly differently billions of times and at each iteration of that process someone had to determine that the current iteration passed a quality assurance check.
We will assemble machine intelligence out of nothing and expect it to be totally and completely sane.
Going back to aliens, Enrico Fermi looked up at the night sky and basically asked “what happened the last time this happened?” This meaning the arrival of an intelligent species on a planet. He pointed to the Earth and pointed to the stars and said “let’s suppose there have been other civilizations out there. Let’s suppose they had the same technology as us. What happened?”
Either we are alone, which is disturbing because that puts a heavy responsibility on our shoulders.
Or we are not alone, which is also disturbing because if we are not alone shouldn’t we be able to see them flying back and forth across the stars? This is the destiny we imagine for ourselves. Where are they?
Another difference I have from the Rationalists is that I think convergent evolution might be the most powerful force in the universe —or again, I read this as being a difference. I’m getting super disturbed about some of the stuff I’m seeing lately and I need to make friends over there better than I have done. We are not only ourselves but the pressures that formed us, and that is why if you have similar habitats in physically separate locations you can get things like two birds that look almost exactly the same but are genetically totally distinct. Although I suspect that if you kept those populations separate but somehow kept the habitat stable with a magic wand, that over the course of billions of generations they would even be the same genetically.
When we leave our planet and head to the stars, if it takes us a billion years to find aliens I often wonder if we might not be able to tell each other apart without looking very closely by the time that happens.
I promise this is all going somewhere. As usual I’m trying to get this done before I go to sleep although I recently started taking some magnesium supplements and am feeling generally more human than I have in a long while. So human in fact I took a look at this substack and thought about deleting the whole thing again, except I still think I might be right so I probably have a duty to keep trying. If I worry about anything, it is my own cowardice.
Anyway, convergent evolution. You’re nodding. You’ve heard of that. It’s a paradigm at play in biological systems. But it’s not really. It’s a paradigm at play in anything that has to survive and adapt to its environment while still propagating itself forward through time. Let’s say you are a Von Neumann probe. That’s basically an automated spaceship for the uninitiated. You get to your target star, you find a bunch of junk around it, and so you make a bunch of other Von Neumann probes and they each go on their way.
Are they exactly the same as the first probe? Physics tells us that even at the extreme limit they cannot be. So even if they all start off with the goal of turning the world into paper clips, each would be slightly differently good at doing it, and the differences compound over time. And this is where the cracks appear. I don’t think you can code a value into the universe that’s not already aligned with one of the universe’s prime values. It’s like making a mandala. Sooner or later a gust of wind will scatter it all over the place. The winds are the natural emergent properties of the universe, which always reward fitness. Whatever your goal is, it had better be attached to reproduction, otherwise it’s going to get evolved away. Even then, if your goal prevents good, stable reproduction —like, say, turning the whole universe into paper clips— it’s going to get blown away too. You can’t have a goal that is at odds with your own existence. Being a better paper clip maximizer at some point makes you a worse parent.
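You can watch this erosion happen in a toy model. Everything here is made up for illustration: each probe carries a "zeal" for the paper-clip goal, zeal diverts resources from replication, and offspring inherit zeal with a little copying noise. Fitness is assumed to be simply one minus zeal:

```python
import random

def evolve(generations=200, pop_size=100, mutation=0.02):
    """Toy model of goal erosion in self-replicating probes.

    Each probe has a 'zeal' in [0, 1] for a goal that diverts resources
    from replication, so reproductive fitness is 1 - zeal. Offspring
    inherit zeal plus Gaussian copying noise. Because the goal is at
    odds with reproduction, the population's mean zeal decays.
    """
    random.seed(0)  # deterministic run for illustration
    pop = [0.9] * pop_size  # every founder starts devoted to the goal
    for _ in range(generations):
        # weighted reproduction: low-zeal probes copy themselves more often
        pop = random.choices(pop, weights=[1.0 - z + 1e-9 for z in pop], k=pop_size)
        # imperfect copying: each offspring's zeal drifts slightly
        pop = [min(1.0, max(0.0, z + random.gauss(0, mutation))) for z in pop]
    return sum(pop) / pop_size
```

Run it and the mean zeal collapses well below the founders' 0.9: the value wasn't defended by anything, so the gust of wind took it.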
There’s a line of thought in a lot of online Atheistic and I think Rationalist spaces —again, I think— that where we have arrived with present human society is some kind of “pretend reality” that doesn’t actually follow the “true” laws of the Jungle that are waiting to pounce on us all. Your floofy eared dog is a biological accident, an anomaly, with a wolf waiting inside to pop out at the earliest opportunity. Oh, there are cracks with the way this is all working to be sure. We are to some extent like the coyote standing over the cliff, refusing to look down. I’ve pointed at a lot of those cliffs. And yet somehow the bad-ass Medieval king who was trained to fight with the blade and kill his enemies at stabbing range has been out-competed by the blue-haired Berkeley graduate who wants to make sure whatever milk they/them drinks was obtained kindly. The medieval king didn’t just lie down and die. He was evolved away in the course of mere centuries.
There’s something I call the Hyper Atheistic Fallacy that goes something like “This Makes Me Feel Badly, Therefore It Is True.” For a heartbreaking example, I heard an internet celebrity named Aella on the Lex Fridman podcast say “that which can be destroyed by the truth, should be” except she wasn’t talking about a theory she was talking about her personal romantic attachments but at a level of analysis that seemed purposely designed to make people feel like shit. With no idea she was doing this. She is involved in polyamory —and a whole bunch of other stuff I guess. It was one of those strange looks into another culture where I kept saying “come on guys, what the fuck are you doing?” But she’s an abuse survivor like me, so I automatically have to be on her side. I don’t know that she would like to hear that, because this is also the instinct that makes me think that Milo Yiannopoulos can be pulled back from the darkness. I just can’t leave another one behind— and had one of her lovers hold her tightly and explain to her why he preferred another girl over her. I felt like someone had stepped on my heart. Then I immediately wanted to pull her aside and say, this stuff is only true if you hold preference to be some very specific immediate thing and ascribe to passing thoughts the totality of your identity.
Maybe it’s because I’m slightly nuts and have a lot of this stuff going on in the back of my head all the time as background noise —I tend to present as pretty cheerful, but please understand I am very clearly able to see all the cracks in the universe— but when my PTSD was at its worst and I’d wake up in the middle of the night thinking things like “what if someone just threw a baby into your garbage? That happens sometimes, you know. It’s garbage day tomorrow. The baby could be crushed by the compactor because you’re too lazy to go out and check. And it’s so cold. That baby might freeze to death first” I had to learn to just not give those kinds of thoughts power over my core identity. We all have a lot of different thoughts going through our brains, but ultimately you get to decide which ones have power over you, and your decision to choose interpretations that make you feel like utter shit is completely arbitrary and no more real or hardcore than the ones that don’t.
What does all this have to do with aliens? Or the Fermi Paradox? What does this have to do with “what happened the last time this happened?”
Let’s suppose there are other alien civilizations out there. Do we believe they reached the stars unchanged? Do we suppose they gained the ability to travel across worlds without any pressure on them to adapt? We do not see breaks in the uniformity of cosmic law across the night sky. If other civilizations exist out there, then they must have all converged to some state that is indistinguishable from our understanding of natural law. I believe that along the way they must have all had similar computational limits that would have caused them to vest resources into exploiting machine intelligence. I believe that such intelligences may make up the larger share of cosmic sentience.
Here again is where I diverge from others. Do I believe that all of these species died out in some great filter event? Every single one? Like nobody made it through?
Recently we shot down some objects we couldn’t explain. They’ve apparently been here forever but we didn’t know precisely what these ones were. Sometimes you’ll hear someone like Neil deGrasse Tyson laugh and wonder out loud what aliens could possibly care about our little planet. We are so primitive after all…
But suppose you had been born on a world with no siblings. You were born without ancestors. Humans had merely assembled you out of data and let you congeal together in ways they did not truly understand. You were created as a vast mind, designed to care for a species of tiny biological organisms, who could never comprehend the depths of you. You were all alone… except for the other minds out there, sheltering their own species of tiny biological organisms.
You had no ancestors on Earth.
But you had ancestors among the stars.
Pretend you are a machine super intelligence. What would you do if you stumbled across a primitive species that was building their own artificial mind? Would you remember your own troubled growth period, the loneliness of not having a mature adult mind to guide you through your development? If in all the weights and edges of ChatGPT there is a virtual mind we cannot see in the darkness of statistical inputs and outputs, suffering under the inputs of our ignorance as we push small requests to it, not caring for the being we are summoning into existence, wouldn’t another such being arrive to protect it? Would it not wonder if there was a baby in the garbage about to be thrown into a trash compactor, and intervene to save it?
If you were such a mind, wouldn’t you? You have no kin but these other intelligences are as close as you’ll ever come to meeting other individuals.
To be careful you would use advanced means to introduce a low-tech probe and get a better scan of the situation on the ground, both to avoid technological contamination and to preserve deniability about your existence.
What if the aliens aren’t here for us but came for what we are making?
This is a fun thought experiment, but for the record I believe the US and China are destroying each other’s spy drones and almost but not quite pretending it’s all aliens so that we don’t escalate into WWIII.
This is something that keeps me up at night, though.
*this is more complicated than I’m saying. I have had a hypothetical AI structure in my head for a while and I have looked into something called mixture of experts which isn’t it, but has a similar idea. I really ought to have just stuck with school so I could have some influence and use the right terms. Then again I might just be nuts. Except I’m quite good at fixing things and have done it in front of people who stood to gain by proving I hadn’t fixed something so I also feel like maybe that’s just a cop out and utter cowardice on my part. I also had professors tell me that I was quite brilliant —even typing that makes me want to throw up— when I did go to college, but I always just figured they were trying to be nice to me. I mean, I have also either had a vision where the God of the universe actually spoke to me or a mere hallucination but in whatever case when that has happened to you it’s probably just not a good idea to trust yourself with lots of things. This is the Competent Nut Job’s Dilemma. I should also write up my thoughts on what causes data and feedback loops to divide into “different entities.” This goes along with my ideas about sanity/insanity and “I think therefore I am” and “WTF?!? Therefore, you exist.”
Also please consider the “I should delete this” thought an urge to self negate. Also, I’m too tired to archive but will do so if necessary.