The Ethics of Being Superman
Protective Ignorance, Overwhelming Power versus Overwhelming Responsibility, and a Rebuttal to Several Arguments on 100 Million Flying Handguns
Imagine you are Superman. There you are as Clark Kent, sitting in your house, going about your life trying to write up a few stories for the Daily Planet, presumably somewhere geographically close to Washington DC or perhaps New York. Your cape is hung up in its customary place in your bedroom closet. Then some poor child in Nicaragua shouts for help because they’re trapped at the bottom of a well. Because of your super hearing you have heard this plea for help.
Ignoring the speed of sound, what is your obligation here?
If you’re a normal person you can’t do anything about this situation. You’d have to commute at least a few hours to get on a flight and at least another few hours to arrive at the well. Presuming you even know where the well is located. You also don’t have some particular ability to retrieve children from out of a well that is better than that of someone closer to the problem. Someone telling you “Hey, there’s a child in Nicaragua trapped on the bottom of a well” really doesn’t land very hard because there’s nothing for you in particular to do about it. Also, there’s no particular reason for a normal person to have privileged access to this information out of nowhere. The fact that you know means you were probably told by a long string of people who also know.
But if you’re Superman? Well, firstly you have the burden of simply knowing. You might be the first person to have become aware of that child in the well. Secondarily, you have the burden of ability. You do, in fact, have some very remarkable abilities to pull children from out of a well and you can also get there faster than anyone else. Thirdly, you have the burden of ease. You can fly at close to the speed of light and it would only take you a few seconds to correct this situation. If you are Superman, ignoring that child at the bottom of a well is like a normal person refusing to raise an eyebrow to save someone from imminent death.
If you were a normal person walking around Nicaragua looking for kids at the bottom of wells you would be a crazy, hysterical busybody. But if you saw a kid fall in right in front of you and then didn’t do anything, that’s something else entirely. If you’re Superman, that’s basically the level of obligation you have all the time.
So is Superman obligated to stop what he’s doing, leave his comfortable apartment in Metropolis, and go pull this kid in Nicaragua out of the well?
To my mind the answer is yes, insofar as this kind of a thing doesn’t prevent him from enjoying his life. After a certain point, I wouldn’t blame Superman for putting in a pair of super earplugs and taking a nap. A single, individual life has to matter more than the obligations placed upon it or else there is no point to charity. This seems like a paradox, but why save anyone if after the act of being saved they simply inherit the overriding duty to save other people? There’s no point to life other than saving people in that scenario, and what happens if you run out of people in need of saving? Life has intrinsic value, which is why it is both worth preserving and why the obligations of others to preserve it have to reach some kind of economic limit. This is why they call economics the dismal science.
All of us normal people are protected from an overwhelming sense of duty by our ignorance. I don’t have to stay up half the night thinking about children with Leukemia because I don’t have any idea how to cure Leukemia. I don’t worry about serial killers who have women locked up in their basements, because I have no real method of figuring out where those women are or who has chained them up. I’m a normal guy, or if not normal, at least human.
I personally worry about things like fixing the news, or fixing the way that election funding works, or coordinating human attention at scale, or a handful of other similar things because no matter how hard I try I can’t quite shake the idea that I do know how to fix those problems. Or at least, I worry about making them much smaller problems than they are at present.
Still, we are all human. We do not have hands strong enough to lift up everyone held in bondage. We do not have eyes or ears perceptive enough to determine right from wrong in every situation. We do not have minds wise enough to cure every disease. Our incapacity saves us from responsibility.
But say someone handed you a button and said “press this and you can wake up Superman.” Not only Superman, but also a genius. Superman with the full power of Kryptonian science. A solver of every problem.
It’s real. It’s an actual button.
You do not yet have the burden of power, but you do now have the burden of choice.
Will you wake up Superman?
I think you have several responsibilities here. You need to determine what exactly it means to wake up Superman. You must study the motives of the engineers and the specifications of their system. And even if you decide to hand off this responsibility to someone more capable, you are still burdened by choice. You must choose to hand off the burden to someone wise enough to choose for you or else to remove the possibility of Superman from the world.
I am, of course, talking about AI.
Would you turn it on?
I don’t believe in Utopian outcomes but I do believe in making trade-offs that are stupidly obvious. For instance, if you cure childhood Leukemia you lose the beautiful courage and the occasional post-traumatic growth that parents and family experience watching their child die a slow and inevitable death. That’s a trade-off. Still, I’m choosing to cure the kid. That’s a stupidly obvious choice. I would cure the kid even against threats of violence against my person. If someone out there said something like “We’ll miss the inspirational TikToks, though” I would reply “That’s the price of progress.” You would all make that choice as well. That’s why I chose that example.
This gets harder once you leave stupidly obvious choices behind and start to realize that some questions that seem stupidly obvious are actually deeply fraught. When your love of humanity places you against the probable desires of almost every individual person, you’re in for a true quandary.
Would I make everyone a super hot AI girlfriend or boyfriend who could cater to their every need better than any human partner? Would I make every human biologically immortal? Would I augment myself to have extreme power and knowledge? That’s where the monkey’s paw scenario presents a trade-off that I cannot accept. No, I would not. I would forbid it if I could. I would demand people love another person to reproduce, that they live a long span of life but die of old age, and that as they do so they remain plausibly human with all of our flaws. If I could not do such things, I would try to move far away from the places where humans are doing such things even if it meant abandoning the Earth for an uncertain home around another star. I do not believe you can stop the music of humanity and continue to find humanity in what remains. Not after a long while, anyhow. It is more important to me that humans live forever, that my children live on through their children, and so forth, than that I or any other single human lives forever.
I do wonder if some of the Rationalists do not feel this way primarily because it seems like a belief set that doesn’t lend itself to having children. The first time you see your child you understand, even if everyone has different feelings about what exactly this means, that there is at least the possibility of a love in the universe deep enough to willingly face death. The first time your child gets sick, you realize in your bones what a relief it was to only have to face concerns for your own mortality. You feel the line of parent and child that led to you stretching back to the beginning of life.
For the better part of three billion years, all life on this planet has been shaped by evolution —to my mind and by my definitions, this is an extension of the Will of God— and I would not break anything that we are simply to satisfy the momentary whims of an individual person. We are more than merely ourselves. We are part of a grand reproductive cycle. An infant who becomes a child, who becomes an adult, who becomes a spouse, who makes a child. Over and over again. Reproduction is not merely something that we do. Reproduction is something that we are.
There’s a form of art called cymatics where vibrations on a flat surface cause grains of sand, or some other substance, to form a pattern. The vibration sustains the pattern. This is my best analogy for what life is. We are dead matter vibrating under a pattern set by selection. There’s a breeze, here representing entropy. What keeps the pattern in place? Only that faint vibration. Parents giving way to make space for their children to grow and become parents. Stop the music and I believe that over the course of cosmic time that pattern will be blown away. It might take a million years but I believe that outcome is inevitable. But preserve reproduction, preserve the growth cycle, preserve dying, let humans scatter and adapt while placing different bets on their cultural strategies, and I do not see any reason that pattern should ever fade until the universe does.
How does all of this relate to my previous piece? This is the ground I build upon whenever I try to construct a moral universe and a moral outcome, where the only trade-offs that we universally make are those that an uncontacted tribesman on North Sentinel Island and a New York Banker would agree are appropriate if only in their respective fashions and styles. We all know you are supposed to save children. We all know it is sad to die before reaching old age. Our power increases our responsibility. Our choices toward that power also affect our duty.
We have a responsibility to build Superman and to make sure he’ll do the right thing.
A few specific arguments I would like to refute from 100 Million Flying Handguns:
On being a Techno-Utopian, wanting to conquer the world, and force everyone to live under my specific will:
No. I don’t even know how people got that impression. People should have local control of any such drone system, including any non-lethal versions for law enforcement. You could put systems like this in a school with the same level of weaponry that your grandmother keeps in her fanny pack when she runs through the park and completely solve the problem of school shootings. There have to be rules and rights recognized within the system. You should also, with some quite convoluted rules, have the right to bear arms by owning such a system yourself as an extension of your natural right to self defense. Rights don’t come from other people, but in practice we have to have them recognized by other people. Every peace-loving group of people should have the right to determine their specific implementation of this.
When I said pacify the mountains of Afghanistan, I misspoke. This one is flat out my fault. I had quite another idea in my head than what I believe any normal person would intuit which means I chose the wrong words. I meant “pacify the mountains of Afghanistan by and for the people of Afghanistan.” Living in Afghanistan, keeping your head down, trying to live a normal life, just sucks. One of the reasons we got our asses handed to us was because the tribesmen there are fighting each other when they’re not fighting us. They trained for war by being in constant war. It’s not a safe place. When they were holding public executions in sports stadiums part of the reason they were doing that is because it was literally the best possible form of justice available in the country, as ugly as that can be to swallow.
Imagine you live in a mountainous terrain where some raider from another village, whom your village has been at war with since time out of memory, rapes or kills someone in your village. There are no good roads. No strong central power. No force capable of tracking someone across those mountains. Simple deterrence means you have no choice but to gang up with your whole tribe to go fight them, which is why tribal loyalty is so strong. And please don’t pretend you don’t have to do this. Yes, you do. As the withdrawal of the US forces in the area proves, you can’t appeal to a centralized authority to handle this because no matter how strong they are the terrain will make it impossible for them to enforce the law. Because you have no other alternative than this kind of sloppy, general justice against an entire kin group you end up over the course of time with inter-tribal conflict.
Now imagine there is an enforcement power that the people can appeal to, that can adjudicate their disputes, and then also deliver justice with the minimum required force. You could get that with a drone army. Especially if you build cheap and just sort of have them hanging around in solar powered crates or something. You could make our drones lethal only as a last resort to save other people. Meaning no one has to die who isn’t actively trying to kill someone and who can’t be subdued in another way. Isn’t that a no duh solution?
If people hear a novel argument they tend to process it as another argument they’ve heard before. I am proposing an ability for a people to police themselves without having to put anyone’s life in jeopardy and to use minimum force with strong and rigorous cryptographically enforceable control mechanisms. People see that but then think “okay Hitler.”
I don’t doubt there are versions of the system I described that could and will be the tools of evil totalitarian regimes. That’s why we have to build it first, build it right, and ensure there is a powerful cryptographic protocol that requires the turning of many keys to activate and with a whole host of checks and balances directly in the code. All the million-dollar rockets in the world won’t matter for much when all they can do is take out a few dozen of these things per strike and your enemy has millions because they built cheap. And when there’s a whole system of laser-firing satellites in space that are involved in a chaotic game of deterrence about who can take out the counter-shot systems of the enemy. All of war is throwing rocks at other people very quickly. Better rocks thrown faster will soon give way to lasers decisioned more quickly. This is the future whether you like it or not.
Unless you believe that any law enforcement at all is an infringement, someone has to do something, and it’s better if that something is as gentle as possible. Rather than general lethal violence against an entire tribal group, how about I go hog tie one specific guy who did something really messed up and do it in a way that doesn’t jeopardize the life of anyone else? If he gets away and all it costs me is a few thousand dollars in a drone, who cares? You remove the human from being physically present during enforcement and you can provide gentler enforcement.
Be strong and extremely merciful.
Solving First Bullet Problems
The speed of light is faster than the speed of sound; once this is given, all else follows.
There were some objections to shooting a bullet out of the sky with another bullet. I grant that this is really, really hard. On par with reusable rocketry. Maybe harder, except that the experiments are much cheaper. I maintain that it is entirely possible. But again, very, very hard. Also, you can imagine knocking it out of the sky with something other than a bullet, which I didn’t mention. Being able to find and plot the thing is part one of the problem, and after that we get flexible. Once you can decision at all, the rest is an engineering problem.
One objection was that you would need two frames of the bullet on camera in order to target it appropriately. I don’t think this is correct, but I’m not totally confident. With Deep Learning, if you have millions of still pictures of a bullet, I would assume there is enough signal there to tell the speed of the bullet. Precision of the camera here will mean a lot. Still, the shutter surely isn’t infinitely fast, so the frame isn’t capturing a perfectly still bullet. So even if a human can’t see a blur from the bullet’s travel, surely you could create a model that could do it with enough precision and enough examples. That blur would have information about the bullet’s speed, and the shape of its profile would have information on its direction, because every image taken with the same camera would carry information about the frame rate. Combined with existing information about weather, I would think that’s enough information to plausibly plot a course. Again, my heuristic for what is possible and novel with AI is: 1) is the thing being trained for highly ordered, physical, etc., i.e. will your data be signal rich? 2) can you get lots of data? 3) is it totally not intuitive to humans?
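The blur-to-speed reasoning above reduces to simple arithmetic once you trust the optics. A toy sketch, where every number (pixel pitch, exposure time, blur length) is an assumption chosen for illustration, not a real camera spec:

```python
# Toy estimate: recover a bullet's speed from the motion-blur streak it
# leaves in a single frame. Speed = blur length in meters / exposure time.
# All numbers below are illustrative assumptions.

def speed_from_blur(blur_px: float, meters_per_px: float, exposure_s: float) -> float:
    """Convert a blur streak (in pixels) into a speed estimate (m/s)."""
    return (blur_px * meters_per_px) / exposure_s

exposure = 1e-4      # assumed 100-microsecond exposure
m_per_px = 0.0025    # assumed 2.5 mm of scene per pixel at the bullet's range
blur = 15            # assumed measured streak length, in pixels

v = speed_from_blur(blur, m_per_px, exposure)
print(f"estimated speed: {v:.0f} m/s")  # 15 px * 2.5 mm / 100 us = 375 m/s
```

A learned model would be extracting something like these quantities from noisy pixels rather than being handed them, which is where the millions of training examples come in.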
You and I couldn’t look at a single picture of a bullet and tell where it’s going or how fast. But some other creature, that exists as a sort of timeless marksmanship crystal, that has only ever seen pictures of bullets and then information about where they eventually wound up, could probably do this. Depending on specifications, anyway.
Here’s my biggest unanswered question: if I were to create a camera that had a very large depth of field something like a few hundred feet out that is running at extremely high speed, would the image capture the passage of the bullet in the data field of the picture even if a person looking at that picture couldn’t see it? Would enough pixels be smudged in the right way, consistently enough, to produce strong signal? If the answer to that is no, is it a “no” by physical law or because we need to build a better camera? I have to think this is at least physically possible.
You need a lot of data for this. But, if you have every military base collecting this data for you, I would think that’s got to be more than enough. Especially if they do it on purpose in all kinds of terrain and weather. A quick google says we manufacture something on the order of twenty or thirty million bullets a day in the United States. You only need to capture a few million of those for meaningful learning, to my understanding.
You do have to optimize a small model to be fast. Milliseconds matter here.
As to knocking bullets out of the sky? Over longer distances, my back-of-the-envelope estimate is that you could definitely do this with another bullet. Say you were on the frontlines of Ukraine and needed to charge someone a few hundred yards away and you knew where the guns were going to be fired from. You would need to move your counter-shot gun very fast, and I have all sorts of ideas as to how you would do this, like having some kind of fast rotating gun spire firing specialized rounds on an electric trigger, but it’s possible.
There’s a classical problem you learn in your first-year physics class called “The Falling Monkey Problem.” You need to shoot a monkey hanging from a tree. At the moment you pull the trigger the monkey will let go of the branch. Where do you aim? Everyone always assumes you need to aim lower than the monkey, but that’s not correct. It’s an intuition that doesn’t map to reality. I tried to get smart and ask how far away the monkey was, but that was wrong too because it doesn’t matter.
All objects fall at the same speed. All of them. Even bullets.
If you aim at a monkey falling under the force of gravity you have to remember that your bullet is also falling under the force of gravity, so you aim directly at the monkey no matter how far away it is. This is a thing about perpendicular forces that our minds have a hard time understanding. The forward momentum on your bullet doesn’t matter at all to anything pushing on it from some other orthogonal direction.
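The monkey problem can be checked in a few lines of kinematics. A sketch with made-up numbers for the geometry and muzzle speed; the result holds for any distance, which is the whole point:

```python
import math

# Verify the falling-monkey result: aim straight at the monkey. Both the
# bullet and the monkey drop by the same (1/2) g t^2, so they meet.
# The distance, height, and muzzle speed below are arbitrary.

g = 9.81             # m/s^2
d, h = 100.0, 30.0   # monkey: 100 m away horizontally, 30 m up
v0 = 400.0           # muzzle speed, aimed directly at the monkey

aim = math.atan2(h, d)                # aim angle points straight at the monkey
vx, vy = v0 * math.cos(aim), v0 * math.sin(aim)

t = d / vx                            # time for the bullet to cover distance d
bullet_y = vy * t - 0.5 * g * t * t   # bullet height at that moment
monkey_y = h - 0.5 * g * t * t        # monkey, in free fall since trigger pull

assert abs(bullet_y - monkey_y) < 1e-9   # identical heights: the shot connects
print(f"both at {bullet_y:.2f} m after {t:.3f} s")
```

The cancellation is exact: along the aim line the bullet would arrive at height `h`, and gravity subtracts the same `(1/2) g t^2` from both bodies, so `d` and `v0` never matter.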
If a bullet is traveling toward you at something close to the speed of light, but you know about it long before it ever gets to you, all you have to do is nudge it on the side with a very small amount of force; no matter how much momentum it has going forward, it’s still easy to move sideways. Well, not in the relativistic case, where things get heavier, but you get the point. If you cause enough “Sideways Wind” or “Sideways Force” along the trajectory of the bullet to move it just a foot to the left or the right before it gets to you, then you are effectively bulletproof. I focus on sideways force because I assume it would be more difficult to move it longitudinally, but maybe I’m wrong! My fear there is knocking a bullet away from someone’s heart and then putting it right through their forehead. Then again, maybe that’s better if you’re trying to deflect for a group of people standing in a line.
This has to happen from decision to counter shot in something like the execution time of an explosion. I have various thoughts on long term plans to do this, including spot heating the air in front of soldiers with a microwave laser to create that same kind of local wind at the speed of light. My understanding is that there are already omnidirectional emitter designs that could accomplish this. That could be quick enough to cover close-ish range scenarios and you might still be able to do it economically with your power usage. Or you could have a soldier wearing cartridges of compressed air on wireless electric triggers in some kind of very ugly funnel armor that would look hyper Not Cool. If I can just push that bullet to the side at the last moment because it encounters a very small and isolated gale-force breeze, why wouldn’t that work?
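The “small sideways nudge” claim is easy to put numbers on, because lateral motion is independent of forward motion (the lesson of the monkey problem). A back-of-the-envelope sketch, where the bullet speed, the length of the intercept window, and the required deflection are all assumed values:

```python
# How much sideways acceleration deflects a bullet about a foot before impact?
# Lateral displacement under constant acceleration is (1/2) * a * t^2, and the
# bullet's forward momentum never enters the calculation.
# All inputs are assumptions for illustration.

v = 900.0        # bullet speed, m/s (rifle-class round)
window = 100.0   # meters of flight over which we can push it sideways
d_needed = 0.30  # ~1 foot of deflection, in meters

t = window / v              # time available while the force acts (~0.111 s)
a = 2 * d_needed / t ** 2   # required constant lateral acceleration

print(f"time window: {t * 1000:.0f} ms, "
      f"lateral accel needed: {a:.1f} m/s^2 (~{a / 9.81:.1f} g)")
```

Under these assumptions the answer comes out to roughly five g applied for about a tenth of a second, which is modest; the hard part is the sensing and decision loop, not the push itself.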
But I want this to happen so people can Not Die. Maybe I haven’t yet reached the age where I want Hot Young Teenagers to die for my personal beliefs, but I get very upset when Hot Young Teenagers die for anyone’s personal beliefs. I don’t care what the Hot Young Teenagers believe, I wish they could grow old enough to figure out if they really even believe it. Even if, as an American, that Hot Young Teenager is from West Virginia I still want them to live to old age.
Hot Young Teenagers should grow old enough to become boring people looking for yard sales on facebook.
People are Very Bad at Solving Problems and Imagining Solutions, Unfortunately I know that I am Very Good at This, or Huge Things No One Even Disputed
During my regular boring work life I work with a lot of very smart people. I took one quarter of comp-sci in college plus some YouTube videos and podcasts. I’m relatively high IQ and I read a lot and I tend not to be satisfied I know something until it fits into my “Head Universe.” I think the last time I had my IQ tested I was at 137. That’s roughly consistent with my ACT scores, at least for math. Not extraordinary but enough that I can generally figure out if things make sense or not for myself. I work with people much smarter than that. Still, I am consistently better at finding problems and solutions than almost everyone I know, all of the time. And I don’t do it by having very fine level knowledge that they do not, I do it by building mental models and playing around with them and being creative. On those rare occasions I would solve something in my college math classes that other people hadn’t I would do it through some approach that didn’t make sense to anyone other than me because my field of play was so much larger. I have a lot more bad ideas than anyone I know but I also have a lot of good ideas that no one else has because I have more bad ideas. For the record, I was also much worse than you’d expect at using the tried and true methods. I get too caught up in the “Well yes, but what does that really mean, when you think about it?”
Some big things no one argued with:
You could have an entity like the military collect a bunch of data at scale in order to build very powerful AI models that could save lives. No one is using the military for this today. We should be. You don’t have to just sit there and wait to stumble across a training set.
There’s enough data just in the sound of a gun being fired to do things like drop a pin on a map that shows you exactly where the gun was fired from, what kind of gun it is, and what ammunition it’s firing. Can you imagine the scenario where simply by the act of firing on your opponent you immediately and completely give up all of your location information? Do you know how valuable a piece of intel that is, and how much more ethical your choices can become when you have that information?
You could train a marksmanship model to disarm opponents instead of killing them because you could train it to destroy their weaponry. Think about what a big ethical leap that is for the use of force. Right now when we go into a war zone we have to go in with human soldiers that can only fire weapons with human limits. Imagine if they were walking next to a Jeep that had a turret on the back and every time it saw an enemy with a gun it just destroyed their gun in something like machine time. Are people actually pissed off about the idea of no one dying?
You can by design build systems that are non-lethal but are still powerful deterrence systems. People only fight in situations where they are uncertain who will win. Again, can you actually be pissed off about a war scenario where nobody dies who isn’t doing something crazy like trying to kill a kid and refusing to surrender?
I am aiming for a future where guns are an obsolete weapons system because all you have to do in order for someone to save your life is shout “Help!” and then a few seconds later a drone drops from your chandelier or whatever and then, immediately and at scale, neutralizes anyone behaving aggressively. Where everyone has their fair day in court and their rights are respected.
That’s the kind of Superman I can live with, and the kind of Superman we should aspire to build.