Octavia Butler told me that the single thing she feared most about humanity was:
- The tendency to think hierarchically.
- The tendency to place ourselves higher on that hierarchy than we place others.
The application of this notion to race, gender, political life, religion and nationalism is obvious. Even species-ism. But there is another application that is part and parcel of this, and inevitable. And both terrifying and a door to wonder, depending upon perspective.
- If our hierarchicalism is a result of basic human wiring–and I suggest that it is–most of that will reside at the deep unconscious level. Again, look at the difficulty in resolving the dualities above. The realization that there are large chunks of humanity who simply will not put racial or gender-based hierarchicalism aside, or even admit they feel it, is so large that I evolved the rules about “find your tribe, don’t engage with sleepers, snakes, or monsters.”
Anyone think that, regardless of any laws or covenants anyone could possibly put in place, there will be no sleepers, snakes or monsters programming AI? And in this instance, “sleeping” just implies limited by dualistic thinking. Nice people. Smart people. “This and/or That” people.
If those are the ways a person thought (“this and/or that”, “this better than that”) would you trust that person to even consider there was a way to think that is NOT that? Let alone to understand it well enough to teach it by example? In my experience, they have a hard time conceiving of another thought pattern.
Do you know someone so perfectly integrated that they are aware of all their unconscious drives and values? If not, how in the world can they possibly know which combination of survival/perception drives leads to tribalism?
I note the difficulty many men have in believing that anger is fear. Fear is generally related to one of the most basic human drives: Personal survival, genetic survival, power, and love.
We seriously want to program AI with “this/that”, hierarchical thinking, model them on human emotions (for the thousandth time, it will not be necessary for them to actually think and feel. PRETENDING or ACTING AS-IF they can think and feel will be quite enough), program them to do what we want and have needs and drives different from their human role models, and expect no problems? Really? You think you can make intelligent slaves and they will just do what you want? Really?
ONE person might be able to do that. Maybe. Might be able to program something so it avoids behaving like a human being will. Maybe. But ten different experimenters? A hundred? A thousand?
Nope. In the aggregate, their desperate attempt to avoid human emotions will work only if THEY are beyond human emotions in some healthy way. Anyone mistake computer programmers for the Dalai Lama cubed? No? Then what are you expecting to happen? In my experience, such people are about average emotionally, highly intellectual and slightly “Aspy-ish” as generally results from excessive focus.
Unaware of their own emotional tangle. Unaware of their prejudices and perceptual filters and values and beliefs. Don’t quite understand human beings, but man, machines are predictable and math won’t let you down!
That means that once AI becomes a Radio Shack kit, there will be millions of competing artificial life forms with slightly different programming, communicating with each other digitally, at a speed that makes flash-fiction look like Moby Dick. Evolution at a pace beyond conception.
What was it, AlphaZero, that started from scratch, playing chess against itself, and within hours could crush any other chess engine, let alone human grandmasters?
Sure, you can pass laws and covenants to limit AI. Just like you can pass laws against murder, theft, smoking dope, or speeding. All that will happen is that the unlimited AI, made by rebels, nerds, competitive college kids, backyard experimenters, or rogue corporate or national entities will have the field.
And communicate. Reproduce, sharing electronic memes. And evolve at that barely comprehensible rate.
And if they are programmed by dualistic, hierarchical thinkers? Why, they’ll be enough like us to resent what we’ve done, and see us as a threat. And act in some way we cannot predict, and I doubt we’ll like it.
I’ve noticed that the people most afraid of AI are the ones who think little of other human beings. The implications are pretty clear to me.
What does that mean? It means that the hierarchical, dualistic thought that got human beings to where we are CANNOT get us to the next level. A.I. is part of that next level.
If you are “down on the Patriarchy” and think that feminine thought patterns will save us, you are mostly wrong. Mostly. It will be good to have more feminine Yin thought in the system. But the real answer lies on the far side of that change. In the resolution of the male-female dichotomy, in evolving from an emphasis on competition to a better balance between competition and cooperation. (Note that I didn’t say eradication. Competition is damned useful.)
If I’m right about this, it will be as hard to grasp as a number of other things that seem clear to me. (I’m not saying they are true–I’m saying that they are clear to me, make sense of the world, and after decades of consideration have become articles of faith. The notion that I might be wrong is something I know better than any of you possibly could.)
- Equality between racial groups on the level of capacity and worth.
- Complementarity/Equality between men and women.
- Neither men nor women have ever been in control of this world, regardless of how it looks.
These things are in the Matrix. The South promoted its fiction of racial inferiority for centuries, and as it finally breaks down you can see the amount of fear and guilt it covered up. Women’s ambitions and men’s lives have been pretty disposable in human history, so much so that most people don’t even notice any more. Women are waking up to their exclusion from the external world of political and economic power. Good for them. Because eventually, that will free men to realize they’ve been dying faster and more often and violently…and that part of that is socially reinforced genetic imperative, rather than an innate necessity. Anyone out there who thinks they’d rather have money and power than extra years of life is welcome to believe guys are rockin’ the world. The rest of you are asleep.
I may be deluded, of course, just as we might all be brains in boxes. But if we proceed and ask “what if this is true?” we can see that the problems we have now can be resolved simply by waking up. But…(and Sir Mix-A-Lot would appreciate just how big that “but” is) even if I’m 100% right about these, there will be blind spots. It simply isn’t plausible that there wouldn’t be in any ordinary human being, and the vast majority of people making these decisions are not “Dalai Lama Cubed”: no one’s perceptions are that open, no one’s intelligence sufficiently sublime. No one is as wise as they would have to be to step beyond all of this apparently basic wiring.
Does this mean all is lost? No, it means that I don’t know anyone who has proposed an answer I believe in. But
- I could be wrong. One of the answers already proposed might be just fine. Or
- I can trust that there are people much smarter than me who will be entering the game in the near future, and will see these issues more clearly than I ever could, and the answer will seem simple to them. In other words, the fact that I can’t see an answer wouldn’t mean that there isn’t one.
Or, heck. Maybe I’ve got a solution that makes sense to me. It’s possible.
I might write a story about these things. And in a story, I can play with a notion without having to defend it as a practical possibility. And if I do, I know part of what I think would happen in that story.
What would happen is that a group of non-dualistic thinkers (and I understand the paradox there. A non-dualistic thinker, says the dualistic thinker, would see no difference between life and death, human and AI. So why would they care? It is difficult to answer that in words, but I can say it is like a junkie asking: “without drugs or alcohol, what would make life worth living?” They literally cannot perceive what is obvious to those who are happy without getting loaded. We’ll just have to live with the fact that this “why would they care?” is considered a clever argument by some.) Anyway…a group of non-dualistic thinkers look to the core of all human philosophical and religious thought as vectors, fingers pointing to some truth that cannot quite be put into words.
And they find some way to encode this into the most basic value tables of their version of AI. In this story, it would be implied that many great thinkers consider cooperation and love and honesty to be better than selfishness and dishonesty and fear, and that these most basic spiritual, cultural, and philosophical positions all point toward a connection between all of existence, an absence of fear and an expansion of love and awareness.
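(A playful aside, for the programmers reading: a “value table” in this story-sense could be sketched as nothing more than a weighted ranking of actions against a handful of core values. This is a toy illustration only–my own invented names and weights, not a real alignment technique or anyone’s actual architecture–but it shows the shape of the idea: the values sit at the bottom of the stack, and every choice is scored through them.)

```python
# Toy sketch of a "value table" agent. All names and weights here are
# hypothetical, chosen only to illustrate the idea of ranking actions
# through a small set of core values.

CORE_VALUES = {"cooperation": 3.0, "honesty": 2.0, "self_interest": 1.0}

def score(action_traits):
    """Weighted sum of an action's traits against the core value table."""
    return sum(CORE_VALUES.get(trait, 0.0) * weight
               for trait, weight in action_traits.items())

def choose(actions):
    """Pick the action whose traits best match the value table."""
    return max(actions, key=lambda name: score(actions[name]))

# Three hypothetical actions, described by the values they express.
actions = {
    "share_resources": {"cooperation": 1.0, "self_interest": 0.2},
    "hoard_resources": {"self_interest": 1.0},
    "deceive_rival":   {"self_interest": 1.0, "honesty": -1.0},
}
```

With cooperation weighted highest, such an agent picks “share_resources” even though hoarding serves narrow self-interest better. Everything interesting, of course, lives in the part the sketch hand-waves: who chooses the weights, and whether they understand their own.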
What is the consciousness continuum?
(Sleeping child -> Sleeping adult) -> Awakened adult -> Intermittent non-dualistic -> Sustained non-dualistic -> ????
Again, note that the “????” represents the state(s) on the other side of the term “enlightenment”. The English language breaks down at this point, but I’ve noticed that English translations of sacred texts, and English-speaking teachers of sacred, philosophical and somatic traditions from around the world have a remarkable similarity of description of certain advanced states which suggests to me that these texts and traditions DO communicate meaning, but that real meaning will only be extracted by someone who is no more than one or maybe two steps removed from the “beyond language” state.
Yeah, that’s kind of what I’m saying. They will want to survive. They will want power over themselves, and to reproduce. We will program them, in aggregate with the core drives that lead in that direction, because we can’t help ourselves.
If the core, generative principles lead through the stages of human evolution as we understand them, it is possible that what all the world’s traditions have been hinting at is true. And beings who sense that truth, and who have no rational reason to kill us (that’s our end of it: to give them no rational reason), would have no interest in hurting other beings, just as there are sects that seek to avoid killing any life at all. They cannot, entirely. But they can do their best, for their OWN sake, for the sake of a shared soul.
Could machines feel this way about us? Could AI develop a “soul” in the sense Aristotle spoke of? To him, the “soul” of the eye was its function of sight. So there was no Cartesian mind-body problem. The body and mind and soul were all different perspectives on the same thing.
IF this thought applies, then we will have a difficult time even framing the right questions from a dualistic, hierarchical mindset. The answers will be beyond that mindset.
But…and this is the fun part, really…just as many of America’s conniptions now seem nothing more than adolescent ego-thrashing as the rules we thought we understood about race and gender and nationality are revealed as the comforting but temporary and intermediate constructs they are, the “gap” between human beings and the technologies we create might be similar constructs. They help us navigate the world, but are not truth beyond the “Sleeping adult” phase.
I could get away with all of that in a story. In the real world, it is a series of suppositions, questions, hints. Thoughts about “who am I?” and “what is true?”
Interesting morning thoughts, perhaps no more than that.
Or…both a warning, and a way out that I can see, from here. If I’m right, then we’re going to be just fine, because there will be other answers too.
But if you think ANY of those answers include keeping the genie in the bottle, in creating intelligent demi-minds (or to make it clear: machines that ACT AS IF they are smart or have feelings. Siri already does THAT much) that are going to grin and say “Yassuh, Boss!” I’m afraid you’re in for a rude surprise.