Defeating Skynet and Collective AI with Sugey One

Battling Skynet and Collective AI

Let’s look at how Mother Nature does AI, to some extent, and seek to understand the collective AI battle space.

DNA is an encoding system. It contains the current weights of selection, performed over billions of years of epochs, through models of varying layers and hidden complexity.
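To make the analogy concrete, here is a minimal, purely illustrative sketch in Python. The genome length, population size, mutation rate, and fitness function are all made up for the illustration; the point is only that each generation acts like a training epoch, and the genomes that survive carry forward the "weights" that happened to work.

```python
import random

GENOME_LENGTH = 16       # illustrative: bits standing in for encoded traits
POPULATION_SIZE = 50
GENERATIONS = 200        # "epochs" of selection
MUTATION_RATE = 0.01

def fitness(genome):
    # Stand-in environment: more 1-bits = better fit. Real selection
    # pressures are vastly more complex and shift over time.
    return sum(genome)

def mutate(genome):
    # Each bit flips with a small probability, like noisy weight updates.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def reproduce(parent_a, parent_b):
    # Single-point crossover: offspring inherit a mix of both parents.
    point = random.randrange(1, GENOME_LENGTH)
    return mutate(parent_a[:point] + parent_b[point:])

# Start from random genomes ("uninitialized weights").
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Selection: the fitter half survives to reproduce; failures are
    # selected away, but the process never stops running.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    population = [reproduce(random.choice(survivors), random.choice(survivors))
                  for _ in range(POPULATION_SIZE)]

best = max(population, key=fitness)
print(f"After {GENERATIONS} generations, best fitness: {fitness(best)}")
```

Nothing in this toy loop captures embodiment or morality, which is exactly the point made below: the "weights" DNA carries were selected inside embodied individuals, under pressures a server farm never feels.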

We will also assume ego has a biological connection, which ties in nicely with Piaget and the constructivist outlook. A DNA-connected ego, which arose and matured from the selective processes of embodied systems (individuals), then has an enormous impact on larger morality, as morality necessarily forms in partnership with the ego. The topic of morality is FAR beyond the scope here. However, we can no longer deny the biological basis of so many of the actions humans undertake.

Now, not everything within the human DNA strands floating around should be treated as useful and properly selected. The evolutionary epochs have selected away things that failed, BUT the process is still running every generation. In this way, the current operating models of humanity can be seen as both miraculous for how far they’ve come and childlike for what they could yet be. In essence, what is morally acceptable today will get biologically encoded (through procreation) to push forward the morality of tomorrow.

A Challenge

The focus at Google, Facebook, Amazon, Microsoft, and others is designing collective AI. These are primarily cloud-based systems monitoring large blocks of humanity, without the constraints of embodied systems and their previous activations. The processing of information takes place in large server farms that have no direct connection to the data received. This separates the benefits and weaknesses of DNA-based ego and morality constructs, with their millions of years of epochs, from the outcomes of the models, and carelessly produces amoral replicas of memes without the benefits of such selection.

One could argue that we are using our selectively evolved intelligence to create these systems and are thus passing biological imperatives on into the biases of the machines. Fair enough, but as Skinner noted, “The real problem is not whether machines think but whether men do.” Anyone who has studied consciousness or epistemological access, or seen a Derren Brown spectacle, understands that the notion of free will and rational intelligence is tenuous at best, and we may yet be imbuing the matrices we design with massive failings that have not yet been selected out.

Herein Lies the Dragon

The challenge here is that these groups are attempting to create a collective AI based on our awareness of our own construction, like a person drawing themselves while staring into a mirror. It is rare to find a being self-aware enough to understand their own motivations and neuroses. Indeed, it took the Buddha a lifetime to achieve that kind of enlightenment. With these mirrors, we get insulting results like Black Americans being mislabeled as apes by facial recognition systems.

Likewise, just as Atreyu faced the Magic Mirror, humanity must be prepared for what it sees in the fruits of its reflection if it truly wants to face its real self. Once we see behind the curtain, do we honestly have the courage to accept the Jungian Shadow we find? Can we truly accept the horribleness of the interactions we use manners to hide, virtue to signal over, or flat-out denial to avoid? What is normal today may yet be too brutish for the future, and unknowingly, we may be gifting our digital offspring seeds of failure with our best intentions. We must face the possibility that amoral processing without a goal of embodied cognition is a recipe for disaster.

We can’t stop now; the dragons are already flying across the world. However, we can see the pitfalls ahead and put our resources into building proper systems to mitigate the disasters. We’ll leave that discussion for next time, when we go into what kind of complexity we’re facing.
