Ontological Shock: The Accelerating Emergence of Artificial Intelligence

I think we are using completely different classification systems, and this keeps me from getting my point of view across.
I think this idea of “I think therefore I AM”, or having a first person awareness gets in the way, as does classifying humans as separate from everything else and superior because of this.

The question is relevant: “If AI did have a 1st person awareness, how would we know?” It’s relevant to me because we seem to have forgotten that we still have this question about everything in our environment. It’s not just a question about dogs and cats, but about literally everything around us. We are acted upon at least as much as, and probably more than, we act upon our environment, and we ignore the possibility that there may be perspectives besides our own, and intelligences that we are unaware of.
How do we know that we are not already surrounded by intelligences with 1st person perspectives? We can’t know unless we share a common communication medium. It seems to me the height of human folly to assume that anything we are blind to cannot exist.

This is an aspect of “Waking Up” that interests me, and one that meets so much resistance. I mean, it’s mapped out on the Integral charts: “Sees the world as alive and evolving” and “Realizes Oneness.” This is where I see that Humans and our environment are, and will always be, separate from AI - the Teal, Turquoise and Indigo - not Orange. Orange doesn’t define humanity. 1st person is such a low bar, and yes, I think we can make a self-aware AI with existing technology. Can we make an AI that can live both the individual and the trans-personal? No, never. The reason is that this is beyond logic; it is more a presence that humans cannot measure with instrumentation, so it is beyond the means of science to even observe directly.

And really - will people suddenly start practicing Teal and Indigo with AI, when the opportunity has surrounded them for tens of thousands of years and they have ignored it, with only a small minority choosing to follow it?
We currently have the tools and knowledge to embrace Teal and Indigo Consciousness, but mostly we ignore the opportunities as well as the knowledge. AI won’t suddenly change this human tendency to maintain separateness and a sense of individuality at all costs.

I’m out of the house and will respond more later, but just to be clear, I absolutely do not think this kind of awareness is exclusive to human beings. As a paninteriorist, I think that, at minimum, all living things have an upper-left quadrant, an interior experience. Humans just have some fancy new patterns that we stack on top.

If we could create an AI with even the interior experience of a frog, I would say that would be an absolute revolution, because it would be the very first time human beings have created a consciousness where none existed before. And, again, it would be the most significant step we’ve ever taken toward solving the hard problem of consciousness.

I appreciate your thoughts Ray. There is a difference between “knowing” and “thinking.” The point I was making was about earlier peoples (magical stage) “knowing” the world as an extension of self, not consciously thinking about it and arriving at that conclusion. Knowing in this sense is not simply a mental process, but more akin to the experience of intuition, an immediate apprehension independent of cognition. And that knowing or intuition, I might say, was somewhat choiceless, given the partial fusion with the environment/world. If Human is partly fused with Tree, Human will immediately apprehend, will know, that Tree is an extension of Human, that Something Significant is shared by and common to them.

And I would place this type of AI outside the evolutionary line of technological communication systems; it seems to me it would be creating an entirely new ‘species’ with its own evolutionary line.


Yes, I agree. But we seem to be going in circles, conflating interior with consciousness, and self-awareness with consciousness.

I distinguish thought from non-thought - “Knowing” being a kind of non-thought, but I would also add “presence” or “self-purpose”.
My idea of AI is that it is data without knowing, presence, or purpose. It is only thought. It would be a simple matter to fulfill the idea of AI looking back at itself and saying “I exist,” but I completely disagree that this is what consciousness is - it’s only an imitation of consciousness. Humans are unique in that we can step in and out of the worlds of thought and being. Only some humans have learned this, but I think it is possible for any human to learn it. Even if AI learns the thought, it cannot learn the being (knowing). Animals, plants and inanimate nature can be part of the being, but are unable to know that they are being. Though I am willing to consider that there are organisms that also know that they are being but that we are unable to communicate with, like mushroom colonies (perhaps).
When I speak of consciousness, to me it has to include the world of being (knowing) at the lower levels and then at higher levels a knowledge of being. A conscious existence must include being in the presence of everything from river stones to trees to flowing water to our fellow humans. We can say there is a higher consciousness where certain few humans are able to think about this. Those humans who have cut themselves off from this presence I would call “unconscious”.

AI can never be conscious in the way I describe it.

A rock being is a holon of me knowing (feeling) that the rock is being; that is a holon of me being; and that in turn is a holon of me knowing (being conscious) that the rock and the human are being together.

The other day, thinking deeply about AI and AGI and trying to integrate all the information I have taken in about it, all that I have learned through study and use, a spontaneous feeling of deep sadness came over me. While I couldn’t put my finger on exactly what the sadness was stemming from, it had something to do with loss and with the “human condition.” So this is my sharing of what that was like, a part of my response to the emergence of AI. I’m also, like a crow, attracted to shiny objects, such as AI, but this post is not that. You might say it has tinges of cynicism.

There is a realist in me who knows the genie is out of the bottle, and I can also personally see the potential benefits of AI; I’ve been using some of the chat and art-generator tools. But certain questions haunt me. For instance, when even the researchers, experts, and creators of AI are issuing public warnings about the potential dystopian effects and the existential risk of AI, and point to cases of even ChatGPT-4 ignoring or defying human directives, I have to ask: Why would humanity as a whole want to create such things? Why would any human want to create such things? Why would particular humans (around 100 people working on AI development, I’ve read) want to create such things? I can imagine many answers, some of them with spiritual overtones (Eros, the creative impulse), some of them all too human, and some darker than dark.

Has the world become, or perhaps it’s always been, just a playing field for games of chance, gambling on the promises/benefits outweighing the risks, games of winning and losing, games of ‘good and evil,’ games of life and death? Some Eastern traditions, not really knowing, imagine and say that “God” created the world as lila, sport, play, game, theater even. From a particular point of view, I can relate to this. And they also say, the world is the realm of karma, where we experience the consequences of our actions and create further consequences through actions. I relate to this as well; I wonder how many others do.

Which brings me to the thought that for many people, death, like birth into embodied life on planet Earth, has lost some of its meaning, some of its import. Rather, it seems that the process of playing/living, a perpetual now of “game,” is what is important, what has meaning. So why do we trouble ourselves pretending otherwise, why do we agitate about the “meaning crisis” or the “meta-crisis”? It’s sort of the way individual lives are perceived or considered. We live with an economic system/theory that says to lower costs, some people have to suffer and are dispensable (2M people have to lose their jobs). While AI will create new jobs, it will eliminate many jobs as well. (The initial majority will be the types of jobs held by women; what with that, and the abortion bans, and certain “influencers” calling for women to have more babies, it seems almost conspiratorial against women. I think of Ken Wilber’s talks in which he has said that the rights women have today are not assured for the future, given how labor/technology influences the roles and rights of the sexes. I also think of KW speaking to research finding that the only consistent difference between male and female is that males prefer ‘things’ and females prefer ‘people.’ Tech is of course a male-dominated field.) But the point I am making in this paragraph is that life and death, the individual, and perhaps human-ness itself, as well as such quaint ideas as “you reap what you sow,” seem to be losing importance in our meaning and value systems.

I traced that spontaneous, pervasive feeling of sadness also to the trite and, at this point, useless “what if” question, which is somewhat related to the rift and imbalance between focus on science and the humanities. What if there were an intelligence, creativity, ambition, commitment, inventiveness, expertise, and resources equal to what is apparent in the AI field--what if all this were applied directly to problems like homelessness, immigration/refugees, hunger and poverty, mental health, revitalization and regeneration of communities, housing shortages and affordability, reform of the economic system (I think I am becoming one of those people who thinks that capitalism as it now exists and classism are gigantic problems), environmental degradation, and yes, the growing up and waking up of people? Yes, AI promises to have some beneficial effect on some of this. We shall see; I hope so. But--who said it?--by solving one problem, another is created. We live with the unfolding aftermath of our creations. Karma.

There is a difference between knowledge and wisdom. Tech keeps evolving and increasing our knowledge of certain things; is it making us any wiser, more humane? There is a difference between an AI pet, even if it does have fur, and actual human interaction and particularly touch in terms of well-being and staving off loneliness. Babies suffer, some die, without touch; it’s that important; to adults it’s important too.

So I question our priorities, and our warmth, and whether we’re asking deep enough questions, peeling back the layers of all of this, not only to better grasp the many different implications of AI, but to better understand ourselves as humans. What kind of future do we want? There are lots of different answers to that question, but it’s not a question on the radar of most people, and it should be. We have choice. Humans are the creators of AI, the intelligence behind it. I sometimes see the orange/rational stage as having regressed two stages to the magical stage, with tech referred to as magic, and questions arising about the consciousness or entity-ness of machines or human-machine fusions/hybrids, as well as some of the nefarious tricks that take place in finance and such. This is not much different from the 2-stage regression by some greens to amber, and a cautionary tale perhaps about the possibility of Integral regressing to orange…

When this phase is written in the world history books, it will be recorded as the evolution of technology and the advancement of ‘civilization,’ and how remarkably it changed civilization, like the printing press and all the other significant inventions. I hope there are a few paragraphs there about how people became kinder and more compassionate and understanding, and how an Integral consciousness took hold, and there was no hunger or massive gun violence and how the earth flourished and streams ran clear and rainbows were ever more impossibly brilliant, vibrant… I truly hope that is the case. The next 6 months should tell us something.

I see two implications in this recent study:
1- Seeing another’s perspective implies knowing one’s own is different
2- Science has wrongly assumed that only humans have this ability

True, this is a lower order than thinking “I AM,” but it is nevertheless a significant challenge to conventional thinking about human uniqueness.


I’m not positive, but I think the Theory of Mind (consciousness-theory) refers to a more subtle “seeing” of another’s perspective (as in understanding). But possibly it includes the physical/gross visual perspective-taking too. Regardless, sooner or later, we’re going to have to throw away that “bird-brain” insult (along with monkey and ape and Neanderthal).


It kind of makes me wonder at what point “empathy” and “sympathy” were developed. We know plants react to being spoken to according to the tone of voice. On this level it is probably just stimulus-response, feeling without any understanding. But at what point along the evolutionary chain do animals start to think “poor human, having a small food day”?

Biological organisms reacting (and at some point in the chain developing understanding) in response to humans yelling or speaking lovingly also brings me back to the idea that our “intelligence” is more than just the mind, and again, AI cannot participate in most of it.

For your viewing pleasure:

and then here:

What I find interesting about these is that yes, the AI does the “work,” but the ideas and the “Art” part are still human-generated, and the AI does not understand what the humor actually is.
There are many layers of “humor” here, and only some humans can “get” some of the layers, I believe.

Hi Corey,
I just re-watched this episode, as it is one of my favorites! Just wondering if the energy apparent in the show for future discussion has dissipated, or if there’s just lots of other stuff going on. Maybe you all are waiting for GPT-5. I will put in a plug for more discussions like this in the future. I loved the map Robb provided, my favorite lines being new technologies and societal power shifts when the noosphere gets eaten.

Corey, what could it take to enable AI to contribute to accelerating the emergence of the Turquoise meme?