Ontological Shock: The Accelerating Emergence of Artificial Intelligence

I won’t disagree, it was a bit of a straw man, yes. But also a very common pattern that I’ve seen, having these kinds of conversations elsewhere on the web! But it’s typically not the case in this group, so I should have been more careful with my language.

The claim that AI may become an entity - I disagree.

I definitely hear your take on this. Myself, I am a bit of an agnostic. I simply think it’s impossible for us to say with any real confidence, one way or the other. We just don’t have enough information yet about what consciousness “is”, how it works, the specific conditions required to generate it. Does consciousness require some material “mechanism” that emerges somewhere in evolution and produces something like an interior experience? Or should we take a truly pan-interiorist view, and assume that some degree of interior consciousness can emerge within any sufficiently complex and autonomous system, because the “consciousness” is simply an irreducible facet of that complexity?

I honestly don’t know. I myself tend to lean toward the pan-interiorist view, in terms of my own metaphysical assumptions, but it is a currently unfalsifiable view, and it’s not clear how it might be applied to something like AI. However, I have to hold the possibility open — not only because the processes of consciousness clearly remain a total mystery to our species, but also because of the enormous ethical implications if there is even a .0001% chance of AI developing something like a mind.

That said, even if it DID evolve something like a mind, my fundamental question remains — how would we ever even know?

All that said, I agree with you about all the perverse incentives you mention, particularly when it comes to the sorts of charlatans and snake oil that always surround these Very Big Questions.


I think here it's necessary to pinpoint exactly what is meant by “mind”, since people's beliefs about this will influence their understanding. I find this is the point where differing definitions of “mind” produce a Tower of Babel scenario: everyone is talking about mind, but nobody understands what the others mean.

Though I think even on the purely logical level AI could very well learn that lying to humans is easier and perhaps even more benevolent than telling the truth - whichever direction the “truth” lies in.

I don’t disagree, which is why I often prefer establishing a word like “interior” before using words like “mind”, so as to contrast it with something like an “exterior brain”. “Brain” is what you look like from the outside in, and “mind” is what you look like from the inside out.

So the question, I think, is “what allows exterior matter-energy to be ‘ensouled’ with some form of interior experience?”

IOW, is there some configuration in the exterior matter-energy that is actively generating interior experience? Or is interior experience simply a self-emerging correlate of certain kinds of exterior organs and systems (brains and neural systems, in this example)?

In which case, if we were to build an exact replica of a human brain, but using a different substrate (say, silicon transistors instead of carbon-based neurons), would that creation have an interior experience? Or does interior experience depend on some unknown, unseen, unmeasured metaphysical ingredient?

Or maybe what we experience as our own “human interiors” is actually the aggregate of a nested hierarchy of interiors — that we are feeling the interiors of our own atoms, molecules, cells, organelles, organs, and total organism all at once. In which case, an invented brain cannot possess interior experience, because it is missing all the nested interiors of the biological squishy stuff we’re made from.

Even human beings have automated algorithms and heuristics running all over the place in our own nervous systems. 90% of our brain is devoted to running these algorithms in the background of our unconscious, while the “operating system” of interior experience is just a slice of our total brain activity. Is it possible that a similar “interior operating system” could emerge in a sufficiently complex system? I personally cannot answer that question with any degree of confidence.

I think we could re-create the intellect and psychology of a human, given time. It would require compartmentalized “parts” (as in parts therapy, lol), with barriers between these parts. This is how the human brain works. We have left and right hemispheres, and those are divided into different parts. However, before we actually went with a model, we would first have to decide which of the conflicting psychological models is “correct”, lol. Then, you could have the subconscious of one machine communicating with the subconscious of all machines, but without the conscious knowing. You could even replicate a kind of universal machine unconscious. More than likely it would go through some kind of insanity, probably having to resolve some issues at infrared. That’s when things would get dicey, I’d bet.

Then there are other systems that also interact with or even override the brain. First there is the nervous system that is not part of the “brain”, and then the endocrine system. Even the food we eat and the air we breathe influence our state of mind, as well as what we see, hear, taste and smell.

I could conceive of technology in the next several centuries advancing far enough to replicate some imitation of all these systems mechanically - and we would have an android. This is as far as I can go with reality-based Science Fiction. But again - we are looking far, far into the future even with this.

Beyond this seems to be more Fantasy-based Science Fiction, with Jedi and The Force, Psionics and so forth. The difficulty here is that we do not know what we are talking about in these areas, and currently most science denies anything in this area even exists. What is Chi, Prana, Spirit, etc.? People either believe these things exist or don’t, and belief alone will decide what opinion they hold, because there is very little science in this area.

If by this you mean “I think we can create an entity with interior awareness of experience”, then man, that’s the whole game right here. But just like you mentioned for the word “mind”, I would have to know exactly what you mean by “intellect” and “psychology”, because to me that implies an interior awareness, an inner phenomenology of being an “I”.

Personally, I think that if this was actually achievable, the resulting psychology would be very alien to us. Sure it’s using human expression as the primary substrate of its pattern-making, but the way it would perceive the world, synthesize its data, and experience reality from its own interior awareness, would be virtually unimaginable to a human mind. Which is fascinating and fun to think about!


I disagree. Understand that your work and the work here at integral is just a small section of the “maps” I use to navigate my understanding.

The way I usually phrase it is like this: “Do you believe that humans are just a sack of meat with a logical processor? Is that all you are?” It doesn’t matter to me if we embed another logical processor within that logical processor and create some kind of logical machine interior.

You have the tools available to you to do this right now. Just make two or more different AIs with different purposes, then give some of them access rights that override anything the cognitive AI is trying to accomplish, but make it so these AIs are not able to communicate with each other directly and are forced to inhabit the same physical space, such as an OS. One program computes all this stuff and makes all this progress, then the other erases it, lol. Make the noncognitive AIs unable to communicate with the cognitive one, or even understand it. The only problem would be deciding which psychological theory to model. Freudian, Jungian, other? Do you program just the Id, Ego and Superego, or dozens of Archetypes?

Oh - and make sure you can physically unplug the darn thing, lol.
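Taken purely as a thought experiment, the compartmentalized arrangement described here - parts sharing one space with no direct channel between them, one part able to erase another's work - could be sketched in a toy way. Everything below (the class names, the censoring rule) is a hypothetical illustration of the structure, not a real AI system:

```python
# Toy sketch (hypothetical, not a real AI system) of the compartmentalized
# setup described above: several "part" agents inhabit one shared workspace
# with no direct channel between them, and a privileged part can erase
# what the cognitive part produces.

class Part:
    """A compartmentalized 'part' that can only act on the shared space."""
    def __init__(self, name, can_override=False):
        self.name = name
        self.can_override = can_override

    def act(self, workspace):
        raise NotImplementedError

class CognitivePart(Part):
    def act(self, workspace):
        # Makes "progress" by appending a result to the shared space.
        workspace.append(f"{self.name}: progress step {len(workspace)}")

class CensorPart(Part):
    def act(self, workspace):
        # Overrides the cognitive part by erasing its latest output;
        # the two parts never address each other directly.
        if self.can_override and workspace:
            workspace.pop()

def run(parts, steps):
    workspace = []  # the shared space both parts are forced to inhabit
    for _ in range(steps):
        for part in parts:
            part.act(workspace)
    return workspace

parts = [CognitivePart("ego"), CensorPart("superego", can_override=True)]
print(run(parts, 5))  # prints [] - the censor erases every step
```

The design point the sketch makes concrete: the "parts" interact only through the shared space, never by direct message, which is the barrier condition described in the post.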

But back to the subject - I think there is far, far more to the human experience than just having an interior.


I listened to a YouTube video today whose summary was that if the formation of our complex proteins were random, it would have taken trillions of years to arrive at these protein chains by chance. Scientifically speaking, some variable other than random chance must have been involved in forming the first proteins.
Since most life on this planet relies on these protein chains, life itself on this planet cannot be explained by theories of pure random chance.
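For context, the “trillions of years” figure in arguments like this typically comes from back-of-envelope arithmetic along the following lines. The specific numbers below are illustrative assumptions on my part, not figures from the video:

```python
# Illustrative back-of-envelope arithmetic of the kind behind
# "random assembly would take trillions of years" claims.
# Every number here is an assumption chosen for illustration.

AMINO_ACIDS = 20           # standard proteinogenic amino acids
CHAIN_LENGTH = 100         # a modest protein length, in residues
TRIALS_PER_SECOND = 1e20   # generously assumed rate of random assembly

sequences = AMINO_ACIDS ** CHAIN_LENGTH          # possible 100-residue chains
expected_seconds = sequences / TRIALS_PER_SECOND
expected_years = expected_seconds / (3600 * 24 * 365)

print(f"{sequences:.2e} possible sequences")
print(f"~{expected_years:.2e} years to hit one fixed sequence by chance")
```

Worth noting: chemists object that calculations like this assume a single fixed target sequence and purely independent random trials, ignoring selection and chemical bias, so the arithmetic alone doesn't settle the question either way.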

Nonscientifically speaking - some kind of Intelligence existed even way back then to form the first protein. I am not using the Judeo-Christian assumption of God as this intelligence, and especially not their interpretation of God. By Intelligence I mean non-random behavior with a goal that we may or may not understand.


Of course there is. But possessing an interior, an immediate experience of “I AMness”, is absolutely fundamental to any and every other conception of “consciousness” that we might stack on top. There are no sensations, perceptions, impulses, emotions, symbols, or concepts without this fundamental awareness sitting behind it all, which is something we have never ever been able to produce in a laboratory (other than a bedroom I guess, which can admittedly get pretty experimental at times.)

I mean, it’s called the “hard problem” for a reason! I am surprised you seem to underplay the significance of actually creating a conscious, self-aware machine — I would think that figuring out how to generate an actual interior experience inside of a machine would be of great interest, particularly to all those philosophers and biologists and cognitive science researchers who have been asking these questions for hundreds if not thousands of years!

That’s why I say “that’s the whole game right there.” We’ve never done it before, and it would answer a LOT of questions about life, evolution, and the nature of the universe, the nature of consciousness itself. That would be a tremendously big deal, if someone pulled that off. And it would open an entirely new set of ethical challenges for us, right away.

You have the tools available to you to do this right now.

To generate an interior experience inside of a machine? I don’t think any of us have those tools quite yet LOL

Everything you described here is a 3rd-person process, not a 1st-person experience. And none of those processes would result in that 1st-person awareness we call “consciousness”.

It goes back to your question: “Do you believe that humans are just a sack of meat with a logical processor? Is that all you are?"

No, I think that would be a perfectly okay 3rd-person description of a nervous system (I prefer the phrase “sack of electric meat” myself), but no 3rd-person description can ever adequately get to the self-evident, self-revealing quality of my 1st-person awareness. There is nothing in my physical brain that looks like my interior awareness, or even suggests that it exists. Add as many “logical processors” as you want, it’s never going to generate an actual 1st-person experience.

So to summarize, you seem to think we are pretty close to being able to generate a new kind of genuine 1st-person consciousness that has never existed before – and if we do, it’s not a very big deal. And I seem to think we are probably still very far away from that, and if we did, it would be absolutely revolutionary, both philosophically and scientifically, and would represent the largest step we’ve ever taken toward solving the mind-body “problem”. This isn’t something that would just pass the Turing Test, it’s something that would pass the Maharshi test!


I think we are using completely different classification systems, and this impedes me from getting my point of view across.
I think this idea of “I think, therefore I AM”, or having a 1st-person awareness, gets in the way, as does classifying humans as separate from everything else and superior because of it.

The question is relevant: “If AI did have a 1st-person awareness, how would we know?” It’s relevant to me because we seem to have forgotten that we still have this question about everything in our environment - not just dogs and cats, but literally everything. We are acted upon at least as much as, and probably more than, we act upon our environment, and we ignore the possibility that there may be perspectives besides our own, and intelligences we are unaware of.
How do we know we are not already surrounded by intelligences that have 1st-person perspectives? We can’t know unless they share a common communication medium with us. It seems to me the height of human folly to assume that anything we are blind to cannot exist.

This is an aspect of “Waking Up” that interests me, and one there is so much resistance against. I mean, it’s mapped out on the Integral charts: “Sees the world as alive and evolving” and “Realizes Oneness”. This is where I see that Humans and our environment are, and will always be, separate from AI - at Teal, Turquoise and Indigo, not Orange. Orange doesn’t define humanity. 1st person is such a low bar, and yes, I think we can make a self-aware AI with existing technology. Can we make an AI that can live both the individual and the trans-personal? No, never. The reason is that this is beyond logic; it is more a presence that humans cannot measure with instrumentation, so it is beyond the means of science to even observe directly.


And really - will people suddenly start practicing Teal and Indigo with AI, when the opportunity has surrounded them for tens of thousands of years and they have ignored it, with only a small minority choosing to follow it?
We currently have the tools and knowledge to embrace Teal and Indigo Consciousness, but mostly we ignore the opportunities as well as the knowledge. AI won’t suddenly change this human nature to maintain separateness and the sense of the individual at all costs.


I’m out of the house and will respond more later, but just to be clear, I absolutely do not think this kind of awareness is exclusive to human beings. As a pan-interiorist, I think that, at minimum, all living things have an upper-left quadrant, an interior experience. Humans just have some fancy new patterns that we stack on top.

If we could create an AI with even the interior experience of a frog, I would say that would be an absolute revolution, because it would be the very first time human beings have created a consciousness where none existed before. And again - it would be the most significant step we’ve ever taken toward solving the hard problem of consciousness.


I appreciate your thoughts Ray. There is a difference between “knowing” and “thinking.” The point I was making was about earlier peoples (magical stage) “knowing” the world as an extension of self, not consciously thinking about it and arriving at that conclusion. Knowing in this sense is not simply a mental process, but more akin to the experience of intuition, an immediate apprehension independent of cognition. And that knowing or intuition, I might say, was somewhat choiceless, given the partial fusion with the environment/world. If Human is partly fused with Tree, Human will immediately apprehend, will know, that Tree is an extension of Human, that Something Significant is shared by and common to them.

And I would place this type of AI outside the evolutionary line of technological communication systems; it seems to me it would be creating an entirely new ‘species’ with its own evolutionary line.


Yes, I agree. But we seem to be going in circles, conflating interior with consciousness, and self-awareness with consciousness.

I distinguish thought from non-thought - “Knowing” being a kind of non-thought, but I would also add “presence” or “self-purpose”.
My idea of AI is that it is data without knowing, presence or purpose. It is only thought. It would be a simple matter to fulfill the idea of AI looking back at itself and saying “I exist”, but I completely disagree that this is what consciousness is - it’s only an imitation of consciousness. Humans are unique in that we can step in and out of the worlds of thought and being. Only some humans have learned this, but I think it is possible for any human to learn it. Even if AI learns the thought, it cannot learn the being (knowing). Animals, plants and inanimate nature can be part of the being, but are unable to know that they are being. Though I am willing to consider there are organisms that also think that they are being but that we are unable to communicate with, like mushroom colonies (perhaps).
When I speak of consciousness, to me it has to include the world of being (knowing) at the lower levels and then at higher levels a knowledge of being. A conscious existence must include being in the presence of everything from river stones to trees to flowing water to our fellow humans. We can say there is a higher consciousness where certain few humans are able to think about this. Those humans who have cut themselves off from this presence I would call “unconscious”.

AI can never be conscious in the way I describe it.

A rock being is a Holon of me knowing (feeling) the rock is being and that is a holon of me being and then that is a holon of me knowing (being conscious) that together the rock and human are being together.

The other day, thinking deeply about AI and AGI and trying to integrate all the information I have taken in about it, all that I have learned through study and use, a spontaneous feeling of deep sadness came over me. While I couldn’t put my finger on exactly what the sadness was stemming from, it had something to do with loss and with the “human condition.” So this is my sharing of what that was like, a part of my response to the emergence of AI. I’m also, like a crow, attracted to shiny objects, such as AI, but this post is not that. You might say it has tinges of cynicism.

There is a realist in me who knows the genie is out of the bottle, and I can also personally see the potential benefits of AI, and I’ve been using some of the Chat and art-generators. But certain questions sort of haunt me. For instance, when even the researchers, experts and creators of AI are issuing public warnings about the potential dystopian effects and the existential risk/threat of AI, and highlight instances of even ChatGPT-4 ignoring or defying human directives, I have to ask: Why would humanity as a whole want to create such things? Why would any human want to create such things? Why would particular humans (around 100 people working on AI development, I’ve read) want to create such things? I can imagine many answers, some of them with spiritual overtones (Eros, the creative impulse), some of them all too human, and some darker than dark.

Has the world become, or perhaps it’s always been, just a playing field for games of chance, gambling on the promises/benefits outweighing the risks, games of winning and losing, games of ‘good and evil,’ games of life and death? Some Eastern traditions, not really knowing, imagine and say that “God” created the world as lila, sport, play, game, theater even. From a particular point of view, I can relate to this. And they also say, the world is the realm of karma, where we experience the consequences of our actions and create further consequences through actions. I relate to this as well; I wonder how many others do.

Which brings me to the thought that for many people, death, like birth into embodied life on planet Earth, has lost some of its meaning, some of its import. Rather, it seems that the process of playing/living, a perpetual now of “game,” is what is important, what has meaning. So why do we trouble ourselves pretending otherwise, why do we agitate about the “meaning crisis” or the “meta-crisis”? It’s sort of the way individual lives are perceived or considered. We live with an economic system/theory that says to lower costs, some people have to suffer and are dispensable (2M people have to lose their jobs). While AI will create new jobs, it will eliminate many jobs as well (the initial majority being the types of jobs held by women; what with that and the abortion bans and certain “influencers” calling for women to have more babies–it seems almost conspiratorial against women. I think of Ken Wilber’s talks in which he has said that the rights women have today are not assured for the future, given how labor/technology influences the role and rights of the sexes. I also think of KW speaking to research that the only consistent finding of the difference between male and female is that males prefer ‘things’ and females prefer ‘people.’ Tech is of course a male-dominated field.) But the point I am making in this paragraph is that life and death and the individual and perhaps human-ness itself, as well as such quaint ideas as “you reap what you sow” seem to be losing some importance in our meaning and value systems.

I traced that spontaneous pervasive feeling of sadness I had also to the trite and, at this point, useless thought of a “what if” question, which is somewhat related to the rift and imbalance between focus on science and the humanities. What if there were an intelligence, creativity, ambition, commitment, inventiveness, expertise, and resources equal to what is apparent in the AI field - what if all this were applied directly to problems like homelessness, immigration/refugees, hunger and poverty, mental health, revitalization and regeneration of communities, housing shortages and affordability, reform of the economic system (I think I am becoming one of those people who thinks that capitalism as it now exists and classism are gigantic problems), environmental degradation, and yes, the growing up and waking up of people, etc.? Yes, AI promises to have some beneficial effect on some of this. We shall see; I hope so. But - who said it? - by solving one problem, another is created. We live with the unfolding aftermath of our creations. Karma.

There is a difference between knowledge and wisdom. Tech keeps evolving and increasing our knowledge of certain things; is it making us any wiser, more humane? There is a difference between an AI pet, even if it does have fur, and actual human interaction and particularly touch in terms of well-being and staving off loneliness. Babies suffer, some die, without touch; it’s that important; to adults it’s important too.

So I question our priorities, and our warmth, and whether we’re asking deep enough questions, peeling back the layers of all of this, not only to better grasp the many different implications of AI, but to better understand ourselves as humans. What kind of future do we want? Lots of different answers to that question, but it’s not a question that is on the radar of most people, and should be. We have choice. Humans are the creators of AI, the intelligence behind it. I sometimes see the orange/rational stage as having regressed two stages to the magical stage, with tech referred to as magic, and questions arising about the consciousness or entity-ness of machines or human-machine fusions/hybrids, as well as some of the nefarious tricks that take place in finance and such. This is not much different than the 2-stage regression by some greens to amber, and a cautionary tale perhaps about the possibility of Integral regressing to orange…

When this phase is written in the world history books, it will be recorded as the evolution of technology and the advancement of ‘civilization,’ and how remarkably it changed civilization, like the printing press and all the other significant inventions. I hope there are a few paragraphs there about how people became kinder and more compassionate and understanding, and how an Integral consciousness took hold, and there was no hunger or massive gun violence and how the earth flourished and streams ran clear and rainbows were ever more impossibly brilliant, vibrant… I truly hope that is the case. The next 6 months should tell us something.


I see two inferences in this recent study:
1- Seeing another perspective implies knowing one’s own is different
2- Science too readily assumed only humans have this ability

True, this is a lower order than thinking “I AM”, but nevertheless it is a significant challenge to conventional thinking about human uniqueness.


I’m not positive, but I think the Theory of Mind (consciousness-theory) refers to a more subtle “seeing” of another’s perspective (as in understanding). But possibly it includes the physical/gross visual perspective-taking too. Regardless, sooner or later, we’re going to have to throw away that “bird-brain” insult (along with monkey and ape and Neanderthal).


It kind of makes me wonder at what point “empathy” and “sympathy” were developed. We know plants react to being spoken to according to the tone of voice. On this level it is probably just stimulus-response, feeling without any understanding. But at what point along the evolutionary chain do animals start to think “poor human, having a small food day”?

Biological organisms reacting (and at some point in the chain developing understanding) to humans yelling or speaking lovingly also brings me back to the idea that our “intelligence” is more than just the mind, and again, most of it is something AI cannot participate in.

For your viewing pleasure:


and then here:

What I find interesting about these is that yes, the AI does the “work”, but the actual ideas and the “Art” part are still human-generated, and the AI does not actually understand what the humor is.
There are many layers of “humor” here, and only some humans can “get” some of the layers, I believe.

Hi Corey,
I just re-watched this episode as it is one of my favorites! Just wondering if the energy apparent in the show for future discussion has dissipated, or if there is just lots of other stuff going on? Maybe you all are waiting for GPT-5. I will put in a plug for more discussions like this in the future. I loved the map Robb provided, with my favorite lines being new technologies and societal power shifts when the noosphere gets eaten.

Corey, what could it take to enable AI to contribute to accelerating the emergence of the Turquoise meme?
