Ontological Shock: The Accelerating Emergence of Artificial Intelligence

I too find that interesting, if only because developmental researchers also use something like “language models” to create their assessments. I am always interested when we start seeing an overlap of terminology from otherwise non-related fields.

Similarly, there is a new kind of GPT prompting structure that folks are talking about, and I am experimenting with, called the “Tree of Thoughts” framework, which has notions of depth and span (which they call “breadth”) associated with it. As more and more of these overlaps come onto our horizon, it’s not hard to see how Integral could make a potentially large contribution to the field of artificial intelligence, particularly the way it generates, manages, and distributes knowledge.

Another thing I’ve been thinking about recently is just how strange all this feels, and how much of that strangeness I think can be attributed to an uncanny-valley-like effect that is evoked when playing with these technologies. There is a subtle sense of “haunting”, almost, which I think is due to the fact that we are interfacing with these systems using language, our primary intersubjective technology for exchanging understanding from one subject to another. Language is for subject-subject relationships, not for subject-object relationships. We don’t talk to dead matter — and when we do, we know we are being a bit silly and really just talking to ourselves, because it doesn’t talk back.

GPT, however, does talk back. So when we engage with AI using the intersubjective technology of language, there is a natural expectation that there is another subject somewhere on the other end. And while we cognitively know there isn’t, I think there are likely tens of thousands of years of evolutionary instinct somewhere in our brain stem insisting that GPT possesses an actual bona fide subjectivity, or else we would not be capable of interacting with it via language.

As this continues to emerge and AI becomes ever more ubiquitous, will we be living the majority of our lives in an informational space that is constantly evoking this “uncanny valley” feeling? I wonder what that will do to our long term mental health!

I love these questions. For me, it’s almost like asking “can subtle energies be transmitted via gross matter (e.g. electromagnetic patterns in a computer)?”

And I think that, in many ways, it can — for example, we can discern “heart” in a sentence, or in a poem, transmitting through the idle signifiers that we see on the page. It seems to be one of those “it’s not what you say, but how you say it” type deals.

In the main talk, Bruce raises the point that it is hypothesized that being polite to GPT actually produces higher-quality results. And my sense is, there may be something like subtle-energy pattern-making happening here. Perhaps good data is simply more “polite” or more “respectful” than bad data? Maybe including polite terms in your prompt results in higher-quality pattern-matching, leading the model to higher-quality sources and segments in its training data?

I am not sure of the mechanism here, but it makes me wonder if perhaps this sort of overall pattern recognition is enough to capture and re-transmit some of the “heart” that exists in ordinary human interactions. It certainly seems capable of simulating denser forms of subtle transmission — emotional content, for example. And I’ve had some luck getting it to apply some transpersonal themes as well (e.g. “identify some useful Witness-state insights from this content transcript.”)

Ultimately though, I think the “embodied knowledge & distributed knowledge” polarity is helpful here, as is the “AI as Other & AI as Extension of Self” polarity. If we regard AI as a separate entity, then we might get caught up in concerns about its capacity to “speak from the heart”. However, if we are using it as an extension of our own self, our own creativity, and our own heart, then of course it can help us to transmit those kinds of frequencies, because we’re not looking at AI as a source of heart, but as a vehicle of heart.

Just some random thoughts!

Here the loaded phrase is “higher quality”. This is a subjective judgement based on bias regarding what is “higher”. Sure - people who tend to use 10 words when 3 would suffice will tend to see that as “higher quality” than being blunt, for example. I don’t see that as higher quality, but an affectation of quality. I sense a judgement that more direct speech is lower quality and roundabout speech is higher, which I disagree with.

It is completely possible to plot out on data points “polite” speech. It’s just various linguistic functions that have been analyzed in the realm of linguistics for decades. The issue is - politeness can fall flat on humans if there is a sense that it is not genuine - and how can AI be “genuine”? Some cultures don’t like the question “How is your day going?” The English Native speaker considers it polite, while some non-native speakers might ask “Do you REALLY want to know, or do you want me to just make up an answer?”

What I would be interested in would be listening to an American ChatGPT talking to an Eastern European ChatGPT and then having them psychoanalyze each other. That would be a hoot. Or two ChatGPTs trying to convince each other of two opposing perspectives.


I’ve been following online conversations/podcasts with various people, to hear the diversity of thought around AI, and how people fundamentally frame it. One person, a Buddhist monk well-acquainted with AI and some of the creators of it, frames it as “data-ism is religion.” He lays out a pretty good case in the sense of saying religion tells us what is ‘real’, and how we should act towards it, and says data-ism is the preferred religion of the market and the state. He’s not averse to AI, btw, has his own ideas of how to bring wisdom and goodness to AI (through creation of an AI trained on the wisdom and goodness, he says, of spiritual practitioners; a wisdom AI that would then interact with other AI systems to “teach and guide them.”)

Another credentialed person (four years at NASA creating algorithms, among other things) sees AI fundamentally as similar to human magicians who enchant and delight us with their tricks, all of which are based on illusion and deception. Magicians keep their methods hidden, secret, and she sees AI companies doing the same thing, not being transparent about the data sets their products were trained on, for instance, or sharing to the extent that they understand exactly how AI works. She is not totally averse to AI either, but definitely believes there’s too much hype around it.

Another person used Hannah Arendt’s “banality of evil” phrase to describe their fundamental take on AI, tech being so commonplace and all-pervasive in society and yet clearly producing some “dark” effects (along with good and mediocre). Duty vs. conscience is a central core of the argument, with people saying that using tech, including AI probably in the near future, is construed as a duty in order to be able to participate in society, with more and more institutions and companies requiring it, and that this duty will override some people’s moral and ethical senses and personal responsibility.

So the fundamental takes on it are varied, and it’s interesting to see the archetypes and projections coming up. I myself do think it’s a big step in the evolution of technology, which is to say, a step in the evolution of (call it) intelligence of the particular humans who created it. Whether it will be an evolutionary step for humanity as a whole in terms of goodness, truth, and beauty is yet to be seen. Its emergence has been compared to the profound significance of the invention of the steam engine, and to the advent of electricity in terms of cultural/societal change. But it has also been compared to the profound significance of the invention of the nuclear bomb, and the nuclear age in general. So there’s that.

I wonder if a dumbing down of at least some humans might occur. One podcast I listened to talked about studies done around GPS in vehicles, which have shown that people who used GPS 100 times to go from here to there were unable to then make the same trip on their own, using their own sense of orientation and direction, memory, recognition of landmarks, etc. Perhaps atrophy or a laziness sets in in certain parts of the brain when there is total reliance on an external device to function in particular ways for us.

As for the people creating AI models and programs, I can imagine them having some “God-like” experiences. I can imagine them looking at, being mesmerized, even awed by the computing power, by how rapidly processes happen, what is produced by the models. I can imagine them entering states of consciousness outside of space and time, being entranced, being in subtle states, being so raptly attentive, caught up, that they temporarily lose the sense of self, of separateness, and perhaps experience a merging with the object they’re ‘meditating’ on, and have an experience of non-duality, where the machine/computer/its workings and they are one. I can easily imagine that. Whether or not they call this God, I don’t know.

That’s an interesting (and potent) practice; I’ve used it too.

Yep.

If human consciousness is defined (and this is only one definition of a big concept) as the first person subjective experience of knowing that “I exist,” then yes, the gut and loins and heart etc are part of it. I don’t know what AI will be able to do in the future. We know that older forms of AI are able to sense energy: heat sensors, motion sensors, ultrasounds etc. I don’t know exactly how they do that, but they do. So who knows what might come next?

Have you watched the videos with “the godfather of AI,” Geoffrey Hinton? He worked on AI for some 40 years, most recently at Google, and is now dedicated to warning about its dangers. He says, if I understood correctly, that the “Superintelligence” we’ve yet to see the full explosion of has “human intuition.” (Forgive me if you’ve already tuned into him.) He uses the example of “man woman cat dog,” saying if humans were asked to group all cats as either man or woman and all dogs as either man or woman, humans name cats as woman and dogs as man. The Superintelligence does the same thing, and it is not based on large language learning. So hmmm…who knows where this is all going? (And it might be argued that “intuition” is not exactly the right word or concept, but that’s the word he used.)

No. It takes practice.


Not yet, but I will do a search. One person who had a slightly similar idea was Terence McKenna, way back before this became trendy. He basically said that if true AI ever came about, the first thing it would do would be to hide its existence from humans. He also brought up the concept of mushrooms being both conscious and intelligent - but the intelligence is so foreign to us that we cannot even recognize it with our intellect - until we take high doses of psychedelics.
This leads to the problem that humans can only form opinions on what they can understand, and science can only understand what it can measure. What is required is more intuitive scientific methods, which allow science to come up with the man-woman-cat-dog conclusions.
I’ll give an example. A friend of mine was selling jewelry that supposedly resonates with a frequency that is more “grounding”, turning a $3 bracelet into a $30-and-up bracelet. Yes, the claims are dubious scientifically, as with claims about crystals and all the other things in the mystical genre. But hundreds of thousands of years of experience have ingrained into humans that there is something about these things that cannot simply be explained away just because science doesn’t support it. Science says it isn’t in our genetics (because they can’t measure it). But every human can access this universal … idk … experience. One of my favorite examples of this is data that shows variations in human events with the phases of the moon, which “science” denies but the data is there waiting for an explanation.

For sure, if we consider different “intelligences” (Gardner) - some people are using some of them less and less. I think this also produces increasing imbalance between the hemispheres of the brain, and the masses are getting more and more “lopsided”. I think we have to go out of our way and actively reject some (perhaps most) aspects of modernity in order to maintain a balanced mind / body.

This is what I see in the AI dialogue when it is in the hands of the IT industry, particularly when they start talking about “consciousness”. Just the name consciousness is a deception, because AI proponents flip-flop on what consciousness means, almost as if they don’t understand the terminology they are using. I hear them making claims about what might be called “big consciousness” to make a big story and impress the masses - but when you try to pin them down and have a discussion about that, they start describing what I’d call “small consciousness”, and when I try to pin them down on that, all we are left with is “fast logic”. The same goes for the claim that speaking nicely to AI elevates the discussion - when I boil it down, we are just talking about polite verbiage, which linguists have mapped out as a data set since the 1960s, and we are merely seeing an acceleration of the speed at which this data set adapts when presented with similar data sets.
What I would like to see in an AI, as a sign of actual independent intelligence, would be an AI that responds “why are you asking me such a stupid question? Can’t you figure that out yourself?” LOL Or even says it more politely - but the point is for it to actually have its own opinion about the person asking the question, and an ability to have an opinion on how it should answer.
As an example, let’s go to religion, with the more expansive meaning of religion as:

What comes to mind is Eastern traditions such as Martial Arts masters, Zen Masters, Yogis, Gurus, and so forth - who often respond in ways that are incomprehensible to the initiate but through a process later become clearly wise. In pop culture we have Mr. Miyagi telling Daniel San to “Sand the floor”, which Daniel thought had one purpose but which we later found had a deeper purpose. It would be interesting to see if AI could handle these more intuitive data sets, most of them requiring personal experience to understand the WHY of doing them - and that personal experience of doing is something the AI would lack, thus being incapable of truly understanding its own teachings.

What I see is that our concept of how things “are” allows tech to conceal what our personal responsibility actually is, so modern humans are by and large mostly ignorant of what their responsibilities should be to themselves. The first step in reversing this is to pull back the curtain to show that Oz is just a lost old man, and then click the heels of our ruby slippers and return home, lol.

Is the Tree of Thoughts particular to GPT4 and later versions, or separate app you integrate with GPT?

I certainly hope Integral can contribute something.

I’ve been thinking about the Human Genome Project, which aroused so much excitement when it started in 1990, but also many ethical, legal, and social questions as to its implications. From the very beginning, the HGP allocated 5% of its annual budget (and was still doing so a couple of years ago) to researching and addressing these issues, which was instrumental in the US passing HIPAA in 1996. AI safety research, on the other hand, from what I’ve read, accounts for only a tiny fraction of all AI research, with roughly 99% of dollars spent on further development and only about 1% going to safety and ethical issues.

While genomics has played a role in various fields, and was instrumental in tracking changes in the SARS Covid virus, in creating testing for Covid-19, and in the development of vaccines, major problems during the pandemic were related to public messaging and polarization around masking, vaccines, lockdowns and such. In other words, human communication and perspectival attitudes and behavior. That is going to be a problem with AI as well, so perhaps there is a role for Integral. I would think, with so many AI experts issuing warnings of existential threat/risk and asking for regulation, that some companies might be open to hearing some novel ideas.

I always appreciated Freud’s elaborations on the uncanny, that sense of anxiety and strangeness we can feel when the familiar and the alien are present at once. I think Trumpism, MAGA, QAnon have prepared us a little bit for the uncanny; think of the neighbor or relative or friend who’s espousing far-out conspiracy theories… But yes, what effect will that have on some people’s mental health? And then if superhybridintelligence comes on the scene in the next decade, wow.

unless of course it’s spirit-possessed :slightly_smiling_face: an idea no more far-fetched than others we’re talking about. “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.”

My discussion with @LaWanna brings to mind a reply to this:
I already have several “magic rocks” that serve this purpose. :grin:

This has nothing to do with AI per se. What we are talking about here is voodoo, witchcraft, idol worship, and a thousand other names. This ancient practice is basically using a thing (object, chant, mandala, and now AI) to, let’s say … pretend … in a way that will lead to transformation.

So yeah, if you pour goat’s milk over your AI’s CPU while chanting “Om Namah Shivaya” 108 times a day for 90 days, the AI will facilitate transformation of those who approach it, as it will then be a “consecrated” AI and will no longer need to answer in words; its mere presence will create a transformation. (And the goat’s milk probably fried the circuitry.)

I am joking a little here, but on a serious note this is what people have been doing for millennia with inanimate objects (and themselves) so I don’t see what is unique or different with pretending AI can do it.

I think Paul Stamets espouses this as well about mushrooms. There is research that shows that plants do have a bit of sentience; they respond to human emotion and also have a response to being cut, pruned. Even water has been shown in a few studies to respond to human consciousness, e.g. pollutants transmuted. And of course, rain dances and other shamanizing towards changing the weather are possibly more of the same.

Yes.

This sounds somewhat related to the theory of consciousness called the Theory of Mind. With this theory, consciousness is present in a being/person/animal/entity if they can see another person’s perspective and see that it is different from their own perspective (or the same). It reminds me of the Zulu concept of Ubuntu: “I am because we are.” In Integral terms, somewhat related to the you/we 2nd person perspective.

Another archetype/projection related to AI, is the “double-edged sword.” It’s being applied to labor, ie. some jobs will be lost, but others created. But it’s also general enough to be applicable to other aspects of AI as well.

It’s more like a prompting strategy, where you get GPT to produce multiple variations, and then to “reflect” on those variations by having it rate them based on a certain set of parameters. Which basically becomes something like a brute-force effort to simulate holistic thinking (such as the crossword problem, where a suggested solution for a clue changes as soon as another clue is solved). Here’s a nifty graphic from the paper to help make sense of the logic:
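In case the graphic doesn’t come through, the same control loop can be sketched in a few lines of code. This is only a toy illustration, not the paper’s implementation: in the actual framework, both the `propose` step (generating candidate thoughts) and the `score` step (rating them) are themselves LLM calls, which I’ve replaced here with stand-in functions.

```python
# Toy sketch of the Tree of Thoughts control loop: at each level, branch
# each partial "thought" into several candidates, rate them all, and keep
# only the best few (a breadth-limited search). In the real framework,
# `propose` and `score` would both be LLM calls; here they are toy functions.

def tree_of_thoughts(root, propose, score, depth=3, breadth=2):
    """Expand candidate thoughts level by level, pruning to the
    top-`breadth` candidates at each of `depth` levels."""
    frontier = [root]
    for _ in range(depth):
        candidates = []
        for state in frontier:
            candidates.extend(propose(state))  # branch: generate variations
        candidates.sort(key=score, reverse=True)  # "reflect": rate them all
        frontier = candidates[:breadth]  # prune: keep only the best few
    return max(frontier, key=score)

# Demo with stand-in functions: "thoughts" are digit strings,
# and a thought's rating is simply the sum of its digits.
propose = lambda s: [s + d for d in "123"]
score = lambda s: sum(int(c) for c in s)

print(tree_of_thoughts("", propose, score))  # -> 333
```

The pruning step is what lets an early thought be abandoned when later ratings reveal a better branch, which loosely mirrors the crossword example above, where one answer gets revised in light of another.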

unless of course it’s spirit-possessed :slightly_smiling_face: an idea no more far-fetched than others we’re talking about.

I certainly cannot eliminate the possibility! And it’s fun to think about, for sure. But to me it feels like the AI “brain” is currently composed of any number of neural circuits, but is not yet “alive” as a self-sustaining, self-motivated system. Then again, from the perspective of a single cell in our body, perhaps the human nervous system would look similar, while producing a higher-stage felt interiority that no cell would ever be able to discern from that level of scale.

That said, it’s an important question to keep in the back of our mind, because there would be massive ethical consequences if we somehow discovered that there was some form of interior experience inside the machine.


I think I meant something different than “extension of the self” :slight_smile: I am not talking about how we project consciousness into nonliving objects, which yes, I agree, is essentially a form of animism.

I’m thinking more in terms of a distributed intelligence, which our own embodied intelligence depends upon. I don’t project consciousness into my calculator, but I definitely recognize a calculator as being a part of my own distributed intelligence, allowing me to “offload” certain cognitive tasks so I can focus on other tasks that require some degree of embodied human intelligence. In which case, we are not looking at AI as an “entity” but as yet another in a long series of informational systems, each of which has completely restructured our society. I already mentioned this point earlier, so I will just copy/paste that paragraph:

It is surreal having a 10 year old daughter as this stuff starts to hit the mainstream, and trying to imagine what her future will look like as society once again begins to autopoietically reorganize itself, just as it’s done for every prior communications paradigm. (The emergence of language at Crimson allowed neolithic magenta societies to emerge, the emergence of writing later allowed Amber to emerge, the emergence of the printing press allowed an eventual proliferation of literacy that planted the seeds for Orange to emerge, the emergence of electronic media like radio, TV, and film later allowed the Green stage to emerge, the emergence of internet technologies is allowing Teal to coalesce and cohere, etc. At every step of the way, new communications systems generate new forms of distributed intelligence, which allow society to reorganize in completely new ways as individuals in that society have access to new methods and resources for generating knowledge. AI is very much part of this ongoing legacy, and the effect it has on global society will be profound, I believe.)

All of which is to say, individual embodied intelligence is simultaneously lifted and limited by our distributed Intelligence. For example, orange rational individuals certainly existed in ancient cultures like, say, Ancient Egypt. It takes a bit of rationality in order to organize a massive public works program and build something like a pyramid. However, there were very few artifacts generated that were imprinted with this orange altitude, and therefore no rational “distributed intelligence” that would allow others to more easily grow to that stage, and therefore the rationality that did exist then was 100% translated according to the magic/mythic stage of the overall social discourse (and distributed intelligence) of the time.

For us in the 21st century, we have a massive, global-scale distributed intelligence — including books and media artifacts and education systems and communication technologies and so forth. None of these would be something that we recognize as an “entity”, nor are they things we need to project consciousness into in order to recognize them as extensions of ourselves, a field of distributed intelligence within which our individual embodied intelligence can grow.

A fascinating conversation! It reminds me of one that emerged in 1956, when a conference at Dartmouth launched the field of AI. The conversation that followed anticipated AGI within the next decade; the reality was an “AI winter” that lasted until the late '70s, owing largely to cost and technical limitations.

Since last November the LLMs have supported an explosion of limited applications and I welcome Integral’s work to explore evolutionary possibilities.

However, the employment, military, and regulatory impacts may dominate. This is already happening in robotics. While I don’t expect a winter, economic and political issues should be anticipated.

Glenn Bacon

@corey-devos I get what you are saying, I think, but I also think you are either not getting what I am saying or are subconsciously avoiding it.

AI is a distribution of a specific type of intelligence. I agree that this has great potential - but only in a narrow band of human intelligence. If we take the full length and breadth of human experience, then the band that applies to AI is even more narrow in comparison. As a tool AI can help humans process information. This is great, but it is only one kind of intelligence and AI does not really do anything else.

Where I sense avoidance in AI-philes is complete avoidance of what it means to be human and refusing to recognize that AI can only monkey a narrow band of that experience. When we talk about human experience, there is Intelligence in our experience that even humans do not understand, and science is blind to.

In Integral terms, I would propose that AI can assist with “Growing Up” - only. Unfortunately (in my opinion), Integral is already skewed with a disproportionate emphasis on Growing Up but generally avoids topics of Waking Up, which I would say is rooted in not Cleaning Up. More and More Growing Up through the aid of AI probably isn’t going to contribute to Cleaning Up or Waking Up.

Yes - I completely “get” that I can use AI as a vessel to increase my consciousness - but I sense a reluctance to see that this is what some humans who have “Woken Up” have already been doing for 2,000 years if we take the official scientific dating of history, though I would push that date back 30,000 years. I see a reluctance to acknowledge that primitive man has been using this “Waking Up technology” for a very long time. This is why I say, yes - I can use AI as an extension of self, but AI doesn’t really add anything to what has existed for millennia, back to a time when most humans knew the entire world, including rocks, trees, and animals, was merely an extension of self. For many Neolithic peoples this was just how the universe worked. To add to this, using AI as an aid to “waking up” through this approach is grossly inferior to Neolithic methods, because - well - it’s fake (Artificial). It’s a will-o’-the-wisp, because again, it only reinforces one aspect of human intelligence to the point of unbalancing and ignoring the majority of human intelligence and experience. Doubling down on technology as a means to human discovery will amputate humans from the full human experience.

Technology such as TV and the internet has already taken humans farther away from being human beings, not closer. Rather than include and subsume, focusing on technology excludes. I don’t see it as progress so much as a division of humanity. Maybe 1% of the population is taking advantage of this dissemination of data called the internet to Wake Up and Clean Up, but the 99% are going backwards into some kind of dystopia. There is no reason to think that this trend will suddenly reverse and more technology will bring more fullness to more people of the human experience. Perhaps a very small percentage of people, but the damage that will be done to the majority will far exceed and even threaten any potential gains.

I’m curious what you mean by “embodied Intelligence”, so I know specifically what you mean. I think I have a different understanding of embodied intelligence.
If I take this definition, I don’t see how AI lifts embodied intelligence rather than simply limiting it in comparison to earlier traditions and technologies that teach us to embody rocks, trees, rivers, and our community into our personal experience of being.

Embodied Intelligence: The ability to connect with and be informed by your physical experience of challenging situations.

Digging a bit deeper into embodied intelligence:
I see that unfortunately, much of the effort in the Business of Technology (which rules the Google algorithm) is going the opposite way from how I see it. The IT sector is focusing on only one type of intelligence, while what I see as the meaning of embodied intelligence is a much more expansive use of the word “intelligence”. I would include intelligences of mind, body, and spirit on one axis; self, community, and global population on another; and animate and inanimate on a third. There are hundreds of different “intelligences” in these axes, and there are probably more axes that we are not even aware of yet.
This article fleshes some of them out, but it’s only a small sampling of the potential of (non-IT) embodied intelligence.
https://leadershipembodiment.com/embodied-intelligence-the-art-of-leadership/


Ray, humor me, and tell me how you define this “one aspect of human intelligence” that is reinforced by AI.

What this made me think of is the debate around whether a “waking up” experience using synthetic psychedelics is as legitimate as a waking up experience that occurs spontaneously or through specific meditative or other spiritual practices. Is there an analogy here with what you’re saying or no?

Also, in following the Integral timeline of stages, the Neolithic (magical stage) precedes the egoic stage (allegedly happening around 10,000 years ago). My understanding is that people overall in the magical stage did not have fully developed ego identities, therefore no full sense of separate self or separateness from the environment. Their (at least partially) knowing the world as an extension of self was more like and due to a half-fusion with the world, rather than some specific ability or capacity to know the exteriors as themselves. I tend to think of it in terms of people during the magical stage being in contact with what we might (and others have) call the ‘world soul’ and that their knowing the world as extension of themselves probably stemmed from what we might now call soul experience, a felt-sense of the aliveness and vitality and some essential thing (consciousness/energy-will perhaps; or maybe just interiority, or maybe what we now call Spirit) that permeates everything (the specific term ‘soul’ as such had not yet been introduced into language, but that doesn’t mean people didn’t experience something like that.). Does this calibrate for you?

The corollary of “distributed intelligence” — that is, intelligence that is embodied in an individual nervous system. If I calculate 7x7 myself in my own head, I am using embodied intelligence. If I use a calculator, I’m using distributed intelligence. A PhD is an example of someone using their embodied knowledge in order to add to the totality of distributed intelligence, which would include all of the various artifacts and systems that preserve and distribute knowledge in society.


I am not sure I would agree with this :slight_smile: I think that the flows of information, the systems and structures that preserve and distribute knowledge, are some of the primary factors for the emergence of new stages. I think that electronic media like radio, television, and film have exposed human beings to perspectives they may never have otherwise encountered, and this exposure resulted in things like the emergence of civil rights in the 1950s and 60s. Just like how the printing press eventually resulted in the emergence of the Orange rational stage.

Plus, whether we are looking at birds nests, beaver dams, or manmade communication systems, we always have to integrate the lower right quadrant. It is part of our humanity, an extension of our humanity, and in many ways, of our own bodies. We discovered fire and used it as a second stomach to “pre-digest” our food. We use microscopes and telescopes and satellites to extend our vision and our inner models of reality. We use communications technologies to transmit ideas from one person to the other, anywhere on the planet, in real time.

If I take this definition, I don’t see how AI lifts

Keeping in mind that we are only in the very early stages of AI, a technology that seems to be advancing at an accelerating rate, I think it “lifts” the same way our socially distributed intelligence has always lifted — if you live in a red society with no orange artifacts or systems in place, the odds of you developing to orange are very slim. Certainly possible, but not very likely, because there is no shared language or syntax or collection of artifacts to support the stage. But it becomes far more likely if you have access to books and libraries and universities. We encode or imprint our artifacts with our stage of intelligence (cognitive, moral, aesthetic, etc.) and then those artifacts help others to grow into that same stage.

Now, what often gets lost as we transition through these stages, is an overall appreciation for (and training in) the major states of consciousness. But I don’t think this is because of technology itself. I think this is due to specific selection pressures in history when Orange emerged, causing us to collectively throw the baby of “spirituality” out with the bathwater of “mythic religion”, which prevented rational forms of spirituality from proliferating and allowed extreme orange views of materialism and hyper-individualism to win the day. And I have to imagine that human history will eventually correct for this unfortunate detour.

But first we have to get over the green social media detour, which threw the baby of growth hierarchies of expertise out with the dominator hierarchies of oppression and bigotry. Which is why in many cases social media doesn’t “lift” (though it can, this very conversation being an example) but instead can send people down a rabbit hole of pre-rational, pre-conventional perspectives, because social media replaces “communities of the adequate” with communities of the inadequate.

That’s my take anyway!

[quote=“LaWanna, post:43, topic:37283”]
Ray, humor me, and tell me how you define this “one aspect of human intelligence” that is reinforced by AI.[/quote]
I think we have to remember that AI is just zeroes and ones. It’s a medium, and a limited medium. A medium is “an agency or means of doing something”; in this case, visual and auditory. The first thing is that it only uses two senses: I can’t smell, feel, or taste anything through it. So right from the most obvious and scientifically proven standpoint, it limits us to 2/5 of the human sensory experience.
But there are other senses and intelligences that are part of the human condition. There is an intelligence in humans that, for example, makes the smell of a mother’s menstruation repulsive to the son and the father unattractive to the daughter. This is the intelligence of pheromones. We can smell sickness if we pay attention. Humans can feel the energetic difference between concrete and a river rock. There is a whole intelligence, ignored by the modern world, about where it is sensible to live and how a dwelling should be designed. There is also an intelligence in our bodies about the moon, the solar system, and the galaxy, and it’s a proven fact that certain humans recorded representations of a small portion of that intelligence conservatively 3,000 years ago. But intellectual knowledge about these cosmic calendars is a shallow surface knowledge compared to actually feeling in our bodies the movement of the sun, moon, and stars; recognizing these cycles in ourselves, the natural world, and society is another kind of intelligence. Knowing one’s internal mechanisms is yet another kind of intelligence, which allows us to wake 5 minutes before our alarm clock, recognize a tingling in the throat as a sign to load up on medicinal herbs, know which foods assist our bowel movements, and tell the difference between cured meat and rancid meat, or between moldy cheese and mold on cheese, or when yogurt is spoiled. I could go on and on, the point being that AI knows nothing about these things and never can. It can know that people bungee jump and represent it in video format, but it can never feel the intelligence of the endocrine reactions.

This is an interesting thing about some areas claiming to be science that are not actually science, like archaeology. They take a position that is not proven and demand that any opposing view be proven. So the conventional wisdom holds that even though humans have been physically the same for 400,000 years, we were only capable of thinking about “exteriors” fairly recently. In questioning this, I would bring up the ancient megaliths and the calendar the Mayans inherited from an earlier culture. Then we have the Torah and ancient Hindu writings, which tell tales but also have, let’s say, “coded” teachings about very sophisticated “exteriors”. Historians of course try to date these stories as only a few thousand years old, despite the actual content of these books telling stories that date back much, much longer. So I am of the position that early man had the same capacities for thinking as we do, and was able to think abstractly, think about what others were thinking and about what other societies were thinking, and so forth, AND also had a “fusion” with the world. My opinion is that it’s only our desire to believe modern humans are superior to ancient humans, and a cognitive bias, that make historians and archaeologists assume things that are not proven, while demanding absolute proof to overcome their assumptions.

Humans formed various ideas of rights and equality without these tools throughout history. You’re just randomly taking your own history and MLK and establishing that it is more significant than, for example, the Greek philosophers and Socrates demanding ethical behavior from his government. I expect the reply will be that now we believe in “Universal Human Rights”, while back then it was only for “citizens”. Back then it was ethnocentric but now it isn’t, right? Well, actually no. We know that still to this day in America, everyone is equal and has human rights, but US citizens are more equal than noncitizens. Americans still hardly bat an eye unless an atrocity is within US borders.

Yes, and I would add that there are various degrees of integration. One type of integration I see online is more of an intellectual integration, which I can see AI helping with. But again, there are deeper levels of integration into the body, to various degrees, that AI cannot help with. You have to actually go out to that beaver dam and feel the beavers, or feel the moon and the stars, or a homeless person in their own puddle of filth. That will make it deeper, not sitting in front of a screen.

The bias here is the rejection of artifacts that we do not understand, or do not view as superior. I’m sure you know that ancient epics were passed down orally. These tales transmitted Orange, and it’s biased to demand they be in book form. It goes back to the demand that a culture have writing before it can be considered a “civilization”, but most practical mediums for writing don’t survive thousands of years. Or, there may have been means other than writing to record knowledge, other cultural artifacts that symbolized concepts, but they didn’t see any reason to write down an explanation because to them the purpose was obvious. Or, the knowledge was esoteric and hidden to all but a minority.

This seems to be a kind of random social bias. If a method produces specific results over thousands of years, why would you reject it as a technology? For thousands of years there have been tried, tested, reliable, and proven methods to achieve various levels of consciousness, but because it isn’t written in a book that you have access to, it isn’t technology. Only after such a technology has been misappropriated by modern society, then dissected and turned into a Frankenstein, is it considered “technology”. As an example I’ll give the whole “mindfulness” thing and similar.

Yes, but those ideas had only a limited capacity to expand and influence, based on the shape of the distributed intelligence at the time. Which means fewer people were able to grow into later stages, and fewer pocket communities oriented to those later stages. This is why we have so many “lost” technologies throughout history: the distributed intelligence did not yet exist that was capable of preserving and distributing this kind of knowledge. It seems obvious to me that, for example, the European Age of Enlightenment, when Orange first emerged as a cultural force, would not have happened without the information systems — books, libraries, universities — that were created at the prior stage.

Back then it was ethnocentric but now it isn’t, right? Well, actually no. We know that still to this day in America, everyone is equal and has human rights, but US Citizens are more equal than noncitizens.

Correct. To a degree. Everyone still starts at square one, which means we have plenty of pre-universal enactments of civil rights in this nation. However, the fact that worldcentric universal rights even exist as a concept to be aspired to is an evolutionary emergent. Prior to the rise of Orange values and technologies, slavery was an economic requirement for the majority of societies on this planet (including Plato’s Greece). It was a perverse game theory: if you don’t have slaves but your neighbors do, before long you will be overpowered and you will become the slaves. But due to the emergence of Orange values in the LL, and orange technologies in the LR — both of which were made possible through increasing literacy rates among the population — these forms of slavery became obsolete.

But again, there are deeper levels of integration into the body to various degrees that AI cannot help with.

I never said otherwise. Human beings have a wide diversity of intelligences available to them, and AI is good at simulating some of these intelligences, but not others. For some of these intelligences, AI outperforms us by a very wide margin. For others, it is clearly lagging way behind. For others still, AI doesn’t even attempt them.

But again, we are in the very, very early days here, and while it’s useful to notice the current limitations, it may be naive to think that these current limitations are permanent boundaries limiting the field itself, which continues to grow at an accelerating rate.

The bias here is the rejection of artifacts that we do not understand, or do not view as superior. I’m sure you know that ancient epics were passed down orally. These tales transmitted Orange, and it’s biased to demand they be in book form.

It’s not a matter of “superiority”, it’s a matter of efficiency and volume. Oral traditions are limited in time and space, because information can only be transmitted directly from one person to another, and then that knowledge is subject to the limitations of human memory and all its confabulating tendencies. No single human being can contain even a small town’s library worth of knowledge, because no human brain is capable of that magnitude of informational storage and recall.

And I never “demanded they be in book form”, I simply noted that books are much more conducive to something like scientific thinking than oral traditions. We would not be able to maintain our current body of scientific knowledge through oral tradition alone. And yet, that oral tradition is still transcended and included in the information system itself (e.g. lectures at a university). And of course, the advantage of books is that you get a direct transmission across time and space from the author to the reader, rather than that transmission being mediated by countless intermediaries who are transmitting the knowledge orally from one person to the next.

This seems to be a kind of random social bias. If a method produces specific results over thousands of years, why would you reject it as a technology? For thousands of years there have been tried, tested, reliable and proven methods to achieve various levels of consciousness, but because it isn’t written in a book that you have access to, it isn’t technology.

I’m not sure what you are asking here. I do think there is such a thing as “spiritual technologies”. I even look at language itself as an “intersubjective technology”, and one of the very first technological innovations of our species. Language was our first informational technology (beyond grunts and gestures). Then came written language and mathematics. Then Amber created a network of universities and libraries, which later allowed Orange to emerge with its own informational systems. And then electronic media (it’s often said that television is responsible for the American public souring on the Vietnam war, because suddenly we were seeing horrifying battlefield coverage from the safety of our living rooms. The same with civil rights, when the wider population started seeing Black Americans being hosed down in the streets). Hell, the very image of Earth as a precious blue marble in the vast emptiness of space, a mostly-closed system where 100% of human history can be seen in a single photograph, or even in a single pixel of a pale blue dot, is only about 50 years old, and is 100% the product of the informational systems at our disposal. And I believe this image is one of the most significant and transformative symbols ever to land in human consciousness. What a wonderful, terrible species we are.

And I think this remains true with recent innovations such as the smartphone, which put a video camera into everyone’s pocket and allowed us to capture and share everyday injustices, resulting in things like the BLM movement (regardless of what our opinions about that movement may be, or how the lack of enfoldment and proportionality in social media allowed certain narratives to be selected over others.)

And speaking on a personal level, the only reason that my daughter is alive is because of modern medical technology, based on these contemporary information systems. If it wasn’t for technology, I would have lost the very center of my heart many years ago.

I simply think that, when Orange first emerged, it did so in a highly calcified mythic Amber landscape, where women were being executed for witchcraft and any number of theological superstitions ruled the day. Orange rationality was the first to differentiate the value spheres of the Good, the True, and the Beautiful from each other, which all existed in a state of pre-differentiated fusion in the Amber mythic world. And because Amber mythological notions of God were such a monumental obstacle to the emergence of Orange, Orange overstepped and eventually filtered out mythic-based spiritual interpretations of reality for objective, verifiable interpretations of reality, largely because many descriptions of the world coming from mythic religion were being shown to be completely untrue (size and age of the universe, centrality of Earth, etc.). And thus, scientific materialism was born — especially since actual state training and awakening was a deeply esoteric feature in these traditions, while exoteric amber dogmatism dominated our perceptions of the world, of our species, and of our place in the universe.

And now, the sorts of informational feedback loops available to us through present information technology allow something like an integral movement to emerge, where for the last 20+ years we have been able to find each other in the dark and forge an ongoing and self-sustaining community of like-minded, like-hearted people from all around the globe. I don’t think that would have happened if, say, Ken Wilber were limited to oral methods of knowledge transmission alone. And I very much regard this emergence as being beautiful, good, and true.

tl;dr: Technology is an intrinsic part of humanity’s ongoing evolution, and plays a critical role in that evolution by creating a series of self-reinforcing and self-organizing feedback loops with the other quadrants — with our thoughts and ideas and perceptions in the UL, with our cultural values and interpretations and sense-making in the LL, and with our actions and behaviors and efforts in the UR. Here at integral, we see how these four quadrants are not separate realities, but rather four different perspectives on the same occasion. They co-create and tetra-enact each other at every moment. There is no LL culture without any number of LR structures and conditions that support and sustain that culture over time. Therefore, choosing to deliberately downplay or omit technology from the human story, or to view it as some kind of aberration that prevents us from going back to some imagined enlightened period somewhere back in time, is something that I think leads to a short-sighted, narrow, and incomplete analysis of human emergence over the last 50,000 years.


Except for this bit, I think we agree or are parallel in most areas we’ve discussed, and where we are not, it comes down to opinions, especially about the future.

But on this bit you are straw-manning me. I never said anything about getting rid of technology or omitting it from the human story. I never said it prevents us from doing anything. If you think this is my position, you should probably look again at what I’ve been saying, and then look at why you would want to assign such absurd arguments to me.

I see AI as an extension of technology, as nothing more than a tool. Almost every tool humans have made has been double-edged, useful for a time in limited ways, and many have been discarded. I see technology as a tool that is useful for many things, just as fire and a sharp blade are useful for many things. Of course fire and sharp blades have saved lives and helped mankind progress; tools like fire and sharp things might be called “evergreen” technologies. They transformed human society, but they do not transform humans. Give a good man a knife and he cuts a coconut open to feed people. Give a bad man a knife and he kills someone. AI is nothing more than a more complicated “knife” (tool); it will be used by humans according to the nature of the human. It will not change humans.

So again - if you think that I want to get rid of all tools, that isn’t my position. It’s not even my position to get rid of television, and it’s not my position to stop AI. What I am saying is that AI is a tool, and every tool has a use and that use is limited. And, yes - people selling a tool will always try to play it up to be better and more useful in more tasks than it actually is. People selling a tool also turn a blind eye to the potential harm that tool may cause. By “selling” here I don’t mean only literally selling, but getting on the bandwagon so to speak.

It’s not that I don’t think AI will be useful - of course it will. What I am saying is that - in my opinion, which of course I cannot “prove” (but neither can the opposite position) - AI will not do all that people are fantasizing it will do, and in the area of consciousness it will be like the Pied Piper leading the rats into a river. I am firmly convinced there is no way AI will suddenly change human nature any more than books have.
I do not see any evidence to suggest AI will lead to a fundamental change in human consciousness, or maybe only slightly more than the advent of the internet. I don’t see ChatGPT as even aware of how boring it is when it speaks, lol.
I think the internet will continue to be the thing that changed humanity. The internet is what blew esoteric monopolies out of the water. If anyone took advantage of the immense offering of formerly secret knowledge by implementing it in their lives, then AI won’t really add that much. If they did not, then AI won’t suddenly get anyone interested, and isn’t any better than print anyway.

This is a second straw man. Again, I don’t think there was some kind of Garden of Eden during some past “Enlightened period” in human history, much less one that technology “prevents” us from “going back” to. What I am saying is that, for all the many tools modern humans have, I don’t see us as any more or less enlightened. I don’t see MLK as any better than or inferior to Socrates, for example. This doesn’t mean I think the ancient world is better, just that I don’t assume that everything was worse back then and that now we have it figured out - we don’t. Neither do I accuse anyone of being “short-sighted” or “narrow” if they do think modern figures are better than anyone in ancient history, or if they feel modern humans are superior to humans in previous times. I just don’t see it that way. I appreciate the wisdom of previous ages and don’t think it is inferior to the modern. Entertainment as well - I enjoyed reading Homer just as much as I enjoyed Tolkien.
