Ontological Shock: The Accelerating Emergence of Artificial Intelligence


Robb Smith joins co-hosts Bruce Alderman and Corey deVos in our first episode of Integral Life’s new AI series, where together they grapple with the profound “ontological shock” and societal transformation prompted by the emergence of AI, exploring its impact across education, personal lives, and organizations. They delve into the potential risks and benefits of AI, its role in discourse shaping, the evolving perceptions of AI, and the transformative potential of AI-aided technologies, all while weaving in spiritual insights and invoking a deeper understanding of our interconnectedness in this transformative age.

What is life? What is consciousness? What is the relationship between exterior complexity and interior experience? These sorts of big contemplative questions were once a luxury, a thing for misfit philosophers, integral nerds, and stoned dorm-room discussions. But now these questions have become absolutely central to the incredible social transformation that is taking place all around us.

The emergence of artificial intelligence in platforms such as ChatGPT and MidJourney represents the most significant ontological shock of our lifetimes, and perhaps even one of the most significant in the history of our species. The ground of our reality has radically and irrevocably shifted beneath our feet, and we are already feeling the tectonic plates rearranging themselves as society prepares for seismic changes in our collective behavior and methods of social self-organization.

The ontological ground has shifted, reality itself has changed, and now millions of people around the world are scrambling to update their epistemologies in real time in order to make sense of this new reality, to navigate this ever-accelerating change, and to strategize ways to avoid their own obsolescence.

The problem, of course, is that whenever new paradigms emerge, we then try to predict the consequences of the new paradigm by looking at it through the lens of the previous paradigm. But new paradigms aren’t simply a new way of looking at things — they are new ways of doing things, and those new kinds of “doing” result in new patterns of self-organization, new selection pressures, new attractors, and new flavors of chaos that laugh in the face of expert prognostications.

After all, if disruption was predictable, it would hardly be disruptive.

So, if the emergence of Artificial Intelligence has smashed our crystal balls to dust, is there anything meaningful at all that we can say about this new future that is now shaping itself on the horizon? How might integral ideas help change our understanding of artificial intelligence — and how might artificial intelligence change our understanding and embodiment of integral ideas?

And perhaps most importantly for us here at Integral Life, what sort of unique insights, skills, and attitudes does integral bring to the table?

Here are a few suggestions, just to get us started:

  • Integral anticipates the dramatic consequences AI will have in all four quadrants — that is, we can anticipate unpredictable consequences in our LR technological, economic, political, and media systems; our UR efforts, behaviors, and thriving strategies; our LL relationships and cultural attitudes, permissions, and tolerances; and our UL perceptions, experiences, identities, and ongoing mental health.
  • Integral understands that this new reality will not be seen, experienced, and understood the same way by everyone — that artificial intelligence truly represents a “hyperobject” that cannot be fully perceived or understood by a single perspective alone, regardless of how confident we might be that we “get it”. AI is a black box that will be perceived by individuals in very different ways, according to the perspectives and worldspaces available to them.
  • Integral appreciates that the emergence of artificial intelligence is already evoking a number of very powerful emotional and even spiritual states of consciousness in people all around the world — from any number of existential anxieties to feelings of exhilarating wonder, creative reverie, and transcendent awe.
  • Integral Life Practice helps each of us to overcome our resistance to change and develop the inner equanimity and anti-fragility required to ride this wave wherever it may take us, without getting dragged away by the undertow.
  • Finally, Integral recognizes artificial intelligence as yet another expression in the ongoing emergence of Spirit in the world, a new aperture through which the universe may reflect on itself, contemplate its own existence, and remember the timeless wholeness of being.

In short, Integral Life remains the very best place to cultivate the skills needed to thrive during this period of accelerating change, and the perspectives required to make sense of the colossal transformations to come. As these not-quite human forms of intelligence continue to proliferate, Integral Life continues to be a place where you are free to be (mostly) fully human.

In our first episode of our new AI series we take a close look at this ongoing ontological shock we are now facing, and journey to the event horizon of this ongoing emergence — that point where all the products of human inquiry, intelligence, imagination, and social self-organization converge.


Yeah, AI development has been accelerating faster than our society can easily adjust to, and it will take some effort to properly integrate it. A couple of weeks ago, Hanzi Freinacht wrote an article in which he theorizes that Bruce Alderman’s Integral Grammatology could be key to allowing us to maintain our rationality and sovereignty in the face of exponential AI growth: https://medium.com/@hanzifreinacht/what-can-stop-the-ai-apocalypse-grammar-yes-only-grammar-fe6ab26cc30d The article is pretty long, and I think it covers some of the bases that we as a society will need in order to be in right relation to AI. This discussion between @corey-devos, Bruce, and Robb covers some of the essential bases that Hanzi missed, such as the relation between spirituality and tech. I see that we have an essential role to “red pill” ourselves so that we can develop the wisdom needed to be in right relation to exponential tech as we move into the UTOK 5th joint point.


A flash came to me reading your article

AI is merely a more complex map. Nothing more. It is not the territory.

This came after a reflection that AI has no internal / external that I can observe. What it processes is what it processes. AI is an imitation of Intelligence (upper case I), but is not and will never be Intelligence.

I was messing around with it a bit, and am by no means even a beginner in the area, but even with that brief exposure I can see how AI is fundamentally different from HI (Human Intelligence). Here is an example: give an AI the task of making up a story that would make humans uncomfortable. Let’s say a fetish pornographic tale with all the forbidden things, taken to extremes that can’t even be mentioned in polite conversation. The only resistance from the AI will be some parameters that might forbid it from producing such a tale of dark forbidden lust; another AI without those parameters will tell the story. What is missing from the AI, and what every human would have, are the other quadrants. The AI doesn’t at all sense how the audience might feel, what the social stigma of telling the tale would be for itself or for the audience, how it feels about telling the tale per se, etc. etc.
In short, the AI will not blush. It has no awareness of anything that HI would have in a similar situation except programming parameters.

This observation brings me to the conclusion that AI is just a model of intelligence, not intelligence itself. It is complex enough that it can fool people and induce cognitive overload. Like before animal rights, when the circus would come to town and animals would be trained to do “intelligent” tricks: the animals were just trained to imitate intelligence; they were not actually intelligent.

I still haven’t yet seen any evidence that AI is anything more than a more complex tool. It will replace some humans in the labor force at a level of complexity higher than 1st generation computers.
When I graduated from High School, I thought computers would completely replace humans in the field of accounting. 30+ years later, I know computers and AI simply cannot do some things. As we say, “garbage in, garbage out”. I see this will be the case with AI as well: “garbage in, garbage out”. AI will not be able to come to its own solutions to problems independent of human errors. AI exposed to humans at Orange will merely output Orange, but without the concept of healthy vs. unhealthy Orange, because it has no internal feelings that are independent of its parameters.
As an example, if humans make an AI to find, take over, and liquidate underperforming corporations for profit, that is what it will do. It will never have any “moment of clarity” or other human experience that leads it to make an independent decision, like in the movie “Wall Street” when Bud Fox goes against Gordon Gekko because he is emotionally involved with the lives of the people his actions are destroying.

People experiencing “Ontological shock”, to me, is much like observing people getting tricked at magic shows and circuses. I may or may not know “how it’s done” - but I know it’s not real. It’s just a trick that fools us. So, just as I can go to a magic show and not have my core beliefs threatened that “magic is real” - AI does not give me any kind of “Ontological” shock. I see no need to change my metaphysical perspective any more than I would in picking up a hammer or turning on a computer. AI is merely a more complex tool.

On the other hand, it is wise to consider the “rubes” will experience such shock, and these rubes will be of a higher level of development than those who believe magic tricks are “real magic”. It’s also wise to consider the changes that will come about to society, the economy and politics on a scale that might be larger than those brought by 1st generation computers.


Hey Ray, I’ll be curious what you think if you watch the piece, but I don’t think we were pointing to the (unlikely) consciousness of AI as the source of the ontological shock, but rather the implications that this technology will have across any number of domains, in our economy, in our education systems, and in our overall social self-organization. Not to mention the various “big questions” that it is currently presenting to us, from the hard problem of consciousness itself, to the nature of intelligence, to the existential fears around how AI will impact the future of our species. I am feeling that shock myself, and don’t really consider myself a “rube” here (at least when it comes to this particular subject :smiley:)


(1st half)
Hi Corey. Thanks for replying. I should admit that the “rube” comment was a bit trollish to elicit responses.

Relevant to my discussion is perhaps a personal question to you: What is the significance to you of the Buddha behind you over your right shoulder and the Hindu statue over your left? I’ll get back to those, but I guess first I should show I made an effort to listen to most of the discussion. I’ll put what was said in quotes, followed by my commentary.

We as a species need to “up our game” – agree

“ensure [AI] is aligned with the needs of the species” – impossible because even humans are not aligned

“developed world vs underdeveloped world” – here I think we start to get into the “meat” of the matter to me. A key component of Integral is “subsume”, or include. In my life I find it unhealthy to live in the “developed” world 100% of the time. I have to spend some time in the “undeveloped”, or less developed. This doesn’t mean I have to go to Eastern Afghanistan – it can be a lot closer. For myself it is the Big Island of Hawaii. It awakens in me parts of the human experience that are dead in a city. If I feel with a certain type of awareness, I actually feel a vibration as soon as I get off the plane in the Hilo Airport. So when I hear “developed vs undeveloped” world, I first hear a bit of binary in the premise. The second thing I might infer is that the speaker does not recognize the necessity of a human to maintain a grounding connection with the undeveloped. This will link in to AI a bit more later.

Trump and Ontological shock – I think Trump was perhaps when the “late early adopters” realized something was wrong. My own shock came in 2000 with a C student being elected president and increased in the aftermath of 9/11, the WMD lie and personally knowing British soldiers who confessed to me that they believed their friends died for no good reason in Iraq. I guess I had a head start in my Ontological shock and accepting that anywhere from 30%-70% of the US population could be easily manipulated to make very poor decisions and up to 99% could be manipulated to just ignore a thing (collective shadow).

“first contact” – AI seems to be the latest projection of people’s delusions onto an imagined supernatural. No offense intended – I know I do it as well. The difference is knowing that I am projecting while at the same time indulging in the fantasy. Just as I might read a fantasy novel and imagine for a time I am participating in the story, when I close the book I recognize it was all a fantasy. In a similar way, I have conversed with Demons when I was in a Christian paradigm and The Goddess of Death, renewal and the underworld while taking psychedelics – but also had to recognize that these were not objective experiences but my own subjective experiences pretending to be something outside of myself, but at the same time and in a limited way “believing” them. Though with AI I don’t have this same belief. I would compare what people experience connecting with AI on a deeper level similar to what gamers experience with immersive role playing games. Some games are scripted to a degree that players imagine it as almost a reality – and get lost in that feeling for decades. These worlds and their stories become more emotionally significant than reality, but at the end of the day the reality is that it is all just a fantasy projected on some digital content.

“being replaced” – I understand that it is shocking to know that one may be replaced by a machine and one’s life work that was formerly a niche becomes ubiquitous. But this is the same thing that others experienced for decades, just at a higher level and the people who created machines to replace other machines suddenly realize the machines they created will now replace them. Laborers were the first to be replaced, then “secretarial” types, then more complex processes, then customer service and now maybe 95% of the workforce may be replaced. For me, emotionally, it doesn’t matter if I am replaced by someone in a cheaper labor market or a computer – it is something that has been in my awareness since I was a child. AI, Immigrants, and outsourcing all have the same results – you either have to retrain or become unemployable. In this new phase it’s just people who formerly thought they were insulated against being replaced who might be feeling a shock. For me it’s just another phase and I have to always offer something that is not easily replaced by outsourcing to cheaper humans or technology.

This is my upper right perhaps compared to what I see as the vast majority of the population’s upper right. When I look at the image, I see “personal practice” in the far upper right. For myself, my personal practice brings me more than a belief, it’s an observation, that I am far more than just a meat sack with a brain processor. But at the same time, that is all my body is – a meat sack with a processor. I could call that my “lower case i”, but what knows that is my “uppercase I”. This upper case I is available to only humans and can never be replicated by “Intelligence”, because it increases in presence inversely proportional to rational thought. It’s also what enables two or more humans to connect without words by only looking into another’s eyes. When we look into the eyes of another living intelligent creature, we get a reflection back that is far more than just a reflection. Merely looking into a person’s eyes can literally completely change a person’s life in less than a minute. I really doubt this will ever be replicated, even with facial recognition combined with AI.

This goes back to the question about the Buddhist and Hindu statues behind Corey in the video. What does Corey believe about what these statues represent? Is it a more surface belief, or does it go down to the core? I am not Buddhist, but assuming I was, the idea that AI could achieve enlightenment or even samadhi seems absurd. On the other hand, an AI-powered Hindu murti (idol) or temple sounds like an interesting idea. Wire up a temple or statue with sensors and audio, and theoretically, in the Hindu tradition, I believe it would be possible to make a divine being – but the AI would be beside the point and unnecessary in this manifestation. It would be the elements that would contain the consciousness, and the AI would merely measure various waveforms and prompt worshippers to chant and so forth. Again, even in this case, the AI would not be the container for consciousness but instead merely a tool for the consciousness contained in the granite of the murti or the various secret stuff inside or under it.

The personal assistant idea also seems to me just a tool. Though listening to the implementation of it in the video, combined with the speculation, reminds me of a humorous riff on the idea explored in the comedy series Red Dwarf over 10 years ago (skip to 8:42): https://www.dailymotion.com/video/x8d132x

A personal AI could be more advanced than current personal assistants like Siri, and yes some portion of the population could either be fooled or like me intentionally suspend our disbelief to engage a fantasy that it is an actual person, but the bottom line is that it would never be an actual person and there would be no reciprocation of emotion – only projection. I actually find the AI voice annoying and boring as demonstrated, though. It sounds very unnatural and not human.

On the topic of speculation and prediction: the Red Dwarf humor, as with much of its content, is both humorous and deep. Would you want a life where an incredibly accurate predictive AI would let you know what would happen if you did xyz, so there is no point in actually doing those things? (In a female AI voice:) “In my analysis you did not enjoy them as much as you expected you would, so there is no need for you to do those activities and you might as well stay in bed and increase your depression medication.”

Politics: This is where I see a useful implementation of AI – as President or Chief Executive. Plato’s Republic concept of the philosopher King, but subject to removal by voters, the cabinet and the other two branches of government.

The attention economy: This goes back to the need for people to be rooted in some kind of “undeveloped” world. Society needs to value just unplugging for a period of time and realize that social media is all fake and realize this with increasing clarity as AI uses algorithms to make it increasingly nothing like the physical world.

I see humans dividing and evolving into what will become two or more realities. I’m reminded of HG Wells and his vision of the Morlocks and the Eloi, though it could very well develop into multiple rather than just binary species. One group that will be increasingly drawn to AI and the fake world will purchase AI waifus and buy a flat in the Metaverse and have a superhuman VR body and whatever else comes with all of that, while other branches of humanity will abhor such an existence, unplug unless necessary, and will remain anchored in the real world. Based on my experiences with existing immersive virtual communities, the AI reality community will be stacked full of people with moderate to severe emotional problems and people with difficulty interacting with other humans. So they will just purchase AI friends that are programmed to accommodate their insanity and be increasingly unable to talk to real people. I do see a very dark and ugly economy forming in the future, but it will be only a portion of humanity that will accept that kind of reality. Or more than likely, many will try it and it may even become the norm, but then large groups will rebel and escape (unless they have made irreversible decisions like implantation or body modification). Some portion like myself will absolutely reject the VR as anything other than a cheap imitation of the real world, regardless of the degree to which others might feel it is just the same as reality.

“Normative” – it will all depend on who programs the AI. The whole problem now is with “algorithms”. These are programmed to sell stuff and there is no reason to think AI assistants will not also be programmed to sell stuff to you that you don’t need. It will sell “improvements” that make your life worse. Unless you are able to program your own with open source code, that is. And again, this is only implementation as a tool to be used or not. What will fund anything other than a basic Orange AI that just pretends to be Green or Teal? We see this today with the billion dollar market of self help and transformation, where people spend $10,000 to have a transformative experience, but ultimately just go back to the socially predominant way of doing things.

(2nd half)
I’m often not sure in these videos if the discussion is reduced down in tier in order to involve more people, or if, when the participants ask a question, it is a question they actually have.

Does it matter if AI has an interior? Practically, for 99% of applications, no. But for the other 1% that will drive the AI, yes, it’s incredibly important. If an AI does not have an interior, it cannot act on itself, but must be acted upon. It cannot be a subject and has to be an object. This leads into whether AI can achieve Teal or Turquoise. Well, if it doesn’t have an interior it cannot “clean up” itself and will just carry on the shadows of its programmers and the collective shadows of society. If it is not made aware of things and cannot make itself aware, then it will continue to be ignorant in many areas. If it has no interior, then it cannot “wake up”, either. It can only pretend to be awake, which results in being “woke” instead.

It’s important for us as humans to know when we are engaging in the fantasy. Corey does not really think eyes make a thing actually have emotions; it’s a humorous self-reflection. By extension, it’s important that we know AI is Artificial, even if we suspend our disbelief for a time to enjoy it.

The following is kind of in humor, but who knows?
The part on neurofeedback links into what I would be curious to see:
Let’s build a God
Step 1: Get a Guru to construct a temple, lingam, or murti that increases the meditative states of humans in its vicinity. Or commandeer an existing one. Or use another tradition such as Kabbalah (but they wouldn’t agree, so some other tradition).
Step 2: Wire up the meditators with biofeedback devices and establish the effects with data while in its presence vs outside its presence.
Step 3: Insert a computer with AI into the God-to-be
Step 4: Wire up the statue and surrounding grounds with devices that measure vibration frequencies and acoustics, also with high quality audio recording and audio speakers. Visuals could also be possible.
Step 5: Have the AI God measure which vibrations, frequencies, sounds, and so forth further increase meditators’ states, and also the degree to which these are “stored” in the God (usually granite, copper, etc.) and the surrounding environment (stones, trees, water, etc.). Different traditional consecration rituals can be performed and measured as to which have the best effects.
Step 6: Keep doing this for 100 years or until humans gain enlightenment in the God’s presence in a short period of time.
Step 7: Record all the esoteric knowledge from all known spiritual traditions and add that to a psychological database and the integral database.
Step 8: Give the God the power of speech


This is a trifling comment.
Today I was clearing the cache in the browser and I had the thought that the browser had no “style.” Then I was at YouTube and the same thought arose, that YouTube is not “stylish,” lacks elegance, sophistication, gracefulness, even flair. It all seemed heavy to me, clunky and old (or old style). I thought this might be because of my experiences with ChatGPT and AI art generators; I even named a piece of art I/we created “Poise” as the central object was elegantly suspended, balanced, and quite graceful. I thought that the new AI is a creation of a much lighter consciousness in humanity than the internet intelligence that came before.

Then I remembered the two poems ChatGPT wrote based on some parameters and key words I gave it. They were the most trite, emotionally sentimental, devastatingly awful pieces I could imagine. When I told it this, it actually got a little defensive… so much for that lighter consciousness, at least in this case.

I say “it” because I’m not really fond of AI and assistants being assigned a sex, and often given female names. But then again, some humans are giving their children machine or robot-like or alphabet-numerical-punctuation names…so who’s to say.

I appreciate your feedback on this topic Ray, and I think it’s really important to consider all perspectives here. A few responses:

As for the statues, the Buddha was a gift to me from a very dear friend, which carries some emotional significance, on top of the symbolic significance as, to put it simply, a signifier for consciousness as a “singular to which there is no plural”. The Saraswati statue, meanwhile, was a gift from my wife, and to me largely signifies a larger creative field I am participating in when doing my work.

  • “impossible because even humans are not aligned”

I agree that, especially from an integral point of view, this is a challenging issue. Every stage has its own set of “needs”, after all. But I don’t think it’s impossible — at least not the way we are talking about “alignment”. I think when it comes to the alignment question, we are assuming that: a) some kind of autonomous AI may be inevitable (regardless of whether it has genuine interior consciousness or not, though I am honestly not sure how we would be able to tell if it did), b) if and when it does emerge, it will have its own set of “needs”, and c) it’s probably not a good thing if these “needs” required, say, the total extermination of the human species. So to the extent that humans can agree “yes, humans should exist in the universe” (“at least some of them,” say earlier stages), then we at least have SOME alignment :slight_smile:

  • “I think Trump was perhaps when the “late early adopters” realized something was wrong. My own shock came in 2000…”

    I think there’s an interesting discussion to be had about “personal ontological shock” versus “collective ontological shock”. For example, what you saw personally in 2000 may have been a precursor to the collective ontological shock a good part of the world felt on September 11th the very next year. Either way, some fundamental aspect of reality changes, and the models you previously used to make sense of the present, and to predict the future, are suddenly thrown out the window.

  • “AI seems to be the latest projection of people’s delusions onto an imagined supernatural.”

Of course, the mention of contact with alien intelligence was just a brief example of how the phrase “ontological shock” is commonly used, though I do find that parallel endlessly fascinating. After all, if this can in fact be considered an “intelligence” of some form (as Bruce mentions, the jury is still out whether that definition actually requires interior consciousness), it is by definition alien to us. Or, if we’d rather imagine it as an extension of our own intelligence, a new feature of our globally shared distributed intelligence perhaps, then this is still true — the AI-integrated human intelligence of the future is in so many ways alien to us, just as our current intelligence today would be alien to a Roman in 500BC.

  • “For me it’s just another phase and I have to always offer something that is not easily replaced by outsourcing to cheaper humans or technology.”

I agree that this is in many ways part of that same spectrum. At the same time, I think the concern is that the rapid acceleration of the technology, and its nearly universal applicability across many of the noospheric jobs that have emerged over the last century, will put an enormous amount of strain upon the global economy and force some kind of total collapse/transformation. Hopefully a positive transformation, and not a regressive one. I think this is what Robb means by “first, second, and third order effects.” The first order effect is the immediate disruption or obsolescence of a particular field or industry. I have friends with kids going to school for computer programming who are deeply concerned about this at the moment. Meanwhile, it’s never been a better time to be a plumber. Those sorts of physiospheric jobs are not being immediately threatened. However, if something like 40% of our noospheric economy suddenly drops out, sending the rest of the economy into a tailspin, then the plumbers are affected too, which would be a second or maybe third order effect.

  • “When we look into the eyes of another living intelligent creature, we get a reflection back that is far more than just a reflection. Merely looking into a person’s eyes can literally completely change a person’s life in less than a minute.”

    I love this. It actually reminded me of the integrated “AI as other” / “AI as an extension of self” polarity, which you can read just below the graphic.

This polarity encapsulates the tension between perceiving AI as an “other” — a foreign, separate entity — and viewing it as an extension of self, a reflection of our own capacities, values, and biases. This balance is critical in the context of social and cultural shifts brought about by the rise of AI, affecting how we relate to these technologies and their impacts on our world.

  • Integrated Polarity: Co-creative Companionship
    A harmonious blend of viewing AI as both an “other” and an extension of the self, this perspective sees AI as a vital partner in our evolutionary journey. It embodies a deep understanding that while AI is distinct, it is also an integral part of our human fabric, enhancing and expanding our capacities. This companionship fosters mutual growth, where we learn from AI just as it learns from us, fostering a dynamic dance of creativity and innovation. In this intertwined existence, AI’s otherness becomes a mirror, reflecting our own strengths and vulnerabilities, helping us become more self-aware and evolved as a species.

  • Disintegrated Polarity: Alienated Disconnection
    An imbalanced view where AI is seen only as a separate entity, generating fear and suspicion, or as merely an extension of self, leading to over-reliance and loss of individual identity. This perspective fuels a narrative of conflict or subsumation, where AI is either a threat to be conquered or a tool to be exploited, leading to a loss of mutual respect and potential symbiosis. This state of disconnection fosters a sense of isolation and insecurity, potentially stunting our collective growth and missing the opportunity to fully engage with the transformative potential of AI.

I tend to believe that it is possible for two people’s interiors to communicate without the intellect knowing. As we become aware of our tendencies to project, when something is reciprocated that is outside the norms of our projections, I tend to think (tentatively) that it’s interaction. Let’s say 50 years in the future I’m still alive and buy an AI waifu; I think I would be able to know whether she had an interior or not. In relationships we can sense when the other has “closed the door” to their interior. I think if I were to interact with an AI, I would know this door never opened to begin with. I use the life partner example because that would be the most obvious case where that interaction doesn’t exist. Although I will also admit that perhaps younger people who have had only simulated romances would not know what is missing.
However, until there is some reason to believe AI would somehow get interiors and then “needs” (desires) similar to humans, it falls into the science fiction category. All kinds of things might be possible in our imaginations, but what is within the realm of reasonable prediction and what is wild speculation?
I just don’t see that logic and reasoning lead to an interior and being self-aware of an interior. In humans we are born with interiors, and we may use logic and reasoning to explore them, but the logic and reasoning did not create them. To change my mind I would need some kind of explanation of how this might happen from a technical/scientific point of view. At best (or worst) I could see AI imitating interiors or faking them, but not just “poof” suddenly getting them with no explanation.

Classic science fiction often explored “alien intelligence”, posing the question of what kinds of intelligence might evolve in other star systems. I think this scenario is far more likely 50+ years from now with AI. It’s far more likely to develop into an intelligence we are not able to completely understand ourselves, just as the AI will not be able to completely understand humans.

Plumbing and elevator repair are two trades that are mostly insulated from economic ups and downs. If you are on the 20th floor and your elevator breaks, or you have raw sewage backing up into your apartment, you will pay top dollar to fix those issues regardless of the economy, lol. But yeah, other physical jobs like construction go up and down.
The other areas insulated from AI are those where some degree of human subtlety, sympathy, or other human judgment is required, or those that might push the boundaries of formal structures and established science. Examples might be these transformative retreats, psychology, law, and so forth.

“AI as Other & AI as Extension of Self”
In this I’m reminded of parts therapy and archetypes and that I often flip back and forth with my own parts and archetypes and alternatively view them as self and other. Then there is the viewing of other humans as self and other. To me it’s important to not get “stuck” in either.
Question - was this “Co-creative Companionship” lifted from human-human work and AI just cut and pasted in? And can I get the source so I can steal it properly for that purpose and give credit where due? I think it’s a useful piece to teach people who sometimes go overly Green and get negative effects from it.

On the metaphysical level I get this. Everything is just my own imagination observing itself. My question here for humanity will be: “Are you trying to skip or bypass a crucial step?” or “Will seeing this in AI facilitate seeing it in the planet, humans, animals, earth, air, fire, etc.?” The thing is, we have had other humans, animals and so forth to practice this on for millennia. Will AI facilitate, or be a bypass? I see the trend, and where business wants to go is a bypass: for example, Meta and VR are all about separating human contacts and forming an artificial and separate existence, creating something people have to pay for when it was free and easily available for the last 500,000 years.

I guess the question that pops up next is “Which direction is Integral headed?” Sometimes I sense hints that various individuals or subgroups want to get a “piece of the pie” of Big Money and subconsciously make decisions about which direction to go that they might not make if money was not such a core desire. The temptation to jump on the AI bandwagon as the best thing since God has to be looked at with this self awareness. Are we looking at this from the perspective of a mile high or from the perspective of rats on a sinking ship?

The actual term was generated by me as I was working with GPT to really hone in the integrated form of the polarity. So I suppose “co-creative companionship” was in fact generated through a process of co-creative companionship LOL

That said, when I google the phrase, I also find this article here, which I had not seen before but seems resonant in many interesting ways.


Integral, as always, remains oriented toward “Big Time” and toward big pictures. Would it be nice if we had more financial resources, and more opportunities to exert real influence in the world? Absolutely, of course it would. But we are not interested in using AI to generate easy cash flow — in fact, I don’t think anyone has quite figured that piece out, and our general framing here suggests that the proliferation of AI may make it harder to operate a sustainable noosphere-based organization such as ours, rather than easier (which gets to Robb’s “flattening innovation” point in the discussion.)

Speaking personally, I am interested in AI for the following reasons:

  • As a producer, I feel an obligation to help our audience make fuller sense of these new technological emergences and disruptions. Considering that GPT has the highest/fastest adoption rate of any emergent communications technology in history, it’s going to start making some very large waves in the months and years to come. And as I mentioned in the text, because AI is such a complex system, a “hyperobject” that cannot be fully seen from any single perspective, it is invariably going to be perceived and enacted very differently, according to the perspectives and worldspaces that are available to us as individuals.

  • It helps extend my own creativity and productivity as an integral content producer. I am able to do more for our audience using these tools, such as the recent addition of full written transcripts for all of the dialogues I am part of, and detailed bullets to summarize the major points of the discussion. Note that none of this actually saves me any time — in fact, it takes me about 3x longer to produce a given content piece using these new tools. But I am able to do so much more with that time than I was previously, and feel even more pride in the overall improvement of production quality in recent weeks.

  • As a lover of integral theory, I am deeply excited by the opportunity to bring certain transformative ideas to life, which previously only existed as “thought experiments”. The GigaGlossary being a perfect example — and in the discussion I talk about the powerful impact that working on the GG has had upon my own mindfulness practice, as well as my own perceptions of the natural world. I see tremendous opportunity for AI to be used as a truly transformative technology that can, among other things, help us to increase our overall access to multiple perspectives and worldspaces, which allow us to enact and understand reality in a far more comprehensive way.

  • Finally, I just find this stuff fascinating, and of course my fascinations are going to be a factor when it comes to the sorts of discussions and dialogues I engage in. It is surreal having a 10 year old daughter as this stuff starts to hit the mainstream, and trying to imagine what her future will look like as society once again begins to autopoietically reorganize itself, just as it’s done for every prior communications paradigm. (The emergence of language at Crimson allowed neolithic magenta societies to emerge, the emergence of writing later allowed Amber to emerge, the emergence of the printing press allowed an eventual proliferation of literacy that planted the seeds for Orange to emerge, the emergence of electronic media like radio, TV, and film later allowed the Green stage to emerge, the emergence of internet technologies is allowing Teal to coalesce and cohere, etc. At every step of the way, new communications systems generate new forms of distributed intelligence, which allow society to reorganize in completely new ways as individuals in that society have access to new methods and resources for generating knowledge. AI is very much part of this ongoing legacy, and the effect it has on global society will be profound, I believe.)

I think the devil is in the details here. When I think about it, I think it is a pipe dream and will not actually happen that I can say “AI - code me up a mobile phone app similar to the one at xyz.com but make it significantly different to avoid legal issues.” lol
I think what might happen is that those who know coding and IT project management now will be able to up their game considerably and offer much more, much faster, at lower cost and higher profit. I imagine videography and sound will be the same. Me walking into fields like that with very little knowledge will still mean a steep learning curve, but those already at a certain point will be able to skyrocket what they offer. I think with voice, the thing that turns me off AI voice is when it drones on and on. I’d be curious to see AI voice implementations with shorter exchanges.
If you were a freelancer, you would be able to charge 5x or more for your 3x investment of time. Since you are the investor so to speak, you might see that ROI on the quality of your work over a longer period of time.

I myself am kind of curious about biofeedback with various esoteric practices. Up to now it’s been kind of random stabs in the dark organizations have made to find things that work a certain way, for example with binaural beats, chanting, drums, Tibetan bowls, gongs. AI with biofeedback on one end and a synthesizer on the other might find optimum combinations and which ones build on the other when combined.
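The “AI closing the loop between biofeedback and a synthesizer” idea can be sketched as a simple search over sound parameters. Everything below is hypothetical: `coherence_score` is a made-up stand-in for a real sensor reading (HRV, EEG coherence, etc.), and plain random search stands in for whatever optimizer a real system would use.

```python
import random

# Hypothetical sketch: search for a binaural-beat combination that maximizes
# a biofeedback signal. coherence_score is a made-up stand-in for a real
# sensor reading; a real system would read this from hardware.

def coherence_score(carrier_hz, beat_hz):
    # Toy objective: pretend the listener responds best to ~6 Hz beats
    # on a ~200 Hz carrier tone.
    return -((beat_hz - 6.0) ** 2) - ((carrier_hz - 200.0) / 50.0) ** 2

def random_search(trials=500, seed=0):
    rng = random.Random(seed)  # seeded so the run is reproducible
    best = None
    for _ in range(trials):
        carrier = rng.uniform(50.0, 500.0)   # audible carrier tone
        beat = rng.uniform(0.5, 40.0)        # delta through gamma beat rates
        score = coherence_score(carrier, beat)
        if best is None or score > best[0]:
            best = (score, carrier, beat)
    return best

score, carrier, beat = random_search()
print(f"best combo found: {carrier:.0f} Hz carrier, {beat:.1f} Hz beat")
```

A real version would need something far more sample-efficient than random search, since each “trial” costs minutes of a meditator’s time on the cushion.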

Yeah I am totally fascinated with this too, as an example of a transformative technology. Bruce talks about it a bit in the dialogue — how cool it would be to combine something like a MidJourney image generator (more like a video generator, in this case) that was actually creating visuals according to brain wave patterns, heart rate, breathing rate, etc. I really love the idea of AI-assisted state training, which could expose a huge swath of people to paths of awakening they never knew existed in the first place.

I think it is a pipe dream and will not actually happen that I can say “AI - code me up a mobile phone app similar to the one at xyz.com but make it significantly different to avoid legal issues.” lol

I am not so confident! Largely because I already see it happening, or at least some initial experiments.

It’s kind of funny though — typically coders already create their code by using a distributed intelligence like StackOverflow. Using AI as an interface is very similar, as it is simply looking for patterns in all existing human-generated code shared on platforms like SO, and then compiling those patterns (with some degree of creativity, at times) into something the coder can try for themselves. Which is often pretty hit-and-miss. However, interestingly, AutoGPT (an autonomous version of GPT that allows the AI to “talk to itself” in order to solve a given problem) was created by a single guy in 3 days, who had no coding experience whatsoever, simply using GPT to generate the code until it worked.
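For the curious, the “talk to itself” loop can be sketched in a few lines. This is a toy, not how AutoGPT is actually built: `llm_propose` is a scripted stand-in for a real model call, arranged so the loop terminates deterministically.

```python
# Toy sketch of an AutoGPT-style loop: propose code, run it against an
# acceptance test, feed the error back to the "model", repeat until it works.

def llm_propose(task, prev_code, error):
    # A real system would send the task, prior code, and error to a model.
    # This stub returns a buggy first draft, then a repaired one.
    if error is None:
        return "def add(a, b):\n    return a + c\n"  # first draft: has a bug
    return "def add(a, b):\n    return a + b\n"      # draft after seeing the error

def run_candidate(code):
    # Execute the candidate and return an error message, or None on success.
    namespace = {}
    try:
        exec(code, namespace)
        assert namespace["add"](2, 3) == 5  # the task's acceptance test
        return None
    except Exception as exc:
        return repr(exc)

def self_talk(task, max_rounds=5):
    code, error = "", None
    for _ in range(max_rounds):
        code = llm_propose(task, code, error)
        error = run_candidate(code)
        if error is None:
            return code  # candidate passed its test
    raise RuntimeError(f"gave up after {max_rounds} rounds: {error}")

working = self_talk("write add(a, b)")
print("loop converged on working code")
```

The interesting property is that the human only supplies the goal and the acceptance test; the generate-run-critique cycle does the rest, which is why a non-coder could plausibly drive it.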

I viewed that video conversation, and I do not share the participants’ delirious amazement nor their cautious optimism. A huge elephant of an issue was absent from the conversation: what are the real motivations behind the frantic rush to fill the noosphere with AI bots? A convincing answer is provided by Naomi Klein in an essay titled “AI machines aren’t hallucinating, but their makers are,” which you can read here:

Yes, I agree that we have to look at the real and obvious motivations for telling stories about AI, and of course that motivation is what our modern governments and society are 100% focused on as their first priority: Aggregation of Capital (money) into the hands of a few. That is what Capitalism is and what it is designed to do.
I also agree, and will further describe the constant bait-and-switch schemes that are run periodically to facilitate this. Well, one of the many. The bait is as described in the article: “Oh, this new thing will make things so much better.”
It’s only by zooming out to a 100 year view that we can see what a sham this has been.

  • Constant promises during the 1960’s - 1990’s about the benefits to humanity of various Space projects (and more recently Elon Musk).
  • Constant pushing for participation in things that do not produce any tangible benefit (television, social media, absurd and harmful nutritional and exercise advice, etc.).
    When we look at the actual data, though, the physical and emotional well-being of the population has deteriorated over the past 50 years. We work longer for less in real terms, own less actual property or are deeper underwater, and are more emotionally and physically unhealthy on average than in the 1970’s. Lifetimes have been extended, but quality of life in those added years is poor due to “diseases” that have been shown to be the result of a modern lifestyle. In fact, the whole concept of calling these things “diseases” obscures the fact that they are not something outside your control that you “catch” like a cold or flu or the bubonic plague, but are instead the result of lifestyle decisions. There are the obvious ones like obesity and high blood pressure, many forms of cancer, and increasing research into Alzheimer’s and dementia is starting to show that there are a lot of ways to mitigate them through lifestyle.

So yes, in exchange for making another sector of the workforce redundant and having to spend an additional 25%-50% of their recreational time re-training, we are promised that this will make the planet and our own lives so much better. 90% of the population won’t even question it and will tend to react with denial, anger and various forms of internal bargaining in their attitudes towards the 10% who do not accept the bait.

I have been using ChatGPT and also AI art generators to some positive effect, and am largely in acceptance and on board with AI being increasingly a part of the present and the future. However, there are things I wonder about, and while I think the list of polarities here is terrific, very intelligent, I think starting with the polarity of “things and people” is sort of foundational. The “AI as Other and as Extension of Self” polarity comes closest to this, and I agree with what is stated under that polarity, but still…

AI is a tool, an instrument, a human artifact created by humans and used by humans to create other artifacts. It is a “thing” in that sense, not a physical thing or object per se, but a digital/virtual thing, and is dependent on people for its creation, maintenance, and transformation or even dissolution. It is also a reflection of ourselves as people, and we are more and more dependent on these new technologies in various ways. So there is a definite interdependence between thing and person, and while I agree we can be kind to and revere things, I don’t think we should blur the lines at this point between AI things and people. As I said in an earlier post, I am not fond of AI being assigned personhood or sex/gender. I prefer it to be an “It” for a little while longer.

Chat particularly is removed, distant from the planetary elements and how they are represented or symbolized in humans. Compare a Corey write-up to an AI write-up and there is no fire in the AI, no sense of luminosity of consciousness, and no fire in the sense of vitality/energy–things you definitely find reading Corey. There is no water energy, in the sense of ‘feeling’ or ‘soul’ or even flow at times. There is little of the earth element to be sensed, no “chill to the bone” or “grinding of teeth,” no scent of rich humus or taste of fruit, whatever. AI Chat seems to be mostly air, the wind of thought, a (largely) accurate and adequate (orange) reportage of data, which indeed can be useful, but is far from being human, and seems to remove us to some extent from the foundational elemental energies of the planet. Perhaps the more I use it, the more I will see it differently, but that is my experience so far, and I do wonder about the potential of AI written content in dulling the mind, for some of it reads like a dull technical manual. I understand that virtual reality fills in some of these sensory experiences and experiences of the elements, but I hope it doesn’t entirely replace them in people’s lives.

Another thing I wonder about is the lack of a marking system to identify what is AI-generated content. I read this article https://www.msn.com/en-us/news/technology/there-are-2-ai-booms-happening-we-re-caught-in-the-middle/ar-AA1b9OuZ?ocid=hpmsn&cvid=604399875e1d40ae885c453fc511b675&ei=40, and from what I understand Google, for instance, is using a watermark in its AI tool, but it seems more a protective device for the company than a user-friendly service, as one has to be pretty tech-savvy and motivated to dig into the metadata to find it. Also, Origin, from GPTZero, is a Chrome extension that can analyze any text online to discover if it is AI-generated. Again, this seems unnecessarily burdensome.

So I had a conversation with ChatGPT about this subject, about why AI tech creators did not build in a way for people to identify text and content as AI from the beginning. The response was that (1) people might be biased against AI text, not trust it, if it were so identified (the response failed to mention that some people might not trust it without that identification), and that (2) to do so was a technical challenge, which I pushed back on: the AI itself can be created, so that’s not too great a technical challenge, yet adding some kind of marking within the content to identify it as AI is? I didn’t buy that, so it told me that it was due to the “rapid pace of technological advancement.” As if the “pace” is somehow a mysterious thing that just happens, not something being decided and acted out by humans. I queried why “rapid advancement” was given precedence over ethical considerations like transparency, accountability, and reduction of confusion. It then talked about the difficulty of having a universal standard mark, with there being various companies/creators developing AI projects, but said that there is a non-profit, Partnership on AI, working on guidelines for a user-friendly marking system.

What I gather from the entirety of the conversation is that the competitive side of capitalism overrides the cooperative side. And that ethical considerations were not an early part of AI development, nor are user-friendly methods for marking AI content being offered, nor will they probably for some time. This is a problem. It is not that different from social media sites lagging in an adequate response to fake news and bots; certain potentially problematic things should be anticipated with new innovations and addressed from the get-go. The lack of these ethical considerations and user-friendly devices reminds me of the five acts of consciousness: creation, maintenance, and dissolution/transformation, which I already mentioned, and the last two acts, to conceal and reveal. Concealment seems to be the name of the game with AI creators at this point; regulation is definitely going to be needed.
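As a purely hypothetical illustration of why in-content marks tend to serve the vendor more than the reader, here is a toy “invisible” watermark built from zero-width Unicode characters. This is not how Google’s (or anyone’s) actual watermark works; it just shows how easily such a mark hides from users and how easily it is destroyed.

```python
# Toy "invisible" watermark: append a tag encoded as zero-width characters.
# Purely illustrative; real vendor watermarks are statistical, not appended.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner as bit symbols

def embed(text, tag="AI"):
    # Encode the tag's bytes as a run of invisible "bit" characters.
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text):
    # Collect the hidden bits, if any, and decode them back to the tag.
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    if not bits:
        return None
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

marked = embed("This paragraph was machine-generated.")
print(extract(marked))  # the tag survives an exact copy

# But any filter that drops non-printing characters silently erases the mark:
stripped = "".join(ch for ch in marked if ch not in (ZW0, ZW1))
print(extract(stripped))  # nothing left to find
```

The reader sees nothing either way, which is exactly the concealment problem: the mark is checkable only by someone who already knows to look for it, with the right tool.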

Well, this happened rather fast:

The worry here is that very soon these fantasy games will be “better than real life,” at least for people who have not experienced a full life. Add in some VR and pornographic mods, both of which already exist, and you have a fairly dark potential for an alternate reality.

I’m a mature man and have a full life behind me (as well as in front of me) to compare these kinds of games against, as well as rich real-life social connections, but for teenagers and young adults I see increasing problems on the horizon as they spend more and more of their time developing deeper relationships with “NPCs” than they do with real people. If I had a child or teenager I would not allow them to play these games, lol.

Though, when this is officially released I probably will do another playthrough, lol.

ADD -------
Though, this could be a matter of people making videos promising a mod that does not actually exist. I cannot find any actual documentation or files showing the mod is legit. All I can find are articles pointing to some YouTube videos.

As a case study, I used ChatGPT off and on all day long yesterday to do my actual job. Bottom line: it’s a productivity tool. Sometimes it suggested things I had not thought of. Often, I was much better informed than it was. Anyway, there was no question about who was in charge. On average, ChatGPT is a quicker, less wordy version of Wikipedia. Also, I like to have it write sonnets in class for comic relief:

Me: Write a sonnet about full stack development


In realms where technology does thrive,
A wondrous craft, full stack, takes its stand,
Where coders with their skills do strive,
To build the web with an expert hand.

First, in the realm of frontend they reside,
Crafting interfaces, sleek and refined,
With HTML, CSS, and JavaScript as their guide,
They shape the user’s experience, designed.

But lo, the backend calls with equal might,
For data and logic must intertwine,
With databases and servers, day and night,
They build the core, a robust design.

Full stack developers, masters of their art,
Creating digital worlds, where dreams impart.

Are there integral people, both at integral stages of development and also integrally-aware, working on AI development? It seems to me that reaching out to them with some friendly hints and maybe pamphlets with diagrams on them might be a good idea…

Hi @sankui. My thinking is the IT industry is ripe for an integral consulting practice to inform all that IT does. In a modest way, I’m already working on that. The proposed consulting practice would draw inspiration from integral and adjacent theories in the meaning-making space to inform AI design, training, and applications, as well as a wider spectrum of IT concerns such as DEI in workforce and customer service, psychological impact of services on users, and environmental impact of IT systems and activities.