Ken Wilber / AQAL on AI & GPT with a focus on risks

Dear colleagues, I was asked by a local coaching/business consulting organization to present Ken Wilber’s views on Artificial Intelligence (AI) and GPT. The presentation will be at the beginning of April this year (2024).

Even though I have made Russian translations of about a dozen of Ken’s books, I thought it would be good to ask for your help rather than try and do research on my own.

Could you please help me by providing links to texts and talks, as well as quotations and descriptions of Ken’s up-to-date views on the topic? If there are any relevant threads or messages in this forum, please point me to them.

I saw there were a few talks on YouTube, and I also edited the Russian translation of Boomeritis, where Ken explores some issues related to AI/transhumanism (actually, my partner Tatyana and I also coached some theatrical directors and actors in St. Petersburg who did a stage adaptation of Boomeritis a few years ago; very loosely based, I should say, but still an official stage adaptation). A Theory of Everything also touches on the issue of AI, but it was written in the pre-GPT era.

I will make the presentation on April 3, 2024. Would really appreciate your input.

As for myself, I am skeptical of AI. It will certainly change our technological environments, and many technical improvements are coming soon, yet it will also stunt the development of some of our own capacities and intelligences. Soon, 90–99% of content will be AI-generated spam. The art of translation will be lost in the coming generations. And as for the subtle energies that texts convey, I am not sure that AI will be able to transmit those to the same degree that a decent human interpreter does. There is also the issue of creativity: the human spirit (say, in the process of translation) always draws on creativity, and there is always novelty, whereas AI is mostly a horizontal reshuffling of old semiotic bits and pieces.

I would really appreciate your help in articulating a more complex Integral Wilberian view on the subject than what I already have.

I am especially interested in an AQAL Integral analysis of the short-, mid-, and long-term downsides/risks of AI/GPT tech proliferation. There are so many glorifying narratives that I find it difficult to locate a sober [AQAL] analysis.

P.S. Previously, I posted this inquiry to FB Integral Global group.

I don’t know if anyone has actually spelled this out for AI, but here is my position:

(I am merely a layman with no published works or PhD. What follows therefore carries no weight in an academic presentation.)

It is also important to note that the following will make it difficult to “sell” AI and the large budgets for AI-based projects. Because our economy, education, and government are built on large, bloated budgets, the following will generally be unwelcome in most large organizational presentations, since those organizations are likely counting on a large line item for AI implementation across many departments.

I - not possible with AI. AI has neither emotions nor states of mind. It has no mind as we describe the human mind. Its thoughts are limited exclusively to logic. Of all the multiple intelligences, AI is limited to only those that involve measuring and repetition. It cannot generate new concepts by non-random means.

IT - AI resides 100% within its own environment. It is not able to know, for example, what a human is, or even what fingers are.

WE - AI is only able to “share” to the degree that humans hallucinate this and project it onto AI.

ITs - AI’s view is corrupted and fundamentally flawed. It is able to render complex interobjective tasks, but with limited objective capabilities and light-speed processing, AI can only create interobjective concepts that have glaring flaws; if implemented without human review, they will likely contain substantial errors at the foundation level.

Levels: AI is only able to simulate any first-tier level based on the input and parameters it receives. AI is not able to achieve second tier in a true sense, but it may be able to fool humans into believing it is second tier, just as millions of human charlatans have fooled the masses throughout history up to this very day.

Grow Up - AI is limited as outlined above.

Wake Up - AI cannot “wake up”. It is completely unaware and incapable of learning or of experiencing a waking-up phenomenon. For example, it cannot achieve samadhi or enlightenment.

Clean Up - Since cleaning up requires stepping outside one’s own view, AI is currently unable to do this. Perhaps in a future “twin brain” or “multiple archetype brain” implementation, AI may be able to self-execute Parts Therapy or similar cleaning-up techniques.


One thing that has occurred to me lately is that AI can mediate human-to-human communication. Everything digital already does that; so does everything linguistic, for that matter.


Well, yes, marks on a stick helped humans mediate how many goats they owned, and later came text, then the telephone and video. I guess if we include these as “we,” then OK. I see AI as a tool, not as part of who I am, nor as a family member or a member of the community, for example.

I guess I see that my values and meanings are in me, not in a book. For me, when I share meaning, it may be superficially in the medium, but the actual sharing is in the mind and is a perspective. All that has ever been written is irrelevant to me unless I accept and internalize it.


Thank you, very useful! Too often PhDs are pursuing their own agendas, so thank you!

I do hope more people chime in. The date of my presentation has been rescheduled to a few weeks later.