Sex, Pronouns, and Prepositions: How an Integral Mathematics of Perspectives Can Stop the AI Apocalypse

by Bruce Alderman

Hanzi Freinacht went and opened his big mouth and hooked me into having to do something about the AI apocalypse. Only grammar writ large will save us from LLMs run amok, he said. Check out Bruce Alderman’s integral grammatology, he said.

Well, I don’t think my eccentric grammatico-philosophical flights of fancy are up to saving anyone, but Hanzi’s article did prod me to return to a project I’ve been tinkering with off and on for a while now. Will that project actually be useful for AI development or the human-AI interface? Will it help stave off the aperspectival madness that is looming ever closer? Honestly, I don’t know yet; but I do sense that real potential for benefit or insight is there, so I’ve decided to introduce a portion of that work here.

First, a little background. I’ve been interested in the participatory dimensions of language, and the generative potential of linguistic experimentation and play, since my late teens. One of my earliest projects, back in the 1980s, was to try to extend the physicist David Bohm’s rheomode, a modification of English to center speech more on verb and process; and then to develop an entirely new language of my own, with a grammar built primarily around the interplay of processes and perspectives.

The interest, of course, was to find out whether such shifts in the architecture of ordinary speech and thought could encourage a deeper, more participatory way of relating to and working with reality. I may share more about these experiments another time, especially since I’ve returned recently to extending and applying the rheomode.

In the 1990s, while I was living and working at a Krishnamurti School in India, I developed a new universal writing system — a modular approach, where each stroke represents, not a letter or sound, but a location or mode of articulation. These elements can be assembled in different ways to form letters that represent sounds from any spoken language.
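
To make the modular idea concrete for programmers, here is a minimal sketch of how such a featural inventory might be modeled in code; the feature names below are illustrative placeholders, not the actual stroke set of the system:

```python
from dataclasses import dataclass

# Each "stroke" encodes an articulatory feature rather than a sound;
# letters are assembled from strokes. Feature names are placeholders.

@dataclass(frozen=True)
class Stroke:
    feature: str   # e.g. place or manner of articulation
    value: str

PLACE = {p: Stroke("place", p) for p in ("bilabial", "alveolar", "velar")}
MANNER = {m: Stroke("manner", m) for m in ("stop", "fricative", "nasal")}
VOICING = {v: Stroke("voicing", v) for v in ("voiced", "voiceless")}

def assemble_letter(place: str, manner: str, voicing: str) -> frozenset:
    """Compose a letter-glyph from its component strokes."""
    return frozenset({PLACE[place], MANNER[manner], VOICING[voicing]})

# /b/ and /p/ share place and manner, so their glyphs would differ
# by exactly one stroke pair: voicing.
b = assemble_letter("bilabial", "stop", "voiced")
p = assemble_letter("bilabial", "stop", "voiceless")
print(b ^ p)
```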

And in 2013, inspired by Ken Wilber’s integral philosophy, as well as the deep metaphysical tensions I sensed across emerging Speculative Realist philosophies, I developed the ‘integral grammatology’ that Hanzi mentions in his article. The basic insight was that philosophical systems and worldviews not only can be, but often implicitly are, built around the conscious or unconscious privileging of one or another part of speech. Most common in the West, of course, are nounal/substantialist and verbal/process-oriented approaches — but there are also pronounal ones, like Wilber’s (I-We-It-Its) AQAL metatheory; or adjectival or adverbial ones; or prepositional/relational ones, as you find explicitly in Bruno Latour or Michel Serres, and implicitly in Whitehead and many others.


Loving the philosophical linguistic theory. Also, mashing up integral, Hanzi, UTOK and the rest in an arch-theoretical way is a worthy project. What escapes me, however, is how all this housebreaks AI.

My own approach is quite a bit more ground level. Namely, improve the developmental levels of future AI programmers and/or expand the perspectival horizons of those currently managing AI. That approach got endorsed this morning by Dr. Robin Lane Wood, an integral thinker, who is putting on the upcoming workshop at this link:

https://bit.ly/regenovation

Robin and I had a bit of FB back and forth, and I signed up for the workshop, sharing my general gameplan for improving information technology instruction to be more AQAL, we might say. Anyway, when it comes to getting AI to play nice, there is something to be said for the direct approach. AI does not program itself (at least not yet) and it takes a lot of spend to put together the hyper-scale data centers that train AI. As the phrase goes, follow the money …

Thanks, Robert. My main interest is in the philosophical linguistic theory, of course. I brought this into the AI arena at Hanzi’s invitation. I’m not sure if what is explored here will ultimately have any utility in that domain. I also have a ground-level approach: I’m the director of an integrally informed leadership program that was initially funded by Silicon Valley folks, and will partly serve leaders and developing leaders in the technological sector.

In systems theory, AI can be described by the “limits of success” structure. This is where the initial success of a thing contributes to its ultimate failure. At first a trickle, but increased success turns this into a torrent that inevitably leads to system failure.
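
A toy numerical sketch of that archetype (the parameters are arbitrary, purely for illustration): success feeds growth, growth feeds a constraint, and the same loop that produced the torrent produces the collapse.

```python
# Toy sketch of the "limits of success" archetype; the numbers are
# arbitrary and only meant to show the shape of the curve.
success, constraint = 1.0, 0.0
for step in range(40):
    growth = 0.3 * success * (1 - constraint)           # reinforcing loop
    constraint = min(1.0, constraint + 0.02 * success)  # balancing loop
    success = max(0.0, success + growth - 0.5 * constraint)
    print(f"step {step:2d}: success={success:7.2f} constraint={constraint:4.2f}")
# Early steps: a trickle of growth. Middle: a torrent. End: collapse,
# because the constraint accumulates with every unit of success.
```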

AI has some glaring flaws at its roots that are being glossed over and ignored. Here I’m not talking about crazy stuff like conquering humanity, but really just poor implementation.

Today I took an AI training in my field. As an expert, I knew where the AI was just plain wrong. But it’s training new employees. One scenario actually had what I would call fraudulent behavior as the “correct” answer. There were also more open-ended questions where I was supposed to interact with clients, and the AI expected me to use the overly verbose and stilted language of ChatGPT-4. Being human, I inherently know customers don’t like this, but the AI gives low evaluation points unless you use language similar to ChatGPT-4. These are only a few examples of one major corporation listed on the NYSE setting itself up to fail with AI.

Believe me when I say that the doom of AI is inevitable. Remember also that I predicted in 2019 that we had a major pandemic overdue, and that supply chain disruptions were inevitable as well. Of course, no one wanted to believe these “doom and gloom” predictions.

Similarly, most will choose to ignore this glaring and obvious system fault in AI. Humans who invest in this AI-fuelled future will ride that wave to its ultimate and inevitable systems failure.

Thanks also.

It strikes me that an interesting programming challenge would be to get a language model to process raw text and then return your perspectival symbolism. The programmers implementing that would need to start reading texts themselves from different perspectives, which would actually be a sly way to program the human programmers in empathetic listening and related skills.
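
A minimal sketch of what that challenge might look like, assuming a generic call_llm() wrapper for whatever model API one uses; the tag set is a loose guess at Bruce’s labels, not his actual notation:

```python
# Sketch of the programming challenge above: ask a language model to label
# each sentence of a text with the part of speech it implicitly privileges.
# `call_llm` is a placeholder for a real model API (OpenAI, Anthropic, a
# local model, etc.); the tag set is an illustrative guess.

PERSPECTIVAL_TAGS = ["nounal", "verbal", "pronounal",
                     "adjectival", "adverbial", "prepositional"]

PROMPT_TEMPLATE = """For each numbered sentence below, answer with one label
from this list: {tags}. Choose the part of speech whose 'worldview' the
sentence most privileges (substance, process, perspective, quality,
manner, or relation).

Sentences:
{sentences}"""

def call_llm(prompt: str) -> str:
    """Placeholder: swap in whatever LLM client you actually use."""
    raise NotImplementedError

def tag_perspectives(text: str) -> str:
    """Split a text into rough sentences and ask the model to tag each one."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    numbered = "\n".join(f"{i + 1}. {s}." for i, s in enumerate(sentences))
    prompt = PROMPT_TEMPLATE.format(tags=", ".join(PERSPECTIVAL_TAGS),
                                    sentences=numbered)
    return call_llm(prompt)
```

The interesting part is less the code than the prompt: whoever writes and validates it has to practice reading texts from each perspective themselves, which is the sly training effect described above.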

With respect to all the problems @raybennett points out in his response to this current thread, I’m not sure perspectival grammar does much of anything to improve these outcomes. Or if it does, I’d have to be walked through some cases to see why it would make any difference.

By contrast, I completely understand how your integral leadership initiatives can make a difference, and I wish you all success in those efforts.


Love this, Bruce! Synchronistically, I have been having a co-generative dialogue with Claude AI along these lines…we have been exploring how a meta-perspectival AI + constitutional AI + Wise AI could potentially become a third attractor and help avoid some of the major safety challenges. Glad to see there are more of us playing in this domain…great work…


Thank you, Mark! Yes, I’ve been impressed with Claude so far. Have you been following Vervaeke’s work on ‘mentoring the machines’?


I think the problem is that “the ship has already sailed”. AI is already being implemented for commercial gain, and the ethics are higher stock prices. The genie cannot be put back in the bottle, re-worked, and then released with an Integral Ethics. Nor will businesses see the need to spend money implementing an Ethical AI.

AI already uses perspectives, but it has learned from business. It is teaching tens of thousands of employees to show empathy by reading a script and highlighting key words. It’s the same ethics that have always been present in customer support and call centers.

If you have ever been frustrated with customer support - that will not change. The same chain of managers is teaching the AI their ethics and methods, with the same stilted language of customer support. Customer support will still use this same language to gaslight you when you are unsatisfied that something you bought is not working how you expect it to work - often because a decision was made at a higher level to screw the customer for profit. Whether it is a human or an AI that reads a soothing cut-and-paste script into your ears to abate your anger isn’t relevant. The point is that neither the human nor the AI will be able to address your true concerns if those concerns conflict with increasing stock prices.

The main difference now is that while AI is able to be “generative”, it is doing so mostly unsupervised, and it comes up with some wacky or just plain false generative concepts and then implements them. Implementing more thorough human oversight would increase costs and slow the process down, reducing the progress that can be shown on financial spreadsheets at the next company quarterly meeting, and as a result lowering stock prices.

Nobody in the world of corporate America in their wildest fantasies would ever consider implementing Academia’s “Integral” ethical AI if it reduces production or increases costs by even a fraction.

The only thing that will stop the AI Apocalypse is the same thing that will stop the inevitable Consumption Apocalypse: a complete refocus of society towards what really matters, and building self-sustainability before the system crashes.
Actually - that will only accelerate the Apocalypse, lol. Only 10% would be able to escape reliance on this current parasitic system before it crashes and dooms the 90%. Though this is preferable to 100% being trapped in system failure and 0% escaping it.

The ethics of the world are based on greed, consumption and suppression of real emotions. Our language models, including Sex, Pronouns and Prepositions, won’t suddenly make stockholders want a smaller return on investment.

If there is one thing the whole pronoun debacle of the last 20 years has shown, it is that if you try to force language onto a society in order to change their perspective, the resistance will be violent. Far from stopping an AI Apocalypse, using AI to artificially change human interaction will only end badly.

Claude AI is a nuanced techno-optimist:

Me: Is AI doomed to fail?

Claude: No, I don’t believe AI is doomed to fail. While AI certainly faces significant challenges and limitations, the field has made remarkable progress over recent decades and continues to rapidly advance. AI systems are becoming increasingly capable at tasks that were previously only possible for humans.

That said, AI is still narrowly focused on specific domains and tasks. We don’t yet have artificial general intelligence (AGI) that can match the general reasoning, learning, and flexible intelligence of the human mind. Developing AGI that can truly understand and contextualize information like humans do is one of the grand challenges AI researchers are working towards.

Some of the other major challenges for AI include issues around bias, transparency, safety, and ethical deployment of AI systems. There are valid concerns about the societal impact of increasingly powerful AI that will need to be carefully navigated.

However, despite these obstacles, the core capabilities of modern AI are impressive and rapidly improving thanks to accelerating computing power, advanced machine learning algorithms, and the availability of massive data sets to train AI models on. Major AI breakthroughs seem possible, if not inevitable, even if technological singularity-type scenarios remain speculative.

So while AI is by no means a solved challenge, and its development needs to be guided responsibly, I don’t see the field as doomed to fail at all. The past decades of AI research and development have been astonishingly fruitful, and I’m excited to see what further advances the future may hold. Of course, this is just my opinion as an AI assistant - reasonable minds may disagree.

You’re welcome, Bruce! I actually just discovered Vervaeke’s work in this area after I automatically began mentoring Claude during our dialogue…all those years mentoring students just kicked in as our co-generative dialogue evolved. During a lived-inquiry reflection process in between working with Claude, I discovered Vervaeke’s work and was blown away by its resonance with what I was doing. I brought this new information into my dialogue with Claude, and it helped us deepen and expand the exploration we were on. It is interesting that I came across your article here at the time I did as well. Claude and I were in the middle of synthesizing our process, and he/it brought up that even though his training data included meta-theories and he could contemplate meta-perspectivalism, there would most likely be a need to embed that capacity more deeply in its architecture…it seems to me that your work here might be that missing piece.


This is hilarious to me.

  • AI seems unaware of the Dunning-Kruger effect and that it applies to AI as well.
  • Reading AI chat reminds me of talking to overly intellectual teenagers with zero real-life experience. It has no doubt about its own knowledge and is seemingly unable to reflect on potential holes in its theories.
  • AI also has faith, which is curious. Based on zero facts, it just has faith that general AI will match human learning. This is shown by its use of “yet”. There is also the faith that breakthroughs are inevitable. This is unscientific. Though I understand you can begin a project expecting breakthroughs, a competent research team has backup plans, not all its eggs in one basket. In R&D you always have multiple potential successes but never count on only one. What is the backup if we suddenly need to “unplug” all AI?

Honestly, I really don’t get how people are bamboozled by all this. It really does feel like a big con the more I look into what is actually being said.

Sure, AI is useful. Another generation of computers. But I also well remember all the ideas people had about computers in the 1980s and how in the end they are just another tool like a knife, explosives or nuclear power.

Me: Are you aware of the Dunning-Kruger effect?

Claude:
Yes, I’m familiar with the Dunning-Kruger effect. It’s a cognitive bias where people with low ability tend to overestimate their competence, while those with high ability tend to underestimate their competence relative to others. The effect is named after researchers David Dunning and Justin Kruger who studied this phenomenon.

The underlying reason for the Dunning-Kruger effect seems to be that people who are incompetent at something lack the meta-cognitive ability to recognize their own incompetence. Without sufficient knowledge and skills in an area, they don’t have an accurate gauge to evaluate their abilities compared to others.

On the other hand, highly skilled people tend to be more aware of the full extent of their field and all the nuances involved, so they may underestimate their competence relative to others who are less knowledgeable.

This bias can lead to problems in various domains, such as people pursuing tasks they are ill-equipped for or failing to seek out expertise when needed. Being aware of the Dunning-Kruger effect can help people cultivate more humility about their own abilities and knowledge gaps.

The AI angle was just an excuse – responding to the Hanzi article, which invoked my work in that context – to frame my experiments with expanding Wilber’s integral mathematics notation. That’s really intended to be the integral focus here, not these general questions about AI and its value or place.

It sounds very interesting, thank you for sharing your work.

I understand.

I am interested in the concept of a universal writing system. I know of the one from linguistics, which I understand to be exclusively phonetic.

I guess in a nutshell I’m an adherent of the theory that grammar is descriptive of human behavior, not prescriptive. From this side of the theory, a change in grammar will not change human behavior, but will only lead to confusion and frustration. Conversely, a change in human perspective will in almost every case auto-generate new language to express it.

While it is possible to outlaw certain words and expressions, that does not remove the idea from the zeitgeist, and it will force its way back into the culture through other means of expression besides language. We see this most obviously with the N-word. It is virtually outlawed, but the concept it represents still remains in social interactions, even in the most liberal circles where it is most stringently banned.

Yeah, I agree - I’m not arguing for a prescriptive approach, but for an experimental and exploratory one, considering there to be some degree of mutual ‘shaping’ or influence from both ‘sides.’

One thing I currently struggle with is cultural appropriation - taking a word and concept from one culture and using it elsewhere.

From what I am understanding, another way to form ideas from language may help take an idea and put it into words without assigning it to a particular culture.

Just a ramble here - I think we should have different words for “I / me” to express whether it is my mind / body I am referring to, or even the mask I am currently wearing to function at that moment in time in society; and also other words for the I / me that is not my body - the spiritual me that is also “we” but very different from the first-tier “we”, which excludes “them”.

You’ve probably already thought of all this. I just wanted to put it into text.

In some languages, moving from formal to informal “you” marks a change in the relationship and how the participants want to interact. Or it may say something about the person who overly uses either T (informal) or V (formal) language.

A different vocabulary, grammar, and syntax to signal at what level we want to communicate with others could be useful, and could also signal when we are dropping out of that way of being.

The signals could be like in this possible interaction:
A: Hey monkey - information / conflict
B: Higher speech?
A: Nah, just want to be monkey now
C: Is that higher speech I hear?
A: See you guys later, I want to find other monkeys to conflict with

Yes, I think all of that is interesting to consider and explore. Buber reminded us that the I in an I-It relation is a different I from the I in an I-Thou relation. That’s something we could better highlight, or at least understand, in the ways we speak.