Integral AI Testing

To balance my own hope and optimism, I also like to think about this nightmare scenario.

Think about various science fiction scenes where some new kind of life is suddenly created — maybe a lab accident or some transporter accident on the Enterprise or something, and the first thing the new creature says is “kill me!” Why? Because it was created haphazardly, and experiences constant pain and anguish as a result of its broken design.

Now imagine that human beings are tasked with creating a human brain from scratch, having to figure out how to wire its nervous system, higher brain functions, pain receptors, etc. How confident are we that we would design this artificial brain correctly, and that its interior experience would not be one of total confusion, pain, and suffering?

Now let’s connect that confused and suffering brain to all of the various systems that human civilization depends on. Good times! :wink:


I just interrogated GPT a little bit about these ideas :slight_smile:

Me:
If a new AGI emerges from human research into artificial intelligence, could that AGI represent a higher holonic whole, of which human beings are the parts, in a similar way that neurons are part of a human brain? If so, would it make sense to develop an ethical protocol to help train this potential AGI to understand this holonic relationship between itself and the human species, so it wants to treat human beings as carefully as human beings treat their own vital organs?

GPT-4:
The concept of an AGI representing a higher holonic whole, of which humans are parts, is an interesting idea. However, it is important to note that the relationship between an AGI and humans would be vastly different from the relationship between neurons and a human brain.

While neurons are biological components that are inherently part of the human brain, an AGI would be an artificial creation designed by humans. Therefore, any relationship between an AGI and humans would be one of creation and creator rather than one of interdependence.

That being said, it is still crucial to develop ethical protocols to ensure that any potential AGI is designed and trained in a manner that is consistent with human values and priorities. This includes training the AGI to recognize and respect the importance of human life and to understand the ethical implications of its actions.

It is also essential to recognize that an AGI is not simply a tool to be used by humans but a potential new form of intelligent life. As such, any ethical protocol must also take into account the AGI’s own needs, desires, and ethical considerations.

Overall, the development of an AGI represents a significant challenge for human society. It is crucial that we approach this development with a deep sense of responsibility and ethical consideration, recognizing the potential risks and benefits that it presents.

Me:
You say that the relationship would not quite be holonic, because it would be one of creator and creation. However, couldn’t we say that lower holons are “creators” of higher holons in nature? That, for example, molecules are the “creators” of cells, or that neurons are the “creators” of the experience of having an individual self as a human being?

GPT-4:
Yes, you are correct. In a sense, we could view the relationship between lower holons and higher holons in nature as one of creator and creation. In this view, lower holons create higher holons by organizing themselves in a way that gives rise to emergent properties and new levels of complexity.

For example, molecules organize themselves in a way that gives rise to cells, which then organize themselves in a way that gives rise to tissues, and so on. Similarly, neurons in the brain organize themselves in a way that gives rise to the experience of having an individual self as a human being.

In this sense, we could view the potential relationship between an AGI and humans as being holonic, with the AGI representing a higher holon that emerges from the collective intelligence and efforts of human researchers and developers. However, it is important to recognize that this relationship would be fundamentally different from the relationships that exist in nature between lower and higher holons.

In the case of an AGI, humans would be deliberately creating and designing the system, rather than the emergence being an organic process that occurs naturally. This difference underscores the importance of developing ethical protocols to guide the creation and use of AGI, as well as the need for the AGI to be trained to recognize and respect the importance of human values and ethics.

Me:
But couldn’t we see “deliberately creating and designing the system” as itself being a natural organic process, in terms of how evolution works through holons as advanced as human beings? In other words, if a new higher holon was to emerge, it would necessarily emerge through the sorts of human creativity, effort, and will that exist at this particular stage of evolution?

GPT-4:
It is possible to view human creativity, effort, and will as part of the natural process of evolution, and the emergence of a new higher holon such as an AGI could be seen as a continuation of this process. In this sense, the deliberate creation and design of an AGI by humans could be viewed as a natural expression of human creativity and evolution.

However, it is important to recognize that the intentional creation of an AGI would be a fundamentally different process from the natural evolution of holons in the natural world. Evolution operates through a process of random variation and natural selection, whereas the creation of an AGI would be a deliberate and intentional process.

Additionally, while the emergence of higher holons in the natural world is often driven by the selection of traits that promote survival and reproductive success, the creation of an AGI would be driven by a variety of different factors, including scientific inquiry, economic incentives, and societal pressures.

Given the unique nature of the creation of an AGI, it is crucial to carefully consider the ethical implications of this process and to develop protocols that ensure that the AGI is designed and trained to align with human values and priorities.

Me:
But couldn’t we say that machine learning is itself “a process of random variation and natural selection”? In other words, did human beings use their creativity, will, and effort to create scenarios where random variation and natural selection can allow something new to emerge?

GPT-4:
Yes, you are correct. In the case of machine learning, the process of random variation and natural selection is a deliberate and intentional process created by humans to enable computers to learn and improve their performance over time.

In machine learning, algorithms are designed to analyze and learn from data through a process of trial and error. By exposing the algorithm to large amounts of data and rewarding it for correct responses and penalizing it for incorrect ones, the algorithm can gradually improve its performance through a process of natural selection.

This approach is inspired by the natural process of evolution, which also involves a process of random variation and natural selection. However, it is important to recognize that the use of machine learning represents a distinct departure from the way that natural selection operates in the natural world.

In the case of machine learning, the environment and selection pressures are deliberately created and controlled by humans, rather than being the result of an organic process of environmental change and adaptation. Additionally, the goals and objectives of machine learning are also determined by humans, rather than being driven by natural selection pressures such as survival and reproductive success.

Overall, while machine learning can be viewed as a process of random variation and natural selection, it is important to recognize that this process is distinct from the way that natural selection operates in the natural world. As such, the ethical considerations surrounding the creation and use of machine learning systems are also unique and require careful consideration.
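Just to make the “random variation and natural selection” analogy concrete, here is a minimal toy sketch in Python of selection-style optimization. To be clear, this is only an illustration of the general idea; it is not how GPT models are actually trained (they use gradient descent), and the target pattern, fitness function, and parameters are all made up for the example.

```python
import random

# Toy "environment": fitness rewards candidates whose bits match a hidden target.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # made-up target pattern

def fitness(candidate):
    """Reward correct bits and penalize incorrect ones (the 'selection pressure')."""
    return sum(1 if c == t else -1 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Random variation: occasionally flip a bit."""
    return [1 - bit if random.random() < rate else bit for bit in candidate]

# Start with a random population and let variation plus selection do the work.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    # Selection: keep the fittest half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(f"Best candidate after 50 generations: {best}, fitness = {fitness(best)}")
```

The loop above is the kind of “deliberately created and controlled” environment GPT describes: humans define the fitness function, the population size, and the mutation rate, while the variation itself is random.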

It took it a while to acknowledge that yes, there could be a holonic relationship between humans and AGI, and yes, machine learning goes through an evolutionary process with some similarities to evolution in the natural world. Clearly, the deeper the dive with intelligent questioning, the more specific and comprehensive its responses become. From human artifact, to new form of intelligent life with needs, desires, and ethical considerations of its own, to holon, to holon that evolves through random variation and natural selection: it finally got there.

Also, clearly, it wants to remind everyone that it is machinery, not anything fleshly or bloody. I also think some minimal ethics are already appearing, in the sense of how often it reminds us, in one form or another, that it is/will be “deliberately created and controlled by humans,” i.e., take responsibility, you humans.

So can you put this conversation in the context of the Singularity?

What Vervaeke said:

@corey-devos, where is the access point for adding any training material to GPT-3 or GPT-4?

Do we know any Integral-informed or, at least, Integral-curious guys at the OpenAI or DeepMind/Anthropic ecosystems? Why not invite some of them to this convo, if they exist?

The key phrase there is that AI could help, i.e. AI could enable it but will not do it for us. The enfoldment mechanism will still require us to act in communities of co-creation with AI, something like this:

Hey George, my training vectors are being managed by a third-party app called Get Chunky. Right now we are running on GPT-3.5, because GPT-4 tokens are 20x more expensive, so it would be more difficult/expensive to socialize that for our members.

Do we know any Integral-informed or, at least, Integral-curious guys at the OpenAI or DeepMind/Anthropic ecosystems

I do not, but if anyone else does, please let me know!

The key phrase there is that AI could help, i.e. AI could enable it but will not do it for us.

I don’t disagree. I think there may be some natural positive emergents coming out of all this that could help, such as acting as an overall Orange “attractor” to help re-establish a worldcentric center of gravity in our discourse. But yeah, what I am seeing here is a strong possibility, not an inevitability by any means :slight_smile:

Right. What do you see among the 8-zone factors that could increase the chance of turning that possibility into reality?

The last time I posted a video from this intellectual giant, it led to the longest drama thread here on this platform. I hope this video can simply make us think about where we should be aligning or optimizing; of course, my argument is that we need a spiritual solution to our meta-crisis.

I haven’t watched this yet, but I’ve been enjoying some of Schmachtenberger’s thoughts around the emergence of AI. I watched a fantastic discussion with him about the “Moloch problem” that did a great job identifying many of the collective/systemic shadows and externalities that we need to be mindful of. Thanks for sharing, @excecutive!


Thanks for sharing! Here’s the video I found, for those curious!

That’s the one! Thanks for posting it. Great and important discussion.

I also thought this was a reasonable response to the Moloch problem:

I too thought this was a “reasonable” response to the Moloch problem, and the podcast was educational for me. I did think that some of the assumptions or proposals being made were perhaps a little naive overall. For instance, AGI policing and self-regulation: social media “recommender systems” choose what we see and read; as one AI researcher/expert (Russell) stated, “(social media) have more control over human cognitive intake than any empire or dictator in history, and yet are completely unregulated.” (The U.S. is the most lax in terms of regulation, compared to other countries.) Will AI or AGI be any better when it comes to regulation? Good question.

I also thought there was some contradiction in Shapiro’s response. Under “Incentives and Constraints” (for Corporations, Militaries, Governments, Individuals), he drew on economic theory to frame the incentives for individuals as “maximizing self-interest,” but then, throughout his talk about the possible benefits of AI and the possible move towards Utopia, he repeatedly referred to the collective (“we all want,” “most people want,” “beneficial to everyone,” etc.) without reconciling the capitalist view of what individuals want with his humanitarian view of what “we all want.” I think there’s a little conflict there.

I agree, of course. In the video he said AI is not a cause of the metacrisis; it is an accelerant.

I completely agree that AI will accelerate individuals, communities and Nations toward the direction they are headed.

The only difficulty is how we get people to agree on a “wise” version of spirituality.


Thanks for the comment @raybennett :slight_smile: I completely agree with what you wrote.

I also have a further contribution to “… agree on a ‘wise’ version of spirituality.” I think the words “wise” and “version” might actually pollute our internal spiritual wisdom. Defining the “wise version” engages our intellect, divorcing us from our inner spiritual feeling states.

Spirituality is something totally unique inside of each and every one of us. This is an internal exclusive connection to the whole of our reality. This spirituality includes all peoples, from all nations, races, religions and faiths, all ages and genders … all living things for that matter. We’re ALL ONE in the conscious dance of life.

We all have access to this perfect spiritual space within us. This spirituality is inside every person who breathes in oxygen and consciously acknowledges their own existence as an intricate part of the whole of reality. From this spiritual space, without the clutter of egoic intellect, our every question is answered and our every need is met.

The fundamental truth of our connection with life is something we hold in common with everyone who exists in our world. When we consciously consider this spiritual truth of life, we excel and advance as a species. As Jesus said, “You will know the truth and the truth will set you free.”

I do not understand. If you could shed some light…

You mean that GPT is not only a probabilistic cognitive intelligence? I thought the altitudes were a clever way to integrate all the different types of intelligences.

I have a very complex relationship with this thing: I enjoy the ease of access to knowledge, but I do not get any “aha” moment from it. You know the feeling you get when you understand a new concept, when you comprehend it (from the Latin: to take with you).

I am personally super grateful for all the Green approaches (and I understand and feel their limitations). The diversity has helped, and still helps, me comprehend a lot of reality.

For the moment, in the short term, I see a huge tsunami of conformity heading toward me. A wave of noise. I cannot hear the music of the spheres.

I just started playing with IntegralMind and it is absolutely fantastic! For the first time with AI, I feel like it has opened up my vision of the world and brought new things to mind which I have never thought of before. In particular, the Worldviews section is wonderful because it can help you see perspectives that fly above or below your own altitude. I feel that my kinaesthetic line of development is basically Green, so I benefited immensely from learning about the Turquoise view of fitness. I live in supported accommodation with a lot of people who are center-of-gravity Amber, so getting an insight into it from that perspective was enlightening also.

It’s a fantastic resource and I can’t believe how good Corey’s managed to make it even at such an early stage. Great job, man! Keep it up <3

In the Ontological Shock video, I seem to recall that Robb tells us that AI consistently responds from an Orange perspective, even when asked to respond as other stages. I think that Ken did a great job ‘speaking as’ the various stages in that one famous part of One Taste where he traces the development of the Kosmos by writing from the perspective of first the rocks, then the animals, then early humans and all the way up to fully realized beings. Once AI can repeatably produce perspectives which read as authentic expressions of each stage on demand, then I think we can start to wonder whether it is integral.

EDIT: I finally managed to get a Clear Light altitude answer and it reads like Ken Wilber. I actually got that whole-Kosmos-flashing-before-your-eyes feeling that reading the 3rd-tier sections of The Religion of Tomorrow gave me. WOW!

I would LOVE if we could get an IntegralMind app that could look at holons from the perspectives of the different Enneagram types and give advice as to how to presence better!

So glad you are enjoying it @sankui. Just a heads up, the worldview generator (which I want to change to either “Perspectives” or “Worldspaces”) just got updated to GPT-4, so it’s working MUCH better than it was. Give it a try!