Hi Corey. Thanks for replying. I should admit that the “rube” comment was a bit trollish to elicit responses.
Relevant to my discussion is perhaps a personal question to you: What is the significance to you of the Buddha behind you over your right shoulder and the Hindu statue over your left? I’ll get back to those, but I guess first I should show I made an effort to listen to most of the discussion. I’ll put what was said in quotes, followed by my commentary.
We as a species need to “up our game” – agree
“ensure [AI] is aligned with the needs of the species” – impossible because even humans are not aligned
"developed world vs. undeveloped world" – here I think we start to get into the "meat" of the matter for me. A key component of Integral is "subsume", or include. In my life I find it unhealthy to live in the "developed" world 100% of the time. I have to spend some time in the "undeveloped", or less developed, world. This doesn't mean I have to go to Eastern Afghanistan – it can be a lot closer. For me it is the Big Island of Hawaii. It awakens in me parts of the human experience that are dead in a city. If I feel with a certain type of awareness, I actually feel a vibration as soon as I get off the plane at the Hilo airport. So when I hear "developed vs. undeveloped world," the first thing I hear is a bit of a binary in the premise. The second thing I might infer is that the speaker does not recognize the necessity for a human to maintain a grounding connection with the undeveloped. This will tie back into AI a bit later.
Trump and ontological shock – I think Trump was perhaps when the "late early adopters" realized something was wrong. My own shock came in 2000, with a C student being elected president, and increased in the aftermath of 9/11, the WMD lie, and personally knowing British soldiers who confessed to me that they believed their friends had died for no good reason in Iraq. I guess I had a head start on my ontological shock and on accepting that anywhere from 30% to 70% of the US population could easily be manipulated into making very poor decisions, and up to 99% could be manipulated into simply ignoring a thing (collective shadow).
"first contact" – AI seems to be the latest projection of people's delusions onto an imagined supernatural. No offense intended – I know I do it as well. The difference is knowing that I am projecting while at the same time indulging in the fantasy. Just as I might read a fantasy novel and imagine for a time that I am participating in the story, when I close the book I recognize it was all a fantasy. In a similar way, I have conversed with Demons when I was in a Christian paradigm, and with The Goddess of Death, renewal, and the underworld while taking psychedelics – but I also had to recognize that these were not objective experiences but my own subjective experiences pretending to be something outside of myself, while at the same time, in a limited way, "believing" them. With AI, though, I don't have this same belief. I would compare what people experience connecting with AI on a deeper level to what gamers experience with immersive role-playing games. Some games are scripted to a degree that players imagine them as almost a reality – and get lost in that feeling for decades. These worlds and their stories become more emotionally significant than reality, but at the end of the day the reality is that it is all just a fantasy projected onto some digital content.
"being replaced" – I understand that it is shocking to learn that one may be replaced by a machine, and that one's life work, formerly a niche, may become ubiquitous. But this is the same thing others have experienced for decades, just at a higher level: the people who created machines to replace others suddenly realize the machines they created will now replace them. Laborers were the first to be replaced, then "secretarial" types, then more complex processes, then customer service, and now maybe 95% of the workforce may be replaced. For me, emotionally, it doesn't matter if I am replaced by someone in a cheaper labor market or by a computer – it is something that has been in my awareness since I was a child. AI, immigrants, and outsourcing all have the same result: you either have to retrain or become unemployable. In this new phase it's just people who formerly thought they were insulated against being replaced who might be feeling a shock. For me it's just another phase, and I always have to offer something that is not easily replaced by outsourcing to cheaper humans or technology.
This is my upper right, perhaps, compared to what I see as the vast majority of the population's upper right. When I look at the image, I see "personal practice" in the far upper right. For myself, my personal practice brings me more than a belief – it's an observation – that I am far more than just a meat sack with a brain processor. But at the same time, that is all my body is: a meat sack with a processor. I could call that my "lowercase i", but what knows that is my "uppercase I". This uppercase I is available only to humans and can never be replicated by "Intelligence", because it increases in presence in inverse proportion to rational thought. It's also what enables two or more humans to connect without words, only by looking into one another's eyes. When we look into the eyes of another living intelligent creature, we get a reflection back that is far more than just a reflection. Merely looking into a person's eyes can completely change a person's life in less than a minute. I really doubt this will ever be replicated, even with facial recognition combined with AI.
This goes back to the question about the Buddhist and Hindu statues behind Corey in the video. What does Corey believe about what these statues represent? Is it a more surface belief, or does it go down to the core? I am not Buddhist, but assuming I were, the idea that AI could achieve enlightenment or even Samadhi seems absurd. On the other hand, an AI-powered Hindu murti (idol) or temple sounds like an interesting idea. Wire up a temple or statue with sensors and audio, and theoretically, in the Hindu tradition, I believe it would be possible to make a divine being – but the AI would be beside the point and unnecessary in this manifestation. It would be the elements that would contain the consciousness, and the AI would merely be measuring various waveforms and prompting worshippers to chant and so forth. Again, even in this case, the AI would not be the container for consciousness but merely a tool for the consciousness that would be contained in the granite of the murti or the various secret stuff inside or under it.
The personal assistant idea also seems to me just a tool. Though listening to the implementation of it in the video, combined with the speculation, reminds me of a humorous riff on the idea that was explored in the comedy series Red Dwarf over 10 years ago (skip to 8:42): https://www.dailymotion.com/video/x8d132x
A personal AI could be more advanced than current personal assistants like Siri, and yes, some portion of the population could either be fooled or, like me, intentionally suspend their disbelief to engage in a fantasy that it is an actual person. But the bottom line is that it would never be an actual person, and there would be no reciprocation of emotion – only projection. I actually find the AI voice as demonstrated annoying and boring, though. It sounds very unnatural and not human.
On the topic of speculation and prediction – the Red Dwarf humor, as with much of its content, is both funny and deep. Would you want a life where an incredibly accurate predictive AI would let you know what would happen if you did xyz, so there is no point in actually doing them? (In a female AI voice:) "In my analysis you did not enjoy them as much as you expected you would, so there is no need for you to do those activities, and you might as well stay in bed and increase your depression medication."
Politics: This is where I see a useful implementation of AI – as President or Chief Executive. Plato's Republic concept of the philosopher king, but subject to removal by the voters, the cabinet, and the other two branches of government.
The attention economy: This goes back to the need for people to be rooted in some kind of "undeveloped" world. Society needs to value simply unplugging for a period of time, to realize that social media is all fake, and to realize this with increasing clarity as AI algorithms make it ever less like the physical world.
I see humans dividing and evolving into what will become two or more realities. I'm reminded of H.G. Wells and his vision of the Morlocks and the Eloi, though it could very well develop into multiple species rather than just a binary. One group, increasingly drawn to AI and the fake world, will purchase AI waifus, buy a flat in the Metaverse, have a superhuman VR body, and whatever else comes with all of that, while other branches of humanity will abhor such an existence, unplug unless necessary, and remain anchored in the real world. Based on my experiences with existing immersive virtual communities, the AI reality community will be stacked full of people with moderate to severe emotional problems and people who have difficulty interacting with other humans. They will just purchase AI friends that are programmed to accommodate their insanity and become increasingly unable to talk to real people. I do see a very dark and ugly economy forming in the future, but only a portion of humanity will accept that kind of reality. Or, more likely, many will try it and it may even become the norm, but then large groups will rebel and escape (unless they have made irreversible decisions like implantation or body modification). Some portion, like myself, will absolutely reject VR as anything other than a cheap imitation of the real world, regardless of the degree to which others might feel it is just the same as reality.
"Normative" – it will all depend on who programs the AI. The whole problem now is with "algorithms". These are programmed to sell stuff, and there is no reason to think AI assistants will not also be programmed to sell you stuff you don't need. They will sell "improvements" that make your life worse – unless you are able to program your own with open-source code, that is. And again, this is only an implementation as a tool, to be used or not. What will fund anything other than a basic Orange AI that just pretends to be Green or Teal? We see this today with the billion-dollar market of self-help and transformation, where people spend $10,000 to have a transformative experience but ultimately just go back to the socially predominant way of doing things.