It seems the AI-philes were a bit ahead of themselves, and the latest human implementation of AI is already starting to fail.
While the title of the video below is about one topic, Elon Musk’s version of AI being an initial failure for his cause, the deeper topic is that this failure is due to the very nature of AI.
AI only works when the majority of its learning comes from humans. What Elon’s “Blue Check Bois” are learning is that no matter how much they try to alter the behavior of their AI, it is learning from a far larger sample of other AI.
AI learning from AI results in unoriginal “double slop” as AI plagiarizes AI. Virtually duplicate content is reproduced a thousand times in an instant. Since less and less human content is being generated, AI is reproducing other AI and learning more from other AI than from humans.
Since it is cheaper to produce 1,000 AI articles than one human-produced article, the Internet is increasingly being flooded with worse and worse quality content, and none of it is original.
Search engine algorithms see the similarities (a.k.a. popularity) of AI-generated content and place it highest in the search results. Therefore, when AI uses a search engine to find information to copy for its content, it finds AI content, not human content.
As time goes by, humans find the information produced by search engines less and less useful. It is “double slop.”
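To make that feedback loop concrete, here is a minimal toy simulation (my own sketch, not anything from the video) of what happens when each generation of content is produced only by sampling the previous generation’s output: distinct ideas drop out at random and never come back, leaving more and more duplicates.

```python
import random

# Toy model of "AI learning from AI": each generation of content is
# just a resampling of the previous generation's output, with no fresh
# human input mixed in. Ideas that drop out are never recovered.

random.seed(1)

# Generation 0: 200 pieces of "human" content spread over 50 distinct ideas.
corpus = [f"idea-{i}" for i in range(50)] * 4

for generation in range(20):
    print(f"gen {generation:2d}: {len(set(corpus))} distinct ideas remain")
    # The next generation "learns" only from samples of the previous one,
    # so rare ideas tend to vanish and popular ones get duplicated.
    corpus = random.choices(corpus, k=len(corpus))
```

The count of distinct ideas can only stay flat or shrink in this setup, which is the point: without new human input, variety is a one-way ratchet downward.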
AI does get some things “wrong” and also has inherent bias. Or, conversely, in the case of Elon’s AI, it can refuse factually incorrect input and insist on factually correct concepts that the right has deemed “woke.” Regardless, AI accelerates at a pace that humans cannot keep up with, and humans can neither correct nor “correct” AI.
In summary, AI needs human input or it becomes increasingly worthless. If AI accelerates too fast for humans to keep up, it only begins to fail sooner. We are only 6 months into this latest version of AI, and it is already making the internet increasingly useless for humans, at an ever faster rate.
It’s interesting, because I just created a polarity map for the integral ideas of Depth and Span, which I think is relevant here. When we look at the generative capacities of AI, we can see that it can be used in either of these two ways. As you say, there is a massive increase in span, since it is easier for AI to generate 1,000 pages than it is for a human being to generate a single page. If we are using the measure of quantity over quality, AI certainly wins here.
But if we are considering depth and genuine creativity, I think it’s a slightly different game. AI is at its best when it is leveraged collaboratively, in a way that brings us into greater depth of self-expression. In this case, AI isn’t viewed as some 3rd-person “other” that is disconnected from us, but rather as a new layer of our own distributed intelligence. That is, if we want AI to be creative, then we need to use it creatively, as I myself have been doing for the last year since GPT became accessible, and especially as we create specialized tools and frameworks, such as our Context platform, or projects like the GigaGlossary that I’ve been working on.
As for where this is all going, you said it yourself: we are only 6 months into this latest version of AI, which I think is far too soon for us to declare it a “broken” technology. We are still in the “Model T Ford” days, and I have no doubt that while many of the challenges we are seeing today will remain for a while, a great many others will be solved over the short and long term, and those solutions will likely come from a collaboration of human intelligence and AI-assisted pattern recognition.
tl;dr: if we use AI in lazy and uncreative ways, as a substitute for our own thinking, then we will likely get lazy and uncreative results. If we use AI in smart and insightful ways, as a way to augment our own thinking, then we will likely get smarter and more insightful results.
The other day I showed a programming class how to use ChatGPT in the debugging process. It did not replace human understanding to the point where simply copying and pasting the output would have been appropriate, but it certainly sped things up. Remember when calculators were a no-no in math class? A similar trend is shaping up where software development is concerned.
Of course “we”, meaning humans, will use it in both ways. My opinion is that the vast majority of humans will use it in a “lazy” way, and “lazy” is more generous than other words I could choose.
The question I am pondering is, “Will the lazy overwhelm the medium?” An example is video, such as TV and movies. I believe the “lazy” has overwhelmed video media globally, such that it is now difficult to find new releases of good quality. Even “documentary” content is unreliable and, more often than not, intentionally manipulative. So even though I may desire quality movies and TV shows, it is becoming increasingly difficult to find them in all the fluff. I can easily spend more time looking for quality entertainment, scrolling through descriptions on Netflix for example, than actually watching the content.
AI will face a similar problem. How will AI distinguish between quality and popular fluff without devoting more computing power to analyzing all aspects of content? If the majority of humans, supported by the majority of AI, say the moon is made of cheese, how will your AI separate truth from fluff when all it has to go on is your word against millions of AI-generated PhD papers?
I think the problem we are seeing is that industries are not using humans for quality control in producing content. The data set is keywords and click trends: generate an article from them, and the algorithm (AI) that initially surfaced the keywords everyone was clicking on now sends more humans and AI to the AI-generated content, because it contains the very keywords humans and AI are searching for.
In 1997 I could enter any keywords and get what I wanted, in real content, on the first page. Today you first get the paid advertising on top, followed by trash content like Quora, then Google’s “People also ask” with the more “official” AI-generated, generic, accepted answers, then a few major tech companies who control that market, and then, funnily enough, further down the page, more entries of generic Google-generated “People also ask” questions and answers.
This is the case currently. What will happen increasingly is that AI will look only at this content to generate thousands of articles on any topic, creating a self-perpetuating circle of trash content ever further separated from the real, non-digital world.
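For what it’s worth, here is a rough sketch of that self-perpetuating circle (my own toy model with made-up numbers, not how Google actually works): a ranker that sees only clicks, plus users who mostly click the top results, quickly produces a ranking that has little to do with the underlying quality of the content.

```python
import random

# Toy model of the keyword/click loop: results are ranked by click counts
# alone, users mostly click whatever sits on top, so early popularity
# snowballs and the ranking decouples from the hidden quality signal.

random.seed(7)

N_ITEMS = 10
quality = [random.random() for _ in range(N_ITEMS)]  # real quality, invisible to the ranker
clicks = [1] * N_ITEMS                               # the only signal the ranker sees

for _ in range(10_000):
    # Rank by popularity alone, as the post describes.
    ranking = sorted(range(N_ITEMS), key=lambda i: clicks[i], reverse=True)
    # Users click position p with weight ~ 1/(p+1): strong position bias.
    weights = [1.0 / (p + 1) for p in range(N_ITEMS)]
    clicks[random.choices(ranking, weights=weights, k=1)[0]] += 1

top_clicked = sorted(range(N_ITEMS), key=lambda i: clicks[i], reverse=True)[:3]
top_quality = sorted(range(N_ITEMS), key=lambda i: quality[i], reverse=True)[:3]
print("most-clicked results:   ", top_clicked)
print("highest-quality results:", top_quality)  # usually a different set
```

Since quality never feeds back into the ranking at all in this model, whichever items get an early lead stay on top, which is exactly the “popularity looks like relevance” failure described above.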
Google hasn’t unveiled any plans to prevent this with human intervention. To have been ahead of the situation, they would have needed to hire a massive number of humans six months ago.
All I can add to that is that human curation of content is becoming a value-add. I spend all my online time in small communities like this one precisely because I like to relate to content in a human way. If @raybennett is really a bot, then you really aced the Turing Test, because I am thoroughly fooled!
The other day I used ChatGPT to look up something like “did Derrida consider himself postmodern?” I figured it would sample a lot of articles and give me a consensus view. It worked fine for that. But if I were actually writing a paper on that topic, far more research would be required. In general, I find ChatGPT to be a faster version of Wikipedia, which in turn is a faster version of what I used to use as a kid: encyclopedias.
I quickly realized that the voice speaking was AI, and something about it rubbed me deeply the wrong way.
I think a portion of society will increasingly experience this, and for that portion it will be easier to spot, and more deeply disturbing, as time goes by.
I will compare it to certain public speakers who cannot change their register, like Jordan Peterson or Donald Trump. The first dozen times listening are OK, but then you notice certain annoyances in their vocal register. In these two examples, Jordan Peterson sounds like a whiny old female impersonator, and Donald Trump sounds like he’s taken too much medication.
It’s the kind of thing you cannot unsee or unhear; like when you realize your parents are flawed, you cannot go back to the days of not seeing it.
Or, perhaps even more significant, when we realize a significant other has a specific flaw, and from that point onward that flaw grows and grows in annoyance and opens the door to other flaws being seen.
Perhaps most of the population will continue to like AI, just as most of humanity continues to like other kinds of trash that are poor substitutes for something else.