Image © Jose & MidJourney
I asked ChatGPT to define hallucination, and it came back with: “a hallucination is a perceptual experience that occurs in the absence of an external stimulus, often perceived as real by the individual experiencing it”. It then explained what hallucination is commonly associated with (mental health conditions, substance abuse, neurological disorders), and, presumably because so many people have asked the same question in relation to ChatGPT itself, immediately connected the topic to AI and described unexpected or erroneous outputs. That explanation of hallucination struck me as tragically but naturally human: tragic when people are struck by conditions like schizophrenia, natural when it comes through substance use, something that has happened since the dawn of time.
I started thinking of all the artists who created invaluable works for humankind: Van Gogh’s “Starry Night”, the paranoiac-critical method of Salvador Dalí, the poetry of William Blake, the paintings of Frida Kahlo. Where would we humans be without their work, and how integral was hallucination to their creative processes? So many humans deliberately trying to create mental constructs beyond reality to write fiction or visionary work; so many humans trying to capture dreams and use them as part of the creative process. If all of that is part of the messiness of being human, why the concern about hallucination? I suspect it comes down to context and timing: people who use ChatGPT don’t want it to hallucinate when they need an answer that is verifiably correct and grounded in its training data. But if the model is being trained on what humans have produced and placed on the world wide web, won’t hallucination be a part of it?
Then I started thinking: what if, as the powers that be push toward perfect AGI, aiming to be better than human, free of hallucination, the thing that ends up separating humans from AI systems is precisely our capacity to hallucinate, with or without external help? If these models really will run out of human-made content to train on, and if they become ever more precise, factual, well behaved and ‘normal’, will hallucination become humanity’s last line of defense?
I started imagining a fictional story in which humans come to see this as a valuable trait, one impossible to replicate or train models on, and therefore start not only to indulge in hallucination but to train themselves for it: a world where many people regarded today as ‘sick’ because of their mental disorders become cherished and idolized for their innate capacity to hallucinate, where legal and illegal substances become ever stronger and an accepted part of natural human life. A future where the only thing the models cannot do is the very thing they do now, the thing we so badly want to eradicate; humans would be the ones able to hallucinate and to laugh at the absurdity of it all. In that world, life would be creative, artistic, expressionist, experiential, exploratory in nature, chaos and madness redefined, all just to stay human, outside of the loop in this case.
I am not a doom-and-gloom person; I believe we, humans and generative AI, will find ways to serve each other without this dystopian future coming to fruition. But it might make a good TV series!