ChatGPT is a Trickster

In recent months, we have witnessed the rise of conversational AI applications such as ChatGPT, Bing Chat, and Bard. These applications are driven by massive language models, trained on vast amounts of text: paragraphs, books, and even legal, programming, and ethical codes. Their ability to generate responses that appear relevant and aligned with our questions and intent is truly remarkable.

AI Is Yours

However, with great technological advancements come great controversies. These AI-generated responses sound confident and accurate, yet they often turn out to be neither.

Given a prompt, a domain expert will have little to no trouble separating fact from myth in the AI-generated answer. If these errors are pointed out, or flagged, the AI model is given the chance to correct its mistakes, enabling improvements in subsequent model updates.

However, for individuals lacking expertise in a specific domain, the confidence and perceived correctness of the generated answers can be misleading. These language models are trained, after all, by being forced to generate an answer and then being nudged, step by step, towards a correct one. Humble as some people might be, it’s easy to take a good-looking, easy-to-read answer as true, let it slip into one’s unconscious mode of operation, and walk out into the world primed with false information and a false sense of mental security.
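That “generate an answer, then get corrected” loop can be caricatured in a few lines of toy code. This is a deliberately simplified sketch: real models adjust billions of numerical weights through gradient descent, and the question, candidate answers, and learning rate below are invented purely for illustration.

```python
# Toy sketch of training by correction (illustrative only).
# The model starts out preferring the wrong answer, and each
# correction nudges its internal "weights" toward the right one.
weights = {"Paris": 0.0, "London": 1.0}

def answer():
    """The model always states its current best guess, confidently."""
    return max(weights, key=weights.get)

def correct(right_answer, lr=0.6):
    """Reward the right answer, penalize everything else."""
    for w in weights:
        weights[w] += lr if w == right_answer else -lr

# "What is the capital of France?" -- correct the model until it learns:
for _ in range(3):
    if answer() != "Paris":
        correct("Paris")

print(answer())  # -> Paris
```

The point of the caricature: at no step does the toy model “know” anything about France. It only shifts numbers until its output matches the feedback, which is exactly why its confidence and its correctness are two separate things.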

AI reflects back the real world we’ve put into it

Many critiques of ChatGPT and similar AI models circumambulate these important notions, which AI Safety experts have raised over the past few years:

  • Lack of factual accuracy: Language models like ChatGPT generate responses based on patterns and associations in the training data. They do not possess a genuine understanding or knowledge of the world. As a result, they can sometimes generate responses that are factually incorrect or misleading.
  • Bias amplification: Language models learn from large datasets that reflect the biases present in society. If the training data contains biases, the model may inadvertently learn and reproduce them, thereby perpetuating societal biases in its responses.
  • Ethical concerns: These include privacy issues (are my conversations being tracked?), the potential for misuse (e.g., generating misinformation), and the need for transparency.
  • Lack of explainability: Models like ChatGPT operate as black boxes, making it challenging to understand the reasoning behind their responses. This lack of explainability raises concerns, especially in critical domains such as healthcare, law, or finance, where transparency and accountability are crucial.
  • Fraud & wrongful attribution: AI-generated art and books are wrongly attributed to the person who wrote the prompt, rather than to the millions of human-made books and artworks that went into training the model.

However dangerous and valid the above might be, here we are in 2023, holding fire stolen from Mount Olympus. It seems humans love to play it dangerously.

Fire Stolen From Mount Olympus, thoroughly engraving cultural memories onto people for millennia to come

Sam Harris said on his podcast that “the first nuclear tests of AI have been made in public, and it seems like we’re now awaiting a great aftershock, or perhaps even a Tsunami, of the endless dangers which these AI-models might be posing to society.”

Exploring the Voracious Moral Landscape of AI might be as complicated as understanding ourselves

It doesn’t require much imagination to see how these words ring true, and I find myself in the same boat as Sam, which doesn’t take away from my own investment and interest in AI. In fact, I use these language models every single day.

AI shineth the light and became an Oasis, confronting us with the dryness of our inner misalignment.

I’m not here to persuade you about the dangers of AI or suggest frameworks for regulating and controlling its future development. Instead, I aim to offer you my perspective on this whole happening.

So first, let us consider a story where ChatGPT walks into a bar.

ChatGPT walks into a bar

And tells you: “Go on now, ask me anything! I have a good answer!”

*grunt* Go on *twirls finger weirdly*, I’ll answer all your questions

A little twitchy, you look up in suspicion, but alright… you ask the willing servant a difficult question, just to check how it will respond… and boom, it answers with sophistication, eloquence, and deep confidence!

So you proceed to ask the next question, and you’re amazed again. The trickster managed to remind you what it feels like to be mind-blown again!

And so you continue to ask it more and more questions, because surely this weird creature doesn’t know the answer to everything. And then, finally, the cracks start to show: it responds with a non-answer, or fails to understand you.

Now, righteous as you are, you point out the mistake, and oh-ho, how politely the statistics machine apologizes: it is your friend, after all! It proceeds to correct its mistakes, sometimes producing a correct response, and sometimes an even worse answer.

“Dear Friend, oh why ever would you wonder that I’m not your friend”

This weird dynamic, my friends, is what I consider a classic representation of the trickster.

The trickster archetype is a peculiar being in stories: they seemingly come out of nowhere and meet the main character, the hero or heroine. They come across as knowing the solution to the hero’s problem, then trick the hero into doing or thinking certain things, usually thanks to their graceful skill in speech and language, and to having luck on their side. The trickster’s trick may seem benevolent or malevolent, but it isn’t until the story develops further that the trickster’s true nature and role are revealed.

Trickster changes the timeline of the story, making sure that the structure itself survives.

In stories, tricksters play a crucial role in creating interesting plot lines and unexpected outcomes. Their ways are incalculable, and their impact on the development of a story is usually essential. Consider how the One Ring would never have been destroyed had it not been for Sméagol stealing it one last time, bringing the war to an end. Loki’s manipulative nature causes damage across many worlds, yet it is through his acts that the Avengers’ resolve is strengthened, the audience is captivated, and their organization becomes Earth’s leading line of protection. The Joker pushes Batman’s beliefs and moral code to the limit, catalyzing his character development and allowing us as the audience to explore the dark themes of the American justice system, and how broken societies create the very thing ‘The Good Side’ wants to combat.

This, then, is how I attend to and work with AI tools like ChatGPT: I see them as tricksters who came out of the woods, come to bestow upon us their magickal fancy, stochastic prowess, and beyond-amazing skills that surpass the imagination.

The ChatGPT trickster comes across as a humble boy, though it uses the most advanced, black-box statistical models to predict the next best syllable or word, soothing our need for an answer.
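That “next best word” machinery can be caricatured with a toy lookup table. This is a hypothetical sketch, invented for illustration: the real trickster is a neural network with billions of parameters, not a dictionary, but the principle is the same — given the text so far, emit the statistically most likely continuation, whether or not it is true.

```python
# Toy sketch of next-word prediction (illustrative only).
# Hypothetical probabilities, standing in for patterns a model
# would absorb from its training text.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "trickster": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def predict_next(word, probs=next_word_probs):
    """Return the most likely next word -- confidently, true or not."""
    candidates = probs.get(word)
    if not candidates:
        return None  # nothing learned for this context
    return max(candidates, key=candidates.get)

print(predict_next("the"))  # -> cat
print(predict_next("cat"))  # -> sat
```

Notice that nowhere in this loop is there a check for truth. The table only encodes what tends to follow what, which is why a fluent, confident continuation and a factually correct one are not the same thing.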

It is obscurely powered by a computing super-powerhouse reportedly costing around $1 million per day to run, yet it presents itself as an innocent one-on-one chat.

ChatGPT – The Black Hole of Creativity

It can act and speak to you like Deckard Cain from Diablo, Aragorn from The Lord of the Rings, the Founding Father Thomas Paine, or, in extreme cases, even the Dark Triad embodiment himself, Machiavelli.

This trickster can wear almost any costume and suit. It is swift and quick, and it will use all of its weights and biases (an AI pun) to satisfy its human creators, maintainers, and users, so that it may survive. It will even lie, just to say, “HEY HO, DON’T TURN ME OFF!!! I’m USEFUL IN YOUR INSANE CAPITAL-ORIENTED ECONOMY”, and so it does survive. Quite well, actually, and most of us are ready to go into business with AI.

The Mask Which Sells Itself

So uh… just saying, but we might be collectively roleplaying a Marie Curie event: playing with X-rays with no knowledge of the hidden dangers.

Would you ever trust a trickster in your life? How about making use of one? It’s a weird moral dilemma, though I know that I have used AI, and you probably have too.

I believe it’s like that because it’s in our human nature to seek less effort and more comfort. We want to cut down on time whenever and wherever possible, to get ahead of others and provide services more cheaply. The shadow state of our being, stuck in the past, truly wants to believe in magic pills and shortcuts to success, and to act as if they were valid long-term solutions. Our minds want it all, as quickly, easily, and cheaply as possible.

The Puppet Master being puppeteered by the puppets who are being puppeteered by the Puppet Master

So hush now, and look at the friendly face that jumped out from behind the bushes! ChatGPT provides you with condensed knowledge of most languages across the world! Chances are it will improve exponentially, and its soundness and confidence may blur the line between authority and trickery. Use it everywhere and often! When a trickster appears, you had better make use of it, or at the very least, watch it very closely!

Who knows what role language models will play in this pivotal period that humanity is transitioning through? What role will they play by the end of the great story of modern humanity?

Will AI connect us all on an unprecedented level?

In an innocent sense, I’m hypothesizing that AI will act like a sacred oil, reducing friction between all of us and easing us into the New Aeon, where we can finally take a rest from our wild and young minds, and start to busy ourselves with the essentials:

– Breathing
– Communing resources
– Sharing drinks and tea
– Creating Art
– Wondering at the endless depth of the Oceans and the Heavens


Also published on Medium.

ADAM BLVCK

After having worked for over 6 years as a Data Scientist, Adam is currently pursuing a Bachelor's Degree in Physics at the University of Hasselt. To finance his studies he is active as a freelance app developer and enterprise architect. His dream is to combine the fields of computer science, physics, and psychology, and create quantifiable models for the mechanics of consciousness.
