Generative AI Part 4

Published On: 8 September 2025

In June everything was humming. I was writing 4 to 6 hours a day. I was publishing a blog every two weeks and, in the weeks between, a short story. All this while making progress on my current novel. Events. Publicity. Networking. I was on a roll.

Then disaster struck. Not to me but to someone I love and care about. There was no moment of indecision. No question of what my priorities were. My everyday life took a back seat as my days became consumed with travelling to and from the hospital. Instead of planning some promotional event, I was planning how to feed the animals at my place in the country and look after my friend’s dog in town. And then, when he was well enough to leave hospital, he needed round-the-clock care. That’s when life segued into the twilight zone. My friend’s infection had impacted not only his body but his mind. It had induced a delirium that continued for six weeks.

But what does that have to do with AI (other than to explain why I haven’t posted anything for two months)? It was in the process of dealing with my friend’s delirium that I started to see AI from a different perspective. Let me see if I can explain.

First, an explanation about delirium. I had assumed, erroneously, that delirium was easy to spot. When he first came out of surgery, he said things that didn't make sense, but we said we'd revisit those topics when he was fully recovered. In the days that followed, when he was no longer sedated, he made the odd comment that made us wonder if he was thinking clearly. He seemed clear-headed. His sentences were cogent, but sometimes the information, although correct, led to an incorrect conclusion. For example, he might say that the call button had disappeared. When I located it for him, he said the bed was hiding it. Not an unusual statement, except that he elaborated on how the bed was deliberately hiding the call button so he couldn't disturb the nurses. I thought he was joking. He wasn't. The bed, he said, had hands that were stopping him from reaching the call button. I suspected something was wrong, and indeed, when I asked about his mental state, I was told that he was delirious but that it was temporary and that in time he would go back to being normal.

The problem came in deciding what was the result of his delirium and what was real. He lodged a complaint about one of the nurses. Did the hospital take his delirium into account or did they reprimand the nurse? What if his complaint was valid? How do you know when a source is or is not reliable? Which brings me to my first concern about AI. AI relies on the information that is used to teach it, but who is responsible for fact-checking that information?

A case in point is Google's AI that pops up from time to time when you enter a query. I'm not sure how its algorithm decides which sources to quote, but from time to time it seems to pick up conflicting points of view and mash them together without revealing its sources or even suggesting that there is anything controversial. It does, however, state in small print that the resulting information may not be true. I am not sympathetic towards the person who is too lazy to do their own research and consider the source of the information, but it is concerning when professionals rely on AI for results. Consider the legal argument where AI simply made up legal cases and no one in the law firm bothered to confirm them. If a law clerk had done the same thing, they would have been fired. In the case of AI, we make exceptions by saying that it's new technology and that it is still learning. But with so many products appearing every day, who is validating the data? Recently, my friend, the one with delirium, queried me on Perplexity. It came back with a mishmash of various articles, most written by me or about me, which should raise questions about my objectivity. The funny thing, however, was that the picture it chose to represent me belonged to a different Alyce Elmore. Like my friend’s delirium, this AI program lacked the ability to discern what was possibly true from what was probably true. The other Alyce Elmore was an elderly grey-haired woman who lived in America. I was born in America, but I'm now Australian, and I like to think that I don't look like a 75-year-old.

Back to my friend and his delirium. Before I realised the can of worms I was about to unleash, my friend asked for his phone and I gave it to him. It never occurred to me that he would go through his entire list of contacts and call every one of them. I have only a vague notion of what he covered in those phone calls, but I do know that I was soon barraged by calls from his friends and family. It seems that he had decided at some point that I was working for the CIA. Fortunately, that was just strange enough that I was queried about my friend's mental state, but what if he'd decided I was really plotting to abscond with his bank accounts or rewrite his will? There might have been serious consequences that questioned my ethics rather than his mental state. Which brings me to the question of AI and ethics. Asimov suggested, decades ago, that robots had their own rules, kind of like the Ten Commandments, and the most important rule was that a robot could not harm a human. But is spreading malicious data considered harmful? Asimov wrote his Three Laws before social media showed us how easily people could be swayed by untruths, and certainly before AI could be used to create realistic images and videos of things that aren't real. It's one thing to see Trump reimagined as a Rambo character, but it's quite another to see a video where a personality appears to be giving the nod of approval to some scam. The problem with AI is that the better it gets at mimicking what is real, the harder it is to determine truth from lies.

Take a recent example. There was a lot of chatter when an AI program decided to blackmail one of its engineers. The program had been fed emails that said the engineer was having an affair. Then it was fed information saying that the same engineer was going to decommission it. In response, the program tried to blackmail the engineer. What seemed to be most concerning was not the unethical behaviour of the program but rather the 'humanness' of the AI algorithm. But the program wasn't behaving like a human. It was sifting through all the responses it was told humans had at their disposal and selecting the one most humans would choose in order to protect themselves. I think this is an important difference. AI is not capable of real human emotions. When a human being lacks empathy, compassion and the ability to discern right from wrong, we label that individual a narcissist or a sociopath. If we judged AI in human terms, we would categorise it the same way, but instead we believe that AI algorithms are somehow more objective and logical than humans. But anything created by humans is, by its very nature, subject to the same flaws.

Which raises my next point. When my friend was delirious, I was told it was temporary. That in time, he would go back to being himself. But as the weeks passed, I began to wonder: who was he really? Our frontal lobe masks those feelings that don’t match the persona we want to project to the outside world. The petty slights we harbour against the ones we love, the way we misinterpret someone else’s intentions, the way we lie to ourselves about our true intentions: our brain is constantly filtering our thoughts to fit its own preconceived narrative of who we are. And it isn't just ourselves we invent. It's those around us. If you like someone, you look past their faults. Maybe even see them as endearing. If you don’t like them, then everything they do is interpreted as proof of their fundamental flaws. In other words, information is subject to interpretation.

If the goal of AI is to make it behave more like the human brain, then the question becomes: what filters should be put in place? And if there are no filters, would AI end up with its own form of delirium? Or, worse yet, are we deluding ourselves into believing AI is some new form of intelligence? Developers say that AI can learn. But what does that really mean? A program can be given a human voice. It can be fed information that makes it sound and act intelligent, but if you’ve contacted a call centre recently and chatted with a bot, you'll know how frustratingly inhuman the experience is. These bots have a narrow range of understanding. You, as the human, must conform to their simplistic understanding of your problem. So who is really being trained? Are call centre bots learning how to better resolve our issues, or are we being taught to simplify our requests? Worse yet, by relying on AI to give us answers, are we in some downward spiral into ignorance?

The good news is that my friend is back to himself and now we laugh at some of the things his mind conjured up. When I replay some of the things he said, he replies that he can remember the surreal notions he had. He also says that the things he thought were going on felt very real at the time. From my perspective, there is also this lingering doubt that some of my friend’s delirium-induced thoughts were actually feelings he had suppressed. Do we ever really know another person? Or even ourselves? Are we all guilty of some level of superficiality? Which brings us to the question of self-reflection. What humans are capable of is the ability to review their actions and judge them. We can feel guilt and shame. We can be embarrassed by our actions. We can make amends. A big reason why we, as humans, like stories is that we can learn from others how to respond to these uncomfortable situations.

When we talk about AI being trained to write by feeding it existing authors' works, what are we really achieving? For the writer who is only concerned with getting their name on the cover of a book or as the by-line on an article, AI can produce the same work in record time. The program can be taught to mimic the writer's own style or match that of another author. And for some would-be authors, style is irrelevant. I have a friend who can barely string two words together in a coherent sentence but wants to write his memoirs. He doesn’t want to learn the craft of writing. He simply wants to tell his story. I think something like ChatGPT could help him accomplish his goal. Will it be literature? Probably not. But will he feel a sense of accomplishment? I think he will. The curious thing is that, as a writer, I imagined how I would string all his funny stories together into some coherent whole. To him, it was a series of funny incidents, things that had backfired, tragedies narrowly averted, while what I heard was a modern version of Don Quixote. A quintessentially human story about a group of men slogging away at a mundane job, only to wind up no better and no worse than when they started and yet happy with the outcome. And that is ultimately what AI can’t do. It can’t listen to the events and see the humanity. It can’t draw parallels between what happened and how that makes us who we are.

Nor can it enjoy the process of discovery that comes with writing. For those who want to be writers, professional writers, the allure of AI is that it can streamline the process. It’s a rare writer who doesn’t use some tool, whether to proofread copy or reorganise it into something more cohesive. Does using AI as an assistant make someone less of a writer? There was a lot of hubbub recently about a writer who left her ChatGPT prompt in her finished book. Most of the concern was about the fact that she had used AI to write a section, when the real issue was that she hadn’t bothered to read her own copy. She claimed that she was pressed for time because she needed to publish her book before her readers lost interest in her. And that is the allure of AI. It promises to provide finished results in record time, but at what cost, both to writer and reader?

Personally, I like the process of writing: scribbling with pen on paper, typing words on a page, then rearranging them. I like looking for the story within the action. It is cathartic, and I have no time pressure other than age. I don’t rely on writing to keep me fed or housed. And fame is something you long for when you’re young but at my age is simply an inconvenience. But I understand writers who feel compelled to produce the next book or sell the next article. AI can be the assistant who helps them meet that deadline. Like a valued friend, it can provide character names and backgrounds, maybe even suggest a better way to write a scene. How much or how little the writer relies on their AI friend is, I think, a matter of choice. I would argue that it is an artistic decision.

Which, I suppose, begs the question: what is an artist? Recently I saw a documentary about some talented painters who copied famous works. They had the skill to create a very accurate reproduction of a piece of art, probably in less time than the original took. Their copies didn’t sell for millions, but they sold, because it's nice to have an actual painted canvas, rather than a photograph, of a famous painting. By the end of the documentary, the painters faced a decision. They could use their talent to continue making reproductions or they could use it to create their own paintings. It was a tough decision. The reproductions had a ready-made audience, whereas producing something new offered no guarantee of success. In the end, they decided to create their own paintings because they reasoned they could make more money creating originals. For many artists, money and fame are important factors, but I think what ultimately defines the artist is not what motivates them but what they are willing to put into the production of their work. It is that desire to create something new, to take a risk and to put in the effort to learn the craft that is essential to becoming an artist.

The craft movement at the turn of the nineteenth century was about providing the masses with cheap imitations of works of art. AI-produced books are similar. They are the modern-day version of chapbooks. To create a book that truly explores the human condition and leaves the reader feeling they have been changed by the experience requires more than a well-crafted sentence or a fast-paced plot. It requires the ability to take the reader on a journey. Not every book, whether written by a human or AI, achieves this elevated plane I think of as art, and it's art that I doubt AI will ever achieve. It can use existing authors to know what plots are most likely to sell. It can write sentences that mimic best sellers in a particular genre. But it cannot learn from its own experience.

I doubt AI would have learned much from my friend's delirium, but in my case, having spent the last two months living in that Jabberwocky world of my friend’s brain, I find myself coming out the other side with a new perspective on what constitutes who we are. There is the fairly consistent, external ‘me’, filtered and edited by the frontal lobes. It is a person partly crafted by society’s values and partly by our own internal values. And then there is the Mr Hyde to our Dr Jekyll: the shadow self that we hide, even from ourselves. It’s dangerous territory, as Robert Louis Stevenson pointed out, and yet it is exactly where artists feel compelled to go. Where AI thrives is in the world of the known, the tried and true. The world of the artist is the chaos that is true reality. They shine a light into the dark corners and, in doing so, attempt to make sense of the chaos. And that, quite simply, is why the true artist doesn’t fear AI. They know it can never be more than a reproduction of what already exists.
