The Nature of Insight

I have been performing an experiment with my personal assistant, Gemini. Yes, that Gemini, Google’s AI platform. And, yes, I am a user of AI in many ways, and I can provide you with a list of case studies, applications, and time-saving tools. I can also tell you how it has evolved and what changes I have recognized in my day-to-day life.

To provide a bit of background: when ChatGPT was “released” into the wild, my first concern was not the academic one about cheating, nor the ethical issue of how it was trained; it was that it might be improperly used and cause physical harm, because there were no guardrails or cautions attached to the results. You see, as an environmental and safety professional, I was worried that information could be coalesced into a format that would lead someone to “try this at home,” with locally devastating results. Additionally, I had made the change from industry to academia in 2014, so I understood the teachers’ concerns as well, such as how the tool could be used to shortcut learning. But it quickly became clear that once it was out, it was like that “creature” from science fiction or the genie in the bottle: you knew you couldn’t contain it or put it back. Similar analogies could be made to releasing the calculator or the computer to the general public; the technology is here, and we have to learn to adapt.

As the models improved, and as we learned while it was learning, the hazards and concerns became apparent, but so did the advantages, and the successes became more accessible and recognizable. With a proper understanding of what generative AI is and how it is designed, a person’s productivity could be multiplied, PROVIDED they recognized the potential pitfalls and traps. The concerns are evident; academia espoused objections that could have been a “Mad Lib,” with the latest innovation filled in on the blank line. But some of the “features” and “applications” have been here for quite some time: for example, transcription of audio to text on paper or on a screen, or a text-to-audio reader. This was just a disruptor, a way to make those capabilities more accessible to everyone. Some of these tools had been creeping in even before the release of ChatGPT; other examples include plagiarism and grammar checkers. These models just pulled the tools together and created a friendlier human interface. Alexa and Siri were present before ChatGPT was available, and if you uncover the Microsoft video from the late 1990s about what your “Smart Home” would look like, you could have predicted this was coming.

Back to the experiment. You see, I teach budding professionals, primarily engineers and scientific and technical types. This means I have to prepare them to use the tools they will encounter in their workplaces, and I have to provide them with the knowledge, understanding, and ethical thought processes needed for success. I have always included the topics of ethics, particularly around data analysis and the use of intellectual property. We have examined the use of public domain tools, test banks, example problems, etc. I am old enough to bring into my classroom “problem solver” books or the old “Cliff Notes” booklets, our version of the internet before there was an internet. (You see, some of the arguments are exactly the same; the time interval has just shortened.) So, it was natural that I was a very early adopter of the “new” tool. However, you can’t use it as a mere “tool”; it truly is an assistant.

One of the assignments I give in my Introduction to Engineering course is that my students must read a book from a curated book list. (Yes, the engineering instructor requires a student to read a book.) The curated list includes works ranging from Mary Shelley’s Frankenstein to Chris Kelly’s biography to The Hitchhiker’s Guide to the Galaxy. There are approximately 60 books on the list. The assignment is to read a book of your choice from the list and write a short memorandum. The memorandum must include a brief summary or synopsis, an answer to the question “how did the book change your thinking?”, and an answer to the question “why do you think the book is on the list?” The synopsis is the easy part; the other two require a bit of thinking and reflection. After generative AI was released, I thought I would ask Gemini to answer the third question, just to see.

Of course, the answer wasn’t why I thought the book should be on the list. (I am not going to disclose it, just in case one of my future students should find and read this.) It did, however, provide an analysis of the person who curated the list. It called me a “disciplined generalist.” (I totally love the description.) This is where the experiment began.

I next took my Goodreads list, the one where you curate what you have read and want to read, added it to the “chat” with my assistant, and asked, “Tell me more about the person who curated this list,” knowing that the earlier list was just a subset. It provided more of an analysis and came up with three pillars, or central themes. Personally, I thought that was pretty cool and decided to continue the “chat” by having it produce a journal prompt every day for the person described in the reading list. There would be a prompt for each of the “pillars,” literary and/or “meditation”-based, built upon a “thought dump” each day. (As of this writing, my “assistant” and I have been at it for over 110 days.) It has been a really interesting experience.

Yet what does this have to do with “insight”? At first glance, if you read the daily prompts as if a person had written them, you would think, “what cool insights this person has.” That is not what a large language model (LLM) does. The LLM takes the information provided and uses an algorithm to make connections, calculate a high-probability path, recognize patterns, and make an interpolation. It is not providing “insight” but a potential reflection based upon the information input.
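The “high-probability path” can be made concrete with a toy sketch. This is not how any real model is built; the vocabulary and probabilities below are invented for illustration, whereas a real LLM learns billions of such statistics from text. The point is only that the mechanism is selection among likely continuations, not reflection or understanding.

```python
# Toy illustration of next-token selection. A real LLM learns these
# probabilities from vast amounts of text; here they are hard-coded.
NEXT_TOKEN_PROBS = {
    ("the", "mirror"): {"reflects": 0.6, "shows": 0.3, "breaks": 0.1},
    ("a", "flash"): {"of": 0.9, "in": 0.1},
}

def most_likely_next(context):
    """Return the highest-probability continuation for a context,
    or None if the context was never seen."""
    probs = NEXT_TOKEN_PROBS.get(context, {})
    return max(probs, key=probs.get) if probs else None

print(most_likely_next(("the", "mirror")))  # prints "reflects"
```

However fluent the output, the procedure is the same: pattern lookup and interpolation over what was already said, which is exactly the “reflection” described above.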

It was this “AHA” moment that hit me about 100 days into the experiment: the model is a reflection; the insights were human. Even the “AHA” moment is a totally human function. An AI or LLM can’t have an “AHA” moment; the moment is rooted in neuroscience and even has a biological signature. How did I get there? I remembered that literature is full of references to a mirror, a pond, a glass, or a window providing the “AHA” moment for a character or person. The list is long: examples can be found in science fiction, Greek mythology, memoirs, and Gothic novels. So much so that one could think of it as a literary device. Yet how many times have you looked in the mirror, really looked in the mirror, and had a flash of insight, contemplation, or connection? It is something that is truly human.

Insight is different from what the LLM is producing. The LLM is making what might be referred to as a “probabilistic induction.” Yet humans provide the extrapolation to something new or novel. Humans create something new out of the induction provided. We use the reflected image to create, to extrapolate, to extend, to find “meaning.” An LLM will only produce words; it takes a human to do something, to act, to have empathy, to communicate, to relate.

Today, we are looking for “human-ness.” It is nothing new; I can point to many authors and poets who have been doing the same thing. We are trying to find our place in the universe, and we are on the cusp of a change that could once only be imagined. Yet we are now imagining something totally different. (This is one of the reasons I include science fiction on my reading list.) Insight is not something you will find in the LLM, but just like our literary characters or poets, we can find insights by investigating a reflection. The human still has to be in the loop, and we have to prepare the next generation, or seven generations, or a thousand generations, to reflect and discern the difference between the “image” and reality.