Since ChatGPT exploded onto the world stage, it has been impossible to escape the AI revolution that is seemingly taking over every part of the technology sector. Every app is racing to integrate AI, people are using these new tools to generate text in creative ways every day, and the accessibility of AI text generation means anyone and everyone is playing with this technology. There is constant discussion about how creative these new AI tools are, and how it seems like ChatGPT and similar tools are truly thinking and creating new ideas. But should we really be calling AI "creative" and "thinking"? Or are these terms masking how AI actually works, and does that prevent us from determining the best uses for these tools within our companies and our apps?
For humanity, speech has been one of our primary differentiators from other animal species. It is one of the things (along with tool-making) that has allowed us to believe in our superiority on Earth. This hubris is what encourages us to believe that recent breakthroughs in commercial AI might be computers "thinking." In fact, I believe the narrative around AI and its abilities needs to be flipped around dramatically. What these tools are really showing us is that the complexity and uniqueness of human speech and word patterns isn't as impressive as we would like to believe: human speech patterns are far more predictable than we previously thought. We have now ushered in the era where computers with enough memory and processing power to learn, store, and calculate the required probabilities can easily and quickly produce human-quality prose.
For decades we believed that certain tasks would always need a human, and one large category of such tasks involved prose writing. The new AI frontier is showing us that this is in fact not the case; however, we need to be very careful to determine which tasks are acceptable for generative prose and which are not. Here, I'll outline two important high-level points that anyone working with generative prose needs to keep in mind.
1. AI is taught what is "right"
We always need to keep in mind that AI text generation tools only know what we have taught them, and the collective training material carries a massive amount of inherent bias. Textio has been doing an amazing job researching, quantifying, and outlining a variety of biases within AI tools, including gender, racial, and age bias, among others. They have written a variety of blog posts on the topic that are a valuable and fascinating read.
When using generative prose, it is important to understand where the training data that powers the AI came from. If you are training your own LLM, then you have some control over this training data, but in many cases you may not. If that is the case for you, it's important to know that the entire breadth of human online text might be contributing to the output. In this situation, the output of generative prose will never rise above the collective state of humanity's current writing ability, style, biases, and thought patterns. In some cases this can be very useful – it means that simulating the voice of a "regular person" is very easy. For types of writing that are structured and formulaic (resumes, cold outreach letters, customer support responses, etc.), the output of generative prose is fantastic, with the caveat that it carries our current social biases.
If, however, you are hoping to use generative prose to produce text that rises above the human median in intelligence, accuracy, inclusiveness, or other traits, then you may need to take extra care to ensure the output of your generative prose tool meets these goals.
2. AI "creativity" cares nothing about truth
The wow factor of the new class of AI tools tends to center around the creativity these tools exhibit. ChatGPT and similar tools can generate brand new content seemingly from scratch that reads as if a human wrote it. Sometimes this creativity is called "hallucination." What we should never forget is that when the AI introduces randomness into its output in order to facilitate this creativity, there is no fact checker and no editor verifying whether what is being said is accurate. For some tasks this does not matter at all – I've seen many parents talking about how wonderful ChatGPT is at creating novel bedtime stories for children. No one cares about the accuracy of a princess-and-dragons story. The problem arises because ChatGPT is exceedingly good at creating convincing and seemingly knowledgeable content that initially reads as very trustworthy, but where the actual content is a complete fabrication.
In other words, what some people call "hallucinations" might be better described as "bullshitting." It is therefore useful to think of generative prose tools as master manipulators and master bullshitters.
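The deliberate uncertainty described above is commonly exposed as a "temperature" setting on the token sampler. The scores below are made up for illustration, but the mechanic is standard: candidate-token scores are converted to probabilities via a softmax, and temperature controls how much probability mass leaks to less-likely (more surprising, more "creative") choices – with no regard for whether the resulting text is true.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Convert scores to probabilities (softmax) and sample one index.

    Low temperature sharpens the distribution (safe, predictable picks);
    high temperature flattens it, making unlikely picks far more common.
    """
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

# Hypothetical next-token scores: token 0 is by far the most likely.
logits = [5.0, 2.0, 1.0]
rng = random.Random(0)

cold = [sample_with_temperature(logits, 0.2, rng) for _ in range(1000)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(1000)]

# Low temperature almost always picks the top token; high temperature
# frequently picks the less-likely ones.
print(cold.count(0) / 1000, hot.count(0) / 1000)
```

Nothing in this loop knows or cares which token corresponds to a factual claim; the sampler only sees numbers, which is exactly why creativity and accuracy come apart.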
Over the next few years we will all be learning how best to rein in the bullshitting ability of these tools when accuracy is needed. As a start, making prompts more specific and granular may be one of the key levers available to us for controlling the output of generative prose AI. Similarly, I wonder if an entire secondary industry of AI quality-control tools might emerge to help ensure content remains accurate and appropriate.
Is AI magic?
It is not necessary to completely understand the inner workings of complex LLMs in order to better understand the skills, limitations, and concerns of generative prose AIs. However, the more people have a basic understanding of how computers simulate complex behaviors, the more power and control we as a society have over the shortcomings of these tools.
As the saying goes, any sufficiently advanced technology is indistinguishable from magic. Let's reduce the knowledge gap, with the hope that we also reduce the "magic" and regain the ability to best leverage these tools in our own apps.