I love artificial intelligence (AI) tools such as ChatGPT and Gemini. I downloaded the free versions of the apps to my phone and use them throughout the day. I ask them to identify the actors in the TV shows I watch, to give me quick and easy directions around New York City, where I live, and to help me fix the devices in my life when glitches occur.
Professionally, I have learned to use the paid version of ChatGPT as well as customized AI systems. These tools can synthesize and summarize information from multiple uploaded documents and can help an employee quickly vet the quality of information and writing. Within seconds, they can provide a new and improved version of a report or study, presented in whatever style the user likes.
The catch, however, is that employees must understand how to push these AI tools a step further: to do their due diligence and to double-check specific points.
Treat AI Like an Employee Who Is Still Learning
When I give an AI bot instructions and it presents me with a finished product, I always re-copy and paste my original instructions into the prompt box and say: “Are you sure you did the following? Please double-check. Be meticulous in your double-checking.”
Frequently, the system will tell me I was right to ask it to double-check and that it found places where it had not explicitly followed my instructions.
I sometimes ask it to go back and check again. Finally, I ask, “Please present me with your finalized version that you are as close to 100 percent certain as possible follows my directions exactly.”
Without pushback, there is a fair chance the AI tool's output will fall short of perfection.
Similarly, it’s important to tell the bot in the instructions not to include any information in the final product that the user (i.e., your employee) did not give it, or that was not explicitly stated in the source material. Otherwise, being “generative” AI, the technology can start making projections or inserting fabricated content, commonly called “hallucinations,” into the final product.
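For employees who script these interactions through a chat API rather than the app, the push-back loop above can be sketched as a sequence of messages. This is a minimal illustration only: the role/content format follows the convention common to chat-completion APIs, and the function names and guardrail wording are my own, not part of any product.

```python
# Illustrative sketch of the push-back loop: re-paste the original
# instructions, ask the model to double-check its draft, then request a
# finalized version. Message format follows the common role/content
# convention used by chat-completion APIs; names here are hypothetical.

GUARDRAIL = (
    "Use only information I gave you or that was explicitly stated in "
    "the source material. Do not add outside facts or projections."
)

def double_check_messages(instructions: str, draft: str) -> list[dict]:
    """Build the follow-up exchange that sends the model's draft back
    alongside the original instructions and asks it to verify its work."""
    return [
        {"role": "system", "content": GUARDRAIL},
        {"role": "user", "content": instructions},
        {"role": "assistant", "content": draft},
        {
            "role": "user",
            "content": (
                "Are you sure you did the following? Please double-check. "
                "Be meticulous in your double-checking.\n\n" + instructions
            ),
        },
    ]

def finalize_message(instructions: str) -> dict:
    """Build the closing prompt asking for a version the model is as
    close to 100 percent certain as possible follows the directions."""
    return {
        "role": "user",
        "content": (
            "Please present me with your finalized version that you are "
            "as close to 100 percent certain as possible follows my "
            "directions exactly.\n\n" + instructions
        ),
    }
```

The message lists returned here would be passed to whichever chat model the employee uses; the point is simply that the double-check and the guardrail against unsourced additions are explicit, repeatable steps rather than one-off improvisations.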
Train Employees to Train AI
Like a human employee, the AI tools your employees are given need training. For example, when using a paid version of ChatGPT to check past editions of a newsletter for potential duplicate material, I trained the tool to use a methodical three-step process. The newsletter I was working on contained synthesized versions of articles from other publications, and the source publication was always listed above each headline. I found I had to tell the AI tool that, as a first step in determining whether a post had already been used, it should search past editions for other summaries drawn from the same source publication as the post I was considering for an upcoming edition. An intelligent human employee probably would have figured that out on their own; the AI tool needed to be trained to do so.
Push Back and Ask Follow-Up Questions
Sometimes in response to a prompt, the AI tool will give information that is correct but lacks detail. It may do this out of caution, but the result can be too generic to be useful. For example, in my work with newsletters, I asked it to rewrite and improve headlines from the source material. It sometimes gave me headlines that were factually correct but so general that they would never engage readers. For a synthesized study on a specific innovation in care for a particular cancer, for instance, AI offered this generic headline: “Latest Innovations Offer New Hope in Cancer Care.” I pushed back, saying, “That is too vague. Can you write a headline with greater detail, using specific points from the information I gave you?”
Train Employees to Offer AI Constructive Criticism
Just as people often are taught in high school and college (and maybe even earlier) to offer peers constructive criticism, the AI tool your employees work with also needs constructive feedback.
It’s easy to get frustrated with a technology tool and tell it, “I can’t believe you made a mistake like that.”
Instead, the AI tool can benefit from the same approach an employee would (hopefully) take with a human colleague: asking, “Do you know why you made that mistake?”
I’ve found the AI system usually does know and can state exactly why it was not able to do what you asked. I then follow up with: “Please tell me the process you will follow going forward so that mistake doesn’t happen again.”
Push AI to Go Further
AI is notorious for exhibiting the same biases and prejudices as the people who created it. It also shares a human tendency: it does its best work when given a little push. Employees who are trained to offer that push, in the form of incisive questions and feedback, can make an AI tool an invaluable part of their work process.
Do you train employees on how to get AI technology to give them the best results?