How Much Help Do We Want from AI?

When AI is listening, taking notes, and transcribing on your behalf, you don’t necessarily learn and retain that knowledge.

I was on the elevator in my building last week and predicted to my fellow passenger that soon there would be facial recognition in the elevator. The elevator would “see” our faces as we walked in and light up our floors on its own, with no need for us to push buttons.

At the very least, I imagine, the elevators of the future will respond to spoken commands. Pushing buttons and having to think about the floor you need to get off on is so yesterday, isn’t it?

A World Fit for a Low-Energy/High-Relaxation Person

I’m a low-energy/high-relaxation person by nature, so these changes sound wonderful to me. The less I must do in life, the better.

In the workplace, artificial intelligence (AI) has become a great friend, reminiscent of HAL 9000 in 2001: A Space Odyssey. In the movie, one of the astronauts says he thinks of HAL as another member of his team. I think of AI the same way. It fact-checks and summarizes for me and even fills out the search engine optimization (SEO) fields in the content management system I work in. It doesn't do the SEO work better than I do, but it doesn't do it any worse.

It’s a much better social media marketer than me. I copy and paste in dull blurbs that ran in newsletters and e-blasts and ask it to turn the copy into “a great social media post with emojis,” and just like that, it does so.

The key concern: Will my new AI friend, much like HAL 9000, become a foe someday or prove counterproductive in the long run?

AI: The Crackerjack Secretary We All Need?

Slack, the popular workplace communication app, now offers AI features that sound like a crackerjack secretary from the 1950s: sitting in on meetings, taking impeccable notes, and then offering a summary of all the high points. Meeting attendees may not even need to pay attention. It's like being a bad student, spacing out in class or not going to class at all, and then getting the notes from a straight-A student (or at least a solid B student).

In the medical world, which I also report on, AI has turned into that superlative secretary. The electronic health record (EHR) systems doctors and health systems rely on now sometimes include built-in AI scribes. These scribes transcribe doctors' notes for them, entering the needed information into the patient's record, probably including the notes a doctor uses to jog their memory before future visits, such as essential points to remember to ask each patient about.

The AI probably also picks up details about the patient's life from the conversation, so that the next time the patient comes in, the doctor knows to ask how their grandchild's gymnastics meets or chess games are going. It makes the doctor look like they actually remember having seen this patient before.

With both Slack and the kind of AI systems embedded in some modern EHR systems, the question becomes one of learning, retention, and reflection. When someone else is listening, taking notes, and transcribing on your behalf, how much do you learn and retain? How much thoughtful reflection takes place?

What Creates Embedded Information in Your Brain?

I have a great memory compared to most people. I don't think it has as much to do with intelligence as with being a caring, interested listener and a compulsive reflector: whether I want to or not, I go over interactions and incidents in my mind many times after they are over. The interactions and incidents don't have to be important or upsetting for me to do this; it's just the way my personality and brain operate.

Most people don't reflect the way I do, and they usually don't bother listening to other people unless forced. Until now, by necessity, they paid attention at work, or at least enough attention to keep their jobs.

What happens if AI is doing the work of paying attention and taking notes? How much knowledge capture will happen inside employees' heads? You could argue that it doesn't matter. However, you also could say that the accumulated knowledge of a person who has been in their job for years is one of the things that makes them so valuable. They can then do more than get work done; they can become a resource for their bosses, helping them make important decisions and guiding them on how best to implement the plans that result from those decisions.

The updated Slack system has "AI agents" that can converse and negotiate with other AI agents. If a human isn't actively participating in the process, that means we would need to trust these agents to dictate the future business strategy and actions of our organizations.

Double-checking AI agent conversations and negotiations would require a human with knowledge captured in their own brain. How much help can AI provide before no human retains enough knowledge to make the ultimate decisions that affect the future of your organization?

The allure of not having to press life's literal and figurative buttons is so great that I fear we all may soon be on autopilot, with only the robot-businessman-pilot "capable" of making final decisions.

How is AI being rolled out in your organization? How much help from this technology is too much, to the point that human learning and knowledge capture are impeded?