Navigating the Ethical Challenges of AI

Explore the ethical challenges of AI, including bias, privacy concerns, and job displacement in the digital age.

The age of artificial intelligence (AI) is fully upon us. It’s impossible to open a news feed or attend an industry event without at least 20 percent of the content (and sometimes much more) relating to how AI will change our lives and our world.

However, AI brings with it many challenges that employers need to be mindful of as they navigate the next chapter in their technological lives. Among the challenges are those related to confidentiality and privacy, accuracy, job displacement, bias, and resource use. Let’s look at these one by one.

Confidentiality and Privacy

Much of today’s AI is what’s called “generative”—that’s the “G” in ChatGPT. The user asks a question (creates a prompt), and the platform nearly instantly generates a response.

AI generally gets its data from the internet, as well as from internal libraries of information that the platform collects and stores. Perhaps the user has uploaded files—maybe hundreds of documents related to a matter—and asked for a summary.
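
To make the mechanics concrete, here is a minimal sketch of that prompt-and-response round trip in Python. It assumes the OpenAI Python SDK purely for illustration; the model name and the prompt are placeholders, and other platforms’ APIs differ in the details.

```python
# Minimal prompt-and-response round trip, assuming the OpenAI Python SDK.
# The model name and the prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Summarize the key points of the documents I uploaded."}
    ],
)

print(response.choices[0].message.content)  # the generated answer
```

Note that whatever goes into that prompt, along with any files attached to it, travels to the provider’s servers—which is exactly where the confidentiality questions begin.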

Does the data from these documents become accessible to other users? Does that comport with client expectations? If, for example, doctors at a research facility upload patient information—even if it’s partially redacted or anonymized—is there a risk that a subsequent user of the same platform could receive information gleaned from that private source?

Increasingly, in an effort to prevent such leaks, commercial users of AI have built their own private, isolated environments—sometimes referred to as “sandboxes”—to safeguard, as much as possible, the confidentiality their clients expect.
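
What might a first line of defense look like in practice? Below is a deliberately simple Python sketch that scrubs a few obvious identifiers before text ever leaves the organization. The patterns and the redact function are illustrative assumptions; real redaction pipelines cover many more identifier types and pair pattern matching with human review.

```python
import re

# Illustration-only patterns; real tools also catch names, addresses,
# medical record numbers, and many other identifiers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach Dr. Lee at 555-867-5309 or lee@example.com, SSN 123-45-6789."))
# -> Reach Dr. Lee at [PHONE REDACTED] or [EMAIL REDACTED], SSN [SSN REDACTED].
```

Notice what the example does not catch: the doctor’s name sails through. That gap is one reason redaction alone, without a sandboxed platform, is rarely considered sufficient.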

Accuracy

One well-known jurist compared AI to a law clerk who never slept and could deliver a bad draft quickly—and might lie to you. Stories abound in the legal press about lawyers who used AI to generate documents that were filed in court, where the AI was later found to have hallucinated some of the case cites and invented rules, laws, and precedents.

Many courts have adopted rules that forbid the use of AI in documents to be filed in court, while others allow the use of AI but require that any such use be disclosed.

To maintain high ethical standards, use AI only for first drafts, fact-check everything relevant to the matter, and, where appropriate or required, disclose that AI played a role in creating the document.
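
As a rough illustration of the fact-checking step, a firm might screen AI-suggested citations against a trusted index and route anything unrecognized to a human before filing. The sketch below is hypothetical: KNOWN_CASES stands in for a real citator or research database, and the case names are invented.

```python
# Hypothetical screening step: AI-suggested citations that don't appear in a
# trusted index are flagged for mandatory human verification before filing.
KNOWN_CASES = {
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",  # invented example entry
}

def flag_for_review(citations: list[str]) -> list[str]:
    """Return the citations a human must verify by hand."""
    return [c for c in citations if c not in KNOWN_CASES]

draft_citations = [
    "Smith v. Jones, 123 F.3d 456 (9th Cir. 1997)",
    "Doe v. Acme Corp., 999 U.S. 1 (2030)",  # plausible-looking, possibly hallucinated
]
print(flag_for_review(draft_citations))
# -> ['Doe v. Acme Corp., 999 U.S. 1 (2030)']
```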

Job Displacement

Conversations about AI often focus on whether AI-enhanced robots will eliminate all work. The tech barons, who want their new offerings to be accepted, demur, suggesting that almost no jobs will be eliminated and that, in fact, many will be created.

When I spend time in any West Coast city with robotaxis, I wonder whether drivers of taxis or rideshare services feel like their jobs are secure. The notion that AI won’t replace workers is simply false. It will replace certain jobs—those that lend themselves to automation—though many others will remain unchanged.

For example, while AI can read and interpret X-rays, an experienced doctor can still catch things that AI misses and see patterns that AI cannot.

The likely scenario for the near future is that AI will be part of a team, a co-pilot joining a human to navigate the terrain. In these so-called “centaur” teams, humans keep their jobs, and AI makes those jobs more efficient rather than eliminating them.
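
In software terms, a centaur workflow can be as simple as a gate: the model drafts, and nothing ships without a human decision. Here is a minimal Python sketch, with generate_draft as a hypothetical stand-in for whatever AI service is in use.

```python
def generate_draft(task: str) -> str:
    """Hypothetical stand-in for a call to an AI drafting service."""
    return f"AI-generated draft for: {task}"

def centaur_workflow(task: str) -> str:
    """AI drafts; a human must review and approve before anything is released."""
    draft = generate_draft(task)
    print(draft)
    verdict = input("Approve, edit, or reject? [a/e/r]: ").strip().lower()
    if verdict == "a":
        return draft
    if verdict == "e":
        return input("Enter the revised text: ")
    raise RuntimeError("Draft rejected; nothing is released without human approval.")
```

The design point is that the human decision is structurally unavoidable, not an optional box to tick.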

For leaders, it is critical to be clear about the goals of AI adoption and to be transparent and honest about how AI will affect the size and needs of the workforce. If the business is likely to experience significant change, leaders must figure out how to communicate that news. If, on the other hand, the business is less likely to be affected by AI, that needs to be communicated clearly as well.

AI is evolving, and it’s possible that X-ray techs will see threats emerge that are, for the moment, not yet on their radar.

Bias

Where does AI get its data? If AI scrapes data from the internet—well, do you know the saying “garbage in, garbage out”? And if you have read the comments section of pretty much any article, you’ve witnessed how ugly and biased the conversation can become. The comments on a benign article about a sports figure or celebrity, or even on a recipe for blueberry muffins, often devolve into racism, sexism, and hate.

Can that same internet be harvested for unbiased responses to important questions? Hardly.

On the plus side, the horror stories about AI perpetuating some of our species’ worst instincts are largely an artifact of earlier generations of the technology. While we may be out of the thickest part of the forest, we’re not totally out of the woods. An ethical leader would use sandboxed AI so that comment-section bile doesn’t infect the information a prompt retrieves. That same leader would employ internal resources to double-check AI’s work for bias. Again, AI on its own may be problematic, but as part of a centaur, it may be less so.

Resources

AI consumes huge amounts of power. I’ve read that AI currently uses about as much energy, and produces about as much pollution, as the entire airline industry, and that within five years its power demand will triple.

If I can open a book on my shelf and get a perfect answer, is it ethical or appropriate for me to use an energy-guzzling application to get the same result? AI may be efficient from a time perspective, but if it consumes resources at an unsustainable rate, there are ethical implications to consider.

Most ethical quandaries have no clear answers; they involve tough choices between legitimate objectives. AI adds a degree of complexity to our world and, therefore, to the ethical choices facing businesses and their leaders. This conversation is in its infancy, and I am excited about the future while remaining concerned about the ethical issues associated with the rapid adoption of AI. Only time will tell, but given the pace of AI’s development, change may be the only constant.

Richard Birke
Richard Birke is the chief architect of JAMS Pathways and is experienced at resolving complex, multiparty disputes. With over 35 years of hands-on dispute resolution, he draws on experience in a wide range of disciplines, including mediation, psychology, economics, law, communications, negotiation theory, strategic behavior, and diversity, equity and inclusion, to apply the right tools to every client situation.