Description

Today’s government agencies are tasked with providing quality experiences and services to their constituents. More and more, that requires the implementation of AI and automated tools, from chatbots and virtual assistants to enhanced mapping and monitoring capabilities.

These innovations empower government agencies to do more with less and, more importantly, provide citizens and staff with services where and when they need them.

But there’s a caveat. For all its potential, AI also comes with real risks and challenges: incomplete data sets and human error during model training can produce biased algorithms.

If we’re not careful, AI can end up doing more harm than good.

So, how can government agencies prevent these biases while continuing to innovate?

Introducing Machine Morality, a new podcast from Esri and GovExec’s Studio 2G, where we get to the bottom of some of government’s biggest ethical AI challenges. In this three-part series, we listen in as AI and ethics experts from government and industry discuss how defense and intelligence leaders can strategically implement the latest AI tools and technologies while ensuring the technology serves all populations fairly and equally.

This episode draws from a recent Defense One and INSA webcast, underwritten by Esri, titled “AI and Ethics: Mitigating Unwanted Bias,” in which experts discuss some of today’s most pressing hurdles for AI in government — and how we can begin to address them together.

Check it out.