Description

AI managers are no longer science fiction.

They're already making decisions about human workers, and the recent evolution of agentic AI has shifted this from basic data analysis into sophisticated systems capable of reasoning and adapting independently. Our host, Carter Considine, breaks it down in this edition of Ethical Bytes.

A January 2025 McKinsey report shows that 92% of organizations intend to boost their AI spending within three years, with major players like Salesforce already embedding agentic AI into their platforms for direct customer management.

This transformation surfaces urgent ethical questions.

The empathy dilemma stands out first. An AI manager has no genuine empathy of its own; it can only execute whatever priorities its creators embed. When profit margins override worker welfare in the programming, the system optimizes accordingly without hesitation.

Privacy threats present even greater challenges.

Effective people management by AI demands unprecedented volumes of personal information, monitoring everything from micro-expressions to vocal patterns. Roughly half of workers express concern about security vulnerabilities, and for good reason. Such data could fall into malicious hands or enable advertising that preys on people's emotional vulnerabilities.

Discrimination poses another ongoing obstacle.

AI systems can amplify existing prejudices from flawed training materials or misinterpret signals from neurodivergent workers and those with different cultural communication styles. Though properly designed AI might actually diminish human prejudice, fighting algorithmic discrimination demands continuous oversight, resources, and expertise that many companies will deprioritize.

AI managers have arrived, no question about it. Now it’s on us to hold organizations accountable for deploying them ethically.

Key Topics:

• AI Managers of Humans are Already Here (00:25)

• Is this Automation, or a Workplace Transformation? (01:19)

• Empathy and Responsibility in Management (03:22)

• Privacy and Cybersecurity (06:27)

• Bias and Discrimination (09:30)

• Wrap-Up and Next Steps (12:10)

More info, transcripts, and references can be found at ethical.fm