Description

Brad M. Thompson, Partner, Epstein Becker & Green PC, Chris Provan, Managing Director & Senior Principal Data Scientist, Mosaic Data Science, and Sam Tyner-Monroe, Ph.D., Managing Director of Responsible AI, DLA Piper LLP (US), discuss how to analyze and mitigate the risk of bias in artificial intelligence through the lens of data science. They cover HHS’ Section 1557 Final Rule as it pertains to algorithmic bias, examples of biased algorithms, the role of proxies, stratification of algorithms by risk, how to test algorithms for bias, how compliance programs can be adapted to address the unique challenges of algorithmic bias, the NIST Risk Management Framework, whether it is ever possible to eliminate bias, and how explainability and transparency can mitigate bias. Brad, Chris, and Sam spoke about this topic at AHLA’s 2024 Complexities of AI in Health Care in Chicago, IL.

Essential Legal Updates, Now in Audio

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Stay at the Forefront of Health Legal Education

Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.