Showing episodes and shows of EqualAI & World Economic Forum

Shows

Outer Limits Of Inner Truth RebornOuter Limits Of Inner Truth RebornThe Journey Beyond Death: After Death Communication Part (1/2)The Journey Beyond Death continues with Part 9, diving into the extraordinary topic of After Death Communication. In this powerful episode, we explore how contact between the living and the spirit world is not only possible—but often happening in ways most people overlook: through dreams, symbols, synchronicities, and even cutting-edge technology. Featuring internationally renowned psychic mediums Lisa McGarrity and Joyce Keller, astrologer Constance Stellas, and tech visionary Robert LoCascio, this episode offers diverse insights into how the boundary between life and death is far more permeable than we imagine. Lisa McGarrity shares how departed loved ones often reach out with su...2025-05-0559 minOuter Limits Of Inner Truth RebornOuter Limits Of Inner Truth RebornThe Journey Beyond Death: After Death Communication Part (1/2)The Journey Beyond Death continues with Part 9, diving into the extraordinary topic of After Death Communication. In this powerful episode, we explore how contact between the living and the spirit world is not only possible—but often happening in ways most people overlook: through dreams, symbols, synchronicities, and even cutting-edge technology. Featuring internationally renowned psychic mediums Lisa McGarrity and Joyce Keller, astrologer Constance Stellas, and tech visionary Robert LoCascio, this episode offers diverse insights into how the boundary between life and death is far more permeable than we imagine. Lisa McGarrity shares how departed loved ones often reach out with su...2025-05-0559 minOuter Limits Of Inner Truth RebornOuter Limits Of Inner Truth RebornThe Journey Beyond Death: After Death Communication Part (1/2)The Journey Beyond Death continues with Part 9, diving into the extraordinary topic of After Death Communication. 
In this powerful episode, we explore how contact between the living and the spirit world is not only possible—but often happening in ways most people overlook: through dreams, symbols, synchronicities, and even cutting-edge technology. Featuring internationally renowned psychic mediums Lisa McGarrity and Joyce Keller, astrologer Constance Stellas, and tech visionary Robert LoCascio, this episode offers diverse insights into how the boundary between life and death is far more permeable than we imagine. Lisa McGarrity shares how departed loved ones often reach out with su...2025-05-0559 minOuter Limits Of Inner TruthOuter Limits Of Inner TruthThe Journey Beyond Death: After Death Communication Part (1/2)The Journey Beyond Death continues with Part 9, diving into the extraordinary topic of After Death Communication. In this powerful episode, we explore how contact between the living and the spirit world is not only possible—but often happening in ways most people overlook: through dreams, symbols, synchronicities, and even cutting-edge technology. Featuring internationally renowned psychic mediums Lisa McGarrity and Joyce Keller, astrologer Constance Stellas, and tech visionary Robert LoCascio, this episode offers diverse insights into how the boundary between life and death is far more permeable than we imagine. Lisa McGarrity shares how departed loved ones often reach out with su...2025-05-0559 minPolicy Career Profiles: Voices from the FieldPolicy Career Profiles: Voices from the Field“Tina Huang (AI policy)” by Horizon Institute for Public ServiceTina Huang is the Director of Strategic Initiatives at EqualAI. Published in November 2023. --- If you’re interested in pursuing a career in emerging technology policy, complete our Policy Career Interest Form, and we may be able to match you with opportunities suited to your background and interests. 
--- Last updated: April 11th, 2024 Source: https://emergingtechpolicy.org/profiles/tina-huang 2025-03-3105 minIn AI We Trust?In AI We Trust?AI Literacy Series Ep. 4: Mason Grimshaw on AI Literacy and Data Sovereignty for Indigenous CommunitiesMason Grimshaw: the Power of Identity, Community, and RepresentationDescription: Co-hosts of EqualAI’s AI Literacy Series, Miriam Vogel and Rosalind Wiseman are joined by Mason Grimshaw, data scientist at Ode Partners and VP at IndigiGenius. Grimshaw discusses his roots growing up on a reservation, and what led him to the field of AI. He explains why it’s his mission to bring AI education and tools back to his community. Grimshaw articulates how AI literacy is essential for Indigenous communities to ensure they retain data sovereignty and benefit from these game-changing tools.Literacy Series Desc...2025-03-1156 minIn AI We Trust?In AI We Trust?AI Literacy Series Ep. 3: danah boyd on Thinking Critically about the Systems That Shape UsDescription: Co-hosts of EqualAI’s AI Literacy Series, Miriam Vogel and Rosalind Wiseman sit down with danah boyd, Partner Researcher at Microsoft Research, visiting distinguished professor at Georgetown, and founder of Data & Society Research Institute, to explore how AI is reshaping education, social structures, and power dynamics. boyd challenges common assumptions about AI, urging us to move beyond simplistic narratives of good vs. bad and instead ask: Who is designing these systems? What are their limitations? And what kind of future are we building with them?Literacy Series Description: The EqualAI AI Literacy podcast series builds on...2025-02-271h 15In AI We Trust?In AI We Trust?AI Literacy Series Ep. 
2 with Dewey Murdick (CSET): Centering People in AI’s ProgressDescription: In this episode of EqualAI’s AI Literacy Series, co-hosts Miriam Vogel and Rosalind Wiseman sit down with AI policy expert Dewey Murdick, Executive Director at Georgetown's Center for Security and Emerging Technology (CSET) who shares his hopes for AI’s role in personal development and other key areas of society. From national security to education, Murdick unpacks the policies and international collaboration needed to ensure AI serves humanity first.Literacy Series Description: The EqualAI AI Literacy podcast series builds on In AI We Trust?’s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the...2025-02-1140 minIn AI We Trust?In AI We Trust?AI Literacy Series Ep. 1: What is AI and Why Are We Afraid of It?Miriam Vogel and Rosalind Wiseman break down the basics, the limitations, the power, and the fear surrounding AI – and how you can transform it from a concept to a tool in the first episode of the In AI We Trust? AI Literacy series.The EqualAI AI Literacy podcast series builds on In AI We Trust?’s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series will provide listeners with valuable insights and discussions around AI’s impact on society, who is leading in this area of AI lit...2025-01-2925 minIn AI We Trust?In AI We Trust?Vilas Dhar (McGovern Foundation): AI for the people and by the people: Year-in-Review and 2025 PredictionsIn this 2024 year-end episode of In AI We Trust?, Vilas Dhar of the Patrick J. McGovern Foundation and Miriam Vogel of EqualAI review 2024 and discuss predictions for the year ahead. 
2024-12-2036 minIn AI We Trust?In AI We Trust?Michael Chertoff (Chertoff Group) and Miriam Vogel (EqualAI): Is your AI use violating the law?In this special edition of #InAIWeTrust?, EqualAI President and CEO Miriam Vogel and former Secretary of Homeland Security Michael Chertoff sit down to discuss their recent co-authored paper, Is Your Use of AI Violating the Law? An Overview of the Current Legal Landscape. Special guest Victoria Espinel, CEO of BSA | The Software Alliance, moderates the conversation with the co-authors to explore key findings, current laws on the books, and potential liabilities from AI deployment and use that lawyers, executives, judges, and policy makers need to understand in our increasingly AI-driven world. The article can b...2024-10-1728 min42 Nuances de SF42 Nuances de SFVers des IAs non genrées ? (3/3)On conclut cette série d'épisodes en se demandant si les intelligences artificielles explicitement non genrées sont la solution pour lutter contre les stéréotypes de genre. Et surtout, en quoi la science-fiction peut nous aider à déconstruire ces modèles traditionnels ? Equal AI : https://www.equalai.org/ Seaborn, K., Pennefather, P., & Kotani, H. (2022). Exploring Gender-Expansive Categorization Options for Robots. Proceedings of the ACM CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3491101.3519646 2024-09-2433 minIn AI We Trust?In AI We Trust?Amanda Levendowski (Georgetown University): Can AI and copyright law coexist?Georgetown University Law Center Associate Professor Amanda Levendowski, and guest co-host Karyn Temple, Senior Executive Vice President and Global General Counsel for the Motion Picture Association (MPA) and EqualAI board member, join In AI We Trust? to explore the protections and limits of copyright law. 
Tune in to learn more about training AI systems and methods for evaluating the potential harms.2023-11-2238 minRaising HealthRaising HealthBridging AI, Ethics, and Consumer Trust with Miriam VogelMiriam Vogel, President and CEO of EqualAI, cohost of the podcast In AI We Trust?, and Chair of the National AI Advisory Committee, joins Vijay Pande of Bio + Health.Miriam offers pragmatic insights for founders on ethical integration of AI. She also outlines concrete steps to build trustworthy AI. Finally, she discusses the regulatory landscape and the state of politics around AI today. 2023-11-2127 minIn AI We Trust?In AI We Trust?Victoria Espinel (BSA), Reggie Townsend (SAS), and Dawn Bloxwich (Google Deepmind): How can companies responsibly integrate AI into their businesses? (Part 2)In part two of our special episode of In AI We Trust?, EqualAI advisors Victoria Espinel and Reggie Townsend discuss how they got into the field of AI, their involvement in the EqualAI Badge Program and their experiences guiding its participants, and, along with Dawn Bloxwich, discuss how companies can benefit from their co-authored white paper: An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework.2023-10-1139 minIn AI We Trust?In AI We Trust?Xuning (Mike) Tang (Verizon), Diya Wynn (AWS), and Catherine Goetz (LivePerson): How can companies responsibly integrate AI into their businesses? 
(Part 1)In this week’s special episode of In AI We Trust?, we interview three of our EqualAI Badge Program alumni—Xuning (Mike) Tang (Verizon), Diya Wynn (AWS), and Catherine Goetz (LivePerson)—to discuss their journey’s in the responsible AI field, share their highlights from the EqualAI Badge Program and AI Summit, and underscore the main takeaways from our co-authored white paper: An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework.2023-10-0338 minWashington AI Network with Tammy HaddadWashington AI Network with Tammy HaddadThe Washington AI Trailblazer: Miriam Vogel, President and CEO of EqualAI and Chair of the National AI Advisory Committee (NAIAC)Miriam Vogel, the President and CEO of EqualAI and Chair of NAIAC, speaking in her personal capacity, highlights the critical crossroads between Congress, federal agencies and the tech industry on AI policy.  Government policymakers, particularly at the FTC and CFPB are already evaluating the effect of artificial intelligence, while agencies like the EEOC are investigating the potential impact on civil rights laws like the Americans with Disabilities Act. Recognizing they’ve missed opportunities in the past, and armed with a greater depth and understanding of AI, Congress is rising to the occasion on a bipartisan basis in terms of...2023-08-2136 minPwC Pulse - a business podcast for executivesPwC Pulse - a business podcast for executivesResponsible AI: Building trust, shaping policyText us your thoughts on this episodeWith the use of generative AI accelerating, it’s important to focus on how business leaders can get the most out of their tech investment in a trusted and ethical way. 
In this episode, we dive into responsible AI – what it is, why it’s important and how it can be a competitive advantage.To cover this important topic, PwC’s host, Joe Atkinson, is joined by leading AI experts and members of the National AI Advisory Committee to the President and the White House – Miriam Vogel, President and CEO of...2023-08-1630 minGeneration AI: Automating Better BusinessGeneration AI: Automating Better BusinessBeyond Black Mirror: Illusion, Ethics and Responsible AINefarious criminals watching your every move, a system of social ranking that decides your life’s path, tech as a tool for healing and connectivity — pop culture from Black Mirror to iRobot to Blade Runner have people excited (and scared) about the potential power AI delivers straight to our fingertips. But these references leave people with perceptions that may not be quite accurate according to Miriam Vogel, President and CEO at EqualAI and Chair of the White House's National AI Advisory Committee. Miriam joins us on the premiere episode of Generation AI to dive into what it takes to e...2023-07-1833 minWharton Business DailyWharton Business DailyAs AI Transforms Our Lives, We Need to Build Trust in the Technology: Here's WhyMiriam Vogel, CEO of EqualAI, joins the show to talk about promoting responsible AI governance to ensure that we benefit from the best of what the technology as to offer. Miriam was a speaker at the Wharton MSI Analytics Conference that took place May 4-5, 2023. Hosted on Acast. See acast.com/privacy for more information.2023-05-0818 minIn AI We Trust?In AI We Trust?Mark Brayan (Appen): For whom is your data performing?In this episode, Mark Brayan focuses on a key ingredient for responsible AI: ethically sourced, inclusive data. Mark is the CEO and Managing director of Appen, which provides training data for thousands of machine learning and AI initiatives. 
Good quality data is imperative for responsible AI (garbage in, garbage out), and part of that equation is making sure that it is sourced inclusively, responsibly, and ethically. When developing and using responsible AI, it’s critically important to get your data right by asking the right questions; for whom is your data performing – and for whom could it fail?—...2022-10-1228 minIn AI We Trust?In AI We Trust?Krishnaram Kenthapadi (Fiddler.ai): Citizen audits are coming; are you ready?Krishnaram is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and Machine Learning monitoring platform. Prior to Fiddler AI, Krishnaram has served as Principal Scientist at Amazon AWS AI, on the LinkedIn AI team, and on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. In this episode, Krishanaram warns of the importance of not simply performing the important task of model validation but continuing to test it post deployment. He also highlights incentives to test your AI early and often: even without new laws in place, empowered and tech-savvy citizens are...2022-09-2844 minIn AI We Trust?In AI We Trust?Dr. Edson Prestes: Can we ingrain empathy into our AI?Dr. Prestes, Professor of Computer Science at the Institute of Informatics, Federal University of Rio Grande do Sul and leader of the Phi Robotics Research Group. In this episode, Dr. Prestes shares his trailblazing work in international AI policy and standards, including the development of the first global AI ethics instrument. Dr. Prestes discusses ethics in technology and the infusion of empathy, as well as his focus on establishing human rights for a digital world. — Subscribe to catch each new episode on Apple, Spotify and all major platforms. 
To learn more about EqualAI, visit our website: https://www...2022-09-1445 minIn AI We Trust?In AI We Trust?Joe Bradley (LivePerson): How much 'rat poison' is in our AI and can AI be more "human"?Joe Bradley is the Chief Scientist at LivePerson, a leading Conversational AI company creating digital experiences that are “Curiously Human”, powering nearly a billion conversational interactions monthly in their Conversational Cloud. In this episode, Joe shares the broad lens he brings to his work in AI. He discusses the interconnectedness between AI and humanity, and his work at LivePerson to develop “empathetic” AI systems to help brands better connect with their customers. Joe addresses his experience in the EqualAI Badge program and basic challenges in reducing bias in AI, from determining what to measure to whom to consider when evaluati...2022-08-2452 minTech\'ed UpTech'ed UpBuilding Without Scaling Harm • Miriam Vogel (EqualAI)CEO of Equal AI and White House technology advisor, Miriam Vogel, joins Niki in the studio to discuss her work to reduce algorithmic bias in the deployment of artificial intelligence. In this episode, Miriam explains why this is a critical moment to make sure we are using AI as a solution to increase fairness and build trust, and not as a scaling function for harms that already exist. “We have this critical moment here where we can take safeguards and make sure that we are using AI as a solution and not as a scaling function for ha...2022-06-3025 minIn AI We Trust?In AI We Trust?Dr. Margaret Mitchell: How can we ensure AI reflects our values – and why this matters to each of us?Dr. Margaret Mitchell is a renowned researcher who has won numerous awards for her work developing practical tools to combine ethics and machine learning. LastFall, Dr. Mitchell joined the AI startup HuggingFace ( "to democratize good machine learning") and previously research positions at Google and Microsoft. Inthis episode, Dr. 
Mitchell articulates numerouschallenges in the endeavor to create ethical AI. She also illuminates thedistinction between ethical and responsible AI; the necessity of ahuman-centered, inclusive approach to AI development; and the need for policymakers to understand AI...2022-05-1056 minIn AI We Trust?In AI We Trust?Mira Lane (Microsoft): Can compassion lead to better AI?Mira Lane, a a polymath, technologist and artist, is the head of Ethics & Society at Microsoft, a multidisciplinary group responsible for guiding AI innovation that leads to ethical, responsible, and sustainable outcomes. In this episode, she shares how the culture at Microsoft includes compassion in AI development to the benefit of their AI products, how she changes the perception of responsible AI from a tax to a value-add and how games can play a role in achieving this goal.----- Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our...2022-04-1447 minIn AI We Trust?In AI We Trust?Keith Sonderling, EEOC Commissioner: Does AI scale or reduce bias in the workplace?Keith Sonderling is a Commissioner of the U.S. Equal Employment Opportunity Commission (EEOC) and helped launch the EEOC's unprecedented Initiative on Artificial Intelligence and Algorithmic Fairness in 2021. In this episode, he shares guidance for employers on building, buying and employing AI programs in HR systems and shares his optimism on the unique opportunity we have at this moment to ensure a significant, positive impact in deploying AI technology.Subscribe to catch each new episode! Find us on Apple(https://podcasts.apple.com/us/podcast/in-ai-we-trust/id1563248151), Amazon, Spotify and all major platforms. 
To learn more about...2022-03-3151 minIn AI We Trust?In AI We Trust?MP Darren Jones: 'Horizon Scanning' to Design Better AI RegulationDarren Jones is Member of UK Parliament who has chaired the Parliamentary Technology Information and Communications Forum, Parliamentary Commission on Technology Ethics, and Labour Digital. Darren is also the founding chair of the Institute of AI, a global coalition of legislators interested in AI, and he is a member of the World Economic Forum (WEF) Global AI Action Alliance (GAIAA). In this episode, Darren speaks to how legislators need to 'horizon scan' and understand cutting edge tech to translate it into creating more opportunities while reducing risk through laws and regulation. He argues regulation can support 'safety by design', instead...2022-03-1740 minIn AI We Trust?In AI We Trust?Ilana Golbin (PwC): Does sci-fi help or hinder AI understanding?Ilana Golbin is a Director in PwC Labs leading projects on Emerging Technology and AI. She is a Certified Ethical Emerging Technologistand was recently recognized in Forbes as one of 15 leaders advancing Ethical AI. In this episode, Ilana shares the principles she uses to ensure confidence in AI systems used both internally, at PWC, and when advising clients. She explains some of the complexitiesin the application of those principles, how responsible AI governance is part of a demonstration of cultural sensitivity and how sci-fi can be a helpful partner in the governance process. ...2022-02-2446 minIn AI We Trust?In AI We Trust?Renee Cummings: How AI Does (& Should) Impact Our BHM CelebrationRenée Cummings is a pioneering AI ethicist, Criminologist, Columbia University Community Scholar, and Founder of Urban AI. Her studies focus on the impact of AI on criminal justice, specifically in communities of color and incarcerated populations. 
In this episode, you will be inspired by Renee's insights on the impact that AI and data science has on our civil rights, how increasing diversity in AI is fundamental to creating technology that reflects our humanity, and improvements that still need to be made in areas such as trust and accountability. ----- Subscribe to catch each new episode on Apple (https://podcasts.a...2022-02-1752 minIn AI We Trust?In AI We Trust?Marco Casalaina (Salesforce): Techno-Optimist not a Techno-ChauvinistIn this episode, MarcoCasalaina, Salesforce’s SVP of Product Management and GM of Einstein, explains how his decades of experience in AI and tech has resulted in his techno-optimism, how an AIethicist enhances his work and why he encourages others to join the EqualAI badge program. He also shares his excitement about rapidly developing transforming models but illuminates how this technology will be the next gen ethical AI quandary.-----Subscribe to catch each new episode on Apple (https://podcasts.apple.com/us/podcast/in-ai-we-trust/id1563248151),Spotify and all major platforms. To learn mo...2022-02-1142 minIn AI We Trust?In AI We Trust?Mukesh Dalal, Stanley Black & Decker (SBD): Why all companies will need a Chief AI OfficerAs the Chief AI officer at Stanley Black and Decker (SBD), Mukesh Dalal has helped transform a 178-year-old global manufacturing company’s approach to AI with the vision of delivering $1 billion of value to the company through AI and Analytics. In this week’s episode, Mukesh outlines SBD’s forward-thinking strategy on AI, describes SBD’s journey into the responsible AI space, and foresees that soon all major companies will have Chief AI Officers to harness the business potential and root out risks of AI technology. Subscribe to catch each new episode on Apple, Spotify and all major platforms. 
To learn...2022-02-0432 minSmart Money CircleSmart Money CircleSmart Money Circle: Rob LoCascio, Founder and CEO, of LivePerson ($LPSN)Rob LoCascio, Founder and CEOof Live Person Inc. ($LPSN) and has been its chief executive officer since its inception in 1995, making him one of the longest-standing founding CEOs of a tech company today.    As the inventor of online chat for brands, Rob disrupted the way people communicate with companies around the world, removing the need for 1-800 numbers, long wait times, and endlessly scanning websites for information.    In 2016, Rob again led the company to the forefront of conversational commerce by making it easy for consumers to connect with brands on messaging via platforms including SMS, What...2022-02-0343 minIn AI We Trust?In AI We Trust?Meghna Sinha: Why data scientists are like medical professionals; why ignorance is not an option and steps we all must take when making data-based decisionsMeghna Sinha is the Vice President of AI and Data at Verizon’s AI Center. Before joining Verizon, Meghna was Target’s VP of Data Sciences. In this episode, Meghna posits that data scientists are similar to medical practitioners, affirms that AI must start and end with humans, and shares lessons from the EqualAI Badge Program for Responsible AI Governance. Referenced papers/articles can be found here: WEF Toolkit (https://www3.weforum.org/docs/WEF_Empowering_AI_Leadership_2022.pdf) @VogelMiriam & Robert Eccles article on AI as a necessary part of Board governance (https://corpgov.law.harvard.edu/2022/01/05/board-responsibility-for-artificial-intelligence-oversight/) Meghna’s articl...2022-01-2839 minIn AI We Trust?In AI We Trust?Amy Holcroft: How HPE is "living it in action" how the EqualAI Badge program, in collaboration with WEF, has helped this effortAmy Holcroft is the Chief Privacy Officer and VP of Privacy & Info Governance at Hewlett Packard Enterprise (HPE). 
In this episode, Amy shares how she co-leads the establishment of HPE’s AI Ethics Advisory board and HPE’s AI Ethical Principles. She shares that her work requires resilience and thoughtful governance, and how participation in the EqualAI Badge Program on Responsible AI Governance, in collaboration with the World Economic Forum, supports her and her work at HPE, including a timely session with Cathy O'Neil that she put to good use immediately. ----- Subscribe to catch each new episode on Apple, Spot...2021-12-0930 minIn AI We Trust?In AI We Trust?Seth Dobrin: How do you establish a human-centered approach to data and AI (and why is this necessary to succeed)?Seth Dobrin is the Global Chief AI Officer of IBM. Seth has spent his career scaling and using existing technologies to address previously intractable problems at scale. In this episode, Seth shares concrete steps he has taken to create a more diverse and trust-based workplace, explains how his PhD in genetics is relevant and helpful to his current work in AI, and breaks down the what, why and how of a human-centered approach to AI. ----- Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai...2021-11-1147 minIn AI We Trust?In AI We Trust?Kat Zhou: What is the role of design in creating inclusive and equitable AI?Kat Zhou is a product designer focusing on integrating ethics into the design of AI systems. She is a leading voice for more inclusive and privacy-respecting approaches to AI, and she has called for greater regulation of AI and more human-centric business models for AI companies. In this episode, we ask Kat how governments, product designers, and corporate decision makers can minimize the harms of AI products – and whether there are any products that should never be developed to begin with. 
----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Tw...2021-11-0334 minIn AI We Trust?In AI We Trust?Alex Kotran: Who needs AI literacy and how can we accelerate it?Alex Kotran is the co-founder and CEO of the AI Education Project, a non-profit that brings AI-related knowledge and skills to communities that are being impacted by AI and automation. In this episode, Alex highlights how the communities that are most impacted by AI are often the ones with the least access to basic AI knowledge, and how this is creating disparities in access to healthcare, financial services, criminal justice, and more. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal2021-10-2828 minIn AI We Trust?In AI We Trust?Meg King of the Wilson Center: Who is ensuring policy makers are able to "speak AI"?Meg King is the Director of the Science and Technology Innovation Program at the Wilson Center; a non-partisan think tank created by Congress. She leads innovative transnational projects examining the development of emerging technology and related policy opportunities. Her program also provides training seminars for Congressional and Executive branch staff to develop technology knowledge and skills. In this episode, Meg shares context on her recent congressional testimony, the goals of her work at the Wilson Center and lessons we can learn from AI frameworks and policies abroad. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ ...2021-10-2230 minIn AI We Trust?In AI We Trust?David Hardoon: Can AI be ethical?In this episode, we speak with David Hardoon, a self-proclaimed "data artist." He leads Data and AI efforts at UnionBank Philippines and serves as an external advisor to Singapore's Corrupt Investigation Practices Bureau (CPIB). David has extensive experience in both industry and academia with a PhD in Computer Science and B.Sc. 
in Computer Science and AI. He weighs in on both the high level concepts surrounding ethics and AI and offers practical steps he uses to support ethical governance with the AI systems under his purview. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ ...2021-10-1446 minIn AI We Trust?In AI We Trust?Rep. Yvette Clarke: Why is AI regulation necessary during this time of racial reckoning?Find out on this week's episode with special guest Congresswoman Yvette Clarke (NY-9th) why she makes AI a top priority in her work to protect vulnerable populations. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal2021-10-0633 minIn AI We Trust?In AI We Trust?Elham Tabassi of NIST: Who ensures the U.S. has strong metrics, tools, & standards for responsible AI?Observers have been skeptical about the ability of the US to lead in AI and establish the necessary framework to ensure its safe and effective development. NIST – the National Institute of Standards and Technology – is responding to that call. In this episode, we speak with Elham Tabassi who is leading NIST's work to support safe and effective Artificial Intelligence. Elham the Chief of Staff in the Information Technology Laboratory (ITL) and serves on the National AI Research Resource Task Force, announced by the White House and the National Science Foundation (NSF) last June. Learn what makes NIST's 'secret sauce' for impa...2021-09-2948 minIn AI We Trust?In AI We Trust?Taka Ariga and Stephen Sanford: What is the U.S. GAO's AI Framework?Taka Ariga is the first Chief Data Scientist and Director of the Innovation Lab at the U.S. Government Accountability Office (GAO). Stephen Sanford is the Managing Director in GAO’s Strategic Planning and External Liaison team. Taka and Stephen are the authors of the GAO's recently released AI Framework, one of the first resources provided by the U.S. 
government to help identify best practices and the principles to deploy, monitor and evaluate AI responsibly. In this episode, we ask the AI Framework authors why they took on this initiative and lessons learned that are broadly applicable across industry. ...2021-09-2343 minIn AI We Trust?In AI We Trust?Vilas Dhar: How can civil society shape a positive, human-centric future for AI?Vilas Dhar is a technologist, lawyer, and human rights advocate championing a new social compact for the digital age. As President and Trustee of the Patrick J. McGovern Foundation, he is a global leader in advancing artificial intelligence and data solutions to create a thriving, equitable, and sustainable future for all. In this episode we ask Vilas how he arrived at the intersection of AI and philanthropy, and how he thinks philanthropists and civil society can shape a more inclusive and societally beneficial future for AI. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You...2021-09-1640 minIn AI We Trust?In AI We Trust?Steve Mills: How can companies walk the walk on responsible AI?Steve Mills is a Partner at Boston Consulting Group (BCG), where he serves as Chief AI Ethics Officer and the Global Lead for Artificial Intelligence in the Public Sector. He has worked with dozens of leading companies and government agencies to improve their AI practices, and in this episode he shares some of the key lessons he has learned about how organizations can translate their ethical AI commitments into practical, meaningful actions. 
(2021-08-24, 35 min)

In AI We Trust? / Julia Stoyanovich: Can AI systems operate fairly within complex, diverse societies?
Julia Stoyanovich is an Assistant Professor in the Department of Computer Science and Engineering at NYU's Tandon School of Engineering, where she is also the Director of the Center for Responsible AI. Her research focuses on responsible data management and analysis and on practical tools for operationalizing fairness, diversity, transparency, and data protection in all stages of data acquisition and processing. In addition to conducting field-leading research and teaching, Professor Stoyanovich has written several comics aimed at communicating complex AI issues to diverse audiences. (2021-08-18, 42 min)

In AI We Trust? / Oren Etzioni: Why is the term "machine learning" a misnomer?
Dr. Oren Etzioni is Chief Executive Officer at AI2, the Allen Institute for AI, a non-profit that offers foundational research, applied research, and user-facing products. He is Professor Emeritus at the University of Washington and a Venture Partner at the Madrona Venture Group. He has won numerous awards, founded several companies, written over 100 technical papers, and provides commentary on AI for The New York Times, Wired, and Nature. In this episode, Oren explains why "machine learning" is a misnomer and describes some of the exciting AI innovations he is supporting that will result in greater inclusivity. (2021-08-10, 37 min)

In AI We Trust? / Alexandra Givens: What makes tech the social justice issue of our time?
Alexandra Reeve Givens is the President & CEO of the Center for Democracy and Technology (CDT). She is an advocate for using technology to increase equality, amplify voices, and promote human rights.
Previously, Alexandra served as the founding Executive Director of the Institute for Technology Law & Policy at Georgetown Law, served as Chief Counsel for IP and Antitrust on the Senate Judiciary Committee, and began her career as a litigator at Cravath, Swaine & Moore. In this episode, Alexandra explains her unconventional path to the tech space as a lawyer and why she believes technology is the social justice issue of our... (2021-08-05, 26 min)

In AI We Trust? / Navrina Singh: How is AI a multi-stakeholder problem and how do we solve for it? (Spoiler: it's all about trust.)
Navrina Singh is the Founder & CEO of Credo AI, whose mission is to empower organizations to deliver trustworthy and responsible AI through AI audit and governance products. Navrina serves on the Board of Directors of Mozilla and Stella Labs. Previously she served as the Product leader focused on AI at Microsoft, where she was responsible for building and commercializing Enterprise Virtual Agents, and spent 12+ years at Qualcomm. In this episode, Navrina shares several insights into responsible AI, including the 3 key elements to building trust in AI and the 4 components of the "Ethical AI flywheel." (2021-07-28, 37 min)

In AI We Trust? / Andrew Burt: How can lawyers be partners in the AI space?
Andrew Burt is a lawyer specializing in artificial intelligence, information security, and data privacy. He co-founded bnh.ai and serves as chief legal officer of Immuta. His work has been profiled by magazines like Fast Company, and his writing has appeared in Harvard Business Review, the New York Times, and the Financial Times. In this episode, we explore the 'hype cycle' of AI, where risks are overlooked, and the appropriate role of a lawyer as a partner in this space.
(2021-07-22, 38 min)

In AI We Trust? / Anima Anandkumar: How can the intersection of academia and industry inform the next generation of AI?
Anima Anandkumar is an accomplished AI researcher in both academia and industry. She is the Bren Professor in the Caltech CMS department and director of machine learning research at NVIDIA. Previously, Anima was a principal scientist at Amazon Web Services, where she enabled machine learning on the cloud infrastructure. Anima is the recipient of numerous awards and honors and has been featured in documentaries and articles by PBS, Wired, MIT Technology Review, Forbes, and many others. In this episode we learn about the "trinity of the deep learning revolution" and how the next generation of AI will bring the "mind & body" together... (2021-06-30, 39 min)

In AI We Trust? / Vivienne Ming: How can we create AI that lifts society up rather than tearing it down?
Vivienne Ming is an internationally recognized neuroscientist and AI expert who has pushed the boundaries of AI in diverse areas including education, human resources, disability, and physical and mental health. In this episode, we ask Vivienne how we can ensure that society captures the benefits of AI technologies while mitigating their risks and avoiding harms to vulnerable populations. (2021-06-24, 58 min)

In AI We Trust? / Heather Cox: Why did our company make a public commitment to equitable AI?
On this episode, we hear from Heather Cox, Chief Digital Health and Analytics Officer at Humana. Heather brings 25 years of experience to the role, including having served as Chief Technology and Digital Officer at USAA and CEO of Citi FinTech at Citigroup.
In this episode, Heather shares why she decided Humana should take the EqualAI Pledge to Reduce Bias in AI and how they have restructured their company and partnerships to ensure their AI programs better serve their population and adhere to the core principle of "do no harm". (2021-06-15, 40 min)

In AI We Trust? / Commissioner Edward Santow: How can tech governance preserve human rights and achieve responsible AI?
On this episode, we are thrilled to share our conversation with Commissioner Edward Santow of the Australian Human Rights Commission. The Commission recently released the Human Rights and Technology final report, which makes 38 recommendations to ensure human rights are upheld in Australia's laws, policies, funding, and education on AI. We ask him about lessons learned in the three-year creation of this report and which recommendations are most universally applicable. Learn more about the report here: https://tech.humanrights.gov.au/ (2021-06-08, 40 min)

In AI We Trust? / Tess Posner: How can we create a more inclusive technology industry?
Tess Posner is the CEO of AI4ALL, an organization working to make the technology industry more inclusive and to ensure that AI is developed responsibly. Before joining AI4ALL, she was Managing Director of TechHire at Opportunity@Work, a national initiative launched out of the White House to increase diversity in the tech economy. In this episode, we explore the diversity challenges facing the technology industry and the exciting efforts that AI4ALL is leading to empower diverse young people to join – and improve – one of the most powerful industries shaping society today.
(2021-06-01, 32 min)

In AI We Trust? / Sarah Drinkwater: What is 'Responsible AI' and why don't we have it?
Sarah Drinkwater is director of the Responsible Technology team at the Omidyar Network, where she works to help technologists prevent, mitigate, and correct societal downsides of technology—and maximize positive impact. Prior to Omidyar Network, Sarah was head of Campus London, Google's first physical startup hub. At Google, Sarah also built and led a global Google Maps community team. She also advised startups and large brands on their social strategy and was a journalist. On this episode we ask, "What is 'Responsible AI' and why don't we have it?" (2021-05-25, 30 min)

In AI We Trust? / Tim O'Brien: Is there a role for me in building ethical AI?
In this episode we speak with Tim O'Brien, who leads Ethical AI Advocacy at Microsoft. Before joining Microsoft in 2003, Tim worked as an engineer, a marketer, and a consultant at startups and Fortune 500 companies. In this discussion, Tim leads us through Microsoft's journey – and his own – to become a leader in the field of AI ethics, and answers the questions: What does an AI Ethicist do? And is there a role for 'white guys' to play in this field? (2021-05-18, 39 min)

In AI We Trust? / Aneesh Chopra: How would you grade the US government's tech readiness?
Aneesh Chopra served as the first Chief Technology Officer of the United States. He is currently the president of CareJourney, a provider of clinically relevant analytics that builds a rating system of healthcare networks. He is also the co-founder of a data analytics investment group, Hunch Analytics.
Aneesh sits on the Board of the Health Care Cost Institute, a non-profit focused on unbiased health care utilization and cost information. Previously, Aneesh served as Virginia's Secretary of Technology and wrote of his experience in government and tech in his book "Innovative State: How New Technologies Can Transform Government." In this episode, An... (2021-05-11, 32 min)

In AI We Trust? / Ashley Casovan: There are tools to help governments and companies reduce bias in AI
In this episode, we interview our friend and colleague, Ashley Casovan, Executive Director of the Responsible AI Institute, formerly AI Global, a non-profit dedicated to creating practical tools to ensure the responsible use of AI. Previously, Ashley served as Director of Data and Digital at the Government of Canada, where she led research and policy development related to data, open source, and artificial intelligence. Ashley helps us answer the pressing question in AI: should we rely on internal corporate monitoring, government regulation, third-party certification, or some combination? Learn more about the Responsible AI Institute: https://www.responsible.ai/ More information... (2021-05-04, 37 min)

In AI We Trust? / Kush Varshney: Can we trust AI?
Dr. Kush R. Varshney is a distinguished research staff member and manager in IBM Research AI at the Thomas J. Watson Research Center, where he has conducted cutting-edge AI and machine learning research for the past ten years. Varshney also serves as co-director of IBM's Science for Social Good program. Varshney received both a Master of Science and a Ph.D. in Electrical Engineering and Computer Science from MIT. In addition to writing numerous articles on AI, Varshney helped develop AI Fairness 360, a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning mo... (2021-04-27, 37 min)

In AI We Trust? / Robert LoCascio: Why I co-founded EqualAI
Rob LoCascio is the founder of LivePerson, Inc.
and has been its chief executive officer since its inception in 1995, making him one of the longest-standing founding CEOs of a tech company today. As the inventor of online chat for brands, Rob disrupted the way people communicate with companies around the world. He is a founding member of EqualAI, which works with companies, policymakers, and experts to reduce bias in AI, and of the NYC Entrepreneurs Council of the Partnership for New York City. In 2001, Rob started the Dream Big Foundation with its first program, FeedingNYC. As someone who has been... (2021-04-20, 32 min)

In AI We Trust? / Malcolm Frank: How to advise your clients to successfully and responsibly navigate the digital age
Malcolm Frank is the president of digital business and technology at Cognizant. Malcolm's influence is wide-ranging and evident across media. He has co-authored two best-selling books, "What to Do When Machines Do Everything" (2017) and "Code Halos" (2014), and authored numerous white papers focusing on the Future of Work. A highly sought-after speaker, Malcolm has presented at conclaves across the globe, including the World Economic Forum and the South by Southwest (SXSW) Conference. He is frequently quoted, is the subject of a Harvard Business School case study, and was named "one of the most influential people in finance" by Risk Management magazine... (2021-04-13, 37 min)

In AI We Trust? / Christo Wilson: What is an algorithmic audit?
Christo is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University, a member of the Cybersecurity and Privacy Institute, and the Director of the BS in Cybersecurity program in the College. He is a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University and an affiliate member of the Center for Law, Innovation and Creativity at Northeastern University School of Law. His research investigates the sociotechnical systems that shape our lives using a multi-disciplinary approach.
You can find more of his talks and cutting-edge research here: https://cbw.sh/ (2021-04-07, 28 min)

In AI We Trust? / Judy Spitz: Do we have a pipeline problem?
Tune in to this week's episode of "In AI We Trust?" to hear Dr. Judith Spitz, Founder and Executive Director of Break Through Tech, and learn about the often-missed barrier she identified to getting women into tech and how she fixes it with her organization (hint: look in our own backyards). Dr. Spitz was previously Chief Information Officer (CIO) of Verizon, and in 2016, devoted herself to helping women break into tech. She launched WiTNY, the Women in Technology and Entrepreneurship in New York Initiative, and saw a 94 percent increase in the number of women graduating with computer science de... (2021-03-30, 43 min)

In AI We Trust? / Kathy Baxter: What to do before launching an ethical AI product
Kathy Baxter is the principal architect of Ethical AI Practice at Salesforce. She develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. On this episode, we ask Kathy: What are the critical steps to take, from an ethics perspective, to ensure your AI product is safe to launch? (2021-03-26, 28 min)

In AI We Trust? / BONUS with Rep. Yvette Clarke and Roger McNamee: "In AI We Trust?" Podcast Launch Hosted by the Georgetown University Law Center
We're excited to share this bonus episode, a recording from the podcast's launch event with special guests Roger McNamee and Congresswoman Yvette Clarke. Representing New York's 9th District, Congresswoman Clarke is a committed champion of fighting bias in AI and other forms of discrimination in tech. Roger McNamee is a longtime investor in tech and author of "Zucked," which sheds light on the dangers of tech that is unfettered and insufficiently regulated. A special thanks to our friends at the Georgetown University Law Center for hosting this event. (2021-03-24, 1h 12m)

In AI We Trust? / Meredith Broussard: What makes unfettered AI so dangerous (and what can we do about it)?
Meredith Broussard is a computer scientist and data journalism professor at NYU.
Her book, "Artificial Unintelligence: How Computers Misunderstand the World," explains the origins of AI and the subtle and not-so-subtle ways that women and people of color were excluded from its genesis. On this episode, we ask Meredith: What makes unfettered AI so dangerous, and what can we do about it? (2021-03-16, 39 min)

In AI We Trust? / Bob Work: America is not prepared to compete in the AI era
You can read the NSCAI report here: https://www.nscai.gov/2021-final-report/ Bob Work served as the Deputy Secretary of Defense from 2014 to 2017 and has a long history of service in the government and military before then. He is widely known for developing the Third Offset strategy. He is currently President of TeamWork, a consulting firm that specializes in national security affairs. And, even more relevant to this discussion, he is the Vice-Chair of the National Security Commission on AI. (2021-03-11, 33 min)

In AI We Trust? / Cathy O'Neil: Why should companies care about ethical AI?
In this episode, Miriam and Mark are joined by Dr. Cathy O'Neil, a mathematician, data scientist, and author. She is a matriarch in the exploding and significant field of algorithmic bias. Cathy has a Ph.D. in mathematics from Harvard and taught at MIT and Barnard. She also founded and runs the algorithmic auditing company ORCAA.
She also posts on her popular blog, mathbabe.org, which you should all check out, and on Twitter at @mathbabedotorg. (2021-03-09, 37 min)

In AI We Trust? / Kurt Campbell: China, International Diplomacy and AI
A farewell episode with EqualAI Advisor Kurt Campbell, asking: How can we apply effective diplomacy strategies to international governance of AI? Note: Mark officially joins the podcast next episode! (2021-03-09, 43 min)

In AI We Trust? / Welcome to In AI We Trust?
The EqualAI podcast launches March 10th. Subscribe wherever you get your podcasts so you don't miss out! (2021-03-02, 02 min)

Tech Talks Daily / LivePerson CTO Alex Spinelli, Former Head of Alexa OS for Amazon
LivePerson makes life easier for people and brands everywhere through trusted Conversational AI. Its conversational cloud platform empowers consumers to stop wasting time on hold or crawling through websites and message their favorite brands instead, just as they do with friends and family. The company has 18,000 customers, including leading brands like HSBC, Orange, GM Financial, and The Home Depot. They use their conversational solutions to orchestrate humans and AI at scale and create a convenient, deeply personal relationship — a conversational relationship — with their millions of consumers. LivePerson was also named to Fast Company's World's Most Innovative Companies list... (2021-02-15, 32 min)

The Geek In Review / Jennifer Bluestein on Stepping It Up: A Guide for Mid-Level Law Firm Associates
Welcome to the 100th episode of The Geek in Review. We hope you've enjoyed listening to the podcast as much as we've enjoyed making it.
We talk with Jennifer Bluestein, Chief Talent and HR Officer for Perkins Coie, in part one of a two-part interview. Jennifer's new book, Stepping It Up: A Guide for Mid-Level Law Firm Associates, helps associates, partners, HR, and professional development personnel better understand the needs of second- to sixth-year associates as they move from learning how to practice law to practicing law while managing up and down... (2021-01-07, 35 min)

Ordinarily Extraordinary - Conversations with women in STEM / 24. Mimi Keshani, VP of Operations, at the intersection of science and technology
Mimi is the Vice President of Operations at Hadean, working at the intersection of science and technology at a startup in the United Kingdom. She brings her passion and excitement for truly making a difference in the world to her job on a daily basis. Episode Notes: Music used in the podcast: Higher Up, Silverman Sound Studio. Invisible Women: Data Bias in a World Designed for Men, by Caroline Criado Perez. Minecraft - a sandbox video game developed by Mojang Studios. The game was created... (2020-10-22, 59 min)

Electric Ladies Podcast / New Year, New Workforce – Mary Lee Gannon, Healthcare CEO & Executive Coach
"We have more generations in the workforce than ever before and their life experiences are very different... Look at the other person as a peer." Mary Lee Gannon on Green Connections Radio. You probably visited with people of all ages over the holidays, and today you have the same variety in the workplace – and it's both a gift, because of the wide range of experiences and perspectives they bring, and a challenge. Just as your aunt or cousin might have, you may have heard some people over the holidays stereotyping people based on their age (including may... (2020-01-03, 41 min)

Francoinformador / FIFAGate: One step closer to Léoz's extradition(?).
The news for March 13. Download this episode. LÉOZ: ONE STEP CLOSER TO EXTRADITION. Paraguay's Supreme Court of Justice decided this Tuesday to reject the cassation appeal filed by former Conmebol president Nicolás Leoz, 90, seeking to annul his extradition proceedings to the United States, which is seeking him over his alleged involvement in a FIFA corruption case. However, Leoz still has several remedies left to avoid extradition, among them a constitutional challenge that seeks to "annul the decis... (2019-03-13, 00 min)