Showing episodes and shows of Miriam Vogel
Shows
In AI We Trust?
AI Literacy Series Ep. 7: Dr. Andrew Ng on Scaling AI for Social Good
In this episode of In AI We Trust?, Dr. Andrew Ng joins co-hosts Miriam Vogel and Rosalind Wiseman to discuss AI literacy and the need for widespread AI understanding. Dr. Ng makes a call to action that everyone should learn to code, especially with AI-assisted coding becoming more accessible. The episode also addresses AI fears and misconceptions, highlighting the importance of learning about AI for increased productivity and potential career growth. The conversation further explores AI's potential for positive large-scale social impact, such as in climate modeling, and the challenges of conveying this potential amidst widespread fears across the...
2025-04-23
53 min
In AI We Trust?
AI Literacy Series: Bridging the Gap Between Technology and Communities with Susan Gonzalez
This episode of In AI We Trust? features co-hosts Miriam Vogel and Rosalind Wiseman continuing their AI literacy series with Susan Gonzalez, CEO of AI&You. The discussion centers on the critical need for basic AI literacy within marginalized communities to create opportunities and prevent an "AI divide." Susan emphasizes overcoming fear, building foundational AI knowledge, and understanding AI's impact on jobs and small businesses. She stresses the urgency of AI education and AI&You's role in providing accessible resources. The episode highlights the importance of dialogue and strategic partnerships to advance AI literacy, ensuring that everyone can benefit f...
2025-04-08
51 min
In AI We Trust?
AI Literacy Series Ep. 6: Bridging the Gap Between Technology and Communities with Susan Gonzalez
This episode of In AI We Trust? features co-hosts Miriam Vogel and Rosalind Wiseman continuing their AI literacy series with Susan Gonzalez, CEO of AI&You. The discussion centers on the critical need for basic AI literacy within marginalized communities to create opportunities and prevent an "AI divide." Susan emphasizes overcoming fear, building foundational AI knowledge, and understanding AI's impact on jobs and small businesses. She stresses the urgency of AI education and AI&You's role in providing accessible resources. The episode highlights the importance of dialogue and strategic partnerships to advance AI literacy, ensuring that everyone can benefit f...
2025-04-07
51 min
In AI We Trust?
AI Literacy Series Ep. 5 with Judy Spitz: Fixing the Tech Talent Pipeline
In this episode of In AI We Trust?, co-hosts Miriam Vogel and Rosalind Wiseman speak with Dr. Judith Spitz, Founder and Executive Director of Break Through Tech, who sheds light on the blind spots within the industry and discusses how Break Through Tech is pioneering innovative programs to open doors to talented individuals. She gets more young people from a broad array of backgrounds to study technology disciplines, ensures they learn leadership and other skills critical to their success, and gets them into industry; in doing so, she is single-handedly building a more robust and prepared tech ecosystem.
2025-03-25
1h 15
In AI We Trust?
AI Literacy Series Ep. 4: Mason Grimshaw on AI Literacy and Data Sovereignty for Indigenous Communities
Mason Grimshaw: the Power of Identity, Community, and Representation. Co-hosts of EqualAI’s AI Literacy Series, Miriam Vogel and Rosalind Wiseman are joined by Mason Grimshaw, data scientist at Ode Partners and VP at IndigiGenius. Grimshaw discusses his roots growing up on a reservation and what led him to the field of AI. He explains why it’s his mission to bring AI education and tools back to his community. Grimshaw articulates how AI literacy is essential for Indigenous communities to ensure they retain data sovereignty and benefit from these game-changing tools. Literacy Series Desc...
2025-03-11
56 min
In AI We Trust?
AI Literacy Series Ep. 3: danah boyd on Thinking Critically about the Systems That Shape Us
Co-hosts of EqualAI’s AI Literacy Series, Miriam Vogel and Rosalind Wiseman sit down with danah boyd, Partner Researcher at Microsoft Research, visiting distinguished professor at Georgetown, and founder of the Data & Society Research Institute, to explore how AI is reshaping education, social structures, and power dynamics. boyd challenges common assumptions about AI, urging us to move beyond simplistic narratives of good vs. bad and instead ask: Who is designing these systems? What are their limitations? And what kind of future are we building with them? Literacy Series Description: The EqualAI AI Literacy podcast series builds on...
2025-02-27
1h 15
In AI We Trust?
AI Literacy Series Ep. 2 with Dewey Murdick (CSET): Centering People in AI’s Progress
In this episode of EqualAI’s AI Literacy Series, co-hosts Miriam Vogel and Rosalind Wiseman sit down with AI policy expert Dewey Murdick, Executive Director at Georgetown's Center for Security and Emerging Technology (CSET), who shares his hopes for AI’s role in personal development and other key areas of society. From national security to education, Murdick unpacks the policies and international collaboration needed to ensure AI serves humanity first. Literacy Series Description: The EqualAI AI Literacy podcast series builds on In AI We Trust?’s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the...
2025-02-11
40 min
In AI We Trust?
AI Literacy Series Ep. 1: What is AI and Why Are We Afraid of It?
Miriam Vogel and Rosalind Wiseman break down the basics, the limitations, the power, and the fear surrounding AI – and how you can transform it from a concept to a tool in the first episode of the In AI We Trust? AI Literacy series. The EqualAI AI Literacy podcast series builds on In AI We Trust?’s global reach, focusing specifically on AI literacy. Featuring prominent leaders in the technology, education, and governance fields, this special series will provide listeners with valuable insights and discussions around AI’s impact on society, who is leading in this area of AI lit...
2025-01-29
25 min
In AI We Trust?
Vilas Dhar (McGovern Foundation): AI for the people and by the people: Year-in-Review and 2025 Predictions
In this 2024 year-end episode of In AI We Trust?, Vilas Dhar of the Patrick J. McGovern Foundation and Miriam Vogel of EqualAI review 2024 and discuss predictions for the year ahead.
2024-12-20
36 min
In AI We Trust?
Michael Chertoff (Chertoff Group) and Miriam Vogel (EqualAI): Is your AI use violating the law?
In this special edition of #InAIWeTrust?, EqualAI President and CEO Miriam Vogel and former Secretary of Homeland Security Michael Chertoff sit down to discuss their recent co-authored paper, Is Your Use of AI Violating the Law? An Overview of the Current Legal Landscape. Special guest Victoria Espinel, CEO of BSA | The Software Alliance, moderates the conversation with the co-authors to explore key findings, current laws on the books, and potential liabilities from AI deployment and use that lawyers, executives, judges, and policy makers need to understand in our increasingly AI-driven world. The article can b...
2024-10-17
28 min
Pondering AI
Policy and Practice with Miriam Vogel
Miriam Vogel disputes the notion that AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, boosts literacy and diversity, and remains net positive on AI. Miriam Vogel traverses her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam demonstrates the need for both standardized and context-specific guidance. Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset ma...
2024-06-05
33 min
The Matrix AI Talk Radio from inteligenesis.com
Exclusive Interview with Miriam Vogel on Ethical AI and Inclusive Practices
It's a critical juncture in the evolution of artificial intelligence (AI). Miriam Vogel, a beacon in the campaign for equality & fair representation in this rapidly advancing domain, draws our attention to a pressing matter. The need for RESPONSIBLE AI is not just a minor concern; it's an imperative that shapes the future. The technological landscape is shifting. Artificial intelligence finds its way into our daily lives at an unprecedented rate. In these times, the ethos that drives its development is under close scrutiny. ...
2024-05-28
01 min
Raising Health
Bridging AI, Ethics, and Consumer Trust with Miriam Vogel
Miriam Vogel, President and CEO of EqualAI, co-host of the podcast In AI We Trust?, and Chair of the National AI Advisory Committee, joins Vijay Pande of Bio + Health. Miriam offers pragmatic insights for founders on ethical integration of AI. She also outlines concrete steps to build trustworthy AI. Finally, she discusses the regulatory landscape and the state of politics around AI today.
2023-11-21
27 min
In AI We Trust?
Xuning (Mike) Tang (Verizon), Diya Wynn (AWS), and Catherine Goetz (LivePerson): How can companies responsibly integrate AI into their businesses? (Part 1)
In this week’s special episode of In AI We Trust?, we interview three of our EqualAI Badge Program alumni—Xuning (Mike) Tang (Verizon), Diya Wynn (AWS), and Catherine Goetz (LivePerson)—to discuss their journeys in the responsible AI field, share their highlights from the EqualAI Badge Program and AI Summit, and underscore the main takeaways from our co-authored white paper: An Insider’s Guide to Designing and Operationalizing a Responsible AI Governance Framework.
2023-10-03
38 min
In AI We Trust?
Rep. Ted Lieu (D-CA): Can Congress regulate AI?
Representative Ted Lieu (D-CA) joins this week’s episode of In AI We Trust? to discuss how Congress should approach AI legislation, the impact of generative AI, and U.S. AI efforts on the global stage. Tune in to learn more about Representative Lieu’s computer science focused approach to AI policy and more.
2023-09-27
28 min
Changemakers
Equal AI with Miriam Vogel
Artificial Intelligence is transforming our lives and society by unlocking powerful tools that make living and working more efficient. But from lending algorithms to facial recognition, hidden bias in AI can leave women, people of color and other groups out of the conversation. EqualAI is on a mission to reduce the infusion of unconscious bias in Artificial Intelligence, and its CEO Miriam Vogel joins hosts will.i.am and Omar Abbosh to discuss the inclusive future of AI.
2023-09-12
30 min
In AI We Trust?
Secretary Michael Chertoff and Lucy Thomson (ABA): How is AI reshaping the legal landscape?
Tune into this week’s episode of In AI We Trust? with former United States Secretary of Homeland Security, Michael Chertoff, and Lucy Thomson (American Bar Association), to learn the ways in which AI is changing the legal landscape, how the ABA is tackling this issue (spoiler alert: we applaud the launch of the new AI TF), and to learn the Secretary’s “Three D's” of AI governance.
2023-08-30
50 min
Washington AI Network with Tammy Haddad
The Washington AI Trailblazer: Miriam Vogel, President and CEO of EqualAI and Chair of the National AI Advisory Committee (NAIAC)
Miriam Vogel, the President and CEO of EqualAI and Chair of NAIAC, speaking in her personal capacity, highlights the critical crossroads between Congress, federal agencies and the tech industry on AI policy. Government policymakers, particularly at the FTC and CFPB, are already evaluating the effect of artificial intelligence, while agencies like the EEOC are investigating the potential impact on civil rights laws like the Americans with Disabilities Act. Recognizing they’ve missed opportunities in the past, and armed with a greater depth of understanding of AI, Congress is rising to the occasion on a bipartisan basis in terms of...
2023-08-21
36 min
Ladybits and Leadership
40. Unpacking The Barbie Movie with Our Favorite Jewish Barbie Miriam Diana-Conscious Dating Coach and Reproductive Justice Advocate
Miriam Diana is a conscious-dating coach who aims to help every person with an adventurous spirit and a sensitive soul have fun and hold their heart with care while navigating the world of sex. She is here to support you in living your best fucking life and harness your horniness in your journey of connecting with others. She studied for three years at the Tantric Institute for Integrated Sexuality which draws from both modern psychology and ancient healing traditions. Today she joins us on Ladybits and Leadership to unpack the Barbie movie. In this episode...
2023-08-04
1h 15
In AI We Trust?
Sarah Hammer (Wharton School) and Dr. Philipp Hacker (European University Viadrina): Can AI accelerate the UN Sustainable Development Goals (SDGs)?
Professor Sarah Hammer, Executive Director at the Wharton School of the University of Pennsylvania, where she leads the Wharton Cypher Accelerator, and Dr. Philipp Hacker, Chair for Law and Ethics of the Digital Society at the European New School of Digital Studies at European University Viadrina, join this week on In AI We Trust? to debrief their recent #AIforGood Conference. Listen to the discussion for insights on how financial regulation, sustainability in AI, content moderation, and other opportunities for international collaboration around AI will help advance the UN SDGs. — Resources Mentioned This Episode: AI for Good Global Su...
2023-07-26
59 min
In AI We Trust?
Chair Charlotte Burrows (EEOC): Is your AI system violating civil rights laws?
In this week’s episode, we are joined by Chair of the U.S. Equal Employment Opportunity Commission (EEOC) Charlotte Burrows, who highlights the EEOC’s work to address AI proliferation in the employment sphere. She discusses the need to increase education of the public on how AI is being used, EEOC guidance on key civil rights laws such as the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act of 1964 (Title VII), as well as key points employers should be aware of when deploying AI.
2023-07-12
45 min
In AI We Trust?
Kevin McKee (DeepMind): How does AI influence the core of being human?
Tune in to this week’s episode of In AI We Trust?, where Kevin McKee, Senior Research Scientist at Google DeepMind, discusses issues of AI fairness, AI’s impact on the LGBT+ community, and the balance between developing AI that humans can trust and the anthropomorphization of technology. Kevin leads research projects focused on machine learning, social psychology, and sociotechnical systems and has worked on algorithmic development and evaluation, environment design, and data analysis.—Resources Mentioned this Episode:Humans may be more likely to believe disinformation generated by AICountries Must Act Now Ov...
2023-07-05
35 min
In AI We Trust?
Chris Wood (LGBT Tech): How can we ensure our LGBT+ voices are heard through our data?
This week on In AI We Trust?, Executive Director of LGBT Tech, Chris Wood, joins Miriam Vogel and guest co-host Kathy Baxter for a special episode in celebration of Pride Month. Join this week’s conversation on the duality of technology for the LGBT community – how it can be an impactful medium to foster connection in the LGBT+ community or a harmful tool leveraged against the same individuals – the significance of diversity in tech, the complexity of representation in our datasets, as well as his important research and other initiatives that range from broadband access in rural communities to building an AI...
2023-06-28
45 min
In AI We Trust?
Gilman Louie (America’s Frontier Fund, CEO of In-Q-Tel, NSCAI Comm’r): How will we respond to this ‘Sputnik’ moment?
Gilman Louie is CEO and co-founder of America’s Frontier Fund, CEO of In-Q-Tel, and an NSCAI Commissioner. Tune into this week’s episode of In AI We Trust?, where Gilman shares his thoughts on the government's role in regulating, funding, and convening key stakeholders to promote responsible AI. Gilman invokes similar moments of technological innovation in our history to contextualize the opportunity in the U.S. at this moment to set the standards in the AI race, and considers the challenges that derive from our “click economy”. Hear these thoughts and more in this great episode.
2023-05-24
53 min
In AI We Trust?
Rep. Chrissy Houlahan (D-PA): How do we prepare Congress for the age of AI?
Meet one of the Bad A#%* women in Congress, Representative Chrissy Houlahan (D-PA). She is a trailblazer: a strong advocate for and accomplished practitioner in STEAM (science, technology, engineering, art and math) as an engineer, Air Force veteran, successful entrepreneur and former chemistry teacher. This week on In AI We Trust? Miriam Vogel and special guest co-host Victoria Espinel of #BSA ask Representative Houlahan to share her unique perspective on why – and how – Congress must do more to support our veterans, women, entrepreneurship and how this relates to her work in Congress on AI policy.
2023-05-03
30 min
In AI We Trust?
Dr. Haniyeh Mahmoudian (DataRobot): Who should be involved in AI ethics?
In this episode of In AI We Trust? Dr. Haniyeh Mahmoudian, Global AI Ethicist at DataRobot, provides insight into the timely and critical role of an AI ethicist. Haniyeh explains how culture is a key element of responsible AI development. She also reflects on the questions to ask in advance of designing an AI model and the importance of engaging multiple stakeholders to design AI effectively. Tune in to this episode to learn these and other insights from an industry thought leader.—Resources mentioned in this episode:How to Tackle AI Bias (Haniyeh Ma...
2023-04-26
41 min
In AI We Trust?
Justin Hotard (Hewlett Packard Enterprise): Are local communities and data the key to unlocking better AI?
Justin Hotard leads the High Performance Computing (HPC) & AI business group at Hewlett Packard Enterprise (HPE). Tune in to In AI we Trust? this week as he discusses supercomputing, HPE’s commitment to open source models for global standardization and using responsible data to ensure responsible AI. –Resources mentioned in this episode:What are supercomputers and why are they important? An expert explains (Justin Hotard & the World Economic Forum)Fueling AI for good with supercomputing (Justin Hotard & HPE)Hewlett Packard Enterprise ushers in next era in AI innovation with Swarm Learning solution built for the...
2023-04-05
40 min
In AI We Trust?
Jordan Crenshaw (U.S. Chamber of Commerce): Can your company survive without AI adoption?
Based on the testimony of 87 witnesses from 5 field hearings across the US, the U.S. Chamber of Commerce bipartisan AI Commission on Competition, Inclusion, and Innovation released a report yesterday, addressing the state of AI. Tune in this week to hear the U.S. Chamber’s Technology Engagement Center (C_TEC) VP, Jordan Crenshaw share key takeaways from this and other recent C_TEC reports, why tech issues are business issues, the importance of digitizing government data, and the critical impact of tech on small businesses. —Materials mentioned in this episode:...
2023-03-10
38 min
In AI We Trust?
Elham Tabassi and Reva Schwartz (NIST): What's the big deal about the NIST AI Risk Management Framework (AI RMF)?
Elham Tabassi and Reva Schwartz – two AI leaders from the National Institute of Standards and Technology (NIST) – join us this week to discuss the AI Risk Management Framework #AIRMF released on January 26th thanks to the herculean efforts of our guests. Tune in to find out why Miriam Vogel and Kay Firth-Butterfield believe the AI RMF will be game-changing. Learn the purpose behind the AI RMF; the emblematic 18-month, multi-stakeholder, transparent process to design it; how they made it ‘evergreen’ at a time when AI progress is moving at lightning speed; and much more.
2023-02-06
50 min
In AI We Trust?
Davos in Review: Should we hit 'pause' on generative AI?
The annual World Economic Forum (WEF) meeting at Davos gathers leading thinkers in government, business and civil society to discuss current global economic and social challenges. This week, listen to WEF Executive Committee Member, our own co-host Kay Firth-Butterfield, and Miriam Vogel discuss why this was Kay's “best Davos yet”. Not surprisingly, generative AI and ChatGPT were among the hottest topics. Learn insights gleaned on generative AI’s power and limitations, the key role that investors play in the development and deployment of responsible AI, and how AI can predict wildfires and help fight the climate crisis. Leave a 5 star rating...
2023-02-02
34 min
In AI We Trust?
Dr. Stuart Russell (UC Berkeley): Are we living in an AGI world?
Dr. Stuart Russell (CS Prof, UC Berkeley) has kept us current on AI developments for decades and in this week’s episode, prepares us for the headlines we’ll hear about this week @Davos and in the coming year. He shares his thoughts and concerns on ChatGPT, Lethal Autonomous Weapons Systems, how the future of work might look through an AI lens, and a human compatible design for AI. Listen to this episode here and subscribe to ensure you catch other important upcoming discussions.—Materials mentioned in this episode:Davos 2023, the Wo...
2023-01-18
51 min
In AI We Trust?
2022 Year in Review: Are we ready for what’s coming in AI?
In this special year-in-review edition of "In AI we Trust?", co-hosts Kay Firth-Butterfield (@KayFButterfield) and Miriam Vogel (@VogelMiriam) take a look back at the key themes and insights from their conversations. From interviews with thought leaders, government officials and senior executives in the field, we explore progress and challenges from the past year in the quest for trustworthy AI. We also look ahead to what you can expect to see and encounter, including key issues that are likely to emerge in AI in 2023. Join us as we reflect and gear up for an exciting year in the accelerated path...
2023-01-11
33 min
Faking It
The Power Of Generosity And Taking Intelligent Risks With Allie Vogel
In this episode, Miriam discusses: living in generosity, building community, and having fun; taking intelligent risks; and the power of VR and Web3 for empowerment and empathy building. Allie Vogel's Resources: Good To Great by Jim Collins. Follow Allie on Twitter: https://twitter.com/theallievogel Connect with Allie on LinkedIn: https://www.linkedin.com/in/allievogel/ CALL MIRIAM AND ASK FOR ADVICE - https://bit.ly/miriamadvice Faking It is about "fake it 'till you make it", not faking orgasms. This podcast is on a mission to empower women in all aspects of their lives, in the bedroom and out of it. Miriam covers topics ranging from entrepreneurship to financial literacy, to sexuality, and believing in yourself. Fa...
2023-01-05
46 min
In AI We Trust?
Dr. Suresh Venkatasubramanian (White House OSTP/Brown University): Can AI be as safe as our seatbelts?
In this episode, we are joined by Dr. Suresh Venkatasubramanian, a former official at the White House Office of Science and Technology Policy (OSTP) and CS professor at Brown, to discuss his work in the White House developing policy, including the AI Bill of Rights Blueprint. Suresh also posits that current AI challenges stem from a failure of imagination, and discusses the need to engage diverse voices in AI development and the evolution of safety regulations for new technologies. — Materials mentioned in this episode: Blueprint for an AI Bill of Rights (The White House)
2022-12-19
46 min
In AI We Trust?
Joaquin Quiñonero Candela (LinkedIn): Can we meet business goals AND attain responsible AI? (spoiler: we can and must)
This week, Joaquin Quiñonero Candela (LinkedIn, formerly at Facebook and Microsoft) joins us to discuss AI storytelling; ethics by design; the imperative of diversity to create effective AI; and strategies he uses to make responsible AI a priority for the engineers he manages, policy-makers he advises, and other important stakeholders.—Materials mentioned in this episode:Technology Primer: Social Media Recommendation Algorithms (Harvard Belfer Center)Finding Solutions: Choice, Control, and Content Policies; a conversation between Karen Hao and Joaquin Quiñonero Candela hosted live by the Harvard Belf...
2022-12-07
43 min
In AI We Trust?
Deputy Secretary Graves (DOC) answers the question: Can We Maintain Our AI Lead? (spoiler alert: We are AI Ready!)
The Department of Commerce plays a key role in the USG’s leadership in AI given the multiple ways AI is used, patented and governed by the Department. In this special episode, hear from Commerce Deputy Secretary Don Graves on how the US intends to maintain leadership in AI, including through its creation of standards to attain trustworthy AI, working with our allies and ensuring an inclusive and ready AI workforce. —Materials mentioned in this episode:Proposed Law Enforcement Principles on the Responsible Use of Facial Recognition Technology Released from the W...
2022-11-16
38 min
In AI We Trust?
Carl Hahn (NOC): When your AI reaches from the cosmos to the seafloor, and the universe in between, how can you ensure it is safe and trustworthy?
Carl Hahn, Vice President and Chief Compliance Officer at Northrop Grumman, one of the world’s largest military technology providers, joins us on this episode to help answer this question that he addresses daily. Carl shares his perspective on the impact of the DoD principles, how governments and companies need to align on the “how” of developing and using AI responsibly, and much more. — Materials mentioned in this episode: NAIAC Field Hearing @ NIST YouTube Page; “DOD Adopts 5 Principles of Artificial Intelligence Ethics” (Department of Defense); “Defense AI Technology: Worlds Apart...
2022-11-02
44 min
In AI We Trust?
Mark Brayan (Appen): For whom is your data performing?
In this episode, Mark Brayan focuses on a key ingredient for responsible AI: ethically sourced, inclusive data. Mark is the CEO and Managing director of Appen, which provides training data for thousands of machine learning and AI initiatives. Good quality data is imperative for responsible AI (garbage in, garbage out), and part of that equation is making sure that it is sourced inclusively, responsibly, and ethically. When developing and using responsible AI, it’s critically important to get your data right by asking the right questions; for whom is your data performing – and for whom could it fail?—...
2022-10-12
28 min
In AI We Trust?
Krishnaram Kenthapadi (Fiddler.ai): Citizen audits are coming; are you ready?
Krishnaram is the Chief Scientist of Fiddler AI, an enterprise startup building a responsible AI and Machine Learning monitoring platform. Prior to Fiddler AI, Krishnaram served as Principal Scientist at Amazon AWS AI, on the LinkedIn AI team, and on Microsoft's AI and Ethics in Engineering and Research (AETHER) Advisory Board. In this episode, Krishnaram warns that it is not enough to perform model validation; you must continue testing your system post-deployment. He also highlights incentives to test your AI early and often: even without new laws in place, empowered and tech-savvy citizens are...
2022-09-28
44 min
In AI We Trust?
Dr. Edson Prestes: Can we ingrain empathy into our AI?
Dr. Prestes is Professor of Computer Science at the Institute of Informatics, Federal University of Rio Grande do Sul, and leader of the Phi Robotics Research Group. In this episode, Dr. Prestes shares his trailblazing work in international AI policy and standards, including the development of the first global AI ethics instrument. Dr. Prestes discusses ethics in technology and the infusion of empathy, as well as his focus on establishing human rights for a digital world. — Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www...
2022-09-14
45 min
In AI We Trust?
Joe Bradley (LivePerson): How much 'rat poison' is in our AI and can AI be more "human"?
Joe Bradley is the Chief Scientist at LivePerson, a leading Conversational AI company creating digital experiences that are “Curiously Human”, powering nearly a billion conversational interactions monthly in their Conversational Cloud. In this episode, Joe shares the broad lens he brings to his work in AI. He discusses the interconnectedness between AI and humanity, and his work at LivePerson to develop “empathetic” AI systems to help brands better connect with their customers. Joe addresses his experience in the EqualAI Badge program and basic challenges in reducing bias in AI, from determining what to measure to whom to consider when evaluati...
2022-08-24
52 min
Tech'ed Up
Building Without Scaling Harm • Miriam Vogel (EqualAI)
CEO of EqualAI and White House technology advisor, Miriam Vogel, joins Niki in the studio to discuss her work to reduce algorithmic bias in the deployment of artificial intelligence. In this episode, Miriam explains why this is a critical moment to make sure we are using AI as a solution to increase fairness and build trust, and not as a scaling function for harms that already exist. “We have this critical moment here where we can take safeguards and make sure that we are using AI as a solution and not as a scaling function for ha...
2022-06-30
25 min
In AI We Trust?
Dr. Richard Benjamins (Telefonica): What are the key ingredients for a successful Responsible AI Framework?
Dr. Richard Benjamins is Chief AI & Data Strategist at Telefonica, author of The Myth of the Algorithm and A Data-Driven Company, and co-founder of OdiseIA. In this week’s episode, Richard offers his roadmap for trustworthy AI, including his company's “aspirational” approach to AI governance, their use of an ethics committee, how they use the bottom line to reinforce their goals and other best practices in designing responsible AI use.
2022-06-15
58 min
In AI We Trust?
Beena Ammanath (Deloitte): What concrete steps companies can (must) take to achieve trustworthy AI
Beena Ammanath is Executive Director of the Global Deloitte AI Institute, author of Trustworthy AI: A Business Guide For Navigating Trust and Ethics in AI, and founder of Humans for AI, a nonprofit working to increase diversity in tech. In this episode, Beena explains where organizations (and others) can begin to embed AI ethics as a part of their routine business practice, underscores the importance for policy makers and organizations alike of focusing on use cases when building frameworks, and shares other lessons on how to ensure we create more inclusive, trustworthy AI.
2022-05-27
51 min
In AI We Trust?
Dr. Margaret Mitchell: How can we ensure AI reflects our values – and why this matters to each of us?
Dr. Margaret Mitchell is a renowned researcher who has won numerous awards for her work developing practical tools to combine ethics and machine learning. Last fall, Dr. Mitchell joined the AI startup Hugging Face ("to democratize good machine learning") and previously held research positions at Google and Microsoft. In this episode, Dr. Mitchell articulates numerous challenges in the endeavor to create ethical AI. She also illuminates the distinction between ethical and responsible AI; the necessity of a human-centered, inclusive approach to AI development; and the need for policymakers to understand AI...
2022-05-10
56 min
In AI We Trust?
Rep. Don Beyer (D-VA): Can the U.S. Congress Create Legislative Frameworks to Support AI Development (and should it)?
Rep. Don Beyer (D-VA) is Chair of Congress' Joint Economic Committee, serves on the Ways and Means and the Science, Space and Technology Committees, and is a member of the AI Caucus; in his spare time, he is pursuing a master's degree in Artificial Intelligence. In this episode, Rep. Beyer explains his enthusiasm for AI and the opportunities it presents to enhance human life (e.g., better understanding and treating long COVID and preserving life in suicide prevention), the potential harms he is concerned about, as well as the ability of the US Congress...
2022-04-26
36 min
In AI We Trust?
Mira Lane (Microsoft): Can compassion lead to better AI?
Mira Lane, a polymath, technologist and artist, is the head of Ethics & Society at Microsoft, a multidisciplinary group responsible for guiding AI innovation that leads to ethical, responsible, and sustainable outcomes. In this episode, she shares how the culture at Microsoft includes compassion in AI development to the benefit of their AI products, how she changes the perception of responsible AI from a tax to a value-add and how games can play a role in achieving this goal. ----- Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our...
2022-04-14
47 min
In AI We Trust?
Dr. Athina Kanioura (PepsiCo): Is AI a Privilege Reserved for Big Tech?
Dr. Athina Kanioura is Chief Strategy and Transformation Officer at PepsiCo, leading their company-wide transformation in digital strategy. In this episode, Athina opens our eyes to ways that companies like PepsiCo are using AI (and equally important, where they are not). She shares challenges in undergoing a digital transformation and explains their legacy-focused approach to AI integration as a means for greater efficiency as well as instilling better sustainability practices, upskilling employees and supporting small business partners.
2022-04-08
47 min
In AI We Trust?
Keith Sonderling, EEOC Commissioner: Does AI scale or reduce bias in the workplace?
Keith Sonderling is a Commissioner of the U.S. Equal Employment Opportunity Commission (EEOC) and helped launch the EEOC's unprecedented Initiative on Artificial Intelligence and Algorithmic Fairness in 2021. In this episode, he shares guidance for employers on building, buying and employing AI programs in HR systems and shares his optimism on the unique opportunity we have at this moment to ensure a significant, positive impact in deploying AI technology.Subscribe to catch each new episode! Find us on Apple(https://podcasts.apple.com/us/podcast/in-ai-we-trust/id1563248151), Amazon, Spotify and all major platforms. To learn more about...
2022-03-31
51 min
In AI We Trust?
MP Darren Jones: 'Horizon Scanning' to Design Better AI Regulation
Darren Jones is a Member of the UK Parliament who has chaired the Parliamentary Technology Information and Communications Forum, the Parliamentary Commission on Technology Ethics, and Labour Digital. Darren is also the founding chair of the Institute of AI, a global coalition of legislators interested in AI, and he is a member of the World Economic Forum (WEF) Global AI Action Alliance (GAIAA). In this episode, Darren speaks to how legislators need to 'horizon scan' and understand cutting edge tech to translate it into creating more opportunities while reducing risk through laws and regulation. He argues regulation can support 'safety by design', instead...
2022-03-17
40 min
In AI We Trust?
Ziad Obermeyer: A physician, academic, McKinsey alum's approach to tackling bias in AI
Ziad Obermeyer is a Professor of Health Policy and Management at the UC Berkeley School of Public Health where he conducts research at the intersection of machine learning, medicine, and health policy. Previously, he was a professor at Harvard Medical School and consultant at McKinsey & Co. He continues to practice emergency medicine in underserved parts of the US and is also a co-founder of Nightingale Open Science, a computing platform giving researchers access to massive new health imaging datasets. In this episode, you'll hear how he ended up co-authoring the seminal study to identify bias in AI...
2022-03-09
59 min
In AI We Trust?
Jen Gennai (Google): How to Manage the Creation of Responsible AI Products for Billions
Jen Gennai is Founder and Director of the Responsible Innovation Group at Google. In her current role leading the Responsible Innovation Group, Jen and her team are responsible for creating and operationalizing Google’s AI Principles. In this episode, Jen shares what responsible AI means to her, lessons learned that inform her perspective from which we all can learn, how AI should or should not be regulated, and the AI innovations on the horizon she is excited to see come to fruition.
2022-03-04
1h 08
In AI We Trust?
Ilana Golbin (PwC): Does sci-fi help or hinder AI understanding?
Ilana Golbin is a Director in PwC Labs leading projects on Emerging Technology and AI. She is a Certified Ethical Emerging Technologist and was recently recognized in Forbes as one of 15 leaders advancing Ethical AI. In this episode, Ilana shares the principles she uses to ensure confidence in AI systems used both internally, at PwC, and when advising clients. She explains some of the complexities in the application of those principles, how responsible AI governance is part of a demonstration of cultural sensitivity and how sci-fi can be a helpful partner in the governance process. ...
2022-02-24
46 min
In AI We Trust?
Renee Cummings: How AI Does (& Should) Impact Our BHM Celebration
Renée Cummings is a pioneering AI ethicist, criminologist, Columbia University Community Scholar, and Founder of Urban AI. Her studies focus on the impact of AI on criminal justice, specifically in communities of color and incarcerated populations. In this episode, you will be inspired by Renée's insights on the impact that AI and data science have on our civil rights, how increasing diversity in AI is fundamental to creating technology that reflects our humanity, and improvements that still need to be made in areas such as trust and accountability. ----- Subscribe to catch each new episode on Apple (https://podcasts.a...
2022-02-17
52 min
In AI We Trust?
Marco Casalaina (Salesforce): Techno-Optimist not a Techno-Chauvinist
In this episode, Marco Casalaina, Salesforce’s SVP of Product Management and GM of Einstein, explains how his decades of experience in AI and tech have resulted in his techno-optimism, how an AI ethicist enhances his work and why he encourages others to join the EqualAI badge program. He also shares his excitement about rapidly developing transformer models but illuminates how this technology will be the next-gen ethical AI quandary. ----- Subscribe to catch each new episode on Apple (https://podcasts.apple.com/us/podcast/in-ai-we-trust/id1563248151), Spotify and all major platforms. To learn mo...
2022-02-11
42 min
In AI We Trust?
Mukesh Dalal, Stanley Black & Decker (SBD): Why all companies will need a Chief AI Officer
As the Chief AI Officer at Stanley Black & Decker (SBD), Mukesh Dalal has helped transform a 178-year-old global manufacturing company’s approach to AI with the vision of delivering $1 billion of value to the company through AI and Analytics. In this week’s episode, Mukesh outlines SBD’s forward-thinking strategy on AI, describes SBD’s journey into the responsible AI space, and foresees that soon all major companies will have Chief AI Officers to harness the business potential and root out risks of AI technology. Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn...
2022-02-04
32 min
In AI We Trust?
Meghna Sinha: Why data scientists are like medical professionals; why ignorance is not an option and steps we all must take when making data-based decisions
Meghna Sinha is the Vice President of AI and Data at Verizon’s AI Center. Before joining Verizon, Meghna was Target’s VP of Data Sciences. In this episode, Meghna posits that data scientists are similar to medical practitioners, affirms that AI must start and end with humans, and shares lessons from the EqualAI Badge Program for Responsible AI Governance. Referenced papers/articles can be found here: WEF Toolkit (https://www3.weforum.org/docs/WEF_Empowering_AI_Leadership_2022.pdf) @VogelMiriam & Robert Eccles article on AI as a necessary part of Board governance (https://corpgov.law.harvard.edu/2022/01/05/board-responsibility-for-artificial-intelligence-oversight/) Meghna’s articl...
2022-01-28
39 min
In AI We Trust?
In AI we Trust welcomes its acclaimed new cohost, Kay Firth-Butterfield of the World Economic Forum
Kay Firth-Butterfield is a leader in AI governance. Her deep and wide-ranging experience as an entrepreneur, barrister, judge, and now as Head of Artificial Intelligence and Machine Learning and member of the Executive Committee at the World Economic Forum, has established Kay as an internationally recognized expert on the subject. Her numerous titles and awards include being featured in the New York Times as one of 10 Women Changing the Landscape of Leadership. In this episode, Miriam Vogel interviews her co-host, Kay Firth-Butterfield on her long-time commitment to exploring how humanity can equitably benefit from new technologies.
2022-01-18
38 min
In AI We Trust?
A year in review: the path toward responsible AI in 2021 (and farewell to Mark episode)
In this episode, co-hosts Miriam Vogel and Mark Caine share the conversations and highlights that have inspired them in 2021 and predict what we can expect to see in this space in 2022. We also bid farewell to Mark as he departs the World Economic Forum and takes on new adventures.
2022-01-07
24 min
In AI We Trust?
Amy Holcroft: How HPE is "living it in action" and how the EqualAI Badge program, in collaboration with WEF, has helped this effort
Amy Holcroft is the Chief Privacy Officer and VP of Privacy & Info Governance at Hewlett Packard Enterprise (HPE). In this episode, Amy shares how she co-leads the establishment of HPE’s AI Ethics Advisory board and HPE’s AI Ethical Principles. She shares that her work requires resilience and thoughtful governance, and how participation in the EqualAI Badge Program on Responsible AI Governance, in collaboration with the World Economic Forum, supports her and her work at HPE, including a timely session with Cathy O'Neil that she put to good use immediately. ----- Subscribe to catch each new episode on Apple, Spot...
2021-12-09
30 min
In AI We Trust?
Seth Dobrin: How do you establish a human-centered approach to data and AI (and why is this necessary to succeed)?
Seth Dobrin is the Global Chief AI Officer of IBM. Seth has spent his career scaling and using existing technologies to address previously intractable problems at scale. In this episode, Seth shares concrete steps he has taken to create a more diverse and trust-based workplace, explains how his PhD in genetics is relevant and helpful to his current work in AI, and breaks down the what, why and how of a human-centered approach to AI. ----- Subscribe to catch each new episode on Apple, Spotify and all major platforms. To learn more about EqualAI, visit our website: https://www.equalai...
2021-11-11
47 min
In AI We Trust?
Kat Zhou: What is the role of design in creating inclusive and equitable AI?
Kat Zhou is a product designer focusing on integrating ethics into the design of AI systems. She is a leading voice for more inclusive and privacy-respecting approaches to AI, and she has called for greater regulation of AI and more human-centric business models for AI companies. In this episode, we ask Kat how governments, product designers, and corporate decision makers can minimize the harms of AI products – and whether there are any products that should never be developed to begin with. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Tw...
2021-11-03
34 min
In AI We Trust?
Alex Kotran: Who needs AI literacy and how can we accelerate it?
Alex Kotran is the co-founder and CEO of the AI Education Project, a non-profit that brings AI-related knowledge and skills to communities that are being impacted by AI and automation. In this episode, Alex highlights how the communities that are most impacted by AI are often the ones with the least access to basic AI knowledge, and how this is creating disparities in access to healthcare, financial services, criminal justice, and more. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-10-28
28 min
In AI We Trust?
Meg King of the Wilson Center: Who is ensuring policy makers are able to "speak AI"?
Meg King is the Director of the Science and Technology Innovation Program at the Wilson Center, a non-partisan think tank created by Congress. She leads innovative transnational projects examining the development of emerging technology and related policy opportunities. Her program also provides training seminars for Congressional and Executive branch staff to develop technology knowledge and skills. In this episode, Meg shares context on her recent congressional testimony, the goals of her work at the Wilson Center and lessons we can learn from AI frameworks and policies abroad. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/
2021-10-22
30 min
In AI We Trust?
David Hardoon: Can AI be ethical?
In this episode, we speak with David Hardoon, a self-proclaimed "data artist." He leads Data and AI efforts at UnionBank Philippines and serves as an external advisor to Singapore's Corrupt Practices Investigation Bureau (CPIB). David has extensive experience in both industry and academia with a PhD in Computer Science and B.Sc. in Computer Science and AI. He weighs in on both the high level concepts surrounding ethics and AI and offers practical steps he uses to support ethical governance with the AI systems under his purview. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/
2021-10-14
46 min
In AI We Trust?
Rep. Yvette Clarke: Why is AI regulation necessary during this time of racial reckoning?
Find out on this week's episode with special guest Congresswoman Yvette Clarke (NY-9th) why she makes AI a top priority in her work to protect vulnerable populations. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-10-06
33 min
In AI We Trust?
Elham Tabassi of NIST: Who ensures the U.S. has strong metrics, tools, & standards for responsible AI?
Observers have been skeptical about the ability of the US to lead in AI and establish the necessary framework to ensure its safe and effective development. NIST – the National Institute of Standards and Technology – is responding to that call. In this episode, we speak with Elham Tabassi, who is leading NIST's work to support safe and effective Artificial Intelligence. Elham is the Chief of Staff in the Information Technology Laboratory (ITL) and serves on the National AI Research Resource Task Force, announced by the White House and the National Science Foundation (NSF) last June. Learn what makes NIST's 'secret sauce' for impa...
2021-09-29
48 min
In AI We Trust?
Taka Ariga and Stephen Sanford: What is the U.S. GAO's AI Framework?
Taka Ariga is the first Chief Data Scientist and Director of the Innovation Lab at the U.S. Government Accountability Office (GAO). Stephen Sanford is the Managing Director in GAO’s Strategic Planning and External Liaison team. Taka and Stephen are the authors of the GAO's recently released AI Framework, one of the first resources provided by the U.S. government to help identify best practices and the principles to deploy, monitor and evaluate AI responsibly. In this episode, we ask the AI Framework authors why they took on this initiative and lessons learned that are broadly applicable across industry. ...
2021-09-23
43 min
In AI We Trust?
Vilas Dhar: How can civil society shape a positive, human-centric future for AI?
Vilas Dhar is a technologist, lawyer, and human rights advocate championing a new social compact for the digital age. As President and Trustee of the Patrick J. McGovern Foundation, he is a global leader in advancing artificial intelligence and data solutions to create a thriving, equitable, and sustainable future for all. In this episode we ask Vilas how he arrived at the intersection of AI and philanthropy, and how he thinks philanthropists and civil society can shape a more inclusive and societally beneficial future for AI. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You...
2021-09-16
40 min
In AI We Trust?
Steve Mills: How can companies walk the walk on responsible AI?
Steve Mills is a Partner at Boston Consulting Group (BCG), where he serves as Chief AI Ethics Officer and the Global Lead for Artificial Intelligence in the Public Sector. He has worked with dozens of leading companies and government agencies to improve their AI practices, and in this episode he shares some of the key lessons he has learned about how organizations can translate their ethical AI commitments into practical, meaningful actions. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-08-24
35 min
In AI We Trust?
Julia Stoyanovich: Can AI systems operate fairly within complex, diverse societies?
Julia Stoyanovich is an Assistant Professor in the Department of Computer Science and Engineering at NYU’s Tandon School of Engineering, where she is also the Director of the Center for Responsible AI. Her research focuses on responsible data management and analysis and on practical tools for operationalizing fairness, diversity, transparency, and data protection in all stages of data acquisition and processing. In addition to conducting field-leading research and teaching, Professor Stoyanovich has written several comics aimed at communicating complex AI issues to diverse audiences. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can al...
2021-08-18
42 min
In AI We Trust?
Oren Etzioni: Why is the term "machine learning" a misnomer?
Dr. Oren Etzioni is Chief Executive Officer at AI2, the Allen Institute for AI, a non-profit that offers foundational research, applied research and user-facing products. He is Professor Emeritus at University of Washington and a Venture Partner at the Madrona Venture Group. He has won numerous awards and founded several companies, has written over 100 technical papers, and provides commentary on AI for The New York Times, Wired, and Nature. In this episode, Oren explains why “machine learning” is a misnomer and some of the exciting AI innovations he is supporting that will result in greater inclusivity. ----- To learn more abou...
2021-08-10
37 min
In AI We Trust?
Alexandra Givens: What makes tech the social justice issue of our time?
Alexandra Reeve Givens is the President & CEO of the Center for Democracy and Technology (CDT). She is an advocate for using technology to increase equality, amplify voices, and promote human rights. Previously, Alexandra served as the founding Executive Director of the Institute for Technology Law & Policy at Georgetown Law, served as Chief Counsel for IP and Antitrust on the Senate Judiciary Committee and began her career as a litigator at Cravath, Swaine & Moore. In this episode, Alexandra explains her unconventional path to the tech space as a lawyer and why she believes technology is the social justice issue of our...
2021-08-05
26 min
In AI We Trust?
Navrina Singh: How AI is a multi-stakeholder problem and how do we solve for it? (Spoiler: it's all about trust.)
Navrina Singh is the Founder & CEO of Credo AI, whose mission is to empower organizations to deliver trustworthy and responsible AI through AI audit and governance products. Navrina serves on the Board of Directors of Mozilla and Stella Labs. Previously she served as the Product leader focused on AI at Microsoft where she was responsible for building and commercializing Enterprise Virtual Agents and spent 12+ years at Qualcomm. In this episode, Navrina shares several insights into responsible AI, including the 3 key elements to building trust in AI and the 4 components of the "Ethical AI flywheel." ----- To learn more about EqualAI...
2021-07-28
37 min
In AI We Trust?
Andrew Burt: How can lawyers be partners in the AI space?
Andrew Burt is a lawyer specializing in artificial intelligence, information security and data privacy. He co-founded bnh.ai and serves as chief legal officer of Immuta. His work has been profiled by magazines like Fast Company and his writing has appeared in Harvard Business Review, the New York Times and the Financial Times. In this episode, we explore the 'hype cycle' of AI where risks are overlooked and the appropriate role of a lawyer as a partner in this space. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter...
2021-07-22
38 min
In AI We Trust?
Anima Anandkumar: How can the intersection of academia and industry inform the next generation of AI?
Anima Anandkumar is an accomplished AI researcher in both academia and industry. She is the Bren Professor in the CMS department at Caltech and director of machine learning research at NVIDIA. Previously, Anima was a principal scientist at Amazon Web Services, where she enabled machine learning on the cloud infrastructure. Anima is the recipient of numerous awards and honors and has been featured in documentaries and articles by PBS, Wired, MIT Technology Review, Forbes and many others. In this episode we learn about the “trinity of the deep learning revolution,” how the next generation of AI will bring the “mind & body” together...
2021-06-30
39 min
In AI We Trust?
Vivienne Ming: How can we create AI that lifts society up rather than tearing it down?
Vivienne Ming is an internationally recognized neuroscientist and AI expert who has pushed the boundaries of AI in diverse areas including education, human resources, disability, and physical and mental health. In this episode, we ask Vivienne how we can ensure that society captures the benefits of AI technologies while mitigating their risks and avoiding harms to vulnerable populations. ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-06-24
58 min
In AI We Trust?
Heather Cox: Why did our company make a public commitment to equitable AI?
On this episode, we hear from Heather Cox, Chief Digital Health and Analytics Officer at Humana. Heather brings 25 years of experience to the role including having served as Chief Technology and Digital Officer at USAA, and CEO of Citi FinTech at Citigroup. In this episode, Heather shares why she decided Humana should take the EqualAI Pledge to Reduce Bias in AI and how they have restructured their company and partnerships to ensure their AI programs better serve their population, and adhere to the core principle to "do no harm". ----- To learn more about EqualAI, visit our website: https://www...
2021-06-15
40 min
In AI We Trust?
Commissioner Edward Santow: How can tech governance preserve human rights and achieve responsible AI?
On this episode, we are thrilled to share our conversation with Commissioner Edward Santow of the Australian Human Rights Commission. The Commission recently released the Human Rights and Technology final report, which makes 38 recommendations to ensure human rights are upheld in Australia’s laws, policies, funding and education on AI. We ask him about lessons learned in the 3 year creation of this report and which recommendations are most universally applicable. Learn more about the report here: https://tech.humanrights.gov.au/ ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Tw...
2021-06-08
40 min
In AI We Trust?
Tess Posner: How can we create a more inclusive technology industry?
Tess Posner is the CEO of AI4ALL, an organization working to make the technology industry more inclusive and to ensure that AI is developed responsibly. Before joining AI4ALL, she was Managing Director of TechHire at Opportunity@Work, a national initiative launched out of the White House to increase diversity in the tech economy. In this episode, we explore the diversity challenges facing the technology industry and the exciting efforts that AI4ALL is leading to empower diverse young people to join – and improve – one of the most powerful industries shaping society today. ----- To learn more about EqualAI, visi...
2021-06-01
32 min
In AI We Trust?
Sarah Drinkwater: What is 'Responsible AI' and why don't we have it?
Sarah Drinkwater is director of the Responsible Technology team at the Omidyar Network where she works to help technologists prevent, mitigate, and correct societal downsides of technology—and maximize positive impact. Prior to Omidyar Network, Sarah was head of Campus London, Google’s first physical startup hub. At Google, Sarah also built and led a global Google Maps community team. She also advised startups and large brands on their social strategy and was a journalist. On this episode we ask, "What is 'Responsible AI' and why don't we have it?" ----- To learn more about EqualAI, visit our website: https://www.equa...
2021-05-25
30 min
In AI We Trust?
Tim O'Brien: Is there a role for me in building ethical AI?
In this episode we speak with Tim O'Brien, who leads Ethical AI Advocacy at Microsoft. Before joining Microsoft in 2003, Tim worked as an engineer, a marketer and a consultant at startups and Fortune 500 companies. In this discussion, Tim leads us through Microsoft's journey – and his own – to become a leader in the field of AI ethics and answers the questions: What does an AI ethicist do? And is there a role for 'white guys' to play in this field? ----- To learn more about EqualAI, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-05-18
39 min
In AI We Trust?
Aneesh Chopra: How would you grade the US government's tech readiness?
Aneesh Chopra served as the first Chief Technology Officer of the United States. He is currently the president of CareJourney, a provider of clinically relevant analytics that builds a rating system of healthcare networks. He is also the co-founder of a data analytics investment group, Hunch Analytics. Aneesh sits on the Board of the Health Care Cost Institute, a non-profit focused on unbiased health care utilization and cost information. Previously, Aneesh served as Virginia’s Secretary of Technology and wrote about his experience in government and tech in his book "Innovative State: How New Technologies Can Transform Government." In this episode, An...
2021-05-11
32 min
In AI We Trust?
Ashley Casovan: There are tools to help governments and companies reduce bias in AI
In this episode, we interview our friend and colleague, Ashley Casovan, Executive Director of Responsible AI Institute, formerly AI Global, a non-profit dedicated to creating practical tools to ensure the responsible use of AI. Previously, Ashley served as Director of Data and Digital at the Government of Canada, where she led research and policy development related to data, open source, and artificial intelligence. Ashley helps us answer the pressing question in AI: should we rely on internal corporate monitoring, government regulation, third party certification, or some combination? Learn more about the Responsible AI Institute: https://www.responsible.ai/ More information...
2021-05-04
37 min
In AI We Trust?
Kush Varshney: Can we trust AI?
Dr. Kush R. Varshney is a distinguished research staff member and manager in IBM Research AI at the Thomas J. Watson Research Center, where he has conducted cutting-edge AI and machine learning research for the past ten years. Varshney also serves as co-director of IBM’s Science for Social Good program. He received both a Master of Science and a Ph.D. in Electrical Engineering and Computer Science from MIT. In addition to writing numerous articles on AI, Varshney helped develop AI Fairness 360, a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning mo...
2021-04-27
37 min
In AI We Trust?
Robert LoCascio: Why I co-founded EqualAI
Rob LoCascio is the founder of LivePerson, Inc. and has been its chief executive officer since its inception in 1995, making him one of the longest-standing founding CEOs of a tech company today. As the inventor of online chat for brands, Rob disrupted the way people communicate with companies around the world. He is a founding member of EqualAI, which works with companies, policy makers, and experts to reduce bias in AI, and the NYC Entrepreneurs Council of the Partnership for New York City. In 2001, Rob started the Dream Big Foundation with its first program, FeedingNYC. As someone who has been...
2021-04-20
32 min
In AI We Trust?
Malcolm Frank: How to advise your clients to successfully and responsibly navigate the digital age
Malcolm Frank is the president of digital business and technology at Cognizant. Malcolm’s influence is wide ranging and evident across media. He has co-authored two best-selling books, “What to Do When Machines Do Everything” (2017) and “Code Halos” (2014) and authored numerous white papers focusing on the Future of Work. A highly sought-after speaker, Malcolm has presented at conclaves across the globe, including the World Economic Forum and the South by Southwest (SXSW) Conference. He is frequently quoted, is the subject of a Harvard Business School case study and was named “one of the most influential people in finance” by Risk Management mag...
2021-04-13
37 min
In AI We Trust?
Christo Wilson: What is an algorithmic audit?
Christo is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University, a member of the Cybersecurity and Privacy Institute and the Director of the BS in Cybersecurity program in the College. He is a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University, and an affiliate member of the Center for Law, Innovation and Creativity at Northeastern University School of Law. His research investigates the sociotechnical systems that shape our lives using a multi-disciplinary approach. You can find more of his talks and cutting edge research here: https://cbw.sh/ ---- To...
2021-04-07
28 min
In AI We Trust?
Judy Spitz: Do we have a pipeline problem?
Tune in to this week’s episode of "In AI We Trust?" to hear Dr. Judith Spitz, Founder and Executive Director of Break Through Tech, and learn about the often-missed barrier she identified to getting women into tech and how her organization addresses it (hint: look in our own backyards). Dr. Spitz was previously Chief Information Officer (CIO) of Verizon, and in 2016 she devoted herself to helping women break into tech. She launched WiTNY, the Women in Technology and Entrepreneurship in New York initiative, and saw a 94 percent increase in the number of women graduating with computer science de...
2021-03-30
43 min
In AI We Trust?
Kathy Baxter: What to do before launching an ethical AI product
Kathy Baxter is the principal architect of Ethical AI Practice at Salesforce. She develops research-informed best practices to educate Salesforce employees, customers, and the industry on the development of responsible AI. Prior to Salesforce, she worked at Google, eBay, and Oracle in User Experience Research. On this episode, we ask Kathy: What are the critical steps to take, from an ethics perspective, to ensure your AI product is safe to launch? ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-03-26
28 min
In AI We Trust?
BONUS with Rep. Yvette Clarke and Roger McNamee: "In AI We Trust?" Podcast Launch Hosted by the Georgetown University Law Center
We're excited to share this bonus episode, a recording from the podcast's launch event with special guests Roger McNamee and Congresswoman Yvette Clarke. Representing New York's 9th District, Congresswoman Clarke is a committed champion of fighting bias in AI and other forms of discrimination in tech. Roger McNamee is a longtime investor in tech and author of "Zucked," which sheds light on the dangers of tech that is unfettered and insufficiently regulated. A special thanks to our friends at the Georgetown University Law Center for hosting this event. ---- To learn more, visit our website: https://www.equalai.org/ You...
2021-03-24
1h 12
In AI We Trust?
Meredith Broussard: What makes unfettered AI so dangerous (and what can we do about it)?
Meredith Broussard is a computer scientist and data journalism professor at NYU. Her book, "Artificial Unintelligence: How Computers Misunderstand the World," explains the origins of AI and the subtle and not-so-subtle ways that women and people of color were excluded from its genesis. On this episode, we ask Meredith: What makes unfettered AI so dangerous, and what can we do about it? ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-03-16
39 min
In AI We Trust?
Bob Work: America is not prepared to compete in the AI-era
You can read the NSCAI report here: https://www.nscai.gov/2021-final-report/ Bob Work served as the Deputy Secretary of Defense from 2014 to 2017 and has a long history of service in the government and military before then. He is widely known for developing the Third Offset strategy. He is currently President of TeamWork, a consulting firm that specializes in national security affairs. And even more relevant to this discussion - he is the Vice-Chair of the National Security Commission on AI. ---- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai...
2021-03-11
33 min
In AI We Trust?
Cathy O'Neil: Why should companies care about ethical AI?
In this episode, Miriam and Mark are joined by Dr. Cathy O'Neil, a mathematician, data scientist, and author. She is a matriarch of the rapidly growing and significant field of algorithmic bias. Cathy has a Ph.D. in mathematics from Harvard and has taught at MIT and Barnard. She founded and runs the algorithmic auditing company ORCAA. You should also check out her popular blog, mathbabe.org, and her Twitter, @mathbabedotorg. --- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-03-09
37 min
In AI We Trust?
Kurt Campbell: China, International Diplomacy and AI
A farewell episode with EqualAI Advisor Kurt Campbell, asking: How can we apply effective diplomacy strategies to the international governance of AI? Note: Mark officially joins the podcast next episode! --- To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-03-09
43 min
In AI We Trust?
Welcome to In AI We Trust?
The EqualAI podcast launches March 10th; subscribe wherever you get your podcasts so you don't miss out! To learn more, visit our website: https://www.equalai.org/ You can also follow us on Twitter: @ai_equal
2021-03-02
02 min
Aves Aves - Spass mit Vögel
Everything goes Vogel
News, unresolved questions, again-what-learned moments, good-to-knows, and fascinating facts about the king of the skies and the everything-goes bird. But from the beginning: we try to get to the bottom of the Red List and how to interpret it. Not five minutes into the show and the first know-it-all debates are already underway. Then straight on to the golden eagle: where does it live, how does it live, and why? What does Forsthaus Falkenau have to do with any of this? And what is actually going on with Bavaria and the boreal zone? The joke of the day is clearly the phony fifty with the eagle eyes. And the big...
2020-07-19
1h 06
Electric Ladies Podcast
Is AI Biased Against Women? Miriam Vogel, Executive Director of EqualAI
“Decades and generations of fighting for equality and against discrimination can be almost erased by one line of code,” says Miriam Vogel on Green Connections Radio. Listen closely the next time you get directions on your smartphone or order Alexa to do something. Did you notice that the voices of personal digital assistants like GPS systems, Siri, and Alexa are female? Why is that? Especially when sophisticated problem-solving technologies like IBM Watson and Salesforce's Einstein are named after men. That's what a recent UN study found. But does that mean AI is biased against women?
2019-07-12
33 min