Anand Tamboli, author of the upcoming book “Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating and Quantifying Risks”, participates in Risk Roundup to discuss Responsible AI.
Responsible AI
AI technologies bring transformative power to nations: their governments, industries, organizations, and academia (NGIOA). The development of AI is creating new opportunities for everyone, including individuals. As AI technologies become more pervasive, get deeply embedded in products and services, and take responsibility for a growing number of decision-making processes, such as benefit payments, mortgage approvals, parole grants, college admissions, job-interview screening, and medical diagnosis, they become less visible and transparent.
Since algorithms are not readily inspectable, one of the real risks with AI is that existing human biases are amplified and reinforced in AI decision-making processes. Whether those biases are intended or unintended, AI-based decisions should be understandable and auditable by those they affect, and should adhere to existing rules and regulations.
While emerging technologies like AI aim to improve lives around the world, they also raise questions about the best way to build fairness, interpretability, responsibility, accountability, privacy, and security into these systems. These issues are far from solved and are, in fact, at the forefront of AI adoption across nations.
Even though AI is quickly becoming a new tool for transformation, it has also become clear that deploying AI requires careful management and governance to prevent unintentional damage to individuals and society as a whole. Justifiably, trust in AI systems will be crucial as we move toward using them broadly for decision-making. Perhaps coding responsibility, accountability, and explainability into algorithms will be our only solution.
Understanding Responsible AI
Responsible AI is about building trust in AI solutions. It currently focuses on ensuring the ethical, transparent, and accountable use of AI technologies in a manner consistent with user expectations, organizational values, and societal laws and norms. The question is whether this is enough, and whether it is effective.
The goal is for Responsible AI to guard against the use of biased data or algorithms, ensure that automated decisions are justified and explainable, and help maintain user trust, individual privacy, and security. While there is broad hope that clear rules of engagement can make this possible, unless those rules are actually coded, responsibility, accountability, and explainability will likely remain out of reach. Responsible AI that lives in the code will allow organizations to innovate and realize the transformative potential of AI in a way that is both compelling and accountable, and will make it easier for explainable AI to deliver on that accountability.
While at the moment Responsible AI is mostly about creating governance frameworks to evaluate, deploy, and monitor AI, it also requires architecting responsibility into the code and implementing coded solutions that put humans at the center. By using design-led thinking, organizations at all levels can examine core ethical questions in context, right in the code, evaluate the adequacy of their policies and programs, and create a set of value-driven requirements to govern AI solutions. That brings us to an important question: how should automated decision systems be governed? Simply by governance frameworks, or by using code as the constitution for embedded responsibility, accountability, and explainability?
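What "responsibility and explainability in the code" might look like can be illustrated with a minimal sketch. The rule, names, and threshold below are purely hypothetical, not a real scoring model; the point is only that each automated decision can carry its own human-readable reasons, making it auditable after the fact:

```python
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Decision:
    """An automated decision bundled with its own audit trail."""
    outcome: str
    reasons: list[str]  # human-readable factors behind the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def approve_loan(income: float, debt: float, threshold: float = 0.4) -> Decision:
    """Toy rule: approve when the debt-to-income ratio is below a threshold."""
    ratio = debt / income
    outcome = "approved" if ratio < threshold else "declined"
    # Record why the outcome was reached, so it is explainable and auditable.
    reasons = [
        f"debt-to-income ratio = {ratio:.2f}",
        f"approval threshold = {threshold}",
    ]
    return Decision(outcome=outcome, reasons=reasons)


# Each decision can be serialized for regulators, auditors, or the affected person.
decision = approve_loan(income=80_000, debt=24_000)
print(json.dumps({"outcome": decision.outcome, "reasons": decision.reasons}))
```

In a sketch like this, explainability is not a separate report generated later; it is a property of the decision object itself, which is closer to the "code as constitution" idea than an external governance checklist.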
In the coming years, responsible AI will need to be a critical component of algorithms as well as of an organizational change model that focuses on rapid learning and adaptation. It is time to define a framework for how responsible AI can be embedded in the code, with security checkpoints assigned to create checks and balances for this process. By integrating responsible AI into both the system and the organizational approach to change, it is possible to ensure that the critical element of trust is cultivated and maintained among key human stakeholders, the most important of which are employees, customers, citizens, and consumers.
The time is now to define a practical Responsible AI framework, one that will further strengthen explainable AI and accountability.
For more, please watch the Risk Roundup Webcast or listen to the Risk Roundup Podcast.
About the Guest
Anand Tamboli is the author of the upcoming book “Keeping Your AI Under Control: A Pragmatic Guide to Identifying, Evaluating and Quantifying Risks”.
About the Host of Risk Roundup
Jayshree Pandya (née Bhatt), the founder and chief executive officer of Risk Group LLC (www.riskgroupllc.com), is working passionately to define a new security-centric operating system for humanity. Her efforts toward building a strategic security risk analytics platform aim to equip the global strategic security community with the tools and culture to collectively imagine the strategic security risks to our future, and to define and design a new security-centric operating system for the future of humanity.
About Risk Roundup
Risk Roundup, a global initiative launched by Risk Group, is a security risk report covering risks emerging from existing and emerging technologies, technology convergence, and the transformation happening across cyberspace, aquaspace, geospace, and space. Risk Roundup is released in both audio (Podcast) and video (Webcast) formats and is available for subscription at the Risk Group website, iTunes, Google Play, Stitcher Radio, Android, and Risk Group professional social media.
About Risk Group
Risk Group LLC is a leading strategic security risk analytics platform.
Copyright Risk Group LLC. All Rights Reserved.