
How to Get Started with AI You Can Trust

By Manoj Saxena

Today the public sees AI as a technical solution, but AI's biggest problems are not technical; they are design and behavioral issues. "There is nothing Artificial about AI," to quote Fei-Fei Li, a leading AI researcher at Stanford University. AI is inspired by people, it is created by people, and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility. It is therefore the responsibility of business leaders and AI designers to take a comprehensive view of AI, one that focuses on designing systems that engage users, deliver exceptional experiences, and build trust.

Trust is a complex concept, especially in the context of self-learning systems powered by machine learning. Getting this right requires reframing our approach to AI design around five pillars:

1. Data Rights: Do you have rights to the data?

Data is the fuel that powers AI. Those deploying AI must ensure that the data they use is of high quality and that users have insight into how it is being used. Visibility into where data and models live, who has access to them, and what they are being used for is essential to ensuring the system makes trustworthy decisions. In other words, people have the right to know that their data is being collected and exactly how it is being used. This has been in the headlines recently in connection with the use of data across news and social media platforms to manipulate the 2016 election, prompting changes.
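To make this concrete, here is a minimal sketch of how a training pipeline might exclude data that was not collected with consent for the intended purpose. The ConsentRecord schema and field names are hypothetical, a sketch of the idea rather than any particular standard:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Provenance metadata attached to each training record (hypothetical schema)."""
    user_id: str
    consented_purposes: frozenset  # e.g. frozenset({"model_training", "analytics"})
    source: str                    # where the data was collected

def filter_by_consent(records, purpose):
    """Keep only records whose owners consented to this specific use."""
    usable, excluded = [], []
    for record, consent in records:
        (usable if purpose in consent.consented_purposes else excluded).append(record)
    print(f"{len(excluded)} records excluded: no consent for '{purpose}'")
    return usable

# Usage: train only on data explicitly consented for model training.
# training_data = filter_by_consent(raw_records, purpose="model_training")
```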

2. Explainability: Is your AI transparent?

Can you explain how your AI generated a specific insight or decision? Current AI systems often operate as black boxes, offering little insight into how they reach their outcomes. AI systems built on responsible AI principles need to understand business stakeholders' concerns and provide business-process, algorithmic, and operational transparency to build trust.

An example of this would be in the medical field. If AI were being used to recommend a course of treatment, doctors would need to know why that treatment was recommended, at a very detailed level, before prescribing it.
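As a simple illustration, here is a minimal sketch using scikit-learn's permutation importance, one basic way to surface which inputs drive a model's predictions, a first step toward answering "why did the model recommend this?" The features and data below are synthetic placeholders, not a real clinical model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "cholesterol", "bmi"]  # hypothetical inputs
X = rng.normal(size=(500, 4))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # outcome driven by features 1 and 2

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report how much shuffling each feature degrades the model: a rough
# transparency signal a clinician-facing system could build upon.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: importance {score:.3f}")
```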

3. Fairness: Is your AI unbiased and fair?

AI systems continuously process and learn from data. Can you be sure your AI does not discriminate against any group of people? Responsible AI system design needs to ensure that the data used is representative of the real world and that the models are free of algorithmic bias; skewed data or models produce skewed decision-making, reasoning errors, and unintended consequences.

As an example, an automatic soap dispenser failed to recognize brown or black hands placed under it because it had been trained to recognize hands only from images of light skin tones. As a result, it simply did not work for those users.
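A first line of defense is simply measuring outcomes per group. Below is a minimal sketch, with illustrative toy data, that compares a model's selection rate and accuracy across demographic groups; large gaps should trigger a review of the training data's representativeness before deployment:

```python
import numpy as np

def group_report(y_true, y_pred, groups):
    """Print selection rate and accuracy for each demographic group."""
    for g in np.unique(groups):
        mask = groups == g
        selection_rate = y_pred[mask].mean()
        accuracy = (y_pred[mask] == y_true[mask]).mean()
        print(f"group={g}: selection rate={selection_rate:.2f}, accuracy={accuracy:.2f}")

# Toy data: group B receives fewer positive predictions than group A
# despite identical ground truth, the kind of disparity an audit should flag.
y_true = np.array([1, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
group_report(y_true, y_pred, groups)
```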

4. Robustness: Is your AI robust and secure?

As with other technologies, cyber-attacks can penetrate and fool AI systems. How can you make sure your AI cannot be hacked? There have been cases where a small amount of noise prevented an AI from recognizing objects it was trained to distinguish. Responsible AI systems should be able to detect adversarial data and protect against adversarial attacks, while understanding how data-quality issues impact system performance.

Examples of adversarial attacks (synthetically created inputs that appear to belong to one class but actually come from another) include fooling autonomous vehicles into misreading stop signs as speed-limit signs and bypassing facial recognition systems, such as those used at ATMs.
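To illustrate how little noise is needed, here is a minimal sketch of the fast gradient sign method (FGSM), a classic adversarial technique, using a toy PyTorch model. The model, input, and perturbation budget are placeholder assumptions; a robust system should be able to detect inputs perturbed this way:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)            # stand-in for a trained classifier
x = torch.randn(1, 10, requires_grad=True)
label = torch.tensor([0])

# Compute the gradient of the loss with respect to the *input*.
loss = F.cross_entropy(model(x), label)
loss.backward()

epsilon = 0.1                              # perturbation budget (assumed)
x_adv = x + epsilon * x.grad.sign()        # small nudge that maximizes the loss

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```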

5. Compliance: Is your AI appropriately governed?

Just as with actions taken by humans, there needs to be an audit trail to defend why a particular decision was made. Organizations must use AI in a compliant and auditable manner that operates within the boundaries of local, national, and industry regulation. Responsible AI systems take a holistic governance approach that avoids silos and provides mechanisms for implementing, governing, and controlling domain-specific policies and regulations, such as HIPAA and FINRA rules.
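As a sketch of what auditability can look like in practice, here is a minimal hash-chained decision log in Python, so that any single decision can be reconstructed and tampering is detectable. The field names and the claims-approval example are illustrative assumptions, not a prescribed standard:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only decision log; each entry chains the hash of the previous one."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, model_version, inputs, decision, rationale):
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

# Usage: every automated decision is recorded with enough context to defend it.
log = AuditLog()
log.record("claims-model-v3", {"claim_amount": 1200}, "approve",
           "amount below auto-approval threshold")
```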

Responsible AI is fundamentally about building trust and confidence. Taking these five pillars into account, AI designers and deployment teams need to ensure that their systems behave as anticipated and earn the trust of human users.

Businesses should take several steps to ensure that AI systems are designed and implemented properly. These include hiring AI ethicists to work with corporate decision-makers and software developers, forming an AI review board that regularly addresses corporate ethical questions, implementing training programs that educate employees on ethical AI considerations, and developing AI audit trails and means of remediation for when AI solutions inflict harm or damage on people or organizations.

Transforming the Human-Technology Relationship

AI can do amazing things to improve our everyday lives, if we act wisely and with vision today. In the near future, we will reach a moment when it will be impossible to course-correct, because the technology is being adopted so quickly and so widely. We have time, but we have to act now.

It starts with ensuring that your AI enables and reflects your company's ethics, values, and industry regulatory policies. Done right, AI can deliver business results and improve the human condition at a scale that will far exceed all the innovations of the past few decades combined.
