
When — and Why — You Should Explain How Your AI Works


Summary: AI adds value by identifying patterns so complex that they can defy human understanding. That can create a problem: AI can be a black box, which often renders us unable to answer crucial questions about its operations. That matters more in some cases than others. Companies need to understand what it means for AI to be “explainable” and when it’s important to be able to explain how an AI produced its outputs. In general, companies need explainability in AI when: 1) regulation requires it, 2) it’s important for understanding how to use the tool, 3) it could improve the system, and 4) it can help determine fairness.

“With the amount of data today, we know there is no way we as human beings can process it all…The only technique we know that can harvest insight from the data, is artificial intelligence,” IBM CEO Arvind Krishna recently told the Wall Street Journal.

The insights to which Krishna is referring are patterns in the data that can help companies make predictions, whether that's the likelihood of someone defaulting on a mortgage, the probability of developing diabetes within the next two years, or the odds that a job candidate is a good fit. More specifically, AI identifies mathematical patterns found in thousands of variables and the relations among those variables. These patterns can be so complex that they defy human understanding.

This can create a problem: While we understand the variables we put into the AI (mortgage applications, medical histories, resumes) and understand the outputs (approved for the loan, has diabetes, worthy of an interview), we might not understand what's going on between the inputs and the outputs. The AI can be a "black box," which often renders us unable to answer crucial questions about the operations of the "machine": Is it making reliable predictions? Is it making those predictions on solid or justified grounds? Will we know how to fix it if it breaks? Or, more generally: Can we trust a tool whose operations we don't understand, particularly when the stakes are high?

To the minds of many, the need to answer these questions leads to the demand for explainable AI: in short, AI whose predictions we can explain.

What Makes an Explanation Good?

A good explanation should …

Read the Full Article at HBR

