
Picking NRI's Brain - Yasuki Okai: Putting Your Faith in Advanced Machines

Dec 06, 2017


By Yasuki Okai, President, NRI Holdings America

 

When a computer program built on a type of neural network model known as deep learning (“DL”) defeated a Go grandmaster in March 2016, the news shocked the world. Since then, the application of DL has spread rapidly, into areas such as image pattern recognition, natural language processing, and even credit risk assessment in finance. Yet one issue that inevitably arises when leveraging DL for business is interpretability. Even if you were to analyze the inner workings of a DL model to learn why it produced a certain result, what you would find is nothing but rows of numbers indicating how strongly the cells that mimic the activity of neurons are connected to one another. DL’s computational logic does not have a simple structure that can be expressed through mathematical formulas.

This matter of interpretability is especially crucial in heavily regulated fields such as finance, where one could never get away with simply stating that the internal model says there is “about this much” risk. You have to show quantitatively how well the model fits past data, and then explain the model’s features in a way that a third party can understand.

Now, what can generally be said about a comprehensible model is that its explanatory variables cannot be too numerous, its functional forms need to be simple, and its global behavior has to be easy to predict. In fact, DL functions satisfy none of these requirements and are fundamentally hard to grasp.
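As a rough illustration of that contrast, here is a minimal sketch using scikit-learn on synthetic data; the dataset and the generic feature names x0, x1, … are illustrative assumptions, not anything from an actual credit model. A small linear model can be read off as a single formula, whereas a neural network’s parameters are only arrays of connection weights.

```python
# Minimal sketch: an interpretable model vs. a "row of numbers" model.
# Synthetic data and feature names are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)

# A simple model: few variables, simple functional form, readable as a formula.
simple = LogisticRegression().fit(X, y)
terms = " + ".join(f"{w:.2f}*x{i}" for i, w in enumerate(simple.coef_[0]))
print(f"score = {simple.intercept_[0]:.2f} + {terms}")

# A neural network: its learned parameters are just matrices of connection weights.
deep = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300,
                     random_state=0).fit(X, y)
print([w.shape for w in deep.coefs_])  # arrays of numbers, not a formula
```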

That’s why one has to devise various ways to explain DL’s results comprehensibly, such as describing them in terms of several simple models strung together, narrowing the focus to the most influential explanatory variables, examining the strength of the connections within the neural network, or even appealing to intuition with easy-to-understand illustrations. These efforts almost certainly improve DL’s usability to some extent, but they also reveal the limits of struggling to explain something that is intrinsically hard to understand.
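For concreteness, here is a minimal sketch of two of those approaches, a simple surrogate model and a ranking of the most influential variables; it is not NRI’s own method, and the dataset, model, and feature names are illustrative assumptions.

```python
# Minimal sketch: explaining a black-box model via (1) a simple global
# surrogate and (2) the most influential explanatory variables.
# Synthetic data and model choices are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The hard-to-interpret model, standing in for a DL credit model.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X_train, y_train)

# (1) Global surrogate: a shallow tree trained to mimic the network's answers.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = surrogate.score(X_test, black_box.predict(X_test))
print(f"surrogate fidelity to the network: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))

# (2) Most influential variables: permutation importance on held-out data.
imp = permutation_importance(black_box, X_test, y_test, n_repeats=10,
                             random_state=0)
for i in np.argsort(imp.importances_mean)[::-1][:3]:
    print(f"x{i}: importance {imp.importances_mean[i]:.3f}")
```

The surrogate’s “fidelity” score is exactly the kind of quantitative statement a third party can check: it says how often the simple, readable model agrees with the network, while admitting it is only an approximation.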

Whenever you assess another person, since there is no way to probe their brain directly, you run certain tests to gauge their capabilities. Surely something similar could be done for DL: you would test the DL, and if it produced certain results, the test would be considered “passed” and the DL could be used with confidence. And since it is people doing the assessing, “comprehensibility” matters. If the effectiveness of a given DL could instead be verified by linking the machine that runs it to another machine and having the two conduct high-speed Q&A, the “comprehensibility” problem would become less important. Of course, that presumes our trust can safely be placed in the connected machines…which only gives rise to new problems, such as how we come to trust a machine whose conveniences we want to enjoy, and whether the ways we place our trust in advanced machines must also change.
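A minimal sketch of that “test it like you would test a person” idea follows, under illustrative assumptions (synthetic data, arbitrary thresholds): rather than reading the model’s internals, we define pass/fail checks on its behavior and trust it only if every check passes.

```python
# Minimal sketch: behavioural "tests" of a model instead of inspecting its internals.
# Data, thresholds, and the particular checks are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=1).fit(X_train, y_train)

def test_accuracy(model, X, y, threshold=0.85):
    """Check 1: the model answers held-out questions well enough."""
    return model.score(X, y) >= threshold

def test_stability(model, X, noise=1e-3, tolerance=0.01):
    """Check 2: tiny input perturbations barely change its answers."""
    rng = np.random.default_rng(0)
    flipped = np.mean(model.predict(X) !=
                      model.predict(X + rng.normal(0, noise, X.shape)))
    return flipped <= tolerance

checks = {"accuracy": test_accuracy(model, X_test, y_test),
          "stability": test_stability(model, X_test)}
print(checks)  # the model is "trusted" only if every check passes
```

In practice such a battery of checks would be far richer, but the principle is the same: the verdict comes from the model’s answers, not from reading its weights.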
