[1703.04096] Improving Interpretability of Deep Neural Networks with Semantic Information 
Abstract: Interpretability of deep neural networks (DNNs) is essential since it enables users to understand the overall strengths and weaknesses of the models, anticipate how the models will behave in the future, and diagnose and correct potential problems. However, it is challenging to reason about what a DNN actually does due to its opaque or black-box nature.