With the growing success of neural networks, there is a corresponding need to be able to explain their decisions, including building confidence about how they will behave in the real world, detecting model bias, and satisfying scientific curiosity. In order to do so, we need to both construct deep abstractions and reify (or instantiate) them in rich interfaces.
The Building Blocks of Interpretability
Added 7 months ago by Francis Tseng