The Only Way to make Deep Learning Interpretable is to Have it Explain Itself 
One of the great biases that machine learning practitioners and statisticians have is that our models and explanations of the world should be parsimonious. We've all bought into Occam's Razor: among competing hypotheses, the one with the fewest assumptions should be selected. However, does that mean that our machine learning models need to be sparse?
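To make the notion of sparsity concrete, here is a minimal sketch (my own illustration, not from the article) using scikit-learn's Lasso: L1 regularization drives most coefficients to exactly zero, which is the usual sense in which a model is called "sparse" or parsimonious.

```python
# Minimal sketch: a "sparse" model via L1 regularization (the Lasso).
# Most coefficients are driven to exactly zero, leaving a parsimonious
# model that explains the response with only a handful of features.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# Synthetic data: 100 candidate features, but only 5 actually matter.
X, y = make_regression(n_samples=200, n_features=100, n_informative=5,
                       noise=1.0, random_state=0)

model = Lasso(alpha=1.0).fit(X, y)

# Count how many coefficients survived -- the rest are exactly zero.
kept = np.sum(model.coef_ != 0)
print(f"{kept} of {model.coef_.size} coefficients are non-zero")
```

Whether a deep network should be forced into this kind of sparsity, rather than asked to explain its own predictions, is exactly the question the article raises.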