The original idea is to choose the model with the smallest J_test(w,b). But J_test is then an overly optimistic estimate, lower than the actual generalization error, because the test set itself was used to pick the model.
Instead, we divide the data set into three parts (training, cross-validation, and test) and pick the model with the smallest J_cv(w,b), keeping the test set untouched so it still gives a fair estimate of the generalization error.
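The three-way split can be sketched as follows. This is a minimal illustration with made-up 1-D quadratic data and polynomial models of varying degree; the data, split sizes, and degree range are all assumptions, not from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D data: y = x^2 plus noise (illustrative only).
x = rng.uniform(-3, 3, 300)
y = x**2 + rng.normal(0, 0.5, 300)

# Three-way split: 60% train, 20% cross-validation, 20% test.
x_train, x_cv, x_test = x[:180], x[180:240], x[240:]
y_train, y_cv, y_test = y[:180], y[180:240], y[240:]

def cost(w, x, y):
    """Mean squared error cost J(w,b) for a polynomial fit."""
    pred = np.polyval(w, x)
    return np.mean((pred - y) ** 2) / 2

# Fit polynomials of several degrees on the training set,
# then pick the degree with the smallest J_cv -- not J_test.
fits, costs_cv = {}, {}
for degree in range(1, 6):
    w = np.polyfit(x_train, y_train, degree)
    fits[degree] = w
    costs_cv[degree] = cost(w, x_cv, y_cv)

best_degree = min(costs_cv, key=costs_cv.get)
# J_test of the chosen model is the untouched estimate of
# generalization error, reported once at the very end.
j_test = cost(fits[best_degree], x_test, y_test)
print(best_degree, round(j_test, 3))
```

Because the cross-validation set does the model selection, J_test is only looked at once, after the choice is made.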
Underfitting produces high bias, while overfitting produces high variance.
Having high bias and high variance at the same time doesn't really happen for a linear model applied to one-dimensional data, but it is possible for some models to have both at once, for example by overfitting part of the input space while underfitting another part.
Determine the baseline level of performance first (for example, human-level performance): compare the training error with the baseline to judge bias, and the cross-validation error with the training error to judge variance.
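The baseline comparison can be captured in a small helper. The threshold and the error figures below are hypothetical, chosen only to illustrate the two diagnoses.

```python
def diagnose(baseline_err, train_err, cv_err, gap=0.02):
    """Rough bias/variance diagnosis against a baseline
    (e.g. human-level error). The 2% gap is illustrative."""
    high_bias = (train_err - baseline_err) > gap
    high_variance = (cv_err - train_err) > gap
    return high_bias, high_variance

# Hypothetical numbers: baseline (human) error 10.6%,
# training error 10.8%, cross-validation error 14.8%.
print(diagnose(0.106, 0.108, 0.148))  # (False, True): high variance
print(diagnose(0.106, 0.152, 0.155))  # (True, False): high bias
```

The point is that a 10.8% training error looks bad in isolation but is fine relative to a 10.6% baseline; the real problem in the first case is the train-to-cv gap.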
If you have a small neural network and switch to a much larger one, you might expect the risk of overfitting to go up significantly. But it turns out that if you regularize the larger network appropriately, it will usually do at least as well as, or better than, the smaller one, so long as the regularization strength is chosen well.
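The regularization in question adds an L2 penalty on the weights to the cost. Here is a minimal sketch for a linear model (the same penalty term is what gets added per layer in a network); the data and lambda values are assumptions for illustration.

```python
import numpy as np

def regularized_cost(w, b, X, y, lam):
    """Mean squared error plus an L2 penalty (lam/2m) * sum(w^2).
    A well-chosen lam (e.g. via cross-validation) keeps even a
    large model from overfitting; b is conventionally not penalized."""
    m = X.shape[0]
    err = X @ w + b - y
    return (err @ err) / (2 * m) + lam / (2 * m) * (w @ w)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
y = X[:, 0] * 2.0 + rng.normal(0, 0.1, 50)
w = rng.normal(size=10)

# A larger lambda penalizes the same weights more heavily.
print(regularized_cost(w, 0.0, X, y, lam=1.0)
      > regularized_cost(w, 0.0, X, y, lam=0.0))  # True
```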
Hopefully, looking through around 100 misclassified examples will give you enough statistics about the most common types of errors, and therefore about where it may be most fruitful to focus your attention. After this analysis, if you find that a lot of the errors are pharmaceutical spam emails, that might give you ideas or inspiration for what to do next.
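Tallying hand-assigned error categories is simple in practice. The tag names and counts below are hypothetical, in the spirit of the spam example; one example can carry several tags.

```python
from collections import Counter

# Hypothetical hand-assigned tags for misclassified emails.
error_tags = (
    ["pharma"] * 21 + ["deliberate_misspelling"] * 3 +
    ["unusual_routing"] * 7 + ["steal_password"] * 18 +
    ["embedded_image"] * 5
)

counts = Counter(error_tags)
# Focus attention on the most common error categories first.
for tag, n in counts.most_common(2):
    print(tag, n)
```

With counts like these, pharma spam and password-phishing emails dominate, so misspelling-specific work would be a poor use of time.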
There are many ways to augment the data set by creating new training examples out of the data we already have (data augmentation), for example by distorting images or adding noise.
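A minimal augmentation sketch for image-like arrays, assuming pixel values in [0, 1]; the flip-plus-noise recipe and parameters are illustrative (real pipelines also rotate, crop, warp, etc.).

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, n_copies=4, noise=0.05):
    """Create new training examples from one image by random
    horizontal flips and small additive noise (illustrative)."""
    out = []
    for _ in range(n_copies):
        img = image.copy()
        if rng.random() < 0.5:
            img = img[:, ::-1]                  # mirror left-right
        img = img + rng.normal(0, noise, img.shape)
        out.append(np.clip(img, 0.0, 1.0))      # keep valid pixel range
    return out

image = rng.random((8, 8))   # stand-in for a real training image
extra = augment(image)
print(len(extra), extra[0].shape)
```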
If the ratio of positive to negative examples is very skewed, very far from 50-50, then it turns out that the usual error metrics like accuracy don't work that well; precision and recall (or the F1 score that combines them) are more informative.
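Precision, recall, and F1 follow directly from confusion-matrix counts. The rare-disease numbers below are hypothetical, chosen to show why accuracy misleads on skewed data.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision = tp/(tp+fp), recall = tp/(tp+fn),
    F1 = harmonic mean of the two."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical rare-disease classifier with 0.5% positives:
# a model that always predicts "no disease" scores 99.5% accuracy
# yet has recall 0 -- precision and recall expose the failure.
p, r, f1 = precision_recall_f1(tp=15, fp=5, fn=10)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.75 0.6 0.67
```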