The Eighty Five Percent Rule for optimal learning | Nature Communications

Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%.

Source: The Eighty Five Percent Rule for optimal learning | Nature Communications
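The result is easy to check numerically in the paper's Gaussian setting. As a minimal sketch (variable names and the scan below are my own, not the paper's code): take a decision variable d = Delta + noise with noise precision beta, so the error rate is ER = Phi(-beta * Delta), and take the expected learning speed to be proportional to the gradient-magnitude term Delta * phi(beta * Delta). Scanning over the training difficulty Delta recovers the 15.87% optimum:

```python
import numpy as np
from scipy.stats import norm

beta = 1.0                             # learner's current skill (noise precision)
delta = np.linspace(0.01, 4.0, 4000)   # candidate training difficulties

# Error rate and a gradient-magnitude proxy for learning speed,
# following the Gaussian-noise case described in the paper.
error_rate = norm.cdf(-beta * delta)
learning_speed = delta * norm.pdf(beta * delta)

best = np.argmax(learning_speed)
print(f"optimal normalized difficulty beta*Delta = {beta * delta[best]:.3f}")  # ~1.0
print(f"optimal error rate ER* = {error_rate[best]:.4f}")                      # ~0.1587
```

The maximum sits where beta * Delta = 1, giving ER* = Phi(-1) ≈ 0.1587, i.e. about 85% training accuracy, which is where the rule gets its name.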
