From Wikipedia, the free encyclopedia

Empirical risk minimization (ERM) is a principle in statistical learning theory which defines a family of learning algorithms and is used to give theoretical bounds on the performance of learning algorithms.

Background

Consider the following situation, which is a general setting of many supervised learning problems. We have two spaces of objects X and Y and would like to learn a function h: X \to Y (often called a hypothesis) which outputs an object y \in Y, given x \in X. To do so, we have at our disposal a training set of m examples (x_1, y_1), \ldots, (x_m, y_m), where x_i \in X is an input and y_i \in Y is the corresponding response that we wish to get from h(x_i).

To put it more formally, we assume that there is a joint probability distribution P(x, y) over X and Y, and that the training set consists of m instances (x_1, y_1), \ldots, (x_m, y_m) drawn i.i.d. from P(x, y). Note that the assumption of a joint probability distribution allows us to model uncertainty in predictions (e.g. from noise in the data), because y is not a deterministic function of x, but rather a random variable with conditional distribution P(y | x) for a fixed x.
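To make this setting concrete, the following sketch draws an i.i.d. training set from a hypothetical joint distribution P(x, y): x is uniform on [0, 1], and the label is flipped with some probability, so that y is genuinely a random variable given x. The distribution, the flip probability, and the function name are illustrative assumptions, not part of the general theory.

```python
import random

def sample_training_set(m, flip_prob=0.1, seed=0):
    """Draw m i.i.d. pairs (x_i, y_i) from a hypothetical P(x, y):
    x is uniform on [0, 1], and the noiseless label 1[x > 0.5] is
    flipped with probability flip_prob, so P(y | x) is not degenerate."""
    rng = random.Random(seed)
    sample = []
    for _ in range(m):
        x = rng.random()
        y = int(x > 0.5)
        if rng.random() < flip_prob:
            y = 1 - y  # label noise: y is not a deterministic function of x
        sample.append((x, y))
    return sample
```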

We also assume that we are given a non-negative real-valued loss function L(\hat{y}, y) which measures how different the prediction \hat{y} of a hypothesis is from the true outcome y. The risk associated with a hypothesis h(x) is then defined as the expectation of the loss function:

R(h) = \mathbf{E}[L(h(x), y)] = \int L(h(x), y)\,dP(x, y).

A loss function commonly used in theory is the 0-1 loss function: L(\hat{y}, y) = I(\hat{y} \ne y), where I(\cdot) is the indicator function.
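In code, the 0-1 loss is simply an equality test; a minimal sketch (the function name is an illustrative choice):

```python
def zero_one_loss(y_hat, y):
    """0-1 loss: 1 when the prediction differs from the true label, else 0."""
    return int(y_hat != y)
```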

The ultimate goal of a learning algorithm is to find a hypothesis h^* among a fixed class of functions \mathcal{H} for which the risk R(h) is minimal:

h^* = \arg \min_{h \in \mathcal{H}} R(h).

Empirical risk minimization

In general, the risk R(h) cannot be computed because the distribution P(x,y) is unknown to the learning algorithm (this situation is referred to as agnostic learning). However, we can compute an approximation, called empirical risk, by averaging the loss function on the training set:

R_\mathrm{emp}(h) = \frac{1}{m} \sum_{i=1}^m L(h(x_i), y_i).

The empirical risk minimization principle states that the learning algorithm should choose a hypothesis \hat{h} which minimizes the empirical risk:

\hat{h} = \arg \min_{h \in \mathcal{H}} R_\mathrm{emp}(h).

Thus the learning algorithm defined by the ERM principle consists of solving the above optimization problem.
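For a finite hypothesis class, this optimization can be carried out by exhaustive search. The following is a minimal sketch, assuming a small class of threshold classifiers; the function names and the hypothesis class are illustrative assumptions, not a standard API.

```python
def empirical_risk(h, sample, loss):
    """R_emp(h): average loss of hypothesis h over the training set."""
    return sum(loss(h(x), y) for x, y in sample) / len(sample)

def erm(hypotheses, sample, loss):
    """Return a hypothesis in the (finite) class minimizing the empirical risk."""
    return min(hypotheses, key=lambda h: empirical_risk(h, sample, loss))

# Example: pick the best threshold classifier x -> 1[x > t] on a toy sample.
sample = [(0.1, 0), (0.2, 0), (0.7, 1), (0.9, 1)]
loss = lambda y_hat, y: int(y_hat != y)  # 0-1 loss
hypotheses = [lambda x, t=t: int(x > t) for t in (0.0, 0.5, 1.0)]
h_hat = erm(hypotheses, sample, loss)
```

With an infinite or continuous class \mathcal{H}, the arg min cannot be found by enumeration, and ERM becomes a (possibly hard) optimization problem in its own right.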
