Blog: Machine Learning Equations by Saurabh Verma

Blog: Stability of Learning Algorithms Part 3.


Extending to Ranking Algorithms

Let $S = \{z_1 = (x_1, y_1), \ldots, z_m = (x_m, y_m)\}$ be a sample such that $y_i \in \mathbb{R}$ represents the ranking score of $x_i$. If $y_i > y_j$, then $x_i$ is ranked higher than $x_j$. Let $A_S$ be such a ranking score learning function, learned by algorithm $A$ from the sample $S$.

We define a loss function on a pairwise sample and compute a real-valued loss, i.e., $\ell(A_S, z, z') \in \mathbb{R}$. We also require our loss function to be symmetric in the pairwise sample, i.e., $\ell(A_S, z, z') = \ell(A_S, z', z)$, and bounded as follows: $0 \leq \ell(A_S, z, z') \leq M$, for the ranking problem.
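As a concrete illustration (my own example, not from the post), here is a minimal sketch of one loss satisfying these requirements: a hinge on the score difference of a pair, clipped so it stays in $[0, M]$. The function name `pairwise_hinge_loss` and the default $M = 1$ are assumptions for illustration only.

```python
import numpy as np

def pairwise_hinge_loss(f, z, z_prime, M=1.0):
    """Clipped hinge loss on a pair z = (x, y), z_prime = (x', y').

    Real-valued, symmetric under swapping the two examples (swapping negates
    both the label gap and the score gap, leaving the margin unchanged), and
    bounded in [0, M], as required above.
    """
    (x, y), (x_p, y_p) = z, z_prime
    margin = np.sign(y - y_p) * (f(x) - f(x_p))  # > 0 when the pair is ordered correctly
    return float(np.clip(1.0 - margin, 0.0, M))

# Example: a fixed scoring function f(x) = 2x on two scalar examples.
f = lambda x: 2.0 * x
print(pairwise_hinge_loss(f, (0.5, 1.0), (0.1, 0.0)))  # correctly ordered pair -> small loss
```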

We define the generalization error or risk as follows, where the expectation is over two independent examples drawn from the underlying distribution:

$$ R(A_S) = \mathbb{E}_{z, z'}\Big[ \ell(A_S, z, z') \Big] $$

Empirical error is defined as the average loss over all distinct pairs of the sample:

$$ R_{emp}(A_S) = \frac{1}{m(m-1)} \sum_{i=1}^{m} \sum_{j \neq i} \ell(A_S, z_i, z_j) $$
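A direct $O(m^2)$ sketch of this empirical risk, assuming the $\frac{1}{m(m-1)}$ normalization over distinct ordered pairs used above; `loss` can be, e.g., the `pairwise_hinge_loss` sketched earlier.

```python
def empirical_pairwise_risk(f, S, loss, M=1.0):
    """R_emp(A_S): average pairwise loss over all distinct ordered pairs of S."""
    m = len(S)
    total = sum(loss(f, S[i], S[j], M)
                for i in range(m) for j in range(m) if i != j)
    return total / (m * (m - 1))
```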

Let’s assume that our algorithm $A$ has uniform stability $\beta$, i.e. it satisfies, for all samples $S$, all $i \in \{1, \ldots, m\}$, and all pairs $(z, z')$,

$$ \Big| \ell(A_S, z, z') - \ell(A_{S^{\setminus i}}, z, z') \Big| \leq \beta, $$

where $S^{\setminus i}$ denotes the sample $S$ with its $i$-th example removed.

Note: The definition here involves the pairwise loss $\ell(A_S, z, z')$ rather than the pointwise loss $\ell(A_S, z)$, which is slightly different from the standard uniform stability definition.
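To make the definition tangible, here is a small sketch (my own, not from the post) that empirically probes this quantity: it trains a simple scorer on $S$ and on $S$ with one point left out, and records the largest change in pairwise loss over some probe pairs. The ridge-regression scorer is a hypothetical choice, and checking finitely many pairs only gives a sanity-check lower estimate of $\beta$, not the formal supremum.

```python
import numpy as np

def ridge_scorer(S, lam=1.0):
    """Hypothetical learning rule for illustration: ridge regression of the
    ranking scores y on the features x."""
    X = np.array([np.atleast_1d(x) for x, _ in S], dtype=float)
    y = np.array([y for _, y in S], dtype=float)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return lambda x: float(np.atleast_1d(x) @ w)

def stability_probe(S, probe_pairs, loss, lam=1.0, M=1.0):
    """Largest observed change in pairwise loss between the scorer trained on S
    and the scorer trained on S with point i removed, over the probe pairs."""
    f_full = ridge_scorer(S, lam)
    worst = 0.0
    for i in range(len(S)):
        f_loo = ridge_scorer(S[:i] + S[i + 1:], lam)  # leave the i-th example out
        for z, z_prime in probe_pairs:
            gap = abs(loss(f_full, z, z_prime, M) - loss(f_loo, z, z_prime, M))
            worst = max(worst, gap)
    return worst
```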

We now apply McDiarmid’s inequality to derive a generalization bound for uniformly stable ranking algorithms.
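For reference, the bounded-differences (McDiarmid’s) inequality states that if $\sup_{S,\, z_i'} \big| F(S) - F(S^{i}) \big| \leq c_i$ for each $i$, where $S^{i}$ denotes $S$ with its $i$-th example replaced by an independent copy $z_i'$, then

$$ \mathbf{P}\Big( F(S) - \mathbb{E}_S\big[F(S)\big] \geq \epsilon \Big) \leq e^{-\frac{2\epsilon^2}{\sum_{i=1}^{m} c_i^2}}. $$

Taking $F(S) = R(A_S) - R_{emp}(A_S)$, using $\mathbb{E}_S[F(S)] \leq 2\beta$ from the stability argument, and taking each bounded difference to be $c_i = 2\beta + \frac{2M}{m}$ (the constant appearing in the exponent below) gives: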

$$ \mathbf{P}\Bigg( \Big(R(A_S)-R_{emp}(A_S)\Big) - 2\beta \geq \epsilon \Bigg) \leq \underbrace{e^{-\frac{2\epsilon^2}{m(2\beta+\frac{2M}{m})^2}}}_{\delta} $$ $$ \implies \mathbf{P}\Bigg( \Big(R(A_S)-R_{emp}(A_S)\Big) \leq 2\beta + (m\beta+M)\sqrt{\frac{2\log \frac{1}{\delta}}{m}} \Bigg) \geq 1-\delta $$
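To get a feel for the result, here is a small sketch (not from the post) that plugs illustrative numbers into the final bound; the values of $\beta$, $M$, $m$, and $\delta$ are arbitrary.

```python
import math

def ranking_stability_bound(beta, M, m, delta):
    """High-probability bound on R(A_S) - R_emp(A_S) from the display above:
    2*beta + (m*beta + M) * sqrt(2 * log(1/delta) / m)."""
    return 2 * beta + (m * beta + M) * math.sqrt(2 * math.log(1 / delta) / m)

# Illustrative (arbitrary) numbers; the bound only shrinks as m grows when beta
# decays roughly like O(1/m), so that (m*beta + M) / sqrt(m) -> 0.
print(ranking_stability_bound(beta=1e-4, M=1.0, m=10_000, delta=0.01))
```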