Weak Aggregating Algorithm
The Weak Aggregating Algorithm is one of the exponential weights algorithms; it differs from the Strong Aggregating Algorithm in two ways.
First, it mixes losses without transferring them into exponential space, so for a finite number $K$ of experts the generalized prediction is $g(\omega)= \sum\limits_{k=1}^K \lambda(\omega,\gamma^k)w_k$, where $w_k$ are the weights of the experts, $\lambda$ is the loss function, and $\omega$ ranges over the possible events. Second, it uses a decreasing learning rate $\eta_t:=\eta / \sqrt{t+1}$ to update the weights of the experts. Clearly, one can use the same substitution function as in the Strong Aggregating Algorithm to produce a prediction (a numerical sketch is given after the bound below). The advantage of this approach is that it provides theoretical bounds for weakly mixable (bounded) loss functions, and even for unbounded loss functions. The theoretical bound on the loss of the Weak Aggregating Algorithm (Kalnishkan and Vyugin, 2008) is:
$L_T \le L_T^k + 2L \sqrt{T \ln K}$
for all $T$ and $k$, where $L_T$ is the loss suffered by the master over the first $T$ trials and $L_T^k$ is the loss suffered by the $k$-th expert over the first $T$ trials. The best value of the learning rate is $\eta=\sqrt{\ln K}/L$, where the constant $L$ is an upper bound on the loss function: $\lambda(\omega,\gamma) \le L$.
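To see where the optimal $\eta$ comes from, here is a hedged sketch: assuming the underlying regret bound decomposes into a $\ln K / \eta$ term and an $\eta L^2$ term (the form that is consistent with the stated bound and optimal $\eta$), minimizing their sum over $\eta$ recovers both the stated learning rate and the constant $2$:

$\min\limits_{\eta>0}\left(\frac{\ln K}{\eta} + \eta L^2\right)\sqrt{T} = 2L\sqrt{T \ln K}, \qquad \text{attained at } \eta = \frac{\sqrt{\ln K}}{L}.$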
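The following is a minimal round-by-round sketch, not a reference implementation: the squared loss on $[0,1]$, the constant experts, and all variable names are illustrative assumptions. The weighted mean is used as the substitution function because, for a convex loss, Jensen's inequality gives $\lambda(\omega, \sum_k w_k \gamma^k) \le \sum_k w_k \lambda(\omega,\gamma^k) = g(\omega)$, so the master's loss never exceeds the generalized prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def squared_loss(omega, gamma):
    # Squared loss on [0, 1]: convex in gamma and bounded by L = 1.
    return (omega - gamma) ** 2

T, K = 1000, 5
L = 1.0                        # assumed upper bound on the loss
eta = np.sqrt(np.log(K)) / L   # the optimal eta from the bound above

expert_preds = np.linspace(0.0, 1.0, K)  # hypothetical constant experts
cum_losses = np.zeros(K)                 # cumulative losses of the experts
master_loss = 0.0

for t in range(T):
    eta_t = eta / np.sqrt(t + 1)         # decreasing learning rate

    # Weights w_k proportional to exp(-eta_t * cumulative loss of expert k);
    # subtracting the minimum only improves numerical stability.
    w = np.exp(-eta_t * (cum_losses - cum_losses.min()))
    w /= w.sum()

    # Substitution step: for a convex loss the weighted mean of the expert
    # predictions satisfies lambda(omega, gamma) <= g(omega) by Jensen.
    gamma = float(w @ expert_preds)

    omega = rng.random()                 # outcome chosen by reality
    master_loss += squared_loss(omega, gamma)
    cum_losses += squared_loss(omega, expert_preds)

print(f"master loss:           {master_loss:.1f}")
print(f"best expert loss:      {cum_losses.min():.1f}")
print(f"bound 2L*sqrt(T ln K): {2 * L * np.sqrt(T * np.log(K)):.1f}")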
Bibliography
- Y. Kalnishkan and M. V. Vyugin. The weak aggregating algorithm and weak mixability. Journal of Computer and System Sciences, 74(8):1228-1244, 2008.