# ADABOOST

## AIMA3e
__function__ ADABOOST(_examples_, _L_, _K_) __returns__ a weighted\-majority hypothesis
 __inputs__: _examples_, set of _N_ labeled examples (_x<sub>1</sub>_, _y<sub>1</sub>_),…,(_x<sub>N</sub>_,_y<sub>N</sub>_)
    _L_, a learning algorithm
    _K_, the number of hypotheses in the ensemble
 __local variables__: __w__, a vector of _N_ example weights, initially 1 ⁄ _N_
        __h__, a vector of _K_ hypotheses
        __z__, a vector of _K_ hypothesis weights

 __for__ _k_ = 1 __to__ _K_ __do__
   __h__\[_k_\] ← _L_(_examples_, __w__)
   _error_ ← 0
   __for__ _j_ = 1 __to__ _N_ __do__
     __if__ __h__\[_k_\](_x<sub>j</sub>_) ≠ _y<sub>j</sub>_ __then__ _error_ ← _error_ + __w__\[_j_\]
   __for__ _j_ = 1 __to__ _N_ __do__
     __if__ __h__\[_k_\](_x<sub>j</sub>_) = _y<sub>j</sub>_ __then__ __w__\[_j_\] ← __w__\[_j_\] · _error_ ⁄ (1 − _error_)
   __w__ ← NORMALIZE(__w__)
   __z__\[_k_\] ← log((1 − _error_) ⁄ _error_)
 __return__ WEIGHTED\-MAJORITY(__h__, __z__)

---
__Figure ??__ The ADABOOST variant of the boosting method for ensemble learning. The algorithm generates hypotheses by successively reweighting the training examples. The function WEIGHTED\-MAJORITY generates a hypothesis that returns the output value with the highest vote from the hypotheses in __h__, with votes weighted by __z__.
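The pseudocode above can be sketched in Python. The {−1, +1} label encoding, the epsilon clamp on _error_ (which guards against division by zero when a hypothesis is perfect or useless), and the 1-D decision-stump learner `stump_learner` are illustrative choices not in the original pseudocode.

```python
import math

def adaboost(examples, L, K):
    """ADABOOST: examples is a list of (x, y) pairs with y in {-1, +1}
    (an illustrative encoding), L is a learner taking (examples, weights),
    K is the number of hypotheses in the ensemble."""
    N = len(examples)
    w = [1.0 / N] * N                        # example weights, initially 1/N
    h, z = [], []                            # hypotheses and their weights
    for _ in range(K):
        hk = L(examples, w)
        error = sum(wj for wj, (x, y) in zip(w, examples) if hk(x) != y)
        # clamp to avoid dividing by zero or taking log(0); not in the pseudocode
        error = min(max(error, 1e-10), 1 - 1e-10)
        for j, (x, y) in enumerate(examples):
            if hk(x) == y:                   # down-weight correct examples
                w[j] *= error / (1 - error)
        total = sum(w)                       # NORMALIZE(w)
        w = [wj / total for wj in w]
        h.append(hk)
        z.append(math.log((1 - error) / error))
    def weighted_majority(x):                # WEIGHTED-MAJORITY(h, z)
        return 1 if sum(zk * hk(x) for zk, hk in zip(z, h)) >= 0 else -1
    return weighted_majority

def stump_learner(examples, w):
    """A minimal weighted learner for 1-D inputs: picks the threshold stump
    (threshold t, sign s) with the lowest weighted training error."""
    best, best_err = None, float("inf")
    for t in sorted({x for x, _ in examples}):
        for sign in (1, -1):
            err = sum(wj for wj, (x, y) in zip(w, examples)
                      if (sign if x <= t else -sign) != y)
            if err < best_err:
                best_err, best = err, (t, sign)
    t, sign = best
    return lambda x, t=t, sign=sign: sign if x <= t else -sign

examples = [(1, 1), (2, 1), (3, -1), (4, -1)]
classify = adaboost(examples, stump_learner, K=3)
```

Here `classify` returns the ensemble's weighted-majority vote, e.g. `classify(2)` votes +1 and `classify(4)` votes −1 on this toy training set.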