Transfer Learning using Decision Forests

Speaker:
Noam Segev, M.Sc. Thesis Seminar
Date:
Thursday, 19.3.2015, 14:00
Place:
Taub 601
Advisor:
Associate Professor Ran El-Yaniv

Transfer learning techniques aim to build high-performance predictive models when labeled training examples for the problem at hand are scarce, by exploiting related learning tasks for which sufficient training data is available. Transfer learning can be motivated by a common scenario: we obtain a large annotated training set for the problem at hand ("source") and use it to build a classifier, only to learn that the examples came from a related but different problem. Now only a small training set is available for the actual problem variant ("target"). While the two problem variants are related, a single model may not work well for both, and learning on the source alone may not suffice. We propose several random forest transfer algorithms: some refine a classifier learned on the source set using the target set, while another uses both sets directly during tree induction. We also combine the proposed algorithms into ensembles, building a committee of experts, and apply them to detect fraud in online banking transactions. The proposed methods exhibit strong experimental results over a range of problems.
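One family of approaches mentioned in the abstract refines a forest trained on the source set using the target set. A minimal sketch of this idea, assuming a leaf re-labeling strategy (the structure of each source-trained tree is kept, but each leaf's prediction is re-estimated from the scarce target data) and synthetic source/target data standing in for the talk's real fraud-detection sets:

```python
# Hedged sketch of random-forest transfer by leaf re-labeling.
# Synthetic data: the source and target tasks share features but
# have a shifted decision boundary (an assumption for illustration).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Source task: plenty of labels; boundary at x0 = 0.
X_src = rng.normal(size=(1000, 2))
y_src = (X_src[:, 0] > 0).astype(int)

# Target task: related but shifted boundary (x0 = 0.5); few labels.
X_tgt = rng.normal(size=(40, 2))
y_tgt = (X_tgt[:, 0] > 0.5).astype(int)

# Train the forest on the source set only (shallow trees so that
# most leaves receive at least some target examples).
forest = RandomForestClassifier(n_estimators=50, max_depth=4, random_state=0)
forest.fit(X_src, y_src)

# Refine: re-estimate each leaf's label from the target set while
# keeping the source-trained tree structure. Leaves that see no
# target data fall back to the source prediction.
leaf_tables = []
for tree in forest.estimators_:
    leaves = tree.apply(X_tgt)          # leaf index of each target example
    table = {}
    for leaf, label in zip(leaves, y_tgt):
        table.setdefault(leaf, []).append(label)
    leaf_tables.append({k: int(np.round(np.mean(v))) for k, v in table.items()})

def predict_transfer(X):
    """Majority vote over trees, using target-refined leaf labels."""
    votes = np.zeros((len(X), len(forest.estimators_)), dtype=int)
    for j, (tree, table) in enumerate(zip(forest.estimators_, leaf_tables)):
        leaves = tree.apply(X)
        fallback = tree.predict(X).astype(int)
        votes[:, j] = [table.get(l, f) for l, f in zip(leaves, fallback)]
    return (votes.mean(axis=1) >= 0.5).astype(int)

# Evaluate on fresh data drawn from the target distribution.
X_test = rng.normal(size=(500, 2))
y_test = (X_test[:, 0] > 0.5).astype(int)
acc_src_only = (forest.predict(X_test) == y_test).mean()
acc_transfer = (predict_transfer(X_test) == y_test).mean()
```

The refined forest keeps the feature splits learned from the abundant source data and spends the small target set only on the easier job of re-estimating leaf labels; this is one plausible instance of "refining a source-trained classifier with the target set", not necessarily the exact algorithms of the thesis.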
