Title: Transfer Learning Using Decision Forests
Currently accessible only within the Technion network.
Abstract: The goal of transfer learning is to create high performance predictive models on a target task, augmenting sparsely labeled training examples with training sets, or previously built models, of related learning tasks. Transfer learning can be motivated by a common scenario in which we obtain a large annotated training set for the problem at hand ("source") and use it to build a classifier, only to learn that the examples came from a related, but different problem. Now only a small training set is available for the actual problem variant ("target"). While the two problem variants are related, a single model may not work well for both, and learning on the source alone may not suffice.
In this work we propose three transfer algorithms based on random forests. Two of our algorithms refine a classifier learned on the source set using the available target set, while the last uses both sets directly during tree induction. We also combine our proposed algorithms in ensembles, building a committee of experts, and use them to detect fraud in online banking transactions. The proposed methods exhibit impressive experimental results over a range of problems.
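The two transfer strategies named above can be illustrated with a minimal sketch. This is not the thesis's exact algorithms, only an assumed, simplified reading of them using scikit-learn: (1) refinement keeps the structure of a source-trained forest but re-estimates each leaf's class distribution from the small target set; (2) mixed induction grows the trees on source and target examples together, with an assumed higher weight on target examples.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)

# Synthetic "source" task and a small, shifted "target" variant of it.
X_src = rng.randn(1000, 5)
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.randn(60, 5) + 0.5                     # small target set, shifted
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0.5).astype(int)

# (1) Refinement sketch: re-estimate leaf posteriors from target labels.
forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(X_src, y_src)
tgt_leaves = forest.apply(X_tgt)                   # (n_samples, n_trees) leaf ids

def refined_predict(X):
    """Average per-tree leaf posteriors recomputed from the target set."""
    votes = np.zeros((len(X), 2))
    test_leaves = forest.apply(X)
    for t, tree in enumerate(forest.estimators_):
        for i, leaf in enumerate(test_leaves[:, t]):
            mask = tgt_leaves[:, t] == leaf
            if mask.any():                         # leaf observed in target data
                votes[i] += np.bincount(y_tgt[mask], minlength=2) / mask.sum()
            else:                                  # fall back to source posterior
                votes[i] += tree.predict_proba(X[i:i + 1])[0]
    return votes.argmax(axis=1)

# (2) Mixed-induction sketch: one forest on both sets, target weighted 5x
# (the weight is an assumption for illustration).
X_all = np.vstack([X_src, X_tgt])
y_all = np.concatenate([y_src, y_tgt])
w_all = np.concatenate([np.ones(len(y_src)), 5.0 * np.ones(len(y_tgt))])
mixed = RandomForestClassifier(n_estimators=50, random_state=0)
mixed.fit(X_all, y_all, sample_weight=w_all)
```

An ensemble in the spirit of the "committee of experts" could then combine the refined and mixed predictors, e.g. by majority vote.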
Copyright: The above paper is copyrighted by the Technion, Author(s), or others. Please contact the author(s) for more information.
Remark: Any link to this technical report should be to this page (http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-info.cgi/2016/MSC/MSC-2016-02), rather than to the URL of the PDF files directly. The latter URLs may change without notice.