Title: Usages of Selective Regression
Currently accessible only within the Technion network
Abstract: Using selective regression, it is possible to increase the accuracy of predictions by abstaining from answering when there is insufficient knowledge. This work aims to increase the accuracy of selective regression even further by combining simple selective models into a more complex one: an ensemble of selective regressors. We demonstrate how to achieve improved accuracy using two methods to build our ensemble.
In the first approach, we split the samples in the input dataset into several clusters and use each cluster to train a separate regressor. Then, when given a new instance, we choose the result of a regressor that did not reject that instance. In the second approach, we train several regressors, where each regressor uses only a subset of the data's original features. This allows us to create several lower-dimensionality regressors that are less prone to overfitting, especially when the training set is fairly small. We then choose which regressor to use by discarding those that reject the example given for labeling.
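The first approach described above can be sketched in a few lines of Python. This is an illustrative toy, not the report's actual method: the class and function names (`SelectiveRegressor`, `ensemble_predict`) and the distance-based rejection rule are assumptions chosen to make the abstain-or-predict mechanism concrete.

```python
# Hypothetical sketch: train one selective regressor per data cluster;
# at prediction time, use a regressor that does not reject the query.

class SelectiveRegressor:
    """Toy 1-D regressor that abstains far from its training region."""

    def __init__(self, xs, ys, reject_threshold=1.0):
        self.xs = xs
        self.threshold = reject_threshold
        self.mean_y = sum(ys) / len(ys)  # trivial constant predictor

    def predict(self, x):
        # Abstain (return None) when x is far from every training point.
        if min(abs(x - xi) for xi in self.xs) > self.threshold:
            return None
        return self.mean_y


def ensemble_predict(regressors, x):
    """Return the prediction of the first non-rejecting regressor, else None."""
    for r in regressors:
        y = r.predict(x)
        if y is not None:
            return y
    return None


# Toy data split into two "clusters" by input range.
low = SelectiveRegressor([0.0, 1.0, 2.0], [10.0, 11.0, 12.0])
high = SelectiveRegressor([8.0, 9.0, 10.0], [20.0, 21.0, 22.0])
ensemble = [low, high]

print(ensemble_predict(ensemble, 1.5))  # answered by the "low" regressor
print(ensemble_predict(ensemble, 9.0))  # answered by the "high" regressor
print(ensemble_predict(ensemble, 5.0))  # rejected by both -> None
```

A query that falls between the clusters is rejected by every member, so the ensemble as a whole abstains; this is the behavior that lets a selective ensemble trade coverage for accuracy.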
We empirically tested the two approaches on various datasets and found that, depending on the distribution of the actual data, they can indeed boost accuracy compared to a single regressor or to non-selective ensembles.
Finally, we present conclusions drawn from our findings and raise some follow-up research questions that arise from this work.
Copyright: The above paper is copyrighted by the Technion, the author(s), or others. Please contact the author(s) for more information.
Remark: Any link to this technical report should point to this page (http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-info.cgi/2018/MSC/MSC-2018-14), rather than directly to the URLs of the PDF files. The latter URLs may change without notice.