Arik Friedman, Ph.D. Thesis Seminar
Wednesday, 26.5.2010, 14:00
In recent years, the data mining community has faced a new challenge. Having shown how effective its tools are at revealing the knowledge locked within huge databases, it is now required to develop methods that restrain the power of those tools in order to protect the privacy of individuals.
This research focuses on the problem of guaranteeing the privacy of data mining output. In the talk I will present some recent results of this research. We consider the problem of data mining with formal privacy guarantees, given a data access interface based on the differential privacy framework. Differential privacy requires that computations be insensitive to changes in any particular individual's record, thereby restricting data leaks through the results. The privacy-preserving interface ensures unconditionally safe access to the data and does not require any privacy expertise from the data miner. However, naive use of the interface to construct privacy-preserving data mining algorithms can lead to inferior data mining results.
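To make the interface concrete, a standard differentially private primitive is a counting query perturbed with Laplace noise; a data miner sees only the noisy answer. The sketch below is a minimal illustration of that idea (the function names and the toy predicate are my own, not taken from the talk):

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_count(records, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one
    # individual's record changes the count by at most 1, so Laplace
    # noise with scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Each such query consumes part of a privacy budget, which is why a data mining algorithm that asks many fine-grained questions through the interface pays for them in accuracy.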
We address this problem by considering the privacy and the algorithmic requirements simultaneously, focusing on decision tree induction as a sample application.
The privacy mechanism has a profound effect on the performance of the methods chosen by the data miner. We demonstrate that this choice can make the difference between an accurate classifier and a completely useless one. Moreover, an improved algorithm can achieve the same level of accuracy and privacy as the naive implementation with an order of magnitude fewer learning samples.
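One way such a choice plays out in decision tree induction is in how a split attribute is selected under a privacy budget. A common differentially private building block for this kind of selection is the exponential mechanism, sketched below; the attribute names and scoring function are hypothetical illustrations, not the specific algorithm presented in the thesis:

```python
import math
import random

def exponential_mechanism(candidates, score, sensitivity, epsilon):
    # Pick a candidate with probability proportional to
    # exp(epsilon * score / (2 * sensitivity)), which satisfies
    # epsilon-differential privacy for a score of bounded sensitivity.
    weights = [math.exp(epsilon * score(c) / (2 * sensitivity))
               for c in candidates]
    r = random.random() * sum(weights)
    for c, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return c
    return candidates[-1]
```

For example, each candidate split attribute could be scored by the number of records its majority-class leaves would classify correctly (a count, hence sensitivity 1). How well the chosen quality score tolerates this randomization is exactly the kind of design decision that separates an accurate private classifier from a useless one.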