
Designing a Stability Metric for Assessing the Robustness of Anomaly Rankings

The idea of stability has applications in many areas of machine learning. However, stability has not yet been applied in the context of anomaly detection, where there is a need for a metric that quantifies the robustness of anomaly score rankings to changes in the training data. We propose such a metric and a methodology for computing it. We then propose an algorithm that aims to maximize stability by learning which training points contribute most to it. Finally, we apply the stability metric and the proposed contribution update algorithm to several benchmark datasets. This evaluation is used to compare stability across different anomaly detection algorithms and to assess the contribution update algorithm's ability to increase stability.