Theory Lunch Seminar

Time: 1:00pm

Location:
In Person - Gates Hillman 8102

Speaker:
Dravyansh Sharma, Ph.D. Student, Computer Science Department, Carnegie Mellon University
https://www.cs.cmu.edu/~dravyans/

Reliable learning under adversarial attacks

The problem of designing learners whose predictions are guaranteed to be correct is of increasing importance in machine learning, especially given the growing interest in robustness to adversarial attacks. We will first consider data poisoning attacks, in which an adversary corrupts the training set available to the learner with the goal of inducing specific desired mistakes.

We provide robustly-reliable predictions, in which the predicted label is guaranteed to be correct so long as the adversary has not exceeded a given corruption budget, even in the presence of instance-targeted attacks, where the adversary knows the test example in advance and aims to cause a specific failure on that example. Remarkably, we provide a complete characterization of learnability in this setting: nearly-tight matching upper and lower bounds on the region that can be certified, as well as efficient algorithms for computing this region. We also extend these results to the active setting, where the algorithm adaptively asks for labels of specific informative examples and the adversary may adapt even to this interaction, and to the agnostic setting, where there is no perfect classifier even over the uncorrupted data.
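To make the flavor of such guarantees concrete, here is a minimal Python sketch of one natural agreement-based construction, assuming a finite hypothesis class and a realizable (pre-corruption) sample; the function names and the brute-force search are illustrative assumptions, not the speaker's algorithm.

from typing import Callable, Hashable, List, Optional, Sequence, Tuple

Example = Tuple[Hashable, int]          # (instance, label) pair
Hypothesis = Callable[[Hashable], int]  # maps an instance to a label

def robustly_reliable_predict(
    hypotheses: Sequence[Hypothesis],
    sample: List[Example],
    budget: int,
    x: Hashable,
) -> Optional[int]:
    """Predict a label for x only when it is certifiably correct under any
    poisoning of at most `budget` training points; otherwise abstain (None).

    A hypothesis "survives" if it disagrees with at most `budget` points of
    the (possibly corrupted) sample: under a budget-limited attack the true
    target must be among the survivors, so a label that all survivors share
    is guaranteed correct."""
    survivors = [
        h for h in hypotheses
        if sum(h(xi) != yi for xi, yi in sample) <= budget
    ]
    labels = {h(x) for h in survivors}
    return labels.pop() if len(labels) == 1 else None

# Illustrative use: threshold functions on [0, 10], one poisoned label.
hypotheses = [lambda z, t=t: int(z >= t) for t in range(11)]
sample = [(1, 0), (2, 0), (7, 1), (8, 1), (2, 1)]  # last point is poisoned
print(robustly_reliable_predict(hypotheses, sample, budget=1, x=9))  # 1
print(robustly_reliable_predict(hypotheses, sample, budget=1, x=5))  # None

Abstention (returning None) is what makes reliability possible: the learner commits to a label only when every hypothesis that could still be the true target, under any corruption of at most `budget` points, agrees on it.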

Finally, we will see how to design robustly-reliable learners in the presence of test-time attacks, where the test point on which the learner predicts may be corrupted within a metric ball around it, and we characterize learnability in this setting as well.
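In the same hedged spirit, the sketch below certifies a prediction against test-time attacks by checking that the learned hypothesis is constant on the entire metric ball around the test point; discretizing the ball to a finite grid is an assumption of this illustration, not part of the talk.

from typing import Callable, Iterable, Optional

def reliable_under_test_time_attack(
    h: Callable[[float], int],
    x: float,
    radius: float,
    ball: Callable[[float, float], Iterable[float]],
) -> Optional[int]:
    """Output h's label at x only if h is constant on the metric ball of
    the given radius around x, so that no allowed perturbation of the test
    point can change the prediction; otherwise abstain (None)."""
    labels = {h(z) for z in ball(x, radius)}
    return labels.pop() if len(labels) == 1 else None

# Illustrative use: a threshold classifier on the line, with the ball
# discretized to a fine grid (a hypothetical helper for this sketch).
h = lambda z: 1 if z >= 0.5 else 0
grid_ball = lambda x, r: (x + i * r / 100 for i in range(-100, 101))
print(reliable_under_test_time_attack(h, 0.9, 0.1, grid_ball))   # 1
print(reliable_under_test_time_attack(h, 0.45, 0.1, grid_ball))  # None

Abstaining when the ball straddles the decision boundary is exactly what distinguishes a reliable prediction from an ordinary one here: any label the function does return is unchanged by every perturbation within the budgeted radius.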

Event Website:
https://www.cs.cmu.edu/~theorylunch/

