Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
| Main Authors | Octavian Suciu, Radu Mărginean, Yiğitcan Kaya, Hal Daumé III, Tudor Dumitraș |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 19.03.2018 |
| Subjects | |
| DOI | 10.48550/arxiv.1803.06975 |
| Summary: | Recent results suggest that attacks against supervised machine learning systems are quite effective, while defenses are easily bypassed by new attacks. However, the specifications for machine learning systems currently lack precise adversary definitions, and the existing attacks make diverse, potentially unrealistic assumptions about the strength of the adversary who launches them. We propose the FAIL attacker model, which describes the adversary's knowledge and control along four dimensions. Our model allows us to consider a wide range of weaker adversaries who have limited control and incomplete knowledge of the features, learning algorithms and training instances utilized. To evaluate the utility of the FAIL model, we consider the problem of conducting targeted poisoning attacks in a realistic setting: the crafted poison samples must have clean labels, must be individually and collectively inconspicuous, and must exhibit a generalized form of transferability, defined by the FAIL model. By taking these constraints into account, we design StingRay, a targeted poisoning attack that is practical against 4 machine learning applications, which use 3 different learning algorithms, and can bypass 2 existing defenses. Conversely, we show that a prior evasion attack is less effective under generalized transferability. Such attack evaluations, under the FAIL adversary model, may also suggest promising directions for future defenses. |
|---|---|
| DOI: | 10.48550/arxiv.1803.06975 |
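The summary describes the adversary's knowledge and control along four dimensions; in the paper the FAIL acronym corresponds to the attacker's knowledge of the Features, the learning Algorithm, and the training Instances, plus its Leverage over the features it can modify. A minimal, hypothetical Python sketch of that idea is given below; it is not code from the paper, and the class and field names are illustrative assumptions about how an evaluation might restrict what an attack routine can observe or change.

```python
# Hypothetical sketch (not from the paper): one way to represent a FAIL adversary
# as the subsets of features, algorithm details, and training instances the
# attacker knows, plus the features it can modify (its leverage).
from dataclasses import dataclass
from typing import FrozenSet, Optional


@dataclass(frozen=True)
class FAILAdversary:
    """One point in the FAIL threat-model space (illustrative names)."""
    known_features: FrozenSet[str]       # F: feature knowledge
    known_algorithm: Optional[str]       # A: learning-algorithm knowledge (None = unknown)
    known_instances: FrozenSet[int]      # I: readable training-instance indices
    modifiable_features: FrozenSet[str]  # L: leverage, i.e. features the attacker can alter

    def can_modify(self, feature: str) -> bool:
        # Crafted poison or evasion samples may only perturb features in the leverage set.
        return feature in self.modifiable_features


# Example: a weak adversary that sees three features, does not know the victim's
# learning algorithm, reads no training instances, and can alter only two features.
weak = FAILAdversary(
    known_features=frozenset({"f1", "f2", "f3"}),
    known_algorithm=None,
    known_instances=frozenset(),
    modifiable_features=frozenset({"f1", "f2"}),
)
assert weak.can_modify("f1") and not weak.can_modify("f3")
```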