Paragraph: Thwarting Signature Learning by Training Maliciously

Journal contribution, posted 2006-01-01
Authors: James Newsome, Brad Karp, Dawn Song
Defending a server against Internet worms and defending a user’s email inbox against spam bear certain similarities. In both cases, a stream of samples arrives, and a classifier must automatically determine whether each sample falls into a malicious target class (e.g., worm network traffic, or spam email). A learner typically generates a classifier automatically by analyzing two labeled training pools: one of innocuous samples, and one of samples that fall in the malicious target class. Learning techniques have previously found success in settings where the content of the labeled samples used in training is either random, or even constructed by a helpful teacher, who aims to speed learning of an accurate classifier. In the case of learning classifiers for worms and spam, however, an adversary controls the content of the labeled samples to a great extent. In this paper, we describe practical attacks against learning, in which an adversary constructs labeled samples that, when used to train a learner, prevent or severely delay generation of an accurate classifier. We show that even a delusive adversary, whose samples are all correctly labeled, can obstruct learning. We simulate and implement highly effective instances of these attacks against the Polygraph [15] automatic polymorphic worm signature generation algorithms.
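The following is a minimal toy sketch of the kind of delusive attack the abstract alludes to, assuming a simple conjunction-of-tokens signature learner loosely in the spirit of Polygraph. The learner, token names, and training pools here are illustrative assumptions, not the paper's actual algorithms or experiments: every malicious training sample really is a worm (correctly labeled), but each also carries spurious "red herring" tokens, so the learned signature becomes over-specific and fails on later worm instances that drop them.

    # Toy conjunction-signature learner and a red-herring-style delusive attack.
    # Illustrative sketch only; not the Polygraph implementation or the paper's attack code.

    def learn_conjunction_signature(malicious_pool, innocuous_pool):
        """Signature = tokens present in every malicious sample but in no innocuous sample."""
        common = set.intersection(*(set(s.split()) for s in malicious_pool))
        benign = set().union(*(set(s.split()) for s in innocuous_pool))
        return common - benign

    def matches(signature, sample):
        return signature.issubset(set(sample.split()))

    # The true worm invariant the classifier ought to learn.
    INVARIANT = "EXPLOIT_BYTES"

    # Delusive adversary: samples are correctly labeled as malicious, but each
    # includes the same spurious tokens alongside the invariant.
    red_herrings = ["HERRING_A", "HERRING_B", "HERRING_C"]
    malicious_training = [f"{INVARIANT} {' '.join(red_herrings)} filler{i}" for i in range(5)]
    innocuous_training = ["GET /index.html filler", "POST /login filler"]

    sig = learn_conjunction_signature(malicious_training, innocuous_training)
    print("learned signature:", sig)          # includes the red herrings, not just the invariant

    # A later worm instance simply omits the red herrings; the over-specific
    # signature no longer matches, so the worm evades the learned classifier.
    later_worm = f"{INVARIANT} some_other_payload"
    print("detects later worm?", matches(sig, later_worm))   # False

In this toy setting the adversary never mislabels anything; it only chooses the content of its own (malicious) samples, which is enough to steer the learner toward a signature that misses subsequent worm traffic.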
