Formulation of Adversarial Machine Learning

Machine learning is used in a variety of domains to restrict or prevent undesirable behavior by hackers, fraudsters, and even ordinary users.  Algorithms deployed for fraud prevention, network security, and anti-money laundering belong to the broad area of adversarial machine learning: instead of learning the patterns of a benevolent nature, the model confronts a malicious adversary who actively probes for loopholes and weaknesses to exploit for personal gain (Huang et al., 2011).

Some current approaches to adversarial tasks include supervised classification, which scores instances against a learned decision threshold, and anomaly detection, which flags observations that fall far from normal behavior; both reappear in the parameter discussion below.

To evade these models, an attacker needs to arm themselves with knowledge of the algorithm, the feature space, and the training data.  Attackers typically have to obtain this information through a limited number of probing opportunities.  The computational cost of evasion (i.e., of finding an instance the model classifies as negative) has been studied extensively and can be surprisingly low (Lowd & Meek, 2005; Nelson et al., 2010).
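As a minimal sketch of why evasion can be cheap, in the spirit of the boundary-probing attacks that Lowd & Meek analyze (this is an illustrative bisection, not their ACRE algorithm): given one instance the model flags and one it does not, an attacker can locate the decision boundary with a number of probes that grows only logarithmically in the desired precision.  The `classify` oracle below is a hypothetical stand-in for the deployed model.

```python
import numpy as np

def find_boundary_point(classify, x_pos, x_neg, tol=1e-3):
    """Bisect between a known-flagged (positive) and a known-clean
    (negative) instance to locate the classifier's decision boundary.

    `classify` is the black-box probing oracle: it returns True if an
    input is flagged.  Each iteration halves the search interval, so
    the query count grows only logarithmically with precision."""
    x_pos, x_neg = np.asarray(x_pos, float), np.asarray(x_neg, float)
    queries = 0
    while np.linalg.norm(x_pos - x_neg) > tol:
        mid = (x_pos + x_neg) / 2.0
        queries += 1
        if classify(mid):
            x_pos = mid   # midpoint still flagged: move the positive end
        else:
            x_neg = mid   # midpoint evades: move the negative end
    return x_neg, queries # x_neg now evades, just past the boundary

# Toy oracle: a linear classifier the attacker cannot inspect directly.
w, b = np.array([1.0, 2.0]), -3.0
oracle = lambda x: float(np.dot(w, x) + b) > 0

evading, n = find_boundary_point(oracle, x_pos=[4.0, 4.0], x_neg=[0.0, 0.0])
print(f"evading instance {evading} found with only {n} queries")
```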

Feature Space

Designing the feature space for adversarial models depends heavily on the use case and on the limitations you wish to place on the adversary. For example, in spam detection the feature space will typically consist of words or word combinations that tend to appear in the message, while in network security it may be the volume of traffic along each link in the network during a particular time window.
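To sketch the spam case, the word and word-combination features described above can be extracted with a standard bag-of-words representation; the messages here are made up for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Illustrative corpus; real systems train on millions of labeled messages.
messages = [
    "win a free prize now",
    "meeting rescheduled to friday",
    "free money claim your prize",
    "lunch tomorrow at noon?",
]

# Unigrams and bigrams: word combinations often separate spam from ham
# better than single words alone.
vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(messages)

print(X.shape)                                  # (n_messages, n_features)
print(vectorizer.get_feature_names_out()[:10])  # first few features
```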

The model is usually fed aggregate metrics along with categorical labels. Determining the appropriate time windows, the granularity of the categorical space, and the metric aggregations and transforms typically accounts for the bulk of the effort in deploying such models.
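A minimal pandas sketch of this kind of feature construction, assuming a hypothetical network flow log with `timestamp`, `link`, and `bytes` columns:

```python
import pandas as pd

# Hypothetical flow log: one row per observed transfer on a link.
flows = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 00:01", "2024-01-01 00:04",
        "2024-01-01 00:07", "2024-01-01 00:12",
    ]),
    "link": ["a->b", "a->b", "b->c", "a->b"],
    "bytes": [1200, 800, 50_000, 950],
})

# Sum traffic per link over fixed 5-minute windows.  The window length
# and the granularity of the categorical space (per link, per subnet,
# ...) are precisely the design decisions described above.
features = (
    flows.set_index("timestamp")
         .groupby("link")["bytes"]
         .resample("5min")
         .sum()
         .unstack(fill_value=0)
)
print(features)  # one row per link, one column per time window
```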

Parameter Selection

Once the modeling technique and input features have been selected, careful thought has to be given to model parameter selection.  Depending on the modeling approach, the parameters that need tuning include:

  • Sensitivity / specificity trade-off for determining the classification threshold value
  • Threshold (distance) for categorizing observations as anomalies
  • Algorithm-specific hyper-parameters such as kernel types, tree structures, and regularization parameters

Selection of these parameters is informed by backtesting against known fraud incidents, the expected alert volume, and the potential cost of misses.
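One way to combine these considerations is a cost-weighted threshold sweep over backtest data, sketched below on synthetic scores; the cost figures are placeholders that the business would supply in practice, not recommendations.

```python
import numpy as np

def pick_threshold(scores, labels, cost_fn=50.0, cost_fp=1.0):
    """Choose the score threshold that minimizes expected cost on a
    backtest.  cost_fn (a missed fraud) and cost_fp (a wasted alert
    review) are placeholder figures."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best_t, best_cost = 0.0, np.inf
    for t in np.unique(scores):
        flagged = scores >= t
        fp = np.sum(flagged & (labels == 0))   # false alerts
        fn = np.sum(~flagged & (labels == 1))  # missed incidents
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Toy backtest: scores for known fraud (label 1) and legitimate (0) cases.
rng = np.random.default_rng(0)
labels = np.array([0] * 95 + [1] * 5)
scores = np.where(labels == 1,
                  rng.uniform(0.5, 1.0, size=100),
                  rng.uniform(0.0, 0.7, size=100))

t, cost = pick_threshold(scores, labels)
print(f"chosen threshold: {t:.3f}, expected backtest cost: {cost:.0f}")
```

Raising `cost_fn` relative to `cost_fp` pushes the threshold down, trading more analyst workload for fewer misses, which is exactly the sensitivity/specificity trade-off listed above.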

Other Considerations

A multitude of other considerations comes into play when deploying adversarial machine learning models, including:

  • Model decay and retraining intervals
  • Online vs. batch training
  • Extreme class imbalance (see the sketch after this list)
  • Dealing with adversarial examples (remember Microsoft's Tay chatbot?)
  • Counterfactual conditions: evaluating model performance when the model's own interventions alter the distribution of outcomes (e.g., fraud the model blocks is never observed)
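To make one of these concrete: extreme class imbalance is usually the first hurdle in fraud-like settings.  A common mitigation, sketched here on synthetic data, is to reweight the classes during training so the rare fraud cases are not drowned out by the legitimate majority.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data with ~1% positives, mimicking a fraud-like imbalance.
X, y = make_classification(
    n_samples=10_000, n_features=20, weights=[0.99, 0.01], random_state=0
)

# class_weight="balanced" rescales the loss so misclassifying the rare
# class costs proportionally more; resampling is the usual alternative.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X, y)
print(f"flagged fraction: {clf.predict(X).mean():.3f}")
```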

References

Huang, Ling, et al. "Adversarial machine learning." Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence. ACM, 2011.

Lowd, Daniel, and Christopher Meek. "Adversarial learning." Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining. ACM, 2005.

Nelson, Blaine, et al. "Classifier evasion: Models and open problems." Privacy and Security Issues in Data Mining and Machine Learning. Springer Berlin Heidelberg, 2010. 92-98.