Data poisoning attacks against machine learning algorithms


Yerlikaya F. A., Bahtiyar Ş.

EXPERT SYSTEMS WITH APPLICATIONS, vol.208, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 208
  • Publication Date: 2022
  • DOI: 10.1016/j.eswa.2022.118101
  • Journal Name: EXPERT SYSTEMS WITH APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Communication Abstracts, Computer & Applied Sciences, INSPEC, Metadex, Public Affairs Index, Civil Engineering Abstracts
  • Keywords: Cybersecurity, Machine learning, Adversarial attack, Data poisoning, Label flipping attack
  • Istanbul Technical University Affiliated: Yes

Abstract

For the past decade, machine learning technology has become increasingly popular, contributing to many areas with the potential to influence society considerably. Machine learning is widely used across industries to enhance performance, and machine learning algorithms are applied to hard problems in systems that may contain very critical information. This makes machine learning algorithms a target of adversaries, which is an important problem for systems that use such algorithms. Therefore, it is important to determine the performance and robustness of a machine learning algorithm against attacks. In this paper, we empirically analyze the robustness and performance of six machine learning algorithms against two types of adversarial attacks, using four different datasets and three metrics. In our experiments, we create learning models with the Support Vector Machine, Stochastic Gradient Descent, Logistic Regression, Random Forest, Gaussian Naive Bayes, and K-Nearest Neighbor algorithms, and observe their performance on spam, botnet, malware, and cancer detection datasets while launching adversarial attacks against these environments. We use data poisoning to manipulate the training data during adversarial attacks, namely random label flipping and distance-based label flipping attacks. We analyze the performance of each algorithm on a specific dataset by varying the amount of poisoned data and observing the behavior of the accuracy rate, F1-score, and AUC score. The results show that machine learning algorithms exhibit varying performance and robustness under different adversarial attacks. Moreover, machine learning algorithms are affected differently at each stage of an adversarial attack. Furthermore, the behavior of a machine learning algorithm depends highly on the type of dataset. On the other hand, some machine learning algorithms show better robustness and performance against adversarial attacks for almost all datasets.
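The two poisoning attacks described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the datasets, the choice of Logistic Regression as both victim and surrogate model, and the use of distance to a surrogate model's decision boundary as the "distance" criterion are all assumptions made here for illustration. The sketch flips a fraction of binary training labels, retrains, and reports the three metrics used in the paper (accuracy, F1-score, AUC).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

def random_flip(y, fraction, rng):
    """Random label flipping: invert a uniformly random subset of binary labels."""
    y_p = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_p[idx] = 1 - y_p[idx]
    return y_p

def distance_flip(X, y, fraction):
    """Distance-based label flipping (one possible instantiation, assumed here):
    invert the labels of the training points closest to a surrogate model's
    decision boundary, where flips are hardest for the victim to absorb."""
    surrogate = LogisticRegression(max_iter=1000).fit(X, y)
    margin = np.abs(surrogate.decision_function(X))  # distance proxy
    idx = np.argsort(margin)[: int(fraction * len(y))]
    y_p = y.copy()
    y_p[idx] = 1 - y_p[idx]
    return y_p

# Synthetic stand-in for the paper's spam/botnet/malware/cancer datasets.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)

for name, y_poisoned in [
    ("clean", y_tr),
    ("random 20%", random_flip(y_tr, 0.2, rng)),
    ("distance 20%", distance_flip(X_tr, y_tr, 0.2)),
]:
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    pred = clf.predict(X_te)
    score = clf.predict_proba(X_te)[:, 1]
    print(f"{name:>12}: acc={accuracy_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} auc={roc_auc_score(y_te, score):.3f}")
```

Sweeping `fraction` over a range (e.g. 0–40%) and repeating this loop per algorithm and dataset reproduces the general shape of the paper's experimental protocol.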