Funder
National Natural Science Foundation of China
Subject
Law, General Computer Science
References (42 articles)
1. Biggio, B., Corona, I., Maiorca, D., Nelson, B., Šrndić, N., Laskov, P., Giacinto, G., Roli, F., 2013. Evasion attacks against machine learning at test time, in: Machine Learning and Knowledge Discovery in Databases: European Conference (ECML PKDD), pp. 387–402. URL: https://doi.org/10.1007/978-3-642-40994-3_25.
2. Brendel, W., Rauber, J., Bethge, M., 2018. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models, in: International Conference on Learning Representations (ICLR). URL: https://openreview.net/forum?id=SyZI0GWCZ.
3. Carlini, N., Wagner, D.A., 2017. Towards evaluating the robustness of neural networks, in: Proceedings of the IEEE Symposium on Security and Privacy (S&P), pp. 39–57. URL: https://doi.org/10.1109/SP.2017.49.
4. Chen, P.Y., Zhang, H., Sharma, Y., Yi, J., Hsieh, C.J., 2017. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models, in: Proceedings of the ACM Workshop on Artificial Intelligence and Security, pp. 15–26. URL: https://doi.org/10.1145/3128572.3140448.
5. Croce, F., Andriushchenko, M., Singh, N.D., Flammarion, N., Hein, M., 2022. Sparse-RS: A versatile framework for query-efficient sparse black-box adversarial attacks. Proceedings of the AAAI Conference on Artificial Intelligence 36, 6437–6445. URL: https://ojs.aaai.org/index.php/AAAI/article/view/20595.