Hello everyone, welcome to my presentation of the paper "PID-Based Approach to Adversarial Attacks". I'm Chen Wan, from Sun Yat-sen University. This work was done with Biaohua and Fangjun. Adversarial attacks can mislead deep neural networks by adding small perturbations to normal examples, and these perturbations are mainly determined by the gradient of the loss function with respect to the inputs. Inspired by the classic proportional-integral-derivative (PID) controller from the field of automatic control, we propose a new PID-based approach for generating adversarial examples. Our method considers the present gradient, the accumulated past gradients, and the derivative of the gradient, which correspond to the P, I, and D components of the PID controller, respectively.
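The PID analogy above can be sketched as an iterative attack update. The following is a minimal NumPy sketch under my own assumptions: `grad_fn` is a hypothetical function returning the gradient of the loss with respect to the input, and the coefficients `kp`, `ki`, `kd` and the step sizes are illustrative placeholders, not the paper's actual settings.

```python
import numpy as np

def pid_attack(x, grad_fn, eps=0.03, alpha=0.01, steps=10,
               kp=1.0, ki=1.0, kd=1.0):
    """Sketch of a PID-style iterative attack.

    grad_fn(x): hypothetical helper returning the gradient of the
    loss w.r.t. the input x (assumption, not the paper's API).
    P term: the present gradient.
    I term: the sum of past gradients.
    D term: the difference between consecutive gradients.
    """
    x_adv = x.copy()
    integral = np.zeros_like(x)   # I term: accumulated past gradients
    prev_grad = np.zeros_like(x)  # remembered for the D term
    for _ in range(steps):
        g = grad_fn(x_adv)          # P term: gradient at the current point
        integral += g               # I term update
        derivative = g - prev_grad  # D term: discrete derivative of gradient
        prev_grad = g
        # combine the three components, then take a signed step
        update = kp * g + ki * integral + kd * derivative
        x_adv = x_adv + alpha * np.sign(update)
        # project back into the eps-ball around x and the valid pixel range
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

Setting `ki = kd = 0` reduces this to an I-FGSM-style update, which is one way to see the P, I, and D terms as generalizations of existing gradient-based attacks.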