The patch is an example of an adversarial attack on a neural network. In recent years, neural networks have seen widespread adoption for image classification and detection tasks, as their performance is unmatched by alternative algorithms. At the same time, neural networks were found to be sensitive to (small) perturbations of the input, which raises questions about their reliability in certain applications. In this paper, we propose to use a specific adversarial attack, namely the patch attack, as a means of camouflaging objects in aerial imagery against neural network-based detection. We further investigate how the effectiveness of the patch depends on parameters such as size, position, and color.
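The core idea of a patch attack can be illustrated in miniature: instead of perturbing the whole input, only a small, localized region of the image is optimized so that the model's output changes, while all other pixels stay untouched. The sketch below is a deliberately simplified illustration, not the method of this paper: the "detector" is a fixed linear score over a tiny grayscale image, and all names (`detect_score`, the patch size and position) are hypothetical stand-ins for a trained detection network and its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "detector": a fixed linear score over an 8x8 grayscale image.
# The weights stand in for a trained network; higher score = stronger detection.
H = W = 8
w = rng.normal(size=(H, W))

def detect_score(img):
    return float(np.sum(w * img))

# A clean input image with pixel values in [0, 1].
img = np.clip(0.5 + 0.1 * np.sign(w), 0.0, 1.0)

# Patch attack: optimize only a small square region to suppress the
# detection score, leaving the rest of the image unchanged.
ph = pw = 3          # patch size
py = px = 2          # patch position (top-left corner)
patch = img[py:py+ph, px:px+pw].copy()

lr = 0.5
for _ in range(100):
    # For a linear score, the gradient w.r.t. the patch pixels is simply
    # the corresponding weight block; descend to lower the score.
    grad = w[py:py+ph, px:px+pw]
    patch = np.clip(patch - lr * grad, 0.0, 1.0)   # keep valid pixel range

attacked = img.copy()
attacked[py:py+ph, px:px+pw] = patch
print(detect_score(img), detect_score(attacked))
```

In a real attack the gradient would be obtained by backpropagation through the detection network, and the patch's size, position, and color constraints (the parameters studied in this paper) become free variables of the optimization.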