Sparse adversarial attack approaches can be unified in a framework that typically rests on a convex relaxation of the domain of the generated perturbation. A priori, however, there is no real need for this relaxation to be convex. We consider generating adversarial images by minimising a non-convex loss over a non-convex ℓₚ neighbourhood (0 < p < 1) of an input image: this sidesteps the combinatorial ℓ₀ problem while remaining continuous, yet promotes sparsity more strongly than the ℓ₁ ball. We formalise the search for adversarial examples as an explicit optimization problem for the special case p = 1/2, for which an efficient proximal operator is available in closed form. To enhance the interpretability of the generated attacks, we adjust the regularization parameter of the perturbation for each pixel separately, which increases the likelihood of perturbing pixels close to those already perturbed. Experiments show that our method computes highly sparse and interpretable adversarial examples for ImageNet models.
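As a minimal illustration (not the paper's code) of why p = 1/2 is tractable: the scalar proximal problem argminₓ ½(x − y)² + λ|x|^{1/2} has a closed-form solution, since its stationarity condition reduces to the depressed cubic u³ − |y|u + λ/2 = 0 with u = |x|^{1/2}, solvable by the trigonometric cubic formula. The function name `prox_half` and the explicit comparison against x = 0 are our own choices for this sketch:

```python
import numpy as np

def prox_half(y, lam):
    """Proximal operator of x -> lam * |x|**0.5, i.e.
    argmin_x 0.5*(x - y)**2 + lam*|x|**0.5.
    Hypothetical helper for illustration; applied elementwise it gives
    the prox of the l_{1/2} quasi-norm penalty."""
    s, a = np.sign(y), abs(y)
    # The cubic u^3 - a*u + lam/2 = 0 has a positive root only if
    # a^3 >= 27*lam^2/16; otherwise x = 0 is the unique minimiser.
    if a <= (27.0 * lam**2 / 16.0) ** (1.0 / 3.0):
        return 0.0
    # Largest root of the depressed cubic, via the trigonometric formula.
    u = 2.0 * np.sqrt(a / 3.0) * np.cos(
        np.arccos(-3.0 * np.sqrt(3.0) * lam / (4.0 * a**1.5)) / 3.0
    )
    x = u * u
    # Keep the nonzero stationary point only if it beats the origin.
    if 0.5 * (x - a) ** 2 + lam * np.sqrt(x) < 0.5 * a * a:
        return s * x
    return 0.0
```

Per-pixel regularization, as described above, would then amount to calling this operator with a different λ for each coordinate; small inputs are hard-thresholded to exactly zero, which is what makes the resulting perturbations sparse.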