Adversarial examples pose a threat to modern deep neural networks (DNNs). Despite numerous studies of image-dependent adversarial perturbations (DAPs), universal adversarial perturbations (UAPs) remain relatively under-explored. A universal attack is arguably more practical because the perturbation can be generated in advance and applied directly at attack time. How to generate a UAP without access to the training data remains an open problem, and in this work we address it progressively. First, we propose a self-supervision loss that removes the need for ground-truth labels, under the assumption that an unlabeled training dataset is easier to obtain. Second, we reduce the data requirement further by using only a very small number of images; our simple approach outperforms previous work by a large margin. Third, we generate a fully data-free UAP, i.e. one crafted without any access to the training dataset. To this end, we propose using artificial jigsaw images as a proxy dataset, and here too our approach outperforms existing methods by a large margin.
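To make the first step concrete, the sketch below illustrates one way a label-free (self-supervised) objective can drive UAP optimization: a single perturbation, shared across all images, is updated to minimize the cosine similarity between clean and perturbed features. The linear "feature extractor", the cosine objective, the SPSA-style gradient estimate, and all names here are illustrative assumptions, not the actual architecture, loss, or optimizer used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))          # hypothetical linear "feature layer" standing in for a DNN
f = lambda x: x @ W.T                  # feature extractor: maps (N, 64) images to (N, 16) features

def cosine(a, b):
    """Row-wise cosine similarity between two feature batches."""
    num = (a * b).sum(axis=-1)
    den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-12
    return num / den

def uap_self_supervised(images, eps=0.5, lr=0.05, steps=300, seed=1):
    """Optimize one universal perturbation delta, shared by all images,
    so that perturbed features diverge from clean features.
    No ground-truth labels are used anywhere (self-supervision)."""
    rng = np.random.default_rng(seed)
    d = images.shape[1]
    delta = np.zeros(d)
    clean_feats = f(images)

    def loss(dl):
        # lower mean cosine similarity = features disrupted more
        return cosine(f(images + dl), clean_feats).mean()

    h = 1e-3
    for _ in range(steps):
        # SPSA-style two-point gradient estimate along a random +/-1 probe
        u = rng.choice([-1.0, 1.0], size=d)
        g = (loss(delta + h * u) - loss(delta - h * u)) / (2 * h) * u
        # signed descent step, projected back onto the L_inf ball of radius eps
        delta = np.clip(delta - lr * np.sign(g), -eps, eps)
    return delta

# toy "small dataset" of 8 unlabeled images
imgs = np.random.default_rng(2).normal(size=(8, 64))
delta = uap_self_supervised(imgs)
```

After optimization, `cosine(f(imgs + delta), f(imgs)).mean()` drops well below 1, showing that a single bounded perturbation disrupts the features of every image at once without any labels.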