Pre-training without Natural Images

  • We would like to replace Supervised/Self-supervised Learning!

    Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding? The paper proposes a novel concept, Formula-driven Supervised Learning. We automatically generate image patterns and their category labels by assigning fractals, which are based on a natural law existing in the background knowledge of the real world. Theoretically, the use of automatically generated images instead of natural images in the pre-training phase allows us to generate an infinite-scale dataset of labeled images. Although the models pre-trained with the proposed Fractal DataBase (FractalDB), a database without natural images, do not necessarily outperform models pre-trained with human-annotated datasets in all settings, we are able to partially surpass the accuracy of ImageNet/Places pre-trained models. The image representation with the proposed FractalDB captures a unique feature in the visualization of convolutional layers and attentions.
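
    The core idea, generating both images and their category labels from a formula, can be sketched with iterated function systems (IFS), the fractal family FractalDB builds on. In the sketch below, each randomly sampled set of affine transforms defines one category, and its image is rendered with the chaos game. All function names, the image size, the coordinate range, and the divergence guard are illustrative assumptions; the actual FractalDB pipeline additionally filters and augments the sampled parameters.

    ```python
    import numpy as np

    def sample_ifs(n_transforms, rng):
        """Sample random 2D affine maps x' = A @ x + b.

        One sampled parameter set = one category label (illustrative choice).
        """
        return [(rng.uniform(-1, 1, size=(2, 2)),
                 rng.uniform(-1, 1, size=2)) for _ in range(n_transforms)]

    def render_fractal(transforms, n_points=10000, size=64, rng=None):
        """Chaos game: repeatedly apply a randomly chosen map, rasterize points."""
        rng = rng if rng is not None else np.random.default_rng(0)
        img = np.zeros((size, size), dtype=np.uint8)
        x = np.zeros(2)
        for i in range(n_points):
            A, b = transforms[rng.integers(len(transforms))]
            x = A @ x + b
            if np.abs(x).max() > 1e6:  # non-contractive system diverging: reset
                x = np.zeros(2)
                continue
            if i < 20:                 # burn-in iterations before plotting
                continue
            # Map roughly [-3, 3]^2 onto the pixel grid.
            px = int((x[0] + 3.0) / 6.0 * (size - 1))
            py = int((x[1] + 3.0) / 6.0 * (size - 1))
            if 0 <= px < size and 0 <= py < size:
                img[py, px] = 255
        return img

    # Build a tiny labeled dataset: each IFS parameter set is its own class,
    # so labels come for free, without any human annotation.
    rng = np.random.default_rng(42)
    dataset = []
    for label in range(3):
        transforms = sample_ifs(n_transforms=3, rng=rng)
        dataset.append((render_fractal(transforms, rng=rng), label))
    ```

    Because the label is simply the index of the generating parameter set, the dataset can in principle be scaled to arbitrarily many categories and images per category, which is the "infinite scale" property claimed above.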

  • Venue

    ACCV 2020 Best Paper Honorable Mention Award

  • Members

    Hirokatsu Kataoka (AIST), Kazushige Okayasu (AIST), Asato Matsumoto (AIST), Eisuke Yamagata (TITech), Ryosuke Yamada (AIST), Nakamasa Inoue (TITech), Akio Nakamura (TDU), Yutaka Satoh (AIST)