Yahoo Web Search

Search results

  1. Jun 19, 2022 · InsetGAN for Full-Body Image Generation. CVPR 2022. Publication date: June 19, 2022. Anna Frühstück, Krishna Kumar Singh, Eli Shechtman, Niloy J. Mitra, Peter Wonka, Jingwan (Cynthia) Lu. While GANs can produce photo-realistic images in ideal conditions for certain domains, the generation of full-body human images remains difficult ...

    • Overview
    • Method
    • Comparisons
    • Results
    • Dataset
    • Code for On-the-fly Object-aware Mask Generation
    • Citation

    arXiv | pdf paper | appendix | Project

    The official repo for CM-GAN (Cascaded Modulation GAN) for Image Inpainting. We introduce a new cascaded modulation design that cascades global modulation with spatial adaptive modulation for better hole filling. We also introduce an object-aware training scheme to facilitate better object removal. CM-GAN significantly improves the existing state-of-the-art methods both qualitatively and quantitatively. The online demo will be released soon.

    NEWS (07/20/2022): We plan to release the online demo and our dataset in the next few days.

    NEWS (07/28/2022): The panoptic segmentation annotations on the Places2 challenge dataset are released. See here.

    NEWS (07/28/2022): The evaluation results of CM-GAN are released, including the object-aware masks used for evaluation and our results. See here.

    NEWS (07/31/2022): The code for object-aware mask generation is released. See here.

    We propose cascaded modulation GAN (CM-GAN) with a new modulation design that cascades global modulation with spatially adaptive modulation. To enable this, we also design a new spatial modulation scheme that is compatible with weight demodulation in state-of-the-art GANs (StyleGAN2 and StyleGAN3). We additionally propose an object-aware training...
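As a very rough sketch of the cascade described above (not the official implementation — the real CM-GAN layers use learned convolutions, weight demodulation, and spatial parameters predicted from encoder features, all of which this toy numpy example with our own function names omits), the two modulation stages might be composed like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def global_modulate(feat, g):
    # feat: (C, H, W) feature map; g: (C,) global style vector.
    # Scale every channel by a single global gain, StyleGAN-style.
    return feat * g[:, None, None]

def spatial_modulate(feat, scale, shift):
    # scale, shift: (C, H, W) spatially varying parameters,
    # which CM-GAN would predict from context rather than sample.
    return feat * scale + shift

def cascaded_modulate(feat, g, scale, shift):
    # The cascade: global modulation first, then the spatially
    # adaptive modulation refines the result per pixel.
    return spatial_modulate(global_modulate(feat, g), scale, shift)

C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
g = rng.standard_normal(C)
scale = rng.standard_normal((C, H, W))
shift = rng.standard_normal((C, H, W))
out = cascaded_modulate(feat, g, scale, shift)
print(out.shape)  # (8, 4, 4)
```

The point of the cascade is that the global stage injects a coherent, image-wide style while the spatial stage can vary the modulation inside versus outside the hole region.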

    CM-GAN reconstructs better textures and better global structure, and achieves better FID, LPIPS, U-IDS, and P-IDS scores.

    Panoptic Annotations

    The panoptic segmentation annotations on Places2 are released. Please refer to the Dropbox folder places2_panoptic_annotation to download the panoptic segmentation annotations on the train, evaluation, and test sets ([data/test/val]_large_panoptic.tar) and the corresponding file lists ([data/test/val]_large_panoptic.txt). Images of the Places2-challenge dataset can be downloaded at the Places2 official website.

    Format of Panoptic Annotation

    The panoptic annotation of each image is represented by a PNG image and a JSON file. The PNG image stores the id of each segment, and the JSON file stores the category_id and isthing of each id, where isthing indicates whether the segment is a thing or stuff. For more details on the data format, please refer to the demo script, which provides a detailed example of how to generate object-aware masks from the panoptic annotations. The metadata panoptic_metadata is also saved at mask_generator/_panoptic_metadata.txt.
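The format above can be illustrated with a tiny synthetic example. Note this is only a sketch: the segment-id map stands in for the PNG, and the exact JSON key names (segments, id, category_id, isthing) are our assumption for illustration — consult the released annotation files for the real layout.

```python
import json
import numpy as np

# Hypothetical miniature annotation in the format described:
# a segment-id map (4x4 here, instead of a full-size PNG) plus
# JSON metadata mapping each segment id to category_id and isthing.
seg_ids = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 2, 2],
    [2, 2, 2, 2],
])
meta = json.loads("""
{"segments": [
    {"id": 0, "category_id": 17, "isthing": true},
    {"id": 1, "category_id": 17, "isthing": true},
    {"id": 2, "category_id": 95, "isthing": false}
]}
""")

# Binary mask covering every "thing" segment — the kind of
# per-object region an object-aware mask generator starts from.
thing_ids = {s["id"] for s in meta["segments"] if s["isthing"]}
thing_mask = np.isin(seg_ids, list(thing_ids))
print(int(thing_mask.sum()))  # 8 pixels belong to thing segments
```

Separating thing segments (countable objects) from stuff (amorphous background) is what lets the training scheme avoid placing holes that exactly cover a whole object.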

    Evaluation and CM-GAN Results

    The evaluation set for inpainting is released. Please refer to the evaluation folder on Dropbox, which contains the Places evaluation set images at resolution 512x512 (image.tar), the object-aware masks for all evaluation images (mask.tar), and the results of CM-GAN (cmgan-perc64.tar).

    The script mask_generator/mask_generator.py contains the class and an example for on-the-fly object-aware mask generation. Please run python mask_generator/mask_generator.py to generate a random mask and the corresponding masked image, which are saved to mask_generator/output_mask.png and mask_generator/output_masked_image.png, respectively. A visual example is shown below. Note that we use only 4 object masks for illustration; the full object mask dataset is from ProFill, ECCV'20.
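As a rough illustration of what a generated mask is used for downstream, here is a minimal, hypothetical sketch of masking an image for inpainting. The convention that 1 marks the hole (and that the hole is zeroed out) is our assumption for the example, not necessarily the repo's:

```python
import numpy as np

def apply_inpainting_mask(image, mask):
    # image: (H, W, 3) float array in [0, 1]; mask: (H, W), 1 = hole.
    # Zero out the hole region to form the inpainting model's input.
    return image * (1.0 - mask[..., None])

image = np.ones((4, 4, 3))   # a dummy all-white image
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0         # a 2x2 hole in the center
masked = apply_inpainting_mask(image, mask)
print(masked[2, 2], masked[0, 0])  # [0. 0. 0.] [1. 1. 1.]
```

In practice the generated object-aware mask and masked image are what output_mask.png and output_masked_image.png visualize.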

    Please consider citing our paper "CM-GAN: Image Inpainting with Cascaded Modulation GAN and Object-Aware Training" (Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Eli Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi, Jiebo Luo) if you find this work useful for your research.

    We also have another project on image manipulation. Please also feel free to cite this work if you find it interesting.

  2. In this paper we present several architectural and optimization recipes for generative adversarial network (GAN) based facial semantic inpainting. Current benchmark models are susceptible to initial solutions of non-convex optimization criterion of GAN based inpainting.

  3. Mar 14, 2022 · PDF | While GANs can produce photo-realistic images in ideal conditions for certain domains, the generation of full-body human images remains difficult... | Find, read and cite all the research ...

  4. InsetGAN results. We show a comparison of several examples of StyleGAN2-generated full-body humans. We concentrate on regions that often exhibit unwanted artifacts in our generated results. Using our InsetGAN method, we are able to generate both faces and shoes using dedicated models and generate appropriate bodies for the respective combination.

  5. Mar 22, 2022 · CM-GAN: Image Inpainting with Cascaded Modulation GAN and Object-Aware Training. Haitian Zheng, Zhe Lin, Jingwan Lu, Scott Cohen, Eli Shechtman, Connelly Barnes, Jianming Zhang, Ning Xu, Sohrab Amirghodsi, Jiebo Luo. Recent image inpainting methods have made great progress but often struggle to generate plausible image structures ...

  6. Mar 14, 2022 · InsetGAN for Full-Body Image Generation. Anna Frühstück, Krishna Kumar Singh, Eli Shechtman, Niloy J. Mitra, Peter Wonka, Jingwan Lu. While GANs can produce photo-realistic images in ideal conditions for certain domains, the generation of full-body human images remains difficult due to the diversity of identities, hairstyles ...