MaGGIe Training Setup: High-Performance Human Instance Matting with A100 GPUs

Table of Links

Abstract and 1. Introduction

  2. Related Works

  3. MaGGIe

    3.1. Efficient Masked Guided Instance Matting

    3.2. Feature-Matte Temporal Consistency

  4. Instance Matting Datasets

    4.1. Image Instance Matting and 4.2. Video Instance Matting

  5. Experiments

    5.1. Pre-training on image data

    5.2. Training on video data

  6. Discussion and References


Supplementary Material

  7. Architecture details

  8. Image matting

    8.1. Dataset generation and preparation

    8.2. Training details

    8.3. Quantitative details

    8.4. More qualitative results on natural images

  9. Video matting

    9.1. Dataset generation

    9.2. Training details

    9.3. Quantitative details

    9.4. More qualitative results

8.2. Training details


Figure 11. Our framework can generalize to any object. Even when no humans appear in the image, our framework still performs matting well on the mask-guided objects. (Best viewed in color with digital zoom.)


Table 9. Full details of different input mask settings on HIM2K+M-HIM2K (extension of Table 3). Bold denotes the lowest average error.


Table 10. Full details of different training objective components on HIM2K+M-HIM2K (extension of Table 4). Bold denotes the lowest average error.

:::info
Authors:

(1) Chuong Huynh, University of Maryland, College Park (chuonghm@cs.umd.edu);

(2) Seoung Wug Oh, Adobe Research (seoh@adobe.com);

(3) Abhinav Shrivastava, University of Maryland, College Park (abhinav@cs.umd.edu);

(4) Joon-Young Lee, Adobe Research (jolee@adobe.com).

:::


:::info
This paper is available on arXiv under a CC BY 4.0 Deed (Attribution 4.0 International) license.

:::
