Full body virtual try‐on with semi‐self‐supervised learning

Abstract
This paper proposes a full body virtual try-on system that handles both top and bottom garments and generates realistic try-on images. For full body virtual try-on, the paper addresses the lack of suitable training data for aligning and fitting top and bottom garments naturally. The proposed system consists of three modules: a Clothing Guide Module (CGM), a Geometric Matching Module (GMM), and a Try-On Module (TOM). The CGM generates a clothing guide map (CGMap) that describes the shape of a garment on a model. Unlike the single-garment virtual try-on setting, it is impractical to collect meaningful data at a large scale for the multi-garment case. To address this problem, two novel training strategies are proposed to leverage the existing training data. First, a pseudo model-top-bottom triplet is generated from an already available model-top or model-bottom pair. Second, the CGM network is exposed to both top and bottom garments during training. The subsequent GMM networks then warp and align the top and bottom garments, and finally TOM synthesizes a realistic try-on image from the aligned garments and the CGMap. Experimental results demonstrate the strong performance of the proposed method on full body virtual try-on.
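As a concrete illustration of the first training strategy, the sketch below shows one way a pseudo model-top-bottom triplet could be assembled from an existing model-top pair. The paper does not publish code, so this is only a minimal sketch under stated assumptions: a human-parsing map of the model image is assumed to be available, and the names make_pseudo_triplet, PseudoTriplet, and bottom_label are hypothetical.

# Minimal sketch (not the authors' implementation) of pseudo-triplet
# construction from a model-top pair. The bottom garment the model is
# already wearing is cut out with a parsing map and used as the pseudo
# "bottom" input. All names here are hypothetical.
from dataclasses import dataclass
import numpy as np

@dataclass
class PseudoTriplet:
    model_img: np.ndarray    # person image, shape (H, W, 3)
    top_img: np.ndarray      # in-shop top garment image
    bottom_img: np.ndarray   # pseudo bottom garment segmented from the model

def make_pseudo_triplet(model_img: np.ndarray,
                        top_img: np.ndarray,
                        parse_map: np.ndarray,
                        bottom_label: int = 9) -> PseudoTriplet:
    """Build a pseudo model-top-bottom triplet from a model-top pair.

    parse_map is an (H, W) integer human-parsing map; pixels equal to
    bottom_label are assumed to belong to the bottom garment region.
    """
    bottom_mask = (parse_map == bottom_label)[..., None]     # (H, W, 1) boolean
    bottom_img = np.where(bottom_mask, model_img, 255)       # keep garment, white elsewhere
    return PseudoTriplet(model_img=model_img,
                         top_img=top_img,
                         bottom_img=bottom_img.astype(model_img.dtype))

Applied symmetrically to model-bottom pairs, the same construction yields pseudo triplets in the other direction, so that the CGM can be exposed to both garment types during training even though no real model-top-bottom triplets were collected.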
Funding Information
  • Korea Evaluation Institute of Industrial Technology (20008625)
