Vertex2Image: Constructing a Human Figure from a Monocular Video
By: Zihao Wang
Department: Computer Science
Faculty Advisor: Dr. Shahrukh Humayoun
Human avatar construction is a trending research topic, as the technology enables richer online interaction in domains such as the metaverse. Our Vertex2Image technique takes a single video of a target person and, after training, renders that person from any arbitrary camera angle. The model projects SMPL vertices to collect sparse color information and distills it through a modified UNet++ to construct the final image. Although many deep learning architectures have been proposed in the literature, most suffer from long training times and cannot transfer to a new target. Our contribution is to first train a generalized model that learns how textures are formed from sparse color information, and then apply transfer learning to a specific target. As a result, training for a new person takes only about 2 hours instead of the several days typical of many existing models.
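To make the first stage of the pipeline concrete, the sketch below shows how per-vertex color information can be gathered into a sparse image that a UNet++-style network would then refine. This is a minimal illustration, not the authors' implementation: the pinhole intrinsics `K`, the helper names `project_vertices` and `splat_colors`, and the toy vertex/color arrays are all assumptions introduced here for clarity.

```python
import numpy as np

def project_vertices(vertices, K):
    """Pinhole projection of 3D vertices (N, 3) to pixel coordinates (N, 2).

    K is an assumed 3x3 camera intrinsics matrix; SMPL would supply the
    posed vertices in practice.
    """
    proj = vertices @ K.T            # homogeneous image coords (N, 3)
    return proj[:, :2] / proj[:, 2:3]  # divide by depth

def splat_colors(pixels, colors, height, width):
    """Scatter per-vertex RGB colors into an otherwise-empty image.

    The result is the kind of sparse color map that a refinement
    network (e.g. a modified UNet++) could densify into a full image.
    """
    image = np.zeros((height, width, 3), dtype=np.float32)
    xs = np.clip(pixels[:, 0].astype(int), 0, width - 1)
    ys = np.clip(pixels[:, 1].astype(int), 0, height - 1)
    image[ys, xs] = colors
    return image

# Toy example: two vertices with known colors, focal length 100,
# principal point at the image center of a 64x64 canvas.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
vertices = np.array([[0.0, 0.0, 2.0],
                     [0.1, 0.1, 2.0]])
colors = np.array([[1.0, 0.0, 0.0],   # red vertex
                   [0.0, 1.0, 0.0]])  # green vertex

pixels = project_vertices(vertices, K)
sparse_map = splat_colors(pixels, colors, 64, 64)
```

In the full system, this sparse map for the desired camera angle is the network's input; the generalized model learns the sparse-to-dense texture mapping once, so adapting to a new person only requires fine-tuning rather than training from scratch.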