Introduction
An ongoing project that transforms multi-view image data into a continuous rendering model.
Implemented on the HUMBI dataset with Prof. Hyun Soo Park at UMN; the project is not yet complete.
Method
- Generate an SMPL mesh reconstruction (a parametric 3D model of the human body) from multi-view images
- Generate IUV maps
  - x, y: the spatial coordinates of sampled points on the image
  - I: the patch index indicating which of the 24 surface patches the point lies on
  - U, V: coordinates in UV space; each surface patch has its own 2D parameterization
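As a concrete illustration of the IUV encoding above, the sketch below (function and variable names are assumptions for illustration, not the project's code) decodes a DensePose-style H×W×3 IUV array, where channel 0 holds the patch index (0 = background, 1..24 = body patches) and channels 1-2 hold U and V scaled to [0, 255]:

```python
import numpy as np

def iuv_points(iuv, patch):
    """Return (x, y, u, v) for all pixels belonging to `patch`.

    iuv: uint8 array of shape (H, W, 3) with channels (I, U, V).
    """
    # nonzero over the patch-index channel gives the (row, col) positions
    ys, xs = np.nonzero(iuv[..., 0] == patch)
    # rescale U, V from [0, 255] back to [0, 1]
    u = iuv[ys, xs, 1] / 255.0
    v = iuv[ys, xs, 2] / 255.0
    return xs, ys, u, v

# Toy 4x4 IUV map with a single foreground pixel on patch 2
iuv = np.zeros((4, 4, 3), dtype=np.uint8)
iuv[1, 2] = (2, 128, 64)  # I = 2, U = 128/255, V = 64/255
xs, ys, u, v = iuv_points(iuv, 2)
```

Collecting points per patch this way is what allows each of the 24 patches to be unwrapped with its own 2D parameterization.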
I tried two approaches to the mapping:
1. Use the UV map provided by DensePose to obtain UV coordinates for each vertex of the SMPL model, then project the vertices into each view using the camera parameters. This requires distinguishing front-facing vertices from back-facing ones to obtain a correct projection.
2. Run DensePose directly on each view of the multi-view images to generate a per-view IUV map.
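The first approach can be sketched as below. The function name, the OpenCV-style pinhole convention (camera looking down +z), and the normal-based visibility test are assumptions for illustration; they approximate front/back differentiation by the sign of the dot product between a vertex normal and the viewing direction, ignoring self-occlusion:

```python
import numpy as np

def project_visible(verts, normals, K, R, t):
    """Project vertices into a view; flag roughly front-facing ones.

    verts, normals: (N, 3) in world coordinates; K: 3x3 intrinsics;
    R, t: world-to-camera rotation and translation.
    """
    cam = verts @ R.T + t                                   # world -> camera
    view_dir = cam / np.linalg.norm(cam, axis=1, keepdims=True)
    # a normal pointing toward the camera has negative dot with the view ray
    front = np.sum((normals @ R.T) * view_dir, axis=1) < 0
    uvw = cam @ K.T
    px = uvw[:, :2] / uvw[:, 2:3]                           # perspective divide
    return px, front

# Toy setup: one vertex at the origin, camera 2 units in front of it
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 2.])
verts = np.array([[0., 0., 0.]])
normals = np.array([[0., 0., -1.]])  # facing the camera
px, front = project_visible(verts, normals, K, R, t)
```

A vertex on the optical axis projects to the principal point (320, 240), and its camera-facing normal marks it as visible in this view.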
- Unwrap a texture for each view of the multi-view data
- Integrate the view-specific textures into a complete texture of the subject
- Fill in the gaps and artifacts in the texture
- Wrap the texture onto the SMPL mesh model
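A minimal sketch of the texture-integration step, under the assumption (names are hypothetical) that each view contributes a partial texture atlas plus a boolean mask of observed texels; observed texels are averaged across views and unobserved ones are reported as holes to be filled later:

```python
import numpy as np

def fuse_textures(textures, masks):
    """Fuse per-view partial textures into one atlas.

    textures: (n_views, H, W, 3) float; masks: (n_views, H, W) bool.
    Returns the fused texture and a hole mask for unobserved texels.
    """
    w = masks[..., None].astype(float)
    total = (textures * w).sum(axis=0)        # sum of observed colors
    count = w.sum(axis=0)                     # how many views saw each texel
    fused = np.divide(total, count,
                      out=np.zeros_like(total), where=count > 0)
    return fused, count[..., 0] == 0

# Toy case: two views, both observing only texel (0, 0)
tex = np.zeros((2, 2, 2, 3))
tex[0, 0, 0] = 1.0   # view 0 sees it as white
tex[1, 0, 0] = 0.5   # view 1 sees it as gray
masks = np.zeros((2, 2, 2), dtype=bool)
masks[:, 0, 0] = True
fused, holes = fuse_textures(tex, masks)
```

The returned hole mask is exactly the input to the gap-filling step listed above; a per-view weighting (e.g. by viewing angle) could replace the plain average.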
TODO
- Fill gaps and generate missing parts of the texture
- Generate transition textures between adjacent views
I plan to finish the task this semester.