CS 184 Final Project Proposal: Point Cloud to Mesh


Members

Yinghao Zhang zhangyinghao@berkeley.edu
Yifan Wang wyf020803@berkeley.edu
Tianzhe Chu chutzh@berkeley.edu
Xueyang Yu yuxy@berkeley.edu

Summary

Our project aims to implement conversion between point cloud and mesh representations of 3D objects. This conversion enhances the flexibility and compatibility of 3D object representation in various applications, enabling users to work with whichever format best suits their needs.

Problem Description

In computer graphics, a point cloud represents a 3D object as an unordered set of points in space, while a mesh represents its surface as a set of vertices connected into faces, typically triangles.
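
As a minimal sketch of the difference (the NumPy representation here is our own choice, not mandated by any particular paper), the two formats differ mainly in whether connectivity is stored:

```python
import numpy as np

# A point cloud: just an (N, 3) array of xyz coordinates, with no connectivity.
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

# A triangle mesh: the same kind of vertex array plus a face array, where each
# row indexes three vertices that form one triangle (here, a tetrahedron).
vertices = points
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])
```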

The ability to convert between point cloud and mesh formats is essential as certain applications require a specific format. For example, point clouds are commonly used for capturing 3D data from real-world objects, while meshes are often used for rendering and animation purposes.

Converting one format to the other requires a fundamental change in the data structure, which can be complex and computationally expensive. Moreover, the conversion process may lose information or introduce artifacts, which is undesirable when we pursue an equivalent conversion.

To achieve this goal, we plan to implement an existing algorithm that accurately converts between point cloud and mesh formats. Specifically, we plan to implement the more challenging paper mentioned in the Final Project Idea.
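
As a reference point before implementing the paper's algorithm, a simple baseline for the point-cloud-to-mesh direction is off-the-shelf screened Poisson surface reconstruction. The sketch below uses Open3D; the library choice, file names, and parameters are our own assumptions rather than part of any referenced method:

```python
import open3d as o3d

# Load a point cloud (e.g. a scan from the Stanford 3D Scanning Repository
# converted to .ply); the file name here is a placeholder.
pcd = o3d.io.read_point_cloud("bunny_points.ply")

# Poisson reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Screened Poisson surface reconstruction; `depth` trades detail for speed.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

o3d.io.write_triangle_mesh("bunny_mesh.ply", mesh)
```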

Goals and Deliverables

Part 1: Baseline Plan

We plan to develop a robust and efficient algorithm for converting point clouds to meshes and meshes to point clouds. The algorithm should be accurate and reliable. We plan to implement it in Python, which makes it easy to take full advantage of available computational resources and libraries. Another goal is to minimize data loss and artifacts during the conversion process: the converted data should preserve the original quality, including texture, curvature, and surface features, as much as possible.
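
For the mesh-to-point-cloud direction, the simplest baseline is to sample points on the mesh surface. The sketch below (again assuming Open3D and a placeholder file name) uses Poisson-disk sampling, which gives a more even point distribution than taking the mesh vertices directly:

```python
import open3d as o3d

# Load a mesh and sample it back into a point cloud.
mesh = o3d.io.read_triangle_mesh("bunny_mesh.ply")
pcd = mesh.sample_points_poisson_disk(number_of_points=20000)
o3d.io.write_point_cloud("bunny_resampled.ply", pcd)
```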

Finally, we will optimize the conversion process to improve its performance and reduce its computational cost. This optimization should be achieved through techniques such as surface reconstruction and mesh simplification, minimizing data loss and improving the quality of the output.
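
As one concrete example of the simplification step (assuming Open3D; the target triangle count is a tunable parameter we would choose empirically):

```python
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("bunny_mesh.ply")

# Quadric-error-metric decimation reduces the triangle count while trying to
# preserve the overall shape.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=50000)

# Clean up artifacts introduced by decimation.
simplified.remove_degenerate_triangles()
simplified.remove_unreferenced_vertices()
```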

To evaluate performance, we will measure the Hausdorff distance between our result and the reference data, the running time, and the scalability of our algorithm on datasets of varying sizes.
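
A sketch of the evaluation metric, assuming SciPy and that both the result and the reference are given (or sampled) as point sets:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 3) point sets."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

# Example usage with random stand-in data; in practice we would sample points
# from our reconstructed mesh and from the reference surface.
result = np.random.rand(1000, 3)
reference = np.random.rand(1200, 3)
print(hausdorff_distance(result, reference))
```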

We will document the algorithm development process, including the research, implementation, optimization, and evaluation, in a report or paper. The documentation will enable others to understand and replicate our work.

Part 2: Inspirational Plan

We also plan to extend the baseline plan to achieve the following goals:

  1. Finding mesh-to-point-cloud methods other than directly using the mesh vertices
  2. Extending the baseline pipeline (point clouds to meshes) to a three-stage version: images -> point clouds -> meshes. The first stage can be solved by an existing NeRF-style method, e.g., Point-NeRF (Xu et al., 2022).

Goals

Our expectations are as follows:

  1. For the first part, we expect rendering quality equivalent to that of the reference paper, with optimized rendering speed.
  2. For the second part, we do not expect state-of-the-art results, but we hope to obtain a pipeline that works end to end.

Schedule

Resources

We will implement the algorithm described in the reference paper and use the Stanford 3D Scanning Repository as our dataset. Our advanced work requires implementing a NeRF-like algorithm, as described in Point-NeRF.