PETNet: Polycount and Energy Trade-off Deep Networks for Producing 3D Objects from Images

Abstract

We consider the task of predicting 3D object shapes from color images on mobile platforms, which has many real-world applications including augmented reality (AR), virtual reality (VR), and robotics. Recent work has developed a Graph Convolution Network (GCN) based approach that produces 3D object shapes as triangular meshes of increasing polycount (number of triangles in the mesh). In this paper, we propose a novel approach, called Polycount-Energy Trade-off Networks (PETNet), to trade off the polycount of a 3D object shape against the energy consumed at run-time. The key idea behind PETNets is to design an architecture of increasing complexity with a comparator module, leveraging the pre-trained GCN to perform input-specific adaptive predictions. We perform experiments using a pre-trained GCN on the ShapeNet dataset. Results show that optimized PETNets achieve 20%-37% energy savings for a negligible loss (0.01 to 0.02) in accuracy, and provide fine-grained control over performance compared to the fixed operating point of the state-of-the-art Pixel2Mesh network.
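To make the adaptive-prediction idea concrete, the sketch below shows one way a comparator module could gate a cascade of mesh-refinement stages: each stage raises polycount and costs energy, and the comparator decides per input whether the current mesh is already good enough to return. This is a minimal illustration under assumed names, not the paper's actual architecture; `ComparatorModule`, `DummyRefineStage`, `adaptive_predict`, and the `threshold` value are all hypothetical, and the real PETNet stages would be the pre-trained GCN blocks.

```python
import torch
import torch.nn as nn


class ComparatorModule(nn.Module):
    """Hypothetical comparator: scores intermediate per-vertex features;
    a high score means the current (lower-polycount) mesh is accepted."""
    def __init__(self, feat_dim):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, vert_feats):
        # vert_feats: (num_vertices, feat_dim); pool to one confidence score
        return torch.sigmoid(self.scorer(vert_feats.mean(dim=0)))


class DummyRefineStage(nn.Module):
    """Stand-in for one pre-trained GCN refinement block: a real stage
    would also subdivide faces (raise polycount); omitted in this sketch."""
    def __init__(self, feat_dim):
        super().__init__()
        self.update = nn.Linear(feat_dim, feat_dim)

    def forward(self, vert_feats):
        return torch.relu(self.update(vert_feats))


def adaptive_predict(stages, comparators, vert_feats, threshold=0.9):
    """Run stages in order; stop as soon as a comparator clears `threshold`,
    trading polycount for the energy of the skipped stages."""
    for i, (stage, comp) in enumerate(zip(stages, comparators)):
        vert_feats = stage(vert_feats)
        if comp(vert_feats).item() >= threshold:
            return vert_feats, i  # early exit at stage i
    return vert_feats, len(stages) - 1


if __name__ == "__main__":
    feat_dim = 128
    stages = nn.ModuleList(DummyRefineStage(feat_dim) for _ in range(3))
    comparators = nn.ModuleList(ComparatorModule(feat_dim) for _ in range(3))
    feats, exit_stage = adaptive_predict(stages, comparators,
                                         torch.randn(156, feat_dim))
    print(f"exited after stage {exit_stage}, features {tuple(feats.shape)}")
```

In this framing, the exit threshold is the knob that provides fine-grained control over the accuracy-energy trade-off: raising it forces more inputs through the later, higher-polycount stages, while lowering it saves energy on easier inputs.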

Publication
2020 57th ACM/IEEE Design Automation Conference (DAC)
