Title
Point Cloud Compression in MPEG
Abstract
Consumer- and industry-level 3D sensing devices are becoming more common than ever before, increasing the amount of available 3D data. 3D scans can capture the full geometry and details of a 3D scene, and are useful in many applications including virtual reality, 3D video, robotics and geographic information access. Among the many representation formats for 3D data, point clouds offer a good tradeoff among ease of acquisition, realistic rendering, and ease of manipulation and processing. However, point clouds are typically represented by extremely large amounts of data, which is a significant barrier for mass-market applications. To address this challenge, the Moving Picture Experts Group (MPEG) initiated a standardization activity on Point Cloud Compression (PCC).
This tutorial introduces the technologies developed during the MPEG standardization process for defining an international standard for point cloud compression. The diversity of point clouds in terms of density led to the design of two approaches. The first one, called V-PCC (Video-based Point Cloud Compression), projects the 3D point cloud onto a set of 2D patches and encodes them using traditional video technologies. The second one, called G-PCC (Geometry-based Point Cloud Compression), traverses the 3D space directly in order to build the predictors.
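To give a rough intuition of the two approaches, the Python sketch below is an illustration only; it is not taken from the MPEG standards or their reference software. The function names, the single-plane projection, and the breadth-first occupancy serialization are simplifications chosen here for illustration. The first function shows the V-PCC-style idea of orthographically projecting points onto a 2D depth image that a video codec can then compress; the second shows the G-PCC-style idea of traversing an octree over the 3D space and emitting one 8-bit occupancy code per occupied node.

import numpy as np

def project_to_depth_patch(points, axis=2):
    """Simplified V-PCC-style projection: drop the points onto the plane
    orthogonal to `axis`, keeping the nearest depth per pixel. The resulting
    2D depth map could then be encoded with a conventional video codec."""
    pts = np.asarray(points, dtype=np.int64)
    uv_axes = [a for a in range(3) if a != axis]
    u, v, d = pts[:, uv_axes[0]], pts[:, uv_axes[1]], pts[:, axis]
    w, h = u.max() + 1, v.max() + 1
    depth = np.full((h, w), -1, dtype=np.int64)      # -1 marks empty pixels
    for ui, vi, di in zip(u, v, d):
        if depth[vi, ui] < 0 or di < depth[vi, ui]:  # keep the nearest surface
            depth[vi, ui] = di
    return depth

def octree_occupancy_codes(points, depth):
    """Simplified G-PCC-style geometry traversal: recursively split the
    bounding cube of side 2**depth (points assumed quantized to that range)
    and emit one 8-bit occupancy code per occupied node, breadth-first.
    In an actual codec these codes would be entropy coded."""
    size = 1 << depth
    nodes = {(0, 0, 0): np.asarray(points, dtype=np.int64)}
    codes = []
    for level in range(depth):
        half = size >> (level + 1)
        next_nodes = {}
        for (ox, oy, oz), pts in nodes.items():
            code = 0
            for child in range(8):
                cx = ox + ((child >> 2) & 1) * half
                cy = oy + ((child >> 1) & 1) * half
                cz = oz + (child & 1) * half
                mask = ((pts[:, 0] >= cx) & (pts[:, 0] < cx + half) &
                        (pts[:, 1] >= cy) & (pts[:, 1] < cy + half) &
                        (pts[:, 2] >= cz) & (pts[:, 2] < cz + half))
                if mask.any():
                    code |= 1 << child           # mark this child as occupied
                    next_nodes[(cx, cy, cz)] = pts[mask]
            codes.append(code)
        nodes = next_nodes
    return codes

# Example: three points in a 8x8x8 cube (depth = 3)
# pts = np.array([[0, 0, 0], [1, 2, 3], [7, 7, 7]])
# print(project_to_depth_patch(pts))
# print(octree_occupancy_codes(pts, depth=3))

In both cases the heavy lifting of the real codecs (patch segmentation, occupancy maps, attribute coding, entropy coding) is omitted; the sketch only conveys why the first approach can reuse 2D video tools while the second operates natively in 3D.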
With the current V-PCC encoder implementation providing a compression ratio of 125:1, a dynamic point cloud of 1 million points can be encoded at 8 Mbit/s with good perceptual quality. For the second approach, the current implementation of a lossless, intra-frame G-PCC encoder provides a compression ratio of up to 10:1, and lossy coding with acceptable quality at ratios of up to 35:1.
By providing a high level of immersiveness at currently available bandwidths, the two MPEG standards are expected to enable several applications such as six Degrees of Freedom (6 DoF) immersive media, virtual reality (VR) / augmented reality (AR), immersive real-time communication, autonomous driving, cultural heritage, and the mixing of individual point cloud objects with background 2D/360-degree video.
Speakers
Danillo B. Graziosi received the B.Sc., M.Sc. and D.Sc. degrees in electrical engineering from the Federal University of Rio de Janeiro, Rio de Janeiro, Brazil, in 2003, 2006 and 2011, respectively. He is currently the Manager of the Next-Generation Codec Group at Sony’s R&D Center US San Jose Laboratory. His research interests include video/image processing, light fields, and point cloud compression.
Ohji Nakagami received the B.Eng. and M.S. degrees in electronics and communication engineering from Waseda University, Tokyo, Japan, in 2002 and 2004, respectively. He has been with Sony Corporation, Tokyo, Japan, since 2004. Since 2011, he has been with the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group, where he has been contributing to video coding standardization. He is an Editor of video-based point cloud compression (ISO/IEC 23090-5) and of geometry-based point cloud compression (ISO/IEC 23090-9).
Euee S. Jang is a Professor in the Dept. of Computer Science & Engineering, Hanyang University, Seoul, Korea. His research interests include image/video coding, reconfigurable video coding, and computer graphics objects. He has authored more than 150 MPEG contribution papers, more than 30 journal or conference papers, 35 pending or accepted patents, and two book chapters.
Marius Preda is an Associate Professor at Institut Polytechnique de Paris and Chairman of the 3D Graphics group of ISO’s MPEG. He contributes to several ISO standards with technologies in the fields of 3D graphics, virtual worlds and augmented reality, an activity for which he has received several ISO Certificates of Appreciation. He received a Degree in Engineering from Politehnica Bucharest, a PhD in Mathematics and Informatics from University Paris V, and an eMBA from IMT Business School, Paris.
Khaled Mammou received a Ph.D. degree in Applied Mathematics and Computer Science from the University of Paris V in 2008. He is a Senior Software Engineer at Apple, working on the design and optimization of multimedia codec solutions, and has been a member of the ISO/IEC Moving Picture Experts Group (MPEG) Committee since 2005, focusing especially on 3D graphics compression. He chaired the MPEG Ad-Hoc Group on MR3DMC (Multi-Resolution 3D Mesh Coding) and contributed significantly to the standardization of the MR3DMC, SC3DMC (Scalable Complexity 3D Mesh Compression) and FAMC (Frame-based Animated Mesh Compression) MPEG standards for static and animated 3D mesh compression. Currently, he is the co-chair of the MPEG Point Cloud Compression Ad-Hoc Group and an editor of the ISO/IEC specifications for video-based point cloud compression (ISO/IEC 23090-5) and geometry-based point cloud compression (ISO/IEC 23090-9).