Tencent Industry Workshop
Wednesday 28th Oct, 2020 – 18:00–19:00
Welcome to the Tencent Industry Workshop!
Tencent Media Lab is delighted to be hosting this industry workshop at ICIP 2020. Our engineers will be showcasing their efforts in designing multimedia algorithms and systems used by hundreds of millions worldwide, from QQ and Tencent Meeting to Tencent Cloud, and discussing vision-related topics such as video coding, processing, understanding, generation, immersive media and more. The 2020 pandemic has shown how deeply multimedia systems affect our daily lives, and we will be highlighting just what it takes for us to keep developing our technology. We will be syncing up live with teams all over the world. Don’t miss it!
- Opening Address: Dr. Shan Liu (Chair), 5 min.
- Intelligent Media: Dr. Songnan Li, 10 min.
- Standards: Dr. Xiang Li, 10 min.
- Media Compression: Dr. Bin Zhu, 10 min.
- Immersive Media: Ms. Weiwei Feng, 10 min.
- Q&A: 15 min.
Dr. Shan Liu is a Tencent Distinguished Scientist and General Manager of Tencent Media Lab. She was formerly Director of the Media Technology Division at MediaTek USA, and before that was with MERL, Sony and IBM. She has been actively contributing to international standards for more than a decade and served as co-editor of HEVC SCC and the emerging VVC standard. She has numerous technical contributions adopted into various standards, such as HEVC, VVC, OMAF, DASH and PCC. At the same time, technologies and products developed by her and under her leadership have served several hundred million users. Dr. Liu holds more than 150 granted US and global patents and has published more than 80 journal and conference papers. She served on the Industrial Relationship Committee of the IEEE Signal Processing Society (2014-2015) and is on the Editorial Board of IEEE Transactions on Circuits and Systems for Video Technology (2018-2021). She was the VP of Industrial Relations and Development of the Asia-Pacific Signal and Information Processing Association (2016-2017) and was named APSIPA Industrial Distinguished Leader in 2018. She was appointed Vice Chair of the IEEE Data Compression Standards Committee in 2019. Dr. Liu obtained her B.Eng. degree in Electronics Engineering from Tsinghua University, and her M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California.
Xiang Li (S’02-M’10-SM’13) received the B.Sc. and M.Sc. degrees in electronic engineering from Tsinghua University, Beijing, China, and the Dr.-Ing. degree in Electrical, Electronic and Communication Engineering from the University of Erlangen-Nuremberg, Germany.
He is currently a senior principal researcher and the head of video coding standards in Tencent Media Lab. Before joining Tencent, he was with Qualcomm, MediaTek, the Institute of Communications Engineering at RWTH Aachen University, and Siemens. He has been working in the field of video compression for years and is an active contributor to international video coding standards. He has served as chair or co-chair of a number of ad hoc groups and core experiments, including as co-chair of the JEM and VVC reference software activities, and as co-editor of MPEG-5 EVC. Dr. Li is a senior member of IEEE. He has published over 50 journal and conference papers and 300+ standard contributions, and holds 240+ US granted and pending patents. His research interests include video coding and processing.
Dr. Songnan Li is currently the Director of Video Technology in Tencent Media Lab. He received his BS and MS degrees in Computer Science and Engineering from HIT in 2004 and 2006, respectively, and his Ph.D. in Electrical Engineering from the Chinese University of Hong Kong in 2012. From 2014 to 2016, he was a research assistant professor at CUHK. From 2016 to 2019, he was deputy director of the TCL HK research center. Dr. Li has more than 16 years of experience in image, video and computer vision research and product development. His research interests include video processing and understanding, face reconstruction, 3D object tracking, image/video quality assessment, and image and video compression. His work has been published in prestigious journals and conferences such as TPAMI, TMM, TCSVT, TIP and CVPR.
Dr. Zhu Bin graduated from Tianjin University with bachelor’s and master’s degrees, and received his Ph.D. from the Department of Electrical and Computer Engineering at Iowa State University in the United States, where he won outstanding graduate and excellent dissertation awards. After graduation, Bin worked at several well-known semiconductor start-up companies such as Equator and Stretch (both later acquired), where he was responsible for video algorithm and software development. Later Bin joined Intel and then Apple, leading teams to develop core video technologies and products. He also has two years’ entrepreneurship experience. Since joining Tencent Multimedia Lab in December 2018, Bin has led the team responsible for the implementation, optimization and open-source collaboration of next-generation international video coding standards such as AV1 and H.266. He is also responsible for the development, optimization and integration of video technologies for Tencent Meeting.
Feng Weiwei is a Tencent multimedia expert engineer who joined Tencent America in 2018 as an immersive media expert engineer and project leader. Ms. Feng has extensive ToB and ToC industry experience. She previously worked as a core multimedia system engineer in different industries such as social media and finance, and has successfully delivered a number of key technologies and services to millions of users.
YouTube/Chrome Industry Workshop
Monday 26th Oct, 2020 – 18:00–19:00
Welcome to the YouTube/Chrome Industry Workshop!
Stories from the technological crucible of User-Generated Video
Since 2012 the Chrome Media and YouTube Media Algorithms teams have been showcasing their research and development activities at ICIP. Previous workshops have highlighted work on new video codecs and novel video enhancement pipelines used in YouTube. The goal is to motivate young engineers and researchers in the area of audiovisual processing and to show how research ideas become products in a video system. Engineers working in product teams speak about their experiences and highlight new technology being deployed to huge audiences worldwide. This year we are integrating live talks from different Google offices around the world. Don’t miss it!
- Balu Adsumilli
- Debargha Mukherjee
Balu Adsumilli manages and leads the Media Algorithms group at YouTube/Google. He received his master’s degree from the University of Wisconsin–Madison in 2002, and his PhD from the University of California Santa Barbara in 2005, on watermark-based error resilience in video communications. From 2005 to 2011, he was a Sr. Research Scientist at Citrix Online, and from 2011 to 2016, he was Sr. Manager of Advanced Technology at GoPro; at both places he developed algorithms for image/video quality enhancement, compression, capture, and streaming. He is an active member of IEEE (and the MMSP TC), ACM, SPIE, and VES, and has co-authored more than 120 papers and patents. His fields of research include image/video processing, machine vision, video compression, spherical capture, VR/AR, visual effects, and related areas.
Dr. Debargha Mukherjee received his M.S./Ph.D. degrees in ECE from the University of California Santa Barbara in 1999. Thereafter, through 2009, he was with Hewlett-Packard Laboratories, conducting research on video/image coding and processing. Since 2010 he has been with Google Inc., where he is currently a Principal Engineer involved with open-source video codec research and development, notably VP9 and AV1. Prior to that he was responsible for video quality control and 2D-3D conversion on YouTube. Debargha has authored/co-authored more than 100 papers on various signal processing topics, and holds more than 60 US patents, with many more pending. He has delivered many workshops and talks on Google’s royalty-free line of codecs since 2012, and more recently on the AV1 video codec from the Alliance for Open Media (AOM). He has served as an Associate Editor of the IEEE Transactions on Circuits and Systems for Video Technology and the IEEE Transactions on Image Processing. He is also a member of the IEEE Image, Video, and Multidimensional Signal Processing Technical Committee (IVMSP TC).
Dr. Yilin Wang received his PhD from the University of North Carolina at Chapel Hill in 2014, working on topics in computer vision and image processing. After graduation, he joined the Media Algorithms team at YouTube/Google. His research fields include video processing infrastructure, video quality assessment, and video compression.
Dr. Jingning Han received the B.S. degree in electrical engineering from Tsinghua University in 2007, and the M.S. and Ph.D. degrees in electrical and computer engineering from the University of California Santa Barbara in 2008 and 2012, respectively. He is currently a Senior Staff Engineer with Google. He is a main architect of the VP9 and AV1 codecs. His research interests include video coding and computer architecture.
He holds more than 50 U.S. patents in the field of video coding and has published more than 60 research articles. He received the Dissertation Fellowship in 2012 from the Department of Electrical and Computer Engineering, University of California Santa Barbara. He was a recipient of the Best Student Paper Award at the IEEE International Conference on Multimedia and Expo in 2012, and received the IEEE Signal Processing Society Best Young Author Paper Award in 2015.
Efficient video compression and quality measurement at Facebook
Wednesday 28th Oct, 2020 – 17:00–18:00
Abstract: “Billions of videos get processed at Facebook and Instagram, with ever-increasing quality requirements but under strict compute resource constraints. We will show how our processing pipeline integrates video encoders and quality metrics with scalable compute requirements, how we utilize HW and SW components to achieve our goal, and how we track and optimize system performance. We intend to show some of the research work in optimizing video quality at scale in Instagram, the use of video quality metrics to optimize video delivery, and the porting of video encoders and quality metrics into HW.”
- I. Katsavounidis
Visala Vaduganathan is an ASIC & FPGA Engineer at Facebook working on hardware solutions for video transcoders. She received her B.S. degree in Electronics and Instrumentation Engineering from Annamalai University and started her career in ASIC design as a memory model developer for Arasan Chip Systems. She then moved to MediaQ, where she worked on MPEG4/MPEG2 hardware designs for an SoC, the first of many hardware video codec solutions she would work on. Over the next decade and a half she contributed to multi-standard video encoders and decoders in Nvidia’s Tegra SoCs and in Qualcomm Snapdragon SoCs. She earned her Master’s degree in Electrical Engineering from San Jose State University while working at Nvidia.
Xing Cindy Chen is a member of the Infrastructure ASIC Team at Facebook working on algorithms and architecture of HW accelerators for video processing at scale. Before joining Facebook, she worked at Nvidia and Intel for over 15 years as a senior ASIC architect. She worked on algorithm investigation, architecture definition and functional / performance simulation for various processing units in GPUs and SOCs, such as rasterization, texturing, stereo vision and camera image processing. She received her Ph.D. in Electrical Engineering from Stanford University. Her technical interests lie in algorithms and HW acceleration of video coding/analysis, computer graphics, and machine learning.
Deepa Palamadai Sundar is an ASIC Design Engineer at Facebook, specializing in developing energy-efficient accelerator solutions for video transcoding at scale. Prior to joining Facebook, she worked at Intel for over 7 years as a Senior Design Engineer, building Xeon Phi processors for high-performance supercomputing. She has worked on several aspects of hardware design involving micro-architecture, RTL development, power analysis & optimization, and post-silicon debug. Deepa received her Master’s degree in Electrical Engineering from Stanford University and her Bachelor’s degree in Electronics and Communication from the Madras Institute of Technology, India.
Gaurang Chaudhari is currently working with Facebook to improve and optimize video quality and transcoding efficiency. He received his B.S. degree from Veermata Jijabai Technological Institute (VJTI), Mumbai, India and his M.S. degree from the University of Southern California, Los Angeles, CA. After that, he worked for several years on hardware implementations of video codec and computer vision algorithms. Prior to Facebook, he worked on mapping video pre/post-processing and computer vision algorithms onto Qualcomm Snapdragon mobile platforms. His recent research interests include video coding, processing and quality assessment, algorithm complexity optimization, energy-efficient optimizations for video transcoding, and mapping algorithms onto hardware.
Haixia Shi is a member of Instagram Media Infrastructure at Facebook working on video quality measurement and encoding improvements for Instagram user generated content. Prior to joining Facebook, he spent years at NVIDIA working on hardware driver and firmware for MPEG-2, VC-1 and AVC/SVC video decoder and playback, at Google working on Chrome OS and Android graphics framework, and at Apple working on hardware driver and firmware for iOS HEVC and AVC hardware encoder. He received his M.S. in Computer Science from University of California, San Diego in 2005.
Cosmin Stejerean is a member of the Media Algorithms team at Facebook, working on optimizing the quality of video at scale. He has a broad technology background spanning
software development, information security and IT operations, and received a B.S. degree in Business Management from Western Governors University. He spent several years as the CTO of Fulcrum Technologies, working on supply chain management software for communications service providers, where he observed the financial impact of upgrading networks to keep up with the growth of video streaming. This motivated him to leave this role and pursue his passion for optimizing video. His research interests include video quality assessment, QoE optimizations on cellular networks, and reducing complexity of video transcoding.
Dr. Ioannis Katsavounidis is a member of Video Fundamentals and Research, part of the Video Infrastructure team, leading technical efforts to improve video quality across all video products at Facebook. Before joining Facebook, he spent 3.5 years at Netflix, contributing to the development and popularization of VMAF, Netflix’s video quality metric, as well as inventing the Dynamic Optimizer, a shot-based video quality optimization framework that brought significant bitrate savings across the whole streaming spectrum. Before that, he was a professor for 8 years in the University of Thessaly’s Electrical Engineering Department in Greece, teaching video compression, signal processing and information theory. He has over 100 publications and patents in the general field of video coding, as well as in high-energy experimental physics. His research interests lie in video coding, video quality, adaptive streaming, and hardware/software partitioning of multimedia processing.
Shankar Regunathan is a member of Video Algorithms at Facebook working on video quality measurement and encoding improvements with a particular focus on user-generated content. Prior to joining Facebook, he spent several years at Microsoft working on VC-1, JPEG XR and contributions to AVC/SVC. He received a Ph.D. in Electrical Engineering from the University of California, Santa Barbara. He received the IEEE Signal Processing Society Best Paper Award in 2004 and 2007. His research interests lie at the intersection of video compression and quality metrics with computational efficiency.
Empathic Reality – How to create human-centric content for virtual and augmented reality
Tuesday 27th Oct, 2020 – 14:00–15:00
The use of smart technology in office buildings to boost employee well-being, happiness, productivity and innovation is the core of the Empathic Building business. Empathic Building is a human-centric digital service and design platform that focuses on improving employee well-being, happiness and performance. By understanding workspaces, meeting rooms and employee behavior, buildings can become human-centric, instantly responding to issues and offering options for workspaces, equipment or colleague locations. The workshop will explore how to transform the Empathic Building concept into an Empathic Reality, and what role signal, image and video processing and smart sensors can play in making this transformation a success.
Organizer & Speaker
Tomi is recognized as a superhero of the digital era. He is a visionary, futurist, entrepreneur, and father of Empathic Reality. Tomi is the Head and Founder of Empathic Building at Haltian, where he is leading Empathic Building toward explosive international growth.
Prior to this, he worked for Tieto (later TietoEvry), where he was the Founder, Advisor and Evangelist of the Empathic Building concept. He led the development of new innovations and services in the Empathic Building Core product. He was responsible for growth outside the Nordic countries (EU and North America), helping customers create empathic content for their digital twins.