Research Focus
  • 3D Modeling of Objects and People

Research in this area focuses on the use of sensors such as cameras to create high-quality 3D models, which serve as the basis for photorealistic rendering. Animatable models of people and deformable objects are also created, which in turn support further animation and simulation.

  • Photorealistic Rendering

Research in this area focuses on the development of rendering engines that are capable of rendering various photorealistic textures, animations, and lighting effects.

  • Natural HCI

Research in this area focuses on the use of sensors such as cameras to capture and map human body movements, gestures, and facial expressions in 3D, further improving character animation and HCI.

  • Environmental Mapping and Positioning

Research in this area focuses on using cameras and sensors to recreate environments, such as malls, streets, and museums, as digital 3D models. Information from high resolution cameras helps achieve high-precision positioning, paving the way for technologies such as mixed reality, AR navigation, and path planning for autonomous robots.

  • Robotic Manipulation

Research in this area focuses on the use of sensors such as cameras to create 3D models for objects and their environments with the goal of allowing robots to manipulate real-world objects.

Products and Applications
  • Holographic Store

    Devices scan the interior and exterior areas of physical stores to create VR models that can be viewed on mobile devices. Useful information about products and services can be overlaid in the VR rendering, providing customers with a one-of-a-kind experience. Virtual shopping assistants can also appear in the VR rendering, making the experience even more immersive.

  • AR Parallel World

    Devices scan closed- and open-world environments to recreate the world around us. Information from cameras allows for high-precision positioning, which lays the foundation for applications such as AR navigation, AR information overlay, AR photography, and AR customer services. Content development platforms are also provided for enterprises to customize their applications.

  • Intelligent Robotics for Data Center O&M

    The XR Lab independently develops an intelligent robot that can perform O&M in data centers. Equipped with a robotic arm and driven by complex algorithms that integrate visual and tactile sensing, the robot can automatically replace disks, take stock of assets, and perform refined inspections. The robot provides a full-fledged O&M solution that supports unattended monitoring and inspection, enhancing data security.

Academic Achievements
Publications and Presentations
  • Zhiwen Fan, Lingjie Zhu, Honghua Li, Xiaohao Chen, Siyu Zhu, Ping Tan. FloorPlanCAD: A Large-Scale CAD Drawing Dataset for Panoptic Symbol Spotting. ICCV 2021.
  • Lizhe Liu, Xiaohao Chen, Siyu Zhu, Ping Tan. CondLaneNet: a Top-to-down Lane Detection Framework Based on Conditional Convolution. ICCV 2021.
  • Shitao Tang, Chengzhou Tang, Rui Huang, Siyu Zhu, Ping Tan. Learning Camera Localization via Dense Scene Matching. CVPR 2021.
  • Weihao Yuan, Yazhan Zhang, Bingkun Wu, Siyu Zhu, Ping Tan, Michael Yu Wang, Qifeng Chen. Stereo Matching by Self-supervision of Multiscopic Vision. IROS 2021.
  • Chuan Fang, Shuai Ding, Zilong Dong, Honghua Li, Siyu Zhu, Ping Tan. Single Shot is Enough: Panoramic Infrastructure Based Calibration of Multiple Cameras and 3D LiDARs. IROS 2021.
  • Hualie Jiang, Zhe Sheng, Siyu Zhu, Zilong Dong, Rui Huang. UniFuse: Unidirectional Fusion for 360 Panorama Depth Estimation. RA-L 2021.
  • Xiaodong Gu, Zhiwen Fan, Zuozhuo Dai, Siyu Zhu, Ping Tan. Cascade Cost Volume for High-Resolution Multi-View Stereo and Stereo Matching. CVPR 2020 Oral.
  • Sicong Tang, Feitong Tan, Kelvin Cheng, Zhaoyang Li, Siyu Zhu, Ping Tan. A Neural Network for Detailed Human Depth Estimation from a Single Image. ICCV 2019 Oral.
  • Zuozhuo Dai, Mingqiang Chen, Xiaodong Gu, Siyu Zhu, Ping Tan. Batch Feature Erasing for Person Re-identification and Beyond. ICCV 2019.
  • Luwei Yang, Ziqian Bai, Chengzhou Tang, Honghua Li, Yasutaka Furukawa, Ping Tan. SANet: Scene Agnostic Network for Camera Localization. ICCV 2019.
