Programme Schedule 

 
Day 1: 26 Dec 2023 | Day 2: 27 Dec 2023

8:30  Registration

Session 1 (9:30)

Day 1:
9:30   Welcome Talk
       Prof Alex Kot [Director, ROSE Lab, EEE, NTU]
9:35   Peng Cheng Mind: Series of Open-Source AI Large Models
       Prof Gao Wen [Director, Peng Cheng Laboratory]
10:05  Harnessing the Power of Deep Learning for Urban Sound Sensing and Noise Mitigation
       Prof Gan Woon Seng [Director, Smart Nation Lab, EEE, NTU], IEEE-SPS Distinguished Lecturer

Day 2:
Session Chair: Prof Lin Weisi [Associate Chair, SCSE, NTU]

Invited Speakers:

Prof Shi Boxin [Associate Director, Institute for Visual Technology, School of Computer Science, Peking University]
Title: Neural Radiance Fields with Light Transport Occlusions

Prof Liu Manhua [Shanghai Jiao Tong University]
Title: Multimodal Brain Image Computing and Analysis for Brain Age Estimation

Prof Liu Jun [Singapore University of Technology and Design]
Title: Diffusion Models for 3D Pose and Mesh Estimation

Mr Li Xiaotong [PhD Student, Peking University]
Title: Exploring Model Transferability through the Lens of Potential Energy

11:05  Break

Session 2 (11:30)

Day 1:
Session Chair: Prof Jiang XD [Director, Centre for Information Sciences and Systems, EEE, NTU]

Invited Speakers:

Prof Wang Danwei [Director, ST Engineering-NTU Corporate Lab, NTU]
Title: Robust Perception, 3D Digital Twin, IoT and Applications

Prof Yu Zitong [Great Bay University]
Title: Human-Centric Subtle Vision Computing

Prof Wang Yan [Zijiang Young Scholar, East China Normal University]
Title: Medical Image Analysis Beyond Limitations

Prof Shen Wei [Shanghai Jiao Tong University]
Title: Segment Anything in 3D

Day 2:
Session Chair: Prof Cong Gao [Co-Director, Singtel Cognitive and Artificial Intelligence Lab, NTU]

Invited Speakers:

Dr Cheng Jun [I2R, A*STAR]
Title: Evidential Local-Global Fusion for Stereo Matching

Prof Li Leida [Huashan Scholar, Xidian University]
Title: Recent Advances in Image Aesthetics Assessment

Dr Chen Zhuo [Peng Cheng Lab]
Title: Smart Sensing and Its Practice in Traffic Control

Prof Tu Zhigang [Wuhan University]
Title: Human Action Recognition and Capturing in Video

13:00  Lunch

Session 3 (14:30)

Day 1:
Session Chair: Prof Lu Shijian [SCSE, NTU]

Invited Speakers:

Prof Cai Jianfei [Head, Data Science & AI Department, Monash University]
Title: Stitchable Neural Networks

Prof Lu Jiwen [Deputy Chair, Department of Automation, Tsinghua University]
Title: General Visual Big Model

Prof Wang Shiqi [City University of Hong Kong]
Title: Generative Visual Compression: Hype or Hope?

Prof Zhang Xinfeng [University of Chinese Academy of Sciences]
Title: Video Coding Framework: From Signal Prediction Compression to Model Representation and Compression

Day 2:
Session Chair: Prof Yap Kim Hui [EEE, NTU]

Invited Speakers:

Prof Wan Renjie [Hong Kong Baptist University]
Title: Protecting the Copyright of Neural Radiance Fields

Dr Yang Wenhan [National Young Scholar Program, Peng Cheng Lab]
Title: Towards Visual Computing in Open Scenarios

Prof Chen Tao [Fudan University]
Title: Efficient Computer Vision and Deep Model Compression

Ms Wang Wenjing [PhD Student, Peking University]
Title: Joint High-Low Level for Low Light Adaptation

16:00  Break

Session 4 (16:30)

Day 1:
Panel Discussion:
Academic Career Opportunities and Future Trends in Funded Research

Panel Members:
Prof Lam Kwok Yan [Associate Vice President (Strategy and Partnerships), NTU]
Prof Cai Jianfei [Head, Data Science & AI Department, Monash University]
Prof Duan Lingyu [Peking University]

Facilitator:
Prof Alex Kot [Director, ROSE Lab, EEE, NTU]

Day 2:
Networking Session:
Prof Yang Tingting [Director of Human Resources & Education Office, Peng Cheng Lab]

18:00  Welcome Dinner

 

All timings are given in Singapore time (GMT+8).

 

 

 

IEEE Signal Processing Society Distinguished Lecturer

Speaker: Prof Woon-Seng Gan, Professor, Nanyang Technological University

Title: Harnessing the Power of Deep Learning for Urban Sound Sensing and Noise Mitigation

Biography

Woon-Seng Gan is a Professor of Audio Engineering and Director of the Smart Nation TRANS (national) Lab in the School of Electrical and Electronic Engineering at Nanyang Technological University, Singapore. He received his BEng (1st Class Hons) and Ph.D. degrees, both in Electrical and Electronic Engineering, from the University of Strathclyde, UK, in 1989 and 1993, respectively. He has held several leadership positions at Nanyang Technological University, including Head of the Information Engineering Division (2011-2014) and Director of the Centre for Info-comm Technology (2016-2019). His research concerns the connections between the physical world, signal processing, and sound control, and has resulted in the practical demonstration and licensing of spatial audio algorithms, directional sound beams, and active noise control for headphones and open windows. He has published more than 400 refereed international journal and conference papers and has translated his research into six granted patents. He is a Fellow of the Audio Engineering Society (AES), a Fellow of the Institution of Engineering and Technology (IET), and was selected as an IEEE Signal Processing Society Distinguished Lecturer for 2023-2024. He served as an Associate Editor of the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP; 2012-2015). He is currently a Senior Area Editor of the IEEE Signal Processing Letters (SPL; 2019-); an Associate Technical Editor of the Journal of the Audio Engineering Society (JAES; 2013-); a Senior Editorial Board member of the Asia Pacific Signal and Information Processing Association Transactions on Signal and Information Processing (ATSIP; 2011-); and an Associate Editor of the EURASIP Journal on Audio, Speech, and Music Processing (EJASMP; 2007-). He is also the President-Elect (2023-2024) of the Asia Pacific Signal and Information Processing Association (APSIPA).

 

Abstract

In the digital age, the integration of sensing, processing, and sound emission into IoT devices has made their economical deployment in urban environments possible. These intelligent sound sensors, like the Audio Intelligence Monitoring at the Edge (AIME) devices deployed in Singapore, operate 24/7 and adapt to varying environmental conditions. As digital ears complementing the digital eyes of CCTV cameras, these devices provide public agencies with a wealth of aural data, enabling the development of comprehensive and effective sound mitigation policies. In this presentation, we will examine the critical requirements for intelligent sound sensing and explore how deep learning techniques can be used to extract meaningful information, such as noise type, dominant noise source direction, sound pressure level, and frequency of occurrence of environmental noise. Additionally, we will introduce new deep-learning-based active noise control and mitigation approaches, including reducing the noise entering residential buildings and generating acoustic perfumes to mask annoyance in urban environments, and show how these deep learning models can be deployed in an edge-cloud architecture. Our aim is to demonstrate how deep learning models can advance the field of acoustic sensing and noise mitigation, and to highlight current challenges and trends for future progress.
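As a purely illustrative companion to the abstract (not the AIME system or the speaker's actual models), the sketch below shows one common way a noise-type classifier of the kind mentioned above can be structured for edge deployment: a small CNN over log-mel spectrograms, written in Python with PyTorch and torchaudio. The class labels, architecture, and shapes are assumptions made only for illustration.

```python
# Minimal, illustrative sketch of edge noise-type classification.
# Not the AIME system: labels, model, and shapes are hypothetical.
import torch
import torch.nn as nn
import torchaudio

NOISE_CLASSES = ["traffic", "construction", "voices", "birds", "other"]  # hypothetical labels

class TinyNoiseClassifier(nn.Module):
    def __init__(self, n_classes: int = len(NOISE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the time-frequency map to one vector per clip
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, time_frames) log-mel spectrogram
        return self.classifier(self.features(x).flatten(1))

def classify_clip(waveform: torch.Tensor, sample_rate: int, model: nn.Module) -> str:
    """Return the predicted noise type for a mono waveform tensor of shape (1, samples)."""
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)(waveform)
    logmel = torchaudio.transforms.AmplitudeToDB()(mel).unsqueeze(0)  # (1, 1, n_mels, frames)
    with torch.no_grad():
        probs = model(logmel).softmax(dim=-1)
    return NOISE_CLASSES[int(probs.argmax())]

if __name__ == "__main__":
    model = TinyNoiseClassifier().eval()  # untrained weights, shown only to check shapes
    dummy = torch.randn(1, 16_000)        # 1 second of synthetic audio at 16 kHz
    print(classify_clip(dummy, sample_rate=16_000, model=model))
```

A real deployment would train such a model on labelled urban sound recordings and combine it with direction-of-arrival and sound pressure level estimation, as outlined in the abstract.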