Shang-Fu Chen

I am a Ph.D. student advised by Prof. Shao-Hua Sun at National Taiwan University.

I completed my bachelor's degree in Electronic Engineering at NTU. During my Ph.D., I worked in the Vision and Learning Lab at NTU and collaborated with Chunghwa Telecom Laboratories. I also interned at the Inventec AI Center.

Email  /  CV  /  Google Scholar  /  GitHub


I'm interested in reinforcement learning, computer vision, and machine learning. My research focuses on applying machine learning techniques to application problems, including multi-label classification, anomaly detection, and imitation learning.

Diffusion Model-Augmented Behavioral Cloning
Hsiang-Chun Wang*, Shang-Fu Chen*, Ming-Hao Hsu, Chun-Mao Lai, Shao-Hua Sun
Frontiers4LCD Workshop at International Conference on Machine Learning (ICML), 2023
Project Page / Paper / Poster

This work augments behavioral cloning (BC) by employing diffusion models to model expert behaviors and by designing a learning objective that leverages the learned diffusion models to guide policy learning. To this end, we propose an imitation learning framework that benefits from modeling both the conditional and joint probability of the expert distribution. Our proposed diffusion model-augmented behavioral cloning (DBC) employs a diffusion model trained to model expert behaviors and learns a policy that optimizes both the BC loss (conditional) and our proposed diffusion model loss (joint). Our method outperforms baselines or achieves competitive performance in various continuous control domains, including navigation, robot arm manipulation, dexterous manipulation, and locomotion.

Domain-Generalized Textured Surface Anomaly Detection
Shang-Fu Chen, Yu-Min Liu, Chia-Ching Lin, Trista Pei-Chun Chen, Yu-Chiang Frank Wang
IEEE International Conference on Multimedia and Expo (ICME), 2022

In this paper, we address the task of domain-generalized textured surface anomaly detection. We propose a patch-based meta-learning model that exhibits promising generalization ability. By observing normal and abnormal surface data across multiple source domains, our model can generalize to an unseen textured surface of interest and localize abnormal regions in the query images. Our experiments verify that our model performs favorably against state-of-the-art anomaly detection and domain generalization approaches in various settings.

Learning Facial Liveness Representation for Domain Generalized Face Anti-spoofing
Zih-Ching Chen*, Lin-Hsi Tsao*, Chin-Lun Fu*, Shang-Fu Chen, Yu-Chiang Frank Wang
IEEE International Conference on Multimedia and Expo (ICME), 2022

This work applies domain generalization and feature disentanglement to the face anti-spoofing (FAS) problem, which aims at distinguishing face spoof attacks from authentic ones. We propose a learning framework that disentangles the facial liveness representation from irrelevant ones (i.e., facial content and image-domain features). The resulting liveness representation exhibits sufficient domain-invariant properties and thus can be applied to domain-generalized FAS. Our experiments verify that our model performs favorably against state-of-the-art approaches on five benchmark datasets under various settings.

Representation Decomposition for Image Manipulation and Beyond
Shang-Fu Chen, Jia-Wei Yan, Ya-Fan Su, Yu-Chiang Frank Wang
IEEE International Conference on Image Processing (ICIP), 2021

This work applies feature disentanglement to existing, pre-trained generative models. To this end, we propose a decomposition GAN (dec-GAN), which can decompose an existing latent representation into content and attribute features. Guided by a classifier pre-trained on the attributes of interest, our dec-GAN decomposes those attributes from the latent representation, while data recovery and feature consistency objectives regularize the learning. Our experiments on multiple image datasets confirm the effectiveness and robustness of our dec-GAN over recent representation disentanglement models.

Order-Free RNN with Visual Attention for Multi-Label Classification
Shang-Fu Chen*, Yi-Chen Chen*, Chih-Kuan Yeh, Yu-Chiang Frank Wang
The AAAI Conference on Artificial Intelligence (AAAI), 2018

We propose a recurrent neural network (RNN)-based model for image multi-label classification. Our model integrates the learning of visual attention and Long Short-Term Memory (LSTM) layers. The LSTM module learns the labels of interest and their co-occurrences, while the attention module captures the associated image regions. Unlike existing approaches, training our model does not require pre-defined label orders. We also introduce a robust inference process to address the problem of prediction error propagation. Our experiments on the NUS-WIDE and MS-COCO datasets confirm the design of our network and its effectiveness in solving multi-label classification problems.

Special thanks to Jon Barron for the source code of this website.