Pin-Yu Chen
Pin-Yu Chen is a Principal Research Staff Member of the Trusted AI Group at the IBM Thomas J. Watson Research Center in New York, USA. He is also the Chief Scientist of the RPI-IBM AI Research Collaboration program and a principal investigator (PI) of MIT-IBM Watson AI Lab projects.
Education
Chen received his Ph.D. in electrical engineering and computer science and an M.A. in statistics from the University of Michigan, Ann Arbor, in 2016. He received his M.S. in communication engineering from National Taiwan University in 2011 and his B.S. in electrical engineering and computer science from National Chiao Tung University, Taiwan, in 2009.
Awards
Chen has received numerous awards, including:
- Chia-Lun Lo Fellowship from the University of Michigan, Ann Arbor
- NIPS 2017 Best Reviewer Award
- IEEE GLOBECOM 2010 GOLD Best Paper Award
- Best Paper Award at the ICLR 2023 BANDS Workshop
- IBM Pat Goldberg Memorial Best Paper Award (2022 and 2023)
- Best Paper Award at the ECCV 2022 AROW Workshop
- UAI 2022 Best Paper Runner-Up Award
- IBM Corporate Technical Award on Trustworthy AI (2021)
- Three IBM Outstanding Research Accomplishment Awards (2020)
- Three IBM Research Accomplishment Awards (2019)
Research
Chen's research focuses on adversarial machine learning and the robustness of neural networks, with the long-term goal of building trustworthy machine learning systems. He has published over 20 papers on trustworthy machine learning at major AI and machine learning conferences. His research interests also include graph and network data analytics and their applications to data mining, machine learning, signal processing, and cybersecurity.
Publications
Chen has co-authored numerous publications, including:
- "Time-LLM: Time Series Forecasting by Reprogramming Large Language Models" (2024)
- "Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!" (2024)
- "Robust Mixture-of-Expert Training for Convolutional Neural Networks" (2023)
- "How to Backdoor Diffusion Models?" (2023)
- "FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning" (2023)
- "Distributed Adversarial Training to Robustify Deep Neural Networks at Scale" (2022)
- "AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models" (2020)
- "Adversarial T-shirt! Evading Person Detectors in A Physical World" (2020)
- "PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach" (2019)