Who’s Tsung-Wei Huang
Sep 1st, 2022

Tsung-Wei Huang
Assistant Professor
University of Utah
Email:
tsung-wei.huang@utah.edu
Personal webpage
https://tsung-wei-huang.github.io/
Research interests
Design automation and high-performance computing.
Short bio
Dr. Tsung-Wei Huang received his B.S. and M.S. degrees from the Department of Computer Science at Taiwan's National Cheng Kung University in 2010 and 2011, respectively. He then received his Ph.D. degree from the Department of Electrical and Computer Engineering (ECE) at the University of Illinois at Urbana-Champaign (UIUC) in 2017. His research focuses on high-performance computing systems with applications in design automation algorithms and machine learning kernels. He has created several widely used open-source software projects, such as Taskflow and OpenTimer. Dr. Huang has received several awards for his research contributions, including the ACM SIGDA Outstanding PhD Dissertation Award in 2019, the NSF CAREER Award in 2022, and the Humboldt Research Fellowship in 2022. He also received the 2022 ACM SIGDA Service Award in recognition of his community service engaging students in design automation research.
Research highlights
(1) Parallel Programming Environment: Modern scientific computing relies on a heterogeneous mix of computational patterns, domain algorithms, and specialized hardware to achieve key scientific milestones beyond traditional capabilities. However, programming these applications often requires complex expert-level tools and a deep understanding of parallel decomposition methodologies. Our research investigates new programming environments that help researchers and developers tackle the implementation complexities of high-performance parallel and heterogeneous programs.
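Taskflow, mentioned in the short bio above, is one such environment. The minimal sketch below, based on Taskflow's documented C++ API, shows the core idea: a program is described as a graph of tasks and dependencies, and a runtime executor schedules the tasks across threads.

```cpp
// Minimal Taskflow example: four tasks with a diamond dependency.
// Requires the header-only Taskflow library (github.com/taskflow/taskflow).
#include <taskflow/taskflow.hpp>
#include <iostream>

int main() {
  tf::Executor executor;   // thread pool that runs task graphs
  tf::Taskflow taskflow;   // the task dependency graph

  // emplace returns one task handle per callable
  auto [A, B, C, D] = taskflow.emplace(
    [] { std::cout << "Task A\n"; },
    [] { std::cout << "Task B\n"; },
    [] { std::cout << "Task C\n"; },
    [] { std::cout << "Task D\n"; }
  );

  A.precede(B, C);  // A runs before B and C
  D.succeed(B, C);  // D runs after both B and C finish

  executor.run(taskflow).wait();  // submit the graph and block until done
  return 0;
}
```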
(2) Electronic Design Automation (EDA): The ever-increasing design complexity of VLSI implementation has far outpaced what many existing EDA tools can handle within reasonable design time and effort. A fundamental challenge is that EDA must incorporate new parallel paradigms, spanning manycore CPUs and GPUs, to achieve transformational milestones in performance and productivity. Our research investigates new computing methods that advance the state of the art by helping designers efficiently tackle the challenges of designing, implementing, and deploying parallel EDA algorithms on heterogeneous nodes.
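To illustrate what such a parallel paradigm can look like in EDA, the hypothetical sketch below expresses static timing propagation over a netlist as a Taskflow task graph; the Gate structure and propagate_arrival_time function are illustrative placeholders, not OpenTimer's actual data structures.

```cpp
// Hypothetical sketch: static timing propagation as a task graph.
// Independent gates propagate concurrently; fanin-to-fanout order is
// preserved by the dependency edges.
#include <taskflow/taskflow.hpp>
#include <unordered_map>
#include <vector>

struct Gate {
  int id;
  std::vector<int> fanout;  // ids of downstream gates
};

// Placeholder: update the gate's arrival times from its fanin gates.
void propagate_arrival_time(const Gate& gate) {}

void run_parallel_timing(const std::vector<Gate>& netlist) {
  tf::Executor executor;
  tf::Taskflow taskflow;
  std::unordered_map<int, tf::Task> tasks;

  // One task per gate.
  for (const Gate& gate : netlist) {
    tasks[gate.id] = taskflow.emplace([&gate] { propagate_arrival_time(gate); });
  }
  // Dependency edges mirror the netlist connectivity.
  for (const Gate& gate : netlist) {
    for (int out : gate.fanout) {
      tasks[gate.id].precede(tasks[out]);
    }
  }
  executor.run(taskflow).wait();
}
```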
(3) Machine Learning Systems: Machine learning has become central to a wide range of today's applications, such as recommendation systems and natural language processing. Due to their unique performance characteristics, GPUs are increasingly used for machine learning and can dramatically accelerate neural-network training and inference. Modern GPUs are fast and are equipped with new programming models and scheduling runtimes that can bring significant yet largely untapped performance benefits to many machine learning applications. Our research investigates novel parallel algorithms and frameworks that accelerate machine learning system kernels with order-of-magnitude performance breakthroughs.
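As a concrete, if simplified, picture of GPU offloading, the hypothetical CUDA sketch below accelerates SAXPY (y = a*x + y), a pattern that underlies many neural-network kernels. It uses only the standard CUDA runtime API and illustrates the idea rather than the group's actual framework.

```cpp
// Hypothetical sketch: offloading a SAXPY-style kernel to a GPU.
// Compile with nvcc; error checking is omitted for brevity.
#include <cuda_runtime.h>
#include <vector>

// y[i] = a * x[i] + y[i], computed by one GPU thread per element
__global__ void saxpy(int n, float a, const float* x, float* y) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    y[i] = a * x[i] + y[i];
  }
}

int main() {
  const int N = 1 << 20;
  std::vector<float> hx(N, 1.0f), hy(N, 2.0f);

  float *dx = nullptr, *dy = nullptr;
  cudaMalloc(&dx, N * sizeof(float));
  cudaMalloc(&dy, N * sizeof(float));
  cudaMemcpy(dx, hx.data(), N * sizeof(float), cudaMemcpyHostToDevice);
  cudaMemcpy(dy, hy.data(), N * sizeof(float), cudaMemcpyHostToDevice);

  // Launch enough 256-thread blocks to cover all N elements.
  saxpy<<<(N + 255) / 256, 256>>>(N, 2.0f, dx, dy);

  cudaMemcpy(hy.data(), dy, N * sizeof(float), cudaMemcpyDeviceToHost);
  cudaFree(dx);
  cudaFree(dy);
  return 0;  // hy now holds 2*1 + 2 = 4 in every element
}
```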