Who’s Ming-Chang Yang
Emerging non-volatile memory and storage technologies, memory and storage systems, and the next-generation memory/storage architecture designs.
Ming-Chang Yang is currently an Assistant Professor at the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his B.S. degree from the Department of Computer Science at National Chiao Tung University, Hsinchu, Taiwan, in 2010, and his M.S. and Ph.D. degrees (supervised by Professor Tei-Wei Kuo) from the Department of Computer Science and Information Engineering at National Taiwan University, Taipei, Taiwan, in 2012 and 2016, respectively. His primary research interests include emerging non-volatile memory and storage technologies, memory and storage systems, and next-generation memory/storage architecture designs.
Dr. Yang has published more than 70 research papers, mainly in top journals (e.g., IEEE TC, IEEE TCAD, IEEE TVLSI, and ACM TECS) and top conferences (e.g., USENIX OSDI, USENIX FAST, USENIX ATC, ACM/IEEE DAC, ACM/IEEE ICCAD, ACM/IEEE CODES+ISSS, and ACM/IEEE EMSOFT). He received two best paper awards (from IEEE NVMSA 2019 and ACM/IEEE ISLPED 2020) for his research contributions on emerging non-volatile memory. In addition, he received the TSIA Ph.D. Student Semiconductor Award from the Taiwan Semiconductor Industry Association (TSIA) in 2016 for his research achievements on flash memory.
The main research interest of Dr. Yang’s research group lies in embracing emerging memory/storage technologies in computer systems, including various types of non-volatile memory (NVM) as well as the shingled magnetic recording (SMR) and interlaced magnetic recording (IMR) technologies behind next-generation hard disk drives (HDDs).
In particular, in view of the common read-write asymmetry (in both latency and energy) of NVM, one series of Dr. Yang’s research work attempts to alleviate the side effects of such asymmetry by innovating application and/or algorithm designs. For example, one of their most recent studies devises a novel dynamic hashing scheme for NVM called SEPH, which exhibits excellent performance scalability, efficiency, and predictability on a real NVM product (i.e., Intel® Optane™ DCPMM). Dr. Yang’s research group also revamps the algorithmic design of random forest, a core machine learning (ML) algorithm, for NVM. This line of study has received particular attention and recognition from the community, including two best paper awards from NVMSA 2019 and ISLPED 2020. Moreover, Dr. Yang’s research group is a pioneer in exploring memory subsystem designs based on an emerging type of NVM called racetrack memory (RTM).
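To make the read-write asymmetry concrete, the sketch below shows one generic, widely used coping strategy (this is an illustration only, not the SEPH design): since an NVM read is cheaper than an NVM write, a structure can read before writing and skip the expensive write whenever the update is redundant. The class name and counters are hypothetical.

```python
# Generic illustration of coping with NVM read-write asymmetry
# (NOT the SEPH scheme): read-before-write to skip redundant writes.

class AsymmetryAwareStore:
    """Toy key-value store that counts reads and writes separately,
    mimicking an NVM-resident structure where writes cost more."""

    def __init__(self):
        self._data = {}
        self.reads = 0   # cheap operations on NVM
        self.writes = 0  # expensive operations on NVM

    def put(self, key, value):
        # Read first: on NVM this check is cheap relative to a write,
        # so it pays off whenever updates are frequently redundant.
        self.reads += 1
        if self._data.get(key) == value:
            return  # value unchanged: skip the expensive NVM write
        self.writes += 1
        self._data[key] = value

store = AsymmetryAwareStore()
store.put("a", 1)
store.put("a", 1)  # redundant update: no write issued
store.put("a", 2)
print(store.writes)  # 2 writes instead of 3
```

Real NVM-aware designs apply the same principle at finer granularity (e.g., per cache line or per hash slot), trading a few extra reads for substantially fewer writes.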
On the other hand, even though the cutting-edge SMR and IMR technologies lower the cost-per-GB of HDDs, they also impose a write amplification problem, resulting in severe write performance degradation. In light of this, Dr. Yang’s research group has introduced several novel data management designs at different system layers for SMR-based and IMR-based HDDs. For example, they architect KVIMR, a data management middleware for constructing a cost-effective yet high-throughput LSM-tree based KV store on IMR-based HDDs; KVIMR achieves significant throughput improvements while remaining compatible with mainstream LSM-tree based KV stores (such as RocksDB and LevelDB). In addition, at the block layer, they put forward a novel design called Virtual Persistent Cache (VPC), which adaptively exploits the computing and management resources of the host system to improve the write responsiveness of SMR-based HDDs. Moreover, they realize a firmware design called MAGIC, which shows great potential to close the performance gap between conventional and IMR-based HDDs.
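The write amplification mentioned above can be sketched with a toy model (an illustration of the SMR mechanism in general, not of KVIMR, VPC, or MAGIC; the band size is an arbitrary assumption): because shingled tracks within a band physically overlap, updating one track in place forces a read-modify-write of every later track in the same band.

```python
# Toy model of SMR write amplification (illustrative only).
# Tracks in a band overlap like shingles, so rewriting track i
# also requires rewriting tracks i+1 .. end of the band.

TRACKS_PER_BAND = 4  # assumed band size for illustration

def smr_update_cost(track_in_band):
    """Number of tracks physically rewritten to update one track in place."""
    assert 0 <= track_in_band < TRACKS_PER_BAND
    return TRACKS_PER_BAND - track_in_band

# Updating the first track of a 4-track band rewrites all 4 tracks:
print(smr_update_cost(0))  # -> 4, i.e., 4x write amplification
print(smr_update_cost(3))  # -> 1, the last track needs no extra rewrites
```

This is why SMR/IMR designs such as persistent caches and placement-aware middleware try to absorb or reorder small in-place updates rather than pay this read-modify-write cost on every write.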
Apart from the systems work on adopting emerging memory/storage technologies, Dr. Yang’s research group also takes special interest in data-intensive and data-driven applications. For instance, they aim to optimize the efficiency and practicality of out-of-core graph processing systems, which offload enormous graph data from memory to storage for better scalability at low cost. They also develop new frameworks for graph representation learning and graph neural networks with significant performance improvements.
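The core idea behind out-of-core graph processing can be illustrated with a minimal sketch (a generic illustration under assumed names, not any of the group’s systems): the edge list stays on storage and is streamed through memory in fixed-size chunks, so graphs far larger than RAM can still be processed in a single pass.

```python
# Minimal out-of-core graph processing sketch (illustrative only):
# stream a binary on-disk edge list through a bounded memory buffer.

import os
import struct
import tempfile

def write_edges(path, edges):
    """Persist edges as fixed-width little-endian (u32 src, u32 dst) pairs."""
    with open(path, "wb") as f:
        for u, v in edges:
            f.write(struct.pack("<II", u, v))  # 8 bytes per edge

def stream_degrees(path, num_vertices, chunk_edges=2):
    """One streaming pass over the on-disk edge list, computing out-degrees.
    Memory use is bounded by the chunk size, not by the graph size."""
    deg = [0] * num_vertices
    with open(path, "rb") as f:
        while True:
            chunk = f.read(8 * chunk_edges)  # read a bounded chunk of edges
            if not chunk:
                break
            for off in range(0, len(chunk), 8):
                u, _v = struct.unpack_from("<II", chunk, off)
                deg[u] += 1
    return deg

path = os.path.join(tempfile.mkdtemp(), "edges.bin")
write_edges(path, [(0, 1), (0, 2), (1, 2), (2, 0)])
print(stream_degrees(path, 3))  # [2, 1, 1]
```

Production out-of-core systems layer far more on top (partitioning, scheduling, selective I/O), but the storage-resident, chunk-streamed edge list shown here is the common starting point.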