Who’s Ming-Chang Yang

March 2023

Ming-Chang Yang

Associate Professor

Department of Computer Science and Engineering, The Chinese University of Hong Kong

Email:

mcyang@cse.cuhk.edu.hk

Personal webpage:

http://www.cse.cuhk.edu.hk/~mcyang/

Research interests

Emerging non-volatile memory and storage technologies, memory and storage systems, and the next-generation memory/storage architecture designs.

Short bio

Ming-Chang Yang is currently an Associate Professor at the Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his B.S. degree from the Department of Computer Science at National Chiao Tung University, Hsinchu, Taiwan, in 2010. He received his Master's and Ph.D. degrees (supervised by Professor Tei-Wei Kuo) from the Department of Computer Science and Information Engineering at National Taiwan University, Taipei, Taiwan, in 2012 and 2016, respectively. His primary research interests include emerging non-volatile memory and storage technologies, memory and storage systems, and next-generation memory/storage architecture designs.

Dr. Yang has published more than 70 research papers, mainly in top journals (e.g., IEEE TC, IEEE TCAD, IEEE TVLSI, and ACM TECS) and top conferences (e.g., USENIX OSDI, USENIX FAST, USENIX ATC, ACM/IEEE DAC, ACM/IEEE ICCAD, ACM/IEEE CODES+ISSS, and ACM/IEEE EMSOFT). He received two best paper awards (from IEEE NVMSA 2019 and ACM/IEEE ISLPED 2020) for his research contributions on emerging non-volatile memory; he was also awarded the TSIA Ph.D. Student Semiconductor Award from the Taiwan Semiconductor Industry Association (TSIA) in 2016 for his research achievements on flash memory.

Research highlights

The main interest of Dr. Yang's research group lies in embracing emerging memory/storage technologies in computer systems, including various types of non-volatile memory (NVM) as well as the shingled magnetic recording (SMR) and interlaced magnetic recording (IMR) technologies for next-generation hard disk drives (HDDs).

Particularly, in view of the common read-write asymmetry (in both latency and energy) of NVM, one series of Dr. Yang's research work attempts to alleviate the side effects of such asymmetry by innovating application and/or algorithm designs. For example, one of their most recent studies devises a novel dynamic hashing scheme for NVM called SEPH, which exhibits excellent performance scalability, efficiency, and predictability on a real NVM product (Intel® Optane™ DCPMM). Dr. Yang's research group has also revamped the algorithmic design of random forest, a core machine learning (ML) algorithm, for NVM. This line of study has received particular attention and recognition from the community, including two best paper awards from NVMSA 2019 and ISLPED 2020. Moreover, Dr. Yang's research group is a pioneer in exploring memory subsystem designs based on an emerging type of NVM called racetrack memory (RTM).
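To make the read-write asymmetry concrete: a write to NVM costs substantially more latency and energy than a read, so asymmetry-aware designs try to trade extra (cheap) reads for avoided (expensive) writes. Below is a minimal, hypothetical sketch of one such generic tactic, a data-comparison write; it illustrates the principle only and is not the SEPH design itself:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Illustrative data-comparison write: because NVM writes cost far more than
// reads, compare old and new contents first and write back only the bytes
// that actually changed. A generic asymmetry-aware trick, not SEPH.
std::size_t nvm_write_dedup(unsigned char* nvm, const unsigned char* src,
                            std::size_t len) {
    std::size_t bytes_written = 0;
    for (std::size_t i = 0; i < len; ++i) {
        if (nvm[i] != src[i]) {  // cheap read on NVM
            nvm[i] = src[i];     // expensive write, issued only when needed
            ++bytes_written;
        }
    }
    return bytes_written;        // actual write traffic to the device
}

int main() {
    std::vector<unsigned char> nvm(64, 0xAA);  // stand-in for an NVM region
    std::vector<unsigned char> update(64, 0xAA);
    update[3] = 0x55;                          // only one byte differs
    std::cout << "bytes actually written: "
              << nvm_write_dedup(nvm.data(), update.data(), 64) << "\n";  // 1
}
```

Real systems apply the same principle at cache-line granularity, paired with explicit flush and fence instructions for persist ordering.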

On the other hand, even though the cutting-edge SMR and IMR technologies bring a lower cost-per-GB to HDDs, they also impose a write amplification problem, resulting in severe write performance degradation. In light of this, Dr. Yang's research group has introduced several novel data management designs at different system layers for SMR-based and IMR-based HDDs. For example, they architect KVIMR, a data management middleware for constructing a cost-effective yet high-throughput LSM-tree based KV store on IMR-based HDDs. KVIMR delivers significant throughput improvements while retaining compatibility with mainstream LSM-tree based KV stores (such as RocksDB and LevelDB). In addition, at the block layer, they put forward a novel design called Virtual Persistent Cache (VPC) that adaptively exploits the computing and management resources of the host system to improve the write responsiveness of SMR-based HDDs. Moreover, they realize a firmware design called MAGIC, which shows great potential to close the performance gap between traditional and IMR-based HDDs.
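Write amplification here has a precise meaning: the ratio of bytes the device physically writes (including read-modify-write of overlapping neighbor tracks whose data would otherwise be destroyed) to bytes the host asked to write. A small illustrative sketch of the metric, with hypothetical numbers:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical counters illustrating write amplification (WA) on an SMR/IMR
// drive: WA = bytes the device writes internally / bytes the host submitted.
// Read-modify-write (RMW) of overlapping tracks inflates the numerator.
struct WaCounters {
    std::uint64_t user_bytes = 0;    // bytes the host asked to write
    std::uint64_t device_bytes = 0;  // bytes actually written to media

    void host_write(std::uint64_t n, std::uint64_t rmw_extra) {
        user_bytes += n;
        device_bytes += n + rmw_extra;  // extra traffic from RMW
    }
    double wa() const {
        return user_bytes ? double(device_bytes) / double(user_bytes) : 0.0;
    }
};

int main() {
    WaCounters c;
    c.host_write(4096, 0);           // append to an empty track: no RMW
    c.host_write(4096, 3 * 4096);    // update forces rewrite of 3 neighbors
    std::cout << "write amplification: " << c.wa() << "\n";  // prints 2.5
}
```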

Apart from the systems work on adopting emerging memory/storage technologies, Dr. Yang's research group also takes special interest in data-intensive and data-driven applications. For instance, they aim to optimize the efficiency and practicality of out-of-core graph processing systems, which offload the enormous graph data from memory to storage for better scalability at low cost. They also develop new frameworks for graph representation learning and graph neural networks with significant performance improvements.

Who’s Wang Ying

February 2023

Wang Ying

Associate Professor

Institute of Computing Technology, Chinese Academy of Sciences

Email:

wangying2009@ict.ac.cn

Personal webpage

https://wangying-ict.github.io/

Research interests

Domain-specific chips, processor architecture, and design automation

Short bio

Ying Wang is an associate professor at the Institute of Computing Technology, Chinese Academy of Sciences. His research expertise is focused on VLSI testing, reliability, and the design automation of domain-specific processors such as accelerators for computer vision, deep learning, graph computing, and robotics. His group has conducted pioneering work on open-source frameworks for automated neural network accelerator generation and customization. He has published more than 30 papers at DAC and over 120 papers in other IEEE/ACM conferences and journals, and he holds over 30 patents related to chip design. Wang is also a co-founder of Jeejio Tech in Beijing, which was granted the Special Start-up Award of 2018 by the Chinese Academy of Sciences. His honors include the Young Talent Development Program award from the China Association for Science and Technology (as one of the two awardees in computer science in 2016), the 2017 CCF-Intel Outstanding Researcher Award, and the 2019 Early Career Award from the China Computer Federation. He is the recipient of the DAC Under-40 Innovators Award in 2021. Dr. Wang has also received several awards from international conferences, including winning the System Design Contest at DAC 2018 and the IEEE Rebooting Computing LPIRC in 2016, Best Paper Awards at ITC-Asia 2018, GLSVLSI 2021 (2nd place), and ICCD 2019, the 2011 IEEE Transactions on Computers best paper, as well as a best paper nomination at ASP-DAC.

Research highlights

Dr. Wang's innovative research in the DeepBurning project contributed one of the viable approaches to automatic specialized-accelerator generation and is considered one of the representative works in this area: starting from the software framework, it directly generates specialized circuit designs implemented on FPGAs or ASICs. After the initial DeepBurning 1.0 project, he has continued to pioneer several ongoing works, including ELNA (DAC 2017), Dadu (DAC 2018), 3D-Cube (DAC 2019), DeepBurning-GL (ICCAD 2020), and DeepBurning-Seg (MICRO 2022), which follow the same technical route of automatic hardware accelerator generation but extend it to different applications and architectures. The DeepBurning series develops not only horizontally into different application areas but also vertically into the higher levels of the processor design stack, including early-stage design-parameter exploration, ISA extension, and compiler-hardware co-design. His holistic work in this field has attracted considerable attention from Chinese EDA companies. Based on the agile chip customization technology initiated by Dr. Wang, his company, Jeejio, is able to develop highly customized chip solutions at a relatively low cost and help its customers stay competitive in niche IoT markets. Dr. Wang's team has also proposed the RISC-V compatible Sage architecture, which can be used to customize AIoT SoC solutions with user-redeemable computing power for audio/video/image processing and automotive scenarios.

Who’s Heba Abunahla

January 2023

Heba Abunahla

Assistant Professor

Quantum and Computer Engineering Department, TU Delft, Netherlands

Email:

Heba.nadhmi@gmail.com

Research interests

• Emerging RRAM devices
• Smart sensors
• Hardware security
• Graphene-based electronics
• CNTs-based electronics
• Neuromorphic computing

Short bio

Heba Abunahla is currently an Assistant Professor at the Quantum and Computer Engineering Department, Delft University of Technology. Abunahla received her BSc, MSc, and PhD degrees (with honors) from United Arab Emirates University, the University of Sharjah, and Khalifa University, respectively, on competitive scholarships. Prior to joining TU Delft as an Assistant Professor, Abunahla spent over five years as a Postdoctoral Fellow and Research Scientist working extensively on the design, fabrication, and characterization of emerging memory devices, with great emphasis on computing, sensing, and security applications.

Abunahla owns two patents, has published one book, and has co-authored over 30 conference and journal papers. In 2017, Abunahla had a collaborative project with the University of Adelaide, Australia, on developing a novel non-enzymatic glucose sensor. In recognition of her achievements, she received Australian Global Talent permanent residency in 2021. Moreover, Abunahla's innovation of deploying emerging RRAM devices in neuromorphic computing was selected by Nature Scientific Reports as one of the Top 100 in Materials Science. Also, her recent achievement in fabricating RRAM-based tunable filters was selected for publication in the first issue of Innovation@UAE Magazine, launched by the UAE Ministry of Education.

Abunahla has received several awards and competitive scholarships; for example, she is the recipient of the Unique Fellowship for Top Female Academic Scientists in Electrical Engineering, Mathematics & Computer Science (2022) from Delft University of Technology. Abunahla serves as a lead editor of Frontiers in Neuroscience and is an active reviewer for several high-impact journals and conferences.

Research highlights

Secure intelligent memory and sensors are crucial components of our daily electronic devices and systems. CMOS has been the core technology providing these capabilities for decades. However, its limitations in power and area have led to the need for an alternative technology: resistive RAM (RRAM). RRAM devices can perform memory and computation in the same cell, which enables in-memory computation. Moreover, RRAM can be deployed as a smart sensor thanks to its ability to change its I-V characteristic in response to the surrounding environment. The inherent stochasticity of RRAM junctions is also a great asset for security applications.
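In-memory computation with RRAM usually takes the form of an analog matrix-vector multiply: each cell stores a weight as a conductance, input voltages drive the rows, and Kirchhoff's current law sums the per-cell currents on each column. A behavioral sketch of that computation (illustrative arithmetic only, not a device model):

```cpp
#include <iostream>
#include <vector>

// Behavioral sketch of an RRAM crossbar: conductances g[i][j] hold the
// matrix, input voltages v[i] drive the rows, and each column current is
// i_out[j] = sum_i g[i][j] * v[i] (Ohm's law + Kirchhoff's current law).
std::vector<double> crossbar_mvm(const std::vector<std::vector<double>>& g,
                                 const std::vector<double>& v) {
    std::vector<double> i_out(g[0].size(), 0.0);
    for (std::size_t i = 0; i < g.size(); ++i)
        for (std::size_t j = 0; j < g[i].size(); ++j)
            i_out[j] += g[i][j] * v[i];  // analog accumulate on the bitline
    return i_out;
}

int main() {
    // 2x2 conductance matrix (arbitrary units) and two input voltages.
    std::vector<std::vector<double>> g = {{1.0, 0.5}, {0.25, 2.0}};
    std::vector<double> v = {1.0, 2.0};
    for (double i : crossbar_mvm(g, v)) std::cout << i << " ";  // 1.5 4.5
    std::cout << "\n";
}
```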

Abunahla has built strong expertise in micro-electronics: the design, modeling, fabrication, and characterization of high-performance, high-density memory devices. She has developed novel RRAM devices that have been uniquely deployed in sensing, computing, security, and communication applications. For instance, Abunahla demonstrated a novel approach to measuring glucose levels in an adult human and demonstrated the ability to fabricate such a biosensor using a simple, low-cost standard photolithography process. In contrast to other sensors, the developed sensor can accurately measure glucose levels at neutral pH conditions (i.e., pH = 7). Abunahla filed a US patent for this device, and the details of the innovation are published in Nature Scientific Reports. This work has great commercialization potential, being unique and cutting-edge in nature, and Abunahla is currently working with her team toward a lab-on-chip sensing approach based on this technology.

Furthermore, Abunahla has recently developed flexible memory devices, named NeuroMem, that can mimic the memorization behavior of the brain. This unique feature makes NeuroMem a potential candidate for emerging in-memory-computing applications, and the work is the first to report the great potential of this technology for Artificial Intelligence (AI) inference on edge devices; Abunahla filed a US patent for the innovation and published the work in Nature Scientific Reports. Further, her research on nanoscale Gamma-ray sensing devices based on sol-gel/drop-coated micro-thick nanomaterials is unique; it has been filed as a US patent and published in the Journal of Materials Chemistry and Physics. Moreover, Abunahla has fabricated novel RRAM-based tunable filters, which prove the possibility of tuning RF devices without any localized surface mount device (SMD) element or complex realization technique. In the field of hardware security, Abunahla developed an efficient flexible RRAM-based true random number generator, named SecureMem. The data generated by the SecureMem prototype passed all NIST tests without any post-processing or hardware overhead.
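For a flavor of what "passing NIST tests" involves, the simplest test in the NIST SP 800-22 suite, the monobit frequency test, checks that ones and zeros are balanced. The sketch below implements just that one test on a stand-in bitstream; it is not the SecureMem evaluation code:

```cpp
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// NIST SP 800-22 monobit (frequency) test: map bits to +/-1, sum them, and
// compute p = erfc(|S| / sqrt(n) / sqrt(2)). The sequence passes at the
// suite's usual significance level if p >= 0.01.
double monobit_p_value(const std::vector<int>& bits) {
    long long s = 0;
    for (int b : bits) s += b ? 1 : -1;
    double s_obs = std::fabs((double)s) / std::sqrt((double)bits.size());
    return std::erfc(s_obs / std::sqrt(2.0));
}

int main() {
    // Stand-in bitstream from a seeded PRNG; a TRNG prototype such as
    // SecureMem would supply real hardware samples instead.
    std::mt19937 gen(42);
    std::bernoulli_distribution coin(0.5);
    std::vector<int> bits(100000);
    for (auto& b : bits) b = coin(gen) ? 1 : 0;

    double p = monobit_p_value(bits);
    std::cout << "p = " << p << (p >= 0.01 ? " (pass)" : " (fail)") << "\n";
}
```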

Who’s Aman Arora

December 2022

Aman Arora

Graduate Fellow

The University of Texas at Austin

Email:

aman.kbm@utexas.edu

Personal webpage

https://amanarora.site

Research interests

Reconfigurable computing, Domain-specific acceleration, Hardware for Machine Learning

Short bio

Aman Arora is a Graduate Fellow and Ph.D. Candidate in the Department of Electrical and Computer Engineering at the University of Texas at Austin. His research vision is to minimize the gap between ASICs and FPGAs in terms of performance and efficiency, and the gap between CPUs/GPUs and FPGAs in terms of programmability. He imagines a future where FPGAs are first-class citizens in the world of computing and the first choice for accelerating new workloads. His Ph.D. dissertation research focuses on the search for efficient reconfigurable fabrics for Deep Learning (DL) by proposing new DL-optimized blocks for FPGAs. His research has resulted in 11 publications in top conferences and journals in the fields of reconfigurable computing and computer architecture. His work received a Best Paper Award at the IEEE FCCM conference in 2022, and he currently holds a fellowship from the UT Austin Graduate School. His research has been funded by the NSF. Aman has served as a secondary reviewer for top conferences like ACM FPGA (in 2021 and 2022). He is also the leader of the AI+FPGA committee at the Open-Source FPGA (OSFPGA) Foundation, where he leads research efforts and organizes training webinars. He has 12 years of experience in the semiconductor industry in design, verification, testing, and architecture roles. Most recently, he worked in the GPU Deep Learning architecture team at NVIDIA.

Research highlights

Aman's past and current research focuses on architecting efficient reconfigurable acceleration substrates (or fabrics) for Deep Learning (DL). With Moore's law slowing down, the requirements of resource-hungry applications like DL growing and changing rapidly, and climate change already knocking at our doors, this research theme has never been more relevant and important.

Aman has proposed changing the architecture of FPGAs to make them better DL accelerators. He proposed replacing a portion of the FPGA's programmable logic area with new blocks called Tensor Slices, which are specialized for the matrix operations common in DL workloads, such as matrix-matrix and matrix-vector multiplication. The FPGA industry has developed similar blocks in parallel, such as the Intel AI Tensor Block and the Achronix Machine Learning Processor.

In addition, Aman proposed adding compute capabilities to the on-chip memory blocks of FPGAs, so they can operate on data without having to move it to compute units on the FPGA. He was the first to exploit the dual-port nature of FPGA BRAMs to design these blocks, instead of using technologies that significantly alter the circuitry of the RAM array and degrade its performance. He calls these new blocks CoMeFa RAMs; the underlying bit-serial computation is sketched below. This work won the Best Paper Award at IEEE FCCM 2022.
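The compute-in-BRAM approach processes operands bit-serially: data is laid out so each memory column holds one operand, and a small full adder with a carry latch per column turns the array into thousands of one-bit ALUs working in parallel. A hedged single-column sketch of the bit-serial addition itself:

```cpp
#include <cstdint>
#include <iostream>

// Bit-serial addition as compute-capable BRAMs perform it: one bit per
// "cycle" through a full adder, with a one-bit carry kept between cycles.
// Real CoMeFa-style blocks run this for thousands of columns in parallel;
// this sketch shows a single column.
std::uint32_t bit_serial_add(std::uint32_t a, std::uint32_t b) {
    std::uint32_t sum = 0;
    unsigned carry = 0;
    for (int cycle = 0; cycle < 32; ++cycle) {            // LSB first
        unsigned abit = (a >> cycle) & 1u;
        unsigned bbit = (b >> cycle) & 1u;
        unsigned s = abit ^ bbit ^ carry;                 // full-adder sum
        carry = (abit & bbit) | (carry & (abit ^ bbit));  // full-adder carry
        sum |= (std::uint32_t)s << cycle;
    }
    return sum;
}

int main() {
    std::uint32_t a = 123456, b = 654321;
    std::cout << bit_serial_add(a, b) << " == " << a + b << "\n";
}
```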

Aman also led a team effort spanning three universities – UT Austin, University of Toronto, and University of New Brunswick – to develop an open-source DL benchmark suite called Koios. These benchmarks can be used to perform FPGA architecture and CAD research, and are integrated into VTR, which is the most popular open-source FPGA CAD flow.

Other research projects Aman has worked on or is working on include: (1) developing a parallel reconfigurable spatial acceleration fabric consisting of PIM (Processing-In-Memory) blocks connected using an FPGA-like interconnect, (2) implementing efficient accelerators for Weightless Neural Networks (WNNs) on FPGAs, (3) enabling support for open-source tools in FPGA research tools like COFFE, and (4) using Machine Learning (ML) to perform cross-prediction of power consumption on FPGAs and developing an open-source dataset that can be widely used for such prediction problems.

Aman hopes to start and direct a research lab at a university soon. His future research will straddle the entire stack of computer engineering: programmability at the top, architecture exploration in the middle, and hardware design at the bottom. The research thrusts he plans to focus on are next-gen reconfigurable fabrics, ML and FPGA related tooling, enabling the creation of an FPGA app store, and sustainable acceleration.

Who’s Xun Jiao

November 2022

Xun Jiao

Assistant Professor

Villanova University

Email:

xun.jiao@villanova.edu

Personal webpage

https://vu-detail.github.io/people/jiao

Research interests

Robust Computing, Efficient Computing, AI/Machine Learning, Brain-inspired Computing, Fuzz Testing

Short bio

Xun Jiao is an assistant professor in the ECE Department of Villanova University, where he leads the Dependable, Efficient, and Intelligent Computing Lab (DETAIL). He obtained his Ph.D. degree from UC San Diego in 2018 and a dual first-class Bachelor's degree from the joint program of Queen Mary University of London and Beijing University of Posts and Telecommunications in 2013. His research interests are in robust and efficient computing for intelligent applications such as AI and machine learning. He has published 50+ papers in international conferences and journals and has received six paper awards/nominations at international conferences such as DATE, EMSOFT, DSD, and SELSE. He is an associate editor of IEEE Transactions on CAD and a TPC member of DAC, ICCAD, ASP-DAC, GLSVLSI, and LCTES. His research is sponsored by NSF, NIH, L3Harris, and Nvidia. He has delivered an invited presentation at the U.S. Congress. He is a recipient of the 2022 IEEE Young Engineer of the Year Award (Philadelphia Section).

Research highlights

Robust computing
• With the continuous scaling of CMOS technology, circuits are ever more susceptible to timing errors caused by microelectronic variations, such as voltage and temperature fluctuations, making these errors a notable threat to circuit/system reliability. Dr. Jiao has adopted a cross-layer approach (circuit-architecture-application) to combat errors/faults originating in hardware. Specifically, Dr. Jiao has pioneered machine learning-based models to predict errors in hardware and take proactive actions, such as instruction-based frequency scaling (sketched below), to prevent them. By exploiting the application-level error resilience of different applications (e.g., AI/machine learning, multimedia), Dr. Jiao has also developed various approximate computing techniques for more efficient execution.
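As an illustration of how instruction-based frequency scaling can work (a hypothetical policy, not Dr. Jiao's exact technique): an offline error model assigns each instruction class a maximum safe clock frequency, and the core throttles to the minimum over the instructions about to execute:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical instruction-based frequency scaling: a learned timing-error
// model assigns each instruction class a maximum safe frequency; at run time
// the core clocks to the minimum over the upcoming instruction window,
// proactively avoiding timing errors instead of detecting them after the fact.
int safe_frequency_mhz(const std::vector<std::string>& window) {
    static const std::unordered_map<std::string, int> max_safe = {
        {"add", 1200}, {"mul", 1000}, {"div", 800}, {"load", 900}};
    int f = 1200;  // nominal clock (MHz)
    for (const auto& op : window) {
        auto it = max_safe.find(op);
        if (it != max_safe.end()) f = std::min(f, it->second);
    }
    return f;
}

int main() {
    std::vector<std::string> window = {"add", "load", "mul"};
    std::cout << "run window at " << safe_frequency_mhz(window) << " MHz\n";
}
```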

Energy-efficient computing
• Energy efficiency has become a top priority for both high-performance computing systems and resource-constrained embedded systems. Dr. Jiao has proposed solutions to this challenge at multiple abstraction levels: intelligent dynamic voltage and frequency scaling (DVFS) for circuits and systems, as well as novel efficient architectures, such as in-memory computing and Bloom filters, for executing emerging workloads such as deep neural networks.

AI/brain-inspired computing
• Hyperdimensional computing (HDC) was introduced as an alternative computational model that mimics the human brain at the functional level. Compared with DNNs, the advantages of HDC include smaller model size, lower computation cost, and one/few-shot learning, making it a promising alternative computing paradigm. Dr. Jiao's work has pioneered the study of HDC's robustness against adversarial attacks and hardware errors, earning him a best paper nomination at DATE 2022. He has also applied HDC to application domains such as natural language processing, drug discovery, and anomaly detection, demonstrating promising performance compared to traditional learning methods. A minimal sketch of HDC inference follows below.
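The sketch below shows HDC inference under simple assumptions: bipolar hypervectors as class prototypes and dot-product similarity for prediction. Real systems add richer encodings and bundle many training samples into each prototype:

```cpp
#include <iostream>
#include <random>
#include <vector>

// Minimal hyperdimensional-computing sketch. Each class is represented by a
// bipolar hypervector (entries +1/-1); inference picks the class whose
// prototype has the highest dot-product similarity with the query.
constexpr int D = 10000;  // hypervector dimensionality

std::vector<int> random_hv(std::mt19937& gen) {
    std::bernoulli_distribution coin(0.5);
    std::vector<int> hv(D);
    for (int& x : hv) x = coin(gen) ? 1 : -1;
    return hv;
}

long long dot(const std::vector<int>& a, const std::vector<int>& b) {
    long long s = 0;
    for (int i = 0; i < D; ++i) s += (long long)a[i] * b[i];
    return s;
}

int main() {
    std::mt19937 gen(7);
    std::vector<int> class0 = random_hv(gen);  // prototype for class 0
    std::vector<int> class1 = random_hv(gen);  // prototype for class 1

    // Query: class 0 corrupted by flipping ~10% of its dimensions.
    std::vector<int> query = class0;
    std::bernoulli_distribution flip(0.1);
    for (int& x : query) if (flip(gen)) x = -x;

    long long s0 = dot(query, class0), s1 = dot(query, class1);
    std::cout << "sim0=" << s0 << " sim1=" << s1
              << " -> class " << (s0 >= s1 ? 0 : 1) << "\n";  // class 0
}
```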

Fuzzing-based secure system
• Cybersecurity in the digital age is a first-class concern: the ever-increasing use of digital devices faces significant challenges due to the serious effects of security vulnerabilities. Dr. Jiao has developed a series of vulnerability detection techniques based on fuzzing (the core loop is sketched below) and has applied them to software, firmware, and hardware. Over 100 previously unknown vulnerabilities have been discovered and reported to the US National Vulnerability Database with unique CVE assignments. He received two best paper nominations, from EMSOFT 2019 and 2020.
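At its core, fuzzing is the loop sketched below: mutate an input, execute the target, and keep any input that triggers a failure. The buggy parser here is a hypothetical stand-in; production fuzzers add coverage feedback, corpus management, and sanitizers:

```cpp
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Hypothetical buggy target: "crashes" (returns false) on a specific byte
// pattern, standing in for a real parser under test.
bool parse(const std::vector<std::uint8_t>& in) {
    return !(in.size() > 2 && in[0] == 'F' && in[1] == 'U' && in[2] == 'Z');
}

int main() {
    std::mt19937 gen(1);
    std::uniform_int_distribution<int> byte(0, 255);
    std::vector<std::uint8_t> seed = {'F', 'U', 'A', 'X'};  // seed input

    // Core mutation-fuzzing loop: flip a random byte, execute the target,
    // and report any input that triggers a failure for later triage.
    for (int iter = 0; iter < 1000000; ++iter) {
        auto mutant = seed;
        std::uniform_int_distribution<std::size_t> pos(0, mutant.size() - 1);
        mutant[pos(gen)] = (std::uint8_t)byte(gen);
        if (!parse(mutant)) {
            std::cout << "crash found at iteration " << iter << "\n";
            return 0;
        }
    }
    std::cout << "no crash found\n";
}
```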

Who’s Tsung-Wei Huang

September 2022

Tsung-Wei Huang

Assistant Professor

University of Utah

Email:

tsung-wei.huang@utah.edu

Personal webpage

https://tsung-wei-huang.github.io/

Research interests

Design automation and high-performance computing.

Short bio

Dr. Tsung-Wei Huang received his B.S. and M.S. degrees from the Department of Computer Science at Taiwan's National Cheng-Kung University in 2010 and 2011, respectively. He then received his Ph.D. degree from the Department of Electrical and Computer Engineering (ECE) at the University of Illinois at Urbana-Champaign (UIUC) in 2017. His research focuses on high-performance computing systems, with application emphasis on design automation algorithms and machine learning kernels. He has created several widely used open-source software systems, such as Taskflow and OpenTimer. Dr. Huang has received several awards for his research contributions, including the ACM SIGDA Outstanding PhD Dissertation Award in 2019, the NSF CAREER Award in 2022, and the Humboldt Research Fellowship Award in 2022. He also received the 2022 ACM SIGDA Service Award in recognition of community service that engaged students in design automation research.

Research highlights

(1) Parallel Programming Environment: Modern scientific computing relies on a heterogeneous mix of computational patterns, domain algorithms, and specialized hardware to achieve key scientific milestones that go beyond traditional capabilities. However, programming these applications often requires complex expert-level tools and a deep understanding of parallel decomposition methodologies. Our research investigates new programming environments that assist researchers and developers in tackling the implementation complexities of high-performance parallel and heterogeneous programs.
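Taskflow, mentioned in the bio above, is the group's flagship system in this space. The canonical example from its public documentation builds a four-task dependency graph in a few lines (requires the header-only Taskflow library):

```cpp
#include <taskflow/taskflow.hpp>  // header-only; github.com/taskflow/taskflow
#include <iostream>

int main() {
  tf::Executor executor;   // schedules tasks onto worker threads
  tf::Taskflow taskflow;   // holds the task dependency graph

  // Four tasks; the structured binding names the returned task handles.
  auto [A, B, C, D] = taskflow.emplace(
    [] () { std::cout << "TaskA\n"; },
    [] () { std::cout << "TaskB\n"; },
    [] () { std::cout << "TaskC\n"; },
    [] () { std::cout << "TaskD\n"; }
  );

  A.precede(B, C);  // A runs before B and C
  D.succeed(B, C);  // D runs after  B and C

  executor.run(taskflow).wait();  // submit the graph and block until done
  return 0;
}
```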

(2) Electronic Design Automation (EDA): The ever-increasing design complexity in VLSI implementation has far exceeded what many existing EDA tools can handle within reasonable design time and effort. A key fundamental challenge is that EDA must incorporate new parallel paradigms comprising manycore CPUs and GPUs to achieve transformational performance and productivity milestones. Our research investigates new computing methods to advance the state of the art by helping everyone efficiently tackle the challenges of designing, implementing, and deploying parallel EDA algorithms on heterogeneous nodes.

(3) Machine Learning Systems: Machine learning has become central to a wide range of today's applications, such as recommendation systems and natural language processing. Due to their unique performance characteristics, GPUs are increasingly used for machine learning applications and can dramatically accelerate neural network training and inference. Modern GPUs are fast and are equipped with new programming models and scheduling runtimes that can bring significant yet largely untapped performance benefits to many machine learning applications. Our research investigates novel parallel algorithms and frameworks to accelerate machine learning system kernels with order-of-magnitude performance breakthroughs.

Who’s Mohsen Imani

August 2022

Mohsen Imani

Assistant Professor

Department of Computer Science,
University of California Irvine

Email:

m.imani@uci.edu

Personal webpage

https://www.ics.uci.edu/~mohseni/

Research interests

Brain-Inspired Computing, Computer Architecture, Embedded Systems

Short bio

Mohsen Imani is an Assistant Professor in the Department of Computer Science at UC Irvine, where he directs the Bio-Inspired Architecture and Systems Laboratory (BIASLab). He works on a wide range of practical problems in brain-inspired computing, machine learning, computer architecture, and embedded systems. His research goal is to design real-time, robust, and programmable computing platforms that can natively support a wide range of learning and cognitive tasks on edge devices. Dr. Imani received his Ph.D. from the Department of Computer Science and Engineering at UC San Diego. He has a stellar publication record, with over 120 papers in top conferences and journals. His contributions have led to a new direction in brain-inspired hyperdimensional computing that enables ultra-efficient and real-time learning and cognitive support, and his research has been a main initiative in opening up multiple industrial and governmental research programs. Dr. Imani's research has been recognized with several awards, including the Bernard and Sophia Gordon Engineering Leadership Award, the Outstanding Researcher Award, and the Powell Fellowship Award. He also received the Best Doctorate Research award from UCSD, the best paper award at Design, Automation and Test in Europe (DATE) in 2022, and several best paper nominations at top conferences, including the Design Automation Conference (DAC) in 2019 and 2020, DATE in 2020, and the International Conference on Computer-Aided Design (ICCAD) in 2020.

Research highlights

Dr. Imani's research has been instrumental in developing practical implementations of hyperdimensional (HD) computing, a computational technique modeled after the brain. His HD computing system enables large-scale learning in real time, including both training and inference. He has developed such a system not only by accelerating machine learning algorithms in hardware but also by redesigning the algorithms themselves using strategies that more closely model the ultimate efficient learning machine: the human brain.

HD computing is motivated by the observation that key aspects of human memory, perception, and cognition can be explained by the mathematical properties of high-dimensional spaces. It therefore models human memory using points of a high-dimensional space, that is, with hypervectors (tens of thousands of dimensions). These points can be manipulated under a formal algebra to represent semantic relationships between objects, so various cognitive solutions can be devised that memorize and learn from the relations in data. HD computing also mimics several desirable properties of the human brain, including robustness to noise and to the failure of memory cells, and one-shot learning that does not require a gradient-based algorithm.

Dr. Imani exploited these key principles of brain functionality to create cognitive platforms. The platforms include (1) novel HD algorithms supporting classification and clustering, the most popular categories of algorithms used regularly by professional data scientists, (2) novel HD hardware accelerators capable of up to three orders of magnitude improvement in energy efficiency relative to GPU implementations, and (3) an integrated software infrastructure that makes it easy for users to integrate HD computing into systems and enables secure distributed learning on encrypted information using HD computing. The software contributions are backed by efficient hardware acceleration on GPUs, FPGAs, and processing in-memory (PIM). Dr. Imani leveraged the memory-centric nature of HD computing to develop efficient hardware/software infrastructure for highly parallel PIM acceleration. In HD computing, hypervectors have a holographic distribution: the information is uniformly distributed over a large number of dimensions. This makes HD computing significantly robust to the failure of individual memory components (robust to roughly 30% failure in the hardware). In particular, Dr. Imani exploited this robustness to design an approximate in-memory associative search that checks the similarity of hypervectors in tens of nanoseconds while providing orders of magnitude improvement in energy efficiency compared to today's exact processors; the idea is sketched below.
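A small sketch of that associative search, assuming binary hypervectors and Hamming-distance similarity: even after flipping roughly 30% of the query's bits, the nearest stored hypervector is almost always still the right one, because unrelated random hypervectors differ in about 50% of their bits:

```cpp
#include <bitset>
#include <iostream>
#include <random>
#include <vector>

// Associative search over binary hypervectors: return the index of the
// stored item with the smallest Hamming distance to the query. Random
// hypervectors sit ~50% apart, so a query with ~30% of its bits flipped
// is still far closer to its source than to anything else.
constexpr int D = 10000;

int nearest(const std::vector<std::bitset<D>>& mem, const std::bitset<D>& q) {
    int best = 0;
    std::size_t best_dist = (mem[0] ^ q).count();  // Hamming distance
    for (std::size_t i = 1; i < mem.size(); ++i) {
        std::size_t d = (mem[i] ^ q).count();
        if (d < best_dist) { best_dist = d; best = (int)i; }
    }
    return best;
}

int main() {
    std::mt19937 gen(3);
    std::bernoulli_distribution coin(0.5);
    std::vector<std::bitset<D>> mem(8);  // 8 stored hypervectors
    for (auto& hv : mem)
        for (int i = 0; i < D; ++i) hv[i] = coin(gen);

    std::bitset<D> query = mem[5];            // start from item 5
    std::bernoulli_distribution fail(0.3);    // ~30% bit failures
    for (int i = 0; i < D; ++i) if (fail(gen)) query.flip(i);

    std::cout << "nearest match: " << nearest(mem, query) << "\n";  // 5
}
```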

Who’s Xunzhao Yin

August 2022

Xunzhao Yin

Assistant Professor

Zhejiang University

Email:

xzyin1@zju.edu.cn

Personal webpage

https://person.zju.edu.cn/en/xunzhaoyin

Research interests

Circuits and architectures based on emerging technologies & computational paradigms; hardware-software co-design & optimization; computing-in-memory & brain-inspired computing; hardware solutions for unconventional computing, etc.

Short bio

Xunzhao Yin (S'16-M'19) is an assistant professor in the College of Information Science and Electronic Engineering at Zhejiang University. He received his Ph.D. degree in Computer Science and Engineering from the University of Notre Dame in 2019 and his B.S. degree in Electronic Engineering from Tsinghua University in 2013. His research interests include emerging circuit/architecture designs and novel computing paradigms with both CMOS and emerging technologies. He has published in top journals and conferences, including Nature Electronics, IEEE TC, IEEE TCAD, IEEE TCAS, IEEE TED, DAC, ICCAD, IEDM, and the Symposium on VLSI, and has received best paper award nominations at ICCAD 2020, DATE 2022, etc. He serves as an Associate Editor of the ACM SIGDA E-Newsletter and a Review Editor of Frontiers in Electronics.

Research highlights

Prof. Yin's research interests span architectures, circuits, and devices. His goal is to bridge emerging devices with circuit and architecture innovations to develop highly efficient and scalable non-von Neumann architectures and hardware platforms that address the computational challenges posed by ML and IoT applications. Toward this goal, Prof. Yin's work has specifically addressed the design of efficient emerging circuits and architectures that (i) interact with various emerging device technologies, e.g., the Ferroelectric FET (FeFET), and (ii) complement non-von Neumann computational paradigms for computationally hard optimization problems. Some of his research highlights are summarized below:

Prof. Yin proposed to leverage the merged memory and computation property of FeFETs to address the memory wall present in AI inference modules based on conventional CMOS architectures, and he proposed a series of FeFET-based ultra-compact, ultra-low-power designs of content addressable memory (CAM) that achieve superior information density and power efficiency for data-intensive search tasks. By extending the search functionality of CAM from exact match to similarity metric calculation (sketched below), his work further improved hardware efficiency in emerging applications, e.g., few-shot learning, hyperdimensional computing, and database query, making CAMs applicable to more computation domains. Prof. Yin is also fascinated with constructing accelerators that embrace novel architectures and technologies, especially the notion of "letting physics do the computation" to achieve higher performance and energy efficiency than traditional digital machines. He developed an analog-circuit-based hardware system realizing a novel continuous-time dynamical system (CTDS) that solves Boolean satisfiability (SAT) problems in drastically reduced hardware time. He is further researching potential hardware-software co-design solutions, aided by emerging devices and computing paradigms, for solving complex combinatorial optimization problems.
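A behavioral sketch of the two search modes, assuming a ternary CAM word with per-bit "care" masks (C++20 for std::popcount): a conventional CAM flags exact matches on all cared-about bits, while the similarity extension scores each stored word by its Hamming distance on those bits:

```cpp
#include <bit>       // std::popcount (C++20)
#include <cstdint>
#include <iostream>
#include <vector>

// Ternary CAM (TCAM) word: 'care' marks the bits that must match; cleared
// bits are wildcards. Exact match is the classic one-lookup CAM operation;
// the similarity-search extension scores mismatches instead of rejecting.
struct TcamWord {
    std::uint32_t value;
    std::uint32_t care;  // 1 = bit must match, 0 = don't care
};

bool exact_match(const TcamWord& w, std::uint32_t q) {
    return ((w.value ^ q) & w.care) == 0;          // all cared bits equal
}

int mismatches(const TcamWord& w, std::uint32_t q) {
    return std::popcount((w.value ^ q) & w.care);  // Hamming on cared bits
}

int main() {
    std::vector<TcamWord> table = {
        {0b1010u, 0b1111u},  // matches only 1010
        {0b1000u, 0b1100u},  // matches 10xx (two wildcard bits)
    };
    std::uint32_t q = 0b1011u;
    for (std::size_t i = 0; i < table.size(); ++i)
        std::cout << "word " << i << ": exact=" << exact_match(table[i], q)
                  << " mismatches=" << mismatches(table[i], q) << "\n";
}
```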

Who’s Ahmedullah Aziz

June 2022


Ahmedullah Aziz

Assistant Professor

University of Tennessee Knoxville

Email:

aziz@utk.edu

Personal webpage

https://nordic.eecs.utk.edu/

Research interests

Cryogenic Electronics, Beyond-CMOS Technologies, Neuromorphic Hardware, Superconducting Devices/Circuits, VLSI

Short bio

Dr. Ahmedullah Aziz is an Assistant Professor of Electrical Engineering & Computer Science at the University of Tennessee, Knoxville, USA. He earned his Ph.D. in Electrical & Computer Engineering from Purdue University in 2019, an MS degree in Electrical Engineering from the Pennsylvania State University (University Park) in 2016, and a BS degree in Electrical & Electronic Engineering from Bangladesh University of Engineering & Technology (BUET) in 2013. Before beginning his graduate studies, Dr. Aziz worked in the 'Tizen Lab' of the Samsung R&D Institute in Bangladesh as a full-time engineer. During his graduate education, he worked as a Co-Op Engineer (Intern) in the Technology Research division of GlobalFoundries (Fab 8, NY, USA). He has received several awards and accolades for his research, including the ACM SIGDA Outstanding Ph.D. Dissertation Award (2021) from the Association for Computing Machinery, the EDAA Outstanding Ph.D. Dissertation Award (2020) from the European Design and Automation Association, the Outstanding Graduate Student Research Award (2019) from the College of Engineering, Purdue University, and the 'Icon' award from Samsung (2013). He was a co-recipient of two best publication awards (2015, 2016) from the SRC-DARPA STARnet Center and the best project award (2013) from CNSER. In addition, he has received several scholarships and recognitions for academic excellence, including the Dean's Award, the JB Gold Medal, and the Chairman's Award. He is a technical program committee (TPC) member for multiple flagship conferences (including DAC, ISCAS, GLSVLSI, and Nano) and a reviewer for several journals from reputed publishers (IEEE, AIP, Elsevier, Frontiers, IOP Science, Springer Nature). He has served as a review panelist for the US Department of Energy (DOE) and a guest editor for 'Frontiers in Nanotechnology', 'Photonics', and 'Micromachines'.

Research highlights

Dr. Aziz is an expert in device-circuit co-design and electronic design automation (EDA). His research laid the foundation for physics-based and semi-physical compact modeling of multiple emerging device technologies, including Mott switches, oxide memristors, ferroelectric transistors, Josephson junctions, cryotrons, and topological memory/switches. His exemplary contributions to the field of low-power electronics have been internationally recognized through two prestigious distinguished dissertation awards, from (i) the Association for Computing Machinery (ACM) in 2021 and (ii) the European Design and Automation Association (EDAA) in 2020. His research portfolio comprises multiple avenues of exploratory nanoelectronics, spanning from device modeling to circuit/array design. In addition, Dr. Aziz has been a trailblazer in cryogenic memory technologies, facilitating critical advancements in quantum computing systems and space electronics. His work on memristive (room-temperature) and superconducting (cryogenic) neuromorphic systems has paved the way for dense, reconfigurable, and bio-plausible computing hardware.
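For a flavor of what physics-based compact modeling means, the textbook linear ion drift memristor model (Strukov et al., 2008) describes the device with a single state variable x = w/D, giving R(x) = R_on·x + R_off·(1 − x) and dx/dt ∝ i(t). Below is a hedged forward-Euler sketch of this generic model, not one of Dr. Aziz's specific models:

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>

// Linear ion drift memristor model (Strukov et al., 2008):
//   R(x)  = Ron*x + Roff*(1 - x),  with state x = w/D in [0, 1]
//   dx/dt = mu_v * Ron / D^2 * i(t)
// integrated by forward Euler under a 1 Hz, 1 V sinusoidal drive.
int main() {
    const double Ron = 100.0, Roff = 16000.0;  // on/off resistance (ohm)
    const double D_thk = 10e-9;                // film thickness (m)
    const double mu_v = 1e-14;                 // ion mobility (m^2 V^-1 s^-1)
    const double pi = 3.141592653589793;
    const double dt = 1e-6;                    // Euler time step (s)

    double x = 0.1;                            // initial state
    for (long n = 0; n < 2000000; ++n) {       // simulate 2 s (two periods)
        double t = n * dt;
        double v = std::sin(2.0 * pi * t);     // applied voltage
        double R = Ron * x + Roff * (1.0 - x); // instantaneous resistance
        double i = v / R;                      // Ohm's law
        x = std::clamp(x + mu_v * Ron / (D_thk * D_thk) * i * dt, 0.0, 1.0);
        if (n % 250000 == 0)
            std::cout << "t=" << t << " s  R=" << R << " ohm\n";
    }
}
```

Sweeping the voltage like this traces the pinched hysteresis loop that characterizes memristive behavior; compact models of the other device families follow the same pattern of a few state equations fitted to physics or measurement.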

Who’s Li Jiang

July 2022

Li Jiang

Assistant Professor

Shanghai Jiao Tong University

Email:

ljiang_cs@sjtu.edu.cn

Personal webpage

https://www.cs.sjtu.edu.cn/~jiangli/

Research interests

Compute-in-memory, neuromorphic computing, domain-specific architectures for AI, databases, networking, etc.

Short bio

Li Jiang received his B.S. degree from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, in 2007, and his MPhil and Ph.D. degrees from the Department of Computer Science and Engineering, the Chinese University of Hong Kong, in 2010 and 2013, respectively. He has published more than 80 peer-reviewed papers in top-tier computer architecture, EDA, and AI/database conferences and journals, including ISCA, MICRO, DAC, ICCAD, AAAI, ICCV, SIGIR, TC, TCAD, and TPDS. He received the Best Paper Award at DATE 2022 and Best Paper Nominations at ICCAD 2010 and DATE 2021. According to the IEEE digital library, five of his articles rank in the top five by citations among all papers collected from their respective conferences. Some of these achievements have been incorporated into the IEEE P1838 standard, and several technologies have entered commercial use in cooperation with TSMC, Huawei, and Alibaba.

He received the Best Ph.D. Dissertation Award at ATS 2014 and was a finalist for TTTC's E. J. McCluskey Doctoral Thesis Award. He received the ACM Shanghai Rising Star Award and the CCF VLSI Early Career Award in 2019, as well as the 2nd-class prize of the Wu Wenjun Award for Artificial Intelligence. He serves as a co-chair and TPC member of several international and national conferences, such as MICRO, DATE, ASP-DAC, ITC-Asia, ATS, CFTC, and CTC. He is an Associate Editor of IET Computers & Digital Techniques and of Integration, the VLSI Journal. He is a co-founder of ChinaDA and the ACM SIGDA East China Branch.

Research highlights

Prof. Li Jiang has been working on test and repair architectures for 3D ICs that can dramatically reduce cost by sharing precious test and repair resources. His group optimizes the 3D SoC test architecture under test-pin-count and thermal-dissipation constraints by sharing the test access mechanism (TAM) and test wires between pre-bond wafer-level and post-bond package-level tests. They further propose an inter-die spare-sharing technique and die-matching algorithms to improve the stack yield of 3D stacked memory; this work was nominated for the Best Paper Award at ICCAD 2010. Building on this method, they worked with TSMC to propose a novel BISR architecture that clusters and maps faulty rows/columns across dies to the same spare row/column to enhance repairability. This series of works has been widely accepted by the mainstream and incorporated into the IEEE P1838 standard.

To improve the assembly yield of the TSV fabrication process, they developed a fault model considering the TSV coupling effect, which had not been carefully investigated before. It drew their attention to a unique phenomenon: faulty TSVs tend to cluster. They therefore propose a novel spare-TSV sharing architecture composed of a lightweight switch design, two effective and efficient repair algorithms, and a TSV-grid mapping mechanism that can avoid catastrophic TSV clustering defects.

A ReRAM cell needs multiple programming pulses to counter device programming variation and resistance drift. To remove the resulting programming latency and energy, they propose a Self-Terminating Write (STW) circuit that heavily reuses the inherent PIM peripherals (e.g., the ADC and the trans-impedance amplifier) to reach 2-bit precision with a single program pulse, instead of the conventional program-and-verify loop sketched below. This work received the Best Paper Award at DATE 2022.
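The baseline that STW eliminates is program-and-verify: apply a pulse, read the cell back, and repeat until the resistance lands in the target band, paying one pulse plus one verify read per iteration. A toy sketch with a hypothetical, verify-guided cell model shows where the latency and energy go:

```cpp
#include <cmath>
#include <iostream>
#include <random>

// Conventional ReRAM program-and-verify: every iteration applies one program
// pulse and one verify read, and device variation makes the pulse count vary
// from cell to cell. STW removes this loop by terminating the pulse in-situ
// once the target resistance is reached.
int main() {
    std::mt19937 gen(11);
    std::normal_distribution<double> variation(0.0, 50.0);  // device noise (ohm)

    double r = 12000.0;              // toy cell: initial resistance (ohm)
    const double target = 8000.0;    // center of the desired 2-bit level
    const double band = 200.0;       // acceptable deviation from target

    int pulses = 0;
    while (std::fabs(r - target) > band && pulses < 100) {
        // Verify-guided pulse: amplitude scaled by the remaining error, as
        // adaptive program-and-verify schemes do, plus random variation.
        r += -0.5 * (r - target) + variation(gen);
        ++pulses;                    // one program pulse + one verify read
    }
    std::cout << "reached " << r << " ohm after " << pulses << " pulse(s)\n";
}
```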