Xunzhao Yin

Aug 1st, 2022

Xunzhao Yin

Assistant Professor

Zhejiang University

Email:

xzyin1@zju.edu.cn

Personal webpage

https://person.zju.edu.cn/en/xunzhaoyin

Research interests

Circuits and architectures based on emerging technologies & computational paradigms; hardware-software co-design & optimization; computing-in-memory & brain-inspired computing; hardware solutions for unconventional computing, etc.

Short bio

Xunzhao Yin (S’16-M’19) is an assistant professor in the College of Information Science and Electronic Engineering at Zhejiang University. He received his Ph.D. degree in Computer Science and Engineering from the University of Notre Dame in 2019 and his B.S. degree in Electronic Engineering from Tsinghua University in 2013. His research interests include emerging circuit/architecture designs and novel computing paradigms with both CMOS and emerging technologies. He has published in top journals and conferences, including Nature Electronics, IEEE TC, IEEE TCAD, IEEE TCAS, IEEE TED, DAC, ICCAD, IEDM, and the Symposium on VLSI. He has received best paper award nominations at ICCAD 2020 and DATE 2022. He serves as an Associate Editor of the ACM SIGDA E-Newsletter and a Review Editor of Frontiers in Electronics.

Research highlights

Prof. Yin’s research interests span architectures, circuits, and devices. His goal is to bridge emerging devices with circuit and architecture innovations, developing efficient, scalable non-von Neumann architectures and hardware platforms that address the computational challenges posed by ML and IoT applications. Toward this goal, Prof. Yin’s work has specifically addressed the design of efficient emerging circuits and architectures that (i) interact with various emerging device technologies, e.g., the ferroelectric FET (FeFET), and (ii) complement non-von Neumann computational paradigms for computationally hard optimization problems. Some of his research highlights are summarized below:

Prof. Yin proposed leveraging the merged memory and computation property of FeFETs to address the memory-wall issues of conventional CMOS-based AI inference modules, and introduced a series of FeFET-based ultra-compact, ultra-low-power content addressable memory (CAM) designs that achieve superior information density and power efficiency for data-intensive search tasks. By extending the search functionality of CAMs to similarity-metric calculation, his work further improved hardware efficiency for emerging applications, e.g., few-shot learning, hyperdimensional computing, and database query, making CAMs applicable to a wider range of computation domains. Prof. Yin is also fascinated by accelerators that embrace novel architectures and technologies, especially the notion of “letting physics do the computation” to achieve higher performance and energy efficiency than traditional digital machines. He developed an analog-circuit-based hardware system realizing a novel continuous-time dynamical system (CTDS) that solves satisfiability (SAT) problems with drastically reduced solution time. He is further researching hardware-software co-design solutions that use emerging devices and computing paradigms to solve complex combinatorial optimization problems.
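The CTDS idea can be illustrated with a toy gradient-descent analog: each clause gets an analog "violation" term that vanishes once a literal is satisfied, and the continuous state flows downhill until the sign pattern satisfies the formula. This is a simplification of the actual published dynamics (which use auxiliary clause variables); the clause encoding, step size, and instance below are illustrative only.

```python
def sat_step(s, clauses, dt=0.1):
    """One Euler step of gradient descent on E(s) = sum_m K_m(s)^2,
    with K_m(s) = 2^-k * prod_i (1 - c_mi * s_i): K_m -> 0 exactly
    when some literal of clause m is satisfied (s_i -> c_mi)."""
    grad = [0.0] * len(s)
    for clause in clauses:
        k = 2.0 ** -len(clause)
        K = k
        for v, c in clause:
            K *= 1 - c * s[v]
        for v, c in clause:
            rest = k
            for v2, c2 in clause:
                if v2 != v:
                    rest *= 1 - c2 * s[v2]
            grad[v] += 2 * K * (-c) * rest   # dE/ds_v contribution
    return [max(-1.0, min(1.0, si - dt * g)) for si, g in zip(s, grad)]

# (x1 or x2) and (not x1 or x3); variables by index, c = polarity.
clauses = [[(0, 1), (1, 1)], [(0, -1), (2, 1)]]
s = [0.0, 0.0, 0.0]
for _ in range(500):
    s = sat_step(s, clauses)
assign = [si >= 0 for si in s]
print(assign)  # [True, True, True] -- satisfies both clauses
```

In the full CTDS, auxiliary variables grow on long-unsatisfied clauses so the trajectory escapes local minima; this plain gradient flow only illustrates the "physics does the computation" framing.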

Read More

2022 ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED) Table of Contents

Full Citation in the ACM Digital Library

SESSION: Session 1: Energy-efficient and Robust Neural Networks

Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars

  • Abhiroop Bhattacharjee
  • Youngeun Kim
  • Abhishek Moitra
  • Priyadarshini Panda

Spiking Neural Networks (SNNs) have recently emerged as a low-power alternative to Artificial Neural Networks (ANNs) owing to their asynchronous, sparse, and binary information processing. To improve energy efficiency and throughput, SNNs can be implemented on memristive crossbars, where Multiply-and-Accumulate (MAC) operations are realized in the analog domain using emerging Non-Volatile-Memory (NVM) devices. Despite the compatibility of SNNs with memristive crossbars, little attention has been paid to the effect of intrinsic crossbar non-idealities and stochasticity on the performance of SNNs. In this paper, we conduct a comprehensive analysis of the robustness of SNNs on non-ideal crossbars. We examine SNNs trained via learning algorithms such as surrogate gradient and ANN-SNN conversion. Our results show that repetitive crossbar computations across multiple time-steps induce error accumulation, resulting in a huge performance drop during SNN inference. We further show that SNNs trained with a smaller number of time-steps achieve better accuracy when deployed on memristive crossbars.
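The error-accumulation effect can be sketched with a toy model: if each crossbar MAC deviates from the ideal value, the membrane potential integrates that deviation at every time-step, so the total error grows with the number of steps. The fixed per-step deviation and the weight/spike values below are illustrative, not the paper's noise model.

```python
def crossbar_mac(weights, spikes, eps):
    """Ideal dot product plus a fixed relative deviation, a toy
    stand-in for crossbar non-idealities."""
    ideal = sum(w * s for w, s in zip(weights, spikes))
    return ideal * (1.0 + eps)

def snn_membrane(weights, spike_train, eps):
    """Accumulate the (non-ideal) MAC output into the membrane
    potential over all time-steps; return ideal and noisy values."""
    v_ideal = v_noisy = 0.0
    for spikes in spike_train:
        v_ideal += sum(w * s for w, s in zip(weights, spikes))
        v_noisy += crossbar_mac(weights, spikes, eps)
    return v_ideal, v_noisy

weights = [0.5, -0.25, 0.75, 0.1]
# Same spikes every step, so the per-step error is constant and
# the accumulated error scales linearly with the number of steps.
step = [1, 0, 1, 1]
for T in (2, 8, 32):
    vi, vn = snn_membrane(weights, [step] * T, eps=0.05)
    print(T, round(abs(vn - vi), 4))  # error grows linearly with T
```

This also suggests why fewer time-steps help on non-ideal crossbars: less repeated analog computation means less integrated error.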

Identifying Efficient Dataflows for Spiking Neural Networks

  • Deepika Sharma
  • Aayush Ankit
  • Kaushik Roy

Deep feed-forward Spiking Neural Networks (SNNs) trained using appropriate learning algorithms have been shown to match the performance of state-of-the-art Artificial Neural Networks (ANNs). The inputs to an SNN layer are 1-bit spikes distributed over several timesteps. In addition to the standard ANN data structures, SNNs require one more: the membrane potential (Vmem) of each neuron, which is updated every timestep. Hence, the dataflow requirements for energy-efficient hardware implementation of SNNs can differ from those of standard ANNs. In this paper, we propose optimal dataflows for deep spiking neural network layers. To evaluate the energy and latency of different dataflows, we considered three hardware architectures with varying on-chip resources to represent a class of spatial accelerators. We developed a set of rules leading to optimum dataflows for SNNs that achieve more than 90% improvement in Energy-Delay Product (EDP) compared to the baseline for some workloads and architectures.

Sparse Periodic Systolic Dataflow for Lowering Latency and Power Dissipation of Convolutional Neural Network Accelerators

  • Jung Hwan Heo
  • Arash Fayyazi
  • Amirhossein Esmaili
  • Massoud Pedram

This paper introduces the sparse periodic systolic (SPS) dataflow, which advances the state-of-the-art hardware accelerator for supporting lightweight neural networks. Specifically, the SPS dataflow enables a novel hardware design approach unlocked by an emergent pruning scheme, periodic pattern-based sparsity (PPS). By exploiting the regularity of PPS, our sparsity-aware compiler optimally reorders the weights and uses a simple indexing unit in hardware to create matches between the weights and activations. Through the compiler-hardware codesign, SPS dataflow enjoys higher degrees of parallelism while being free of the high indexing overhead and without model accuracy loss. Evaluated on popular benchmarks such as VGG and ResNet, the SPS dataflow and accompanying neural network compiler outperform prior work in convolutional neural network (CNN) accelerator designs targeting FPGA devices. Against other sparsity-supporting weight storage formats, SPS results in 4.49 × energy efficiency gain while lowering storage requirements by 3.67 × for total weight storage (non-pruned weights plus indexing) and 22,044 × for indexing memory.
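The storage advantage of periodic pattern-based sparsity can be sketched as follows: because every period shares one non-zero pattern, a single period-long bitmask indexes every non-zero weight in the tensor. This is a toy encoding; the paper's compiler reordering and indexing hardware are more involved, and the pattern/values layout here is an assumption.

```python
def pps_compress(weights, period):
    """PPS-style storage: one bitmask for the shared per-period
    pattern plus the packed non-zero values.  Assumes pruning
    already made every period's non-zero positions identical."""
    pattern = [w != 0 for w in weights[:period]]
    for start in range(0, len(weights), period):
        chunk = weights[start:start + period]
        assert [w != 0 for w in chunk] == pattern, "not PPS-pruned"
    values = [w for w in weights if w != 0]
    return pattern, values

def pps_decompress(pattern, values, n):
    """Rebuild the dense weights by replaying the shared pattern."""
    out, it = [], iter(values)
    for i in range(n):
        out.append(next(it) if pattern[i % len(pattern)] else 0)
    return out

w = [3, 0, 0, 5,  1, 0, 0, 2,  4, 0, 0, 6]   # period-4 pattern 1001
p, v = pps_compress(w, 4)
assert pps_decompress(p, v, len(w)) == w
print(p, v)  # [True, False, False, True] [3, 5, 1, 2, 4, 6]
```

Note how the index cost is one bitmask regardless of tensor size, which is the intuition behind the large indexing-memory reduction reported above.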

SESSION: Session 2: Novel Computing Models (Chair: Priyadarshini Panda, Yale)

QMLP: An Error-Tolerant Nonlinear Quantum MLP Architecture using Parameterized Two-Qubit Gates

  • Cheng Chu
  • Nai-Hui Chia
  • Lei Jiang
  • Fan Chen

Despite potential quantum supremacy, state-of-the-art quantum neural networks (QNNs) suffer from low inference accuracy. First, current Noisy Intermediate-Scale Quantum (NISQ) devices, with high error rates of 10^-3 to 10^-2, significantly degrade the accuracy of a QNN. Second, although recently proposed Re-Uploading Units (RUUs) introduce some non-linearity into QNN circuits, the theory behind them is not fully understood; furthermore, previous RUUs that repeatedly upload original data provide only marginal accuracy improvements. Third, current QNN circuit ansatze use fixed two-qubit gates to enforce maximum entanglement capability, making task-specific entanglement tuning impossible and resulting in poor overall performance. In this paper, we propose a Quantum Multilayer Perceptron (QMLP) architecture featuring error-tolerant input embedding, rich nonlinearity, and an enhanced variational circuit ansatz with parameterized two-qubit entangling gates. Compared to prior art, QMLP increases inference accuracy on the 10-class MNIST dataset by 10% with 2× fewer quantum gates and 3× fewer parameters. Our source code is available at https://github.com/chuchengc/QMLP/.

Design and Logic Synthesis of a Scalable, Efficient Quantum Number Theoretic Transform

  • Chao Lu
  • Shamik Kundu
  • Abraham Kuruvila
  • Supriya Margabandhu Ravichandran
  • Kanad Basu

The advent of quantum computing has engendered widespread efforts to use qubits to optimize classical computational algorithms. The Number Theoretic Transform (NTT) is one such popular algorithm: it significantly accelerates polynomial multiplication and is consequently the core arithmetic operation in most homomorphic encryption schemes. Hence, fast and efficient execution of the NTT is imperative for practical implementations of homomorphic encryption in different computing paradigms. In this paper, we, for the first time, propose an efficient and scalable Quantum Number Theoretic Transform (QNTT) circuit using quantum gates. We introduce a novel exponential unit for the modular exponentiation operation, which furnishes an algorithmic complexity of O(n). Our methodology further optimizes and synthesizes the QNTT logic, which is significantly faster and facilitates efficient implementation on IBM’s quantum computers. The optimized QNTT achieves a gate-level complexity reduction from quadratic to linear with respect to bit length. For a 4-point QNTT, our methodology uses 44.2% fewer gates than the unoptimized counterpart, minimizing circuit depth and correspondingly reducing overhead and error probability.
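For readers unfamiliar with the classical transform being mapped to qubits, a minimal NTT-based polynomial multiplication looks like the sketch below (a naive O(n^2) version for clarity; the parameters p = 17, n = 4, w = 4 are illustrative, where w has order 4 mod 17 since 4^2 = 16 ≡ -1).

```python
def ntt(a, w, p):
    """Naive O(n^2) number theoretic transform of a over Z_p,
    using w, a primitive len(a)-th root of unity mod p."""
    n = len(a)
    return [sum(a[j] * pow(w, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(A, w, p):
    """Inverse transform: use w^-1 and scale by n^-1 mod p
    (inverses via Fermat's little theorem, p prime)."""
    n = len(A)
    inv_n = pow(n, p - 2, p)
    out = ntt(A, pow(w, p - 2, p), p)
    return [(x * inv_n) % p for x in out]

def polymul_mod(a, b, w, p):
    """Cyclic (mod x^n - 1) polynomial product via pointwise
    multiplication in the NTT domain."""
    A, B = ntt(a, w, p), ntt(b, w, p)
    return intt([(x * y) % p for x, y in zip(A, B)], w, p)

print(polymul_mod([1, 2, 3, 4], [1, 0, 0, 0], w=4, p=17))  # [1, 2, 3, 4]
```

The pointwise step is what makes the transform valuable for homomorphic encryption workloads: an O(n^2) convolution becomes O(n) once both operands are in the NTT domain.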

A Charge Domain P-8T SRAM Compute-In-Memory with Low-Cost DAC/ADC Operation for 4-bit Input Processing

  • Joonhyung Kim
  • Kyeongho Lee
  • Jongsun Park

This paper presents a low-cost PMOS-based 8T (P-8T) SRAM Compute-In-Memory (CIM) architecture that efficiently performs multiply-accumulate (MAC) operations between 4-bit input activations and 8-bit weights. First, a bit-line (BL) charge-sharing technique is employed to implement low-cost and reliable digital-to-analog conversion of the 4-bit input activations in the proposed SRAM CIM, where charge-domain analog computing provides variation-tolerant and linear MAC outputs. The 16 local arrays are also effectively exploited to implement an analog multiplication unit (AMU) that simultaneously produces 16 multiplication results between 4-bit input activations and 1-bit weights. To reduce the hardware cost of the analog-to-digital converter (ADC) without sacrificing DNN accuracy, hardware-aware system simulations are performed to decide the ADC bit-resolutions and the number of activated rows in the proposed CIM macro. In addition, for the ADC operation, AMU-based reference columns are utilized to generate the ADC reference voltages, with which a low-cost 4-bit coarse-fine flash ADC has been designed. A 256×80 P-8T SRAM CIM macro implemented in a 28nm CMOS process achieves accuracies of 91.46% and 66.67% on the CIFAR-10 and CIFAR-100 datasets, respectively, with an energy efficiency of 50.07 TOPS/W.
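The AMU's role can be sketched as a digital functional reference: a 1-bit weight simply passes or blocks its 4-bit activation, and full 8-bit-weight MACs follow by running the unit once per weight bit and shift-accumulating. The bit-serial weight slicing shown is an assumption about how the 1-bit-weight products combine, not the macro's actual analog mechanism.

```python
def amu(acts, weights_1b):
    """Digital reference of the analog multiplication unit: 16
    products between 4-bit activations and 1-bit weights (a 1-bit
    weight just passes or blocks the activation)."""
    assert all(0 <= a < 16 for a in acts)
    return [a if w else 0 for a, w in zip(acts, weights_1b)]

def mac_4b_8b(acts, weights_8b):
    """4-bit-activation x 8-bit-weight MAC by bit-serial weight
    slicing: run the AMU once per weight bit, then shift-accumulate."""
    total = 0
    for bit in range(8):
        col = [(w >> bit) & 1 for w in weights_8b]
        total += sum(amu(acts, col)) << bit
    return total

acts, ws = [3, 7, 1, 15], [2, 5, 1, 4]
print(mac_4b_8b(acts, ws), sum(a * w for a, w in zip(acts, ws)))  # 102 102
```

In the real macro the per-column sums are formed by charge sharing on the bit-lines and digitized by the coarse-fine flash ADC; the arithmetic identity above is what that analog path must reproduce.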

SESSION: Session 3: Efficient and Intelligent Memories (Chair: Kshitij Bhardwaj, LLNL)

FlexiDRAM: A Flexible in-DRAM Framework to Enable Parallel General-Purpose Computation

  • Ranyang Zhou
  • Arman Roohi
  • Durga Misra
  • Shaahin Angizi

In this paper, we propose a flexible processing-in-DRAM framework named FlexiDRAM that supports efficient implementation of complex bulk bitwise operations. The framework is built on a new reconfigurable in-DRAM accelerator that leverages the analog operation of DRAM sub-arrays to implement XOR2-MAJ3 operations between operands stored in the same bit-line. FlexiDRAM first generates an efficient XOR-MAJ representation of the desired logic and then appropriately allocates DRAM rows to the operands to execute any in-DRAM computation. We develop the ISA and software support required for in-DRAM computation. FlexiDRAM transforms the current memory architecture into a massively parallel computational unit and can be leveraged to significantly reduce the latency and energy consumption of complex workloads. Our extensive circuit-to-architecture simulation results show that, averaged across two well-known deep learning workloads, FlexiDRAM achieves ~15× energy savings and a 13× speedup over the GPU, outperforming recent processing-in-DRAM platforms.
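To see why XOR2 and MAJ3 make a useful primitive pair, note that together they directly yield a full adder: the sum bit is the XOR of three bits (two XOR2 steps) and the carry is MAJ3. A functional sketch, with the bit-serial loop standing in for repeated bit-line operations:

```python
def xor2(a, b):
    return a ^ b

def maj3(a, b, c):
    return (a & b) | (a & c) | (b & c)

def full_adder(a, b, cin):
    """One bit of addition from the XOR2/MAJ3 primitives:
    sum = a XOR b XOR cin, carry = MAJ(a, b, cin)."""
    return xor2(xor2(a, b), cin), maj3(a, b, cin)

def ripple_add(x, y, nbits=8):
    """Add two integers bit-serially, as a sequence of in-DRAM
    XOR2/MAJ3 steps on bits sharing a bit-line would."""
    carry, out = 0, 0
    for i in range(nbits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out

print(ripple_add(23, 42))  # 65
```

Any bulk bitwise or arithmetic kernel expressible in this XOR-MAJ form can then be scheduled onto DRAM rows, which is the representation step the framework automates.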

Evolving Skyrmion Racetrack Memory as Energy-Efficient Last-Level Cache Devices

  • Ya-Hui Yang
  • Shuo-Han Chen
  • Yuan-Hao Chang

Skyrmion racetrack memory (SK-RM) has been regarded as a promising high-density replacement for static random-access memory (SRAM) as a large on-chip cache device. Different from other nonvolatile random-access memories (NVRAMs), data bits of SK-RM can only be altered or detected at access ports, and shift operations are required to move data bits across access ports along the racetrack. Owing to these special characteristics, word-based mapping and bit-interleaved mapping architectures have been proposed to facilitate reading and writing on SK-RM with different data layouts. Nevertheless, when SK-RM is used as an on-chip cache, existing mapping architectures raise concerns of unpredictable access performance or excessive energy consumption during both reads and writes. To resolve these concerns, this paper proposes extracting the merits of existing mapping architectures, allowing SK-RM to seamlessly switch its data-update policy by considering the write-latency requirements of cache accesses. Promising results have been demonstrated through a series of benchmark-driven experiments.

Exploiting successive identical words and differences with dynamic bases for effective compression in Non-Volatile Memories

  • Swati Upadhyay
  • Arijit Nath
  • Hemangee Kapoor

Emerging non-volatile memories (NVMs) are considered potential candidates for replacing traditional DRAM in main memory. However, downsides like long write latency, high write energy, and low write endurance make their direct adoption in the memory hierarchy challenging. Approaches that reduce the number of bits written help overcome these drawbacks.

In this direction, we propose a compression technique that reduces overall bits written to the NVM, thus improving its lifetime. The proposed method, SIBR, compresses the incoming blocks to PCM by either eliminating the words to be written or by reducing the number of bits written for each word. For the former, words that have either zero content or are identical to consecutive words are not written. The latter is done by computing the difference of each word with a base word and storing only the difference (or delta) instead of the full word. The novelty of our contribution is to update the base word at run-time, thus achieving better compression. It is shown that computing the delta with a dynamically decided base compared to a fixed base gives smaller delta values. The dynamic base is another word in the same block. SIBR outperforms two state-of-the-art compression techniques by achieving a fairly low compression ratio and high coverage. Experimental results show a substantial reduction in bit-flips and improvement in lifetime.
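A toy version of the delta-with-dynamic-base idea: zero and repeated words cost nothing, and every other word is charged the bits of its smallest delta against any word already seen in the block. The word layout, bit accounting, and base-selection rule here are illustrative, not SIBR's exact format.

```python
def delta_bits(word, base):
    """Bits needed to store word as a delta against base
    (magnitude of the difference, at least 1 bit; sign and
    metadata overheads are ignored in this sketch)."""
    return max(abs(word - base).bit_length(), 1)

def compress_block(words, word_bits=32):
    """Toy SIBR-like pass: skip zero words and words identical to
    their predecessor; encode the rest as a delta against the best
    base chosen dynamically from the words already in the block."""
    bits, prev, seen = 0, None, []
    for w in words:
        if w == 0 or w == prev:
            pass                        # eliminated: nothing written
        elif seen:
            bits += min(delta_bits(w, b) for b in seen)
        else:
            bits += word_bits           # first word stored in full
        seen.append(w)
        prev = w
    return bits

block = [100, 100, 0, 103, 98, 0, 101, 100]
print(compress_block(block), "vs", len(block) * 32)  # 38 vs 256
```

The clustered values show why a dynamic base helps: 103 and 98 sit far from zero but only a couple of bits from a neighboring word in the same block.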

SESSION: Session 4: Circuit Design and Methodology for IoT Applications (Chair: Hun-Seok Kim, UMich)

HOGEye: Neural Approximation of HOG Feature Extraction in RRAM-Based 3D-Stacked Image Sensors

  • Tianrui Ma
  • Weidong Cao
  • Fei Qiao
  • Ayan Chakrabarti
  • Xuan Zhang

Many computer vision tasks, ranging from recognition to multi-view registration, operate on feature representations of images rather than raw pixel intensities. However, conventional pipelines for obtaining these representations incur significant energy consumption due to pixel-wise analog-to-digital (A/D) conversions and costly storage and computations. In this paper, we propose HOGEye, an efficient near-pixel implementation of a widely used feature extraction algorithm—Histograms of Oriented Gradients (HOG). HOGEye moves the key but computation-intensive derivative extraction (DE) and histogram generation (HG) steps into the analog domain by applying a novel neural approximation method in a resistive random-access memory (RRAM)-based 3D-stacked image sensor. The co-location of perception (sensor) and computation (DE and HG) and the alleviation of A/D conversions allow the HOGEye design to achieve significant energy savings. With negligible detection rate degradation, the entire HOGEye sensor system consumes less than 48μW@30fps for an image resolution of 256 × 256 (equivalent to 24.3pJ/pixel), while the processing part alone consumes 14.1pJ/pixel, achieving more than 2.5× energy-efficiency improvement over state-of-the-art designs.
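The two steps moved into the analog domain, derivative extraction and histogram generation, can be sketched digitally for one cell: central-difference gradients, then magnitude-weighted votes into unsigned-orientation bins. The bin count and gradient operator below are illustrative choices, not HOGEye's exact configuration.

```python
import math

def hog_cell(img, nbins=4):
    """Derivative extraction + histogram generation for one cell:
    central-difference gradients, then magnitude-weighted votes
    into unsigned-orientation bins over [0, pi)."""
    h, w = len(img), len(img[0])
    hist = [0.0] * nbins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # derivative extraction
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi   # unsigned orientation
            hist[min(int(ang / math.pi * nbins), nbins - 1)] += mag
    return hist

# A vertical edge: all gradient energy falls in the 0-radian bin.
img = [[0, 0, 10, 10]] * 4
print(hog_cell(img))  # [40.0, 0.0, 0.0, 0.0]
```

Every pixel contributes a gradient and a vote, which is exactly the per-pixel arithmetic the paper approximates with RRAM-based analog computing to avoid per-pixel A/D conversion.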

A Bit-level Sparsity-aware SAR ADC with Direct Hybrid Encoding for Signed Expressions for AIoT Applications

  • Ruicong Chen
  • H. T. Kung
  • Anantha Chandrakasan
  • Hae-Seung Lee

In this work, we propose the first bit-level sparsity-aware SAR ADC with direct hybrid encoding for signed expressions (HESE) for AIoT applications. ADCs are typically a bottleneck in reducing the energy consumption of analog neural networks (ANNs). For a pre-trained Convolutional Neural Network (CNN) inference, a HESE SAR for an ANN can reduce the number of non-zero signed digit terms to be output, and thus enables a reduction in energy along with the term quantization (TQ). The proposed SAR ADC directly produces the HESE signed-digit representation (SDR) using two thresholds per cycle for 2-bit look-ahead (LA). A prototype in 65nm shows that the HESE SAR provides sparsity encoding with a Walden FoM of 15.2fJ/conv.-step at 45MS/s. The core area is 0.072mm2.
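As background on signed-digit recoding (the generic idea behind HESE-style encodings; HESE's direct two-threshold, in-ADC conversion differs), the classic non-adjacent form shows how allowing digits in {-1, 0, 1} cuts the number of non-zero terms a MAC must process:

```python
def naf(n):
    """Non-adjacent form: digits in {-1, 0, 1}, LSB first.
    No two adjacent digits are non-zero, so on average ~1/3 of
    digits are non-zero (vs ~1/2 in plain binary)."""
    digits = []
    while n:
        if n & 1:
            d = 2 - (n % 4)      # +1 if n = 1, -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def nonzero_terms(digits):
    return sum(1 for d in digits if d)

v = 0b0111_0111           # 119: six non-zero terms in plain binary
d = naf(v)
assert sum(di * (1 << i) for i, di in enumerate(d)) == v
print(bin(v).count("1"), "->", nonzero_terms(d))  # 6 -> 3
```

Fewer non-zero signed digits means fewer terms to accumulate downstream, which is the lever the HESE SAR combines with term quantization to save energy.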

Analysis of the Effect of Hot Carrier Injection in An Integrated Inductive Voltage Regulator

  • Shida Zhang
  • Nael Mizanur Rahman
  • Venkata Chaitanya Krishna Chekuri
  • Carlos Tokunaga
  • Saibal Mukhopadhyay

This paper presents a simulation-based study to evaluate the effect of Hot Carrier Injection (HCI) on the characteristics of an on-chip, digitally-controlled, switched inductor voltage regulator (IVR) architecture. Our methodology integrates device-level aging models, circuit simulations in SPICE, and control loop simulations in Simulink. We characterize the effect of HCI on individual components of an IVR, and their combined effect on the efficiency and transient performance. Our analysis using an IVR designed in 65nm CMOS shows that aging of the power stages has a smaller impact on performance compared to that of the control loop. Further, we perform a comparative analysis to show that, with a 1.8V supply, HCI leads to higher aging-induced degradation of IVR than Negative Bias Temperature Instability (NBTI). Finally, our simulation shows that parasitic inductance near IVR input aggravates NBTI and parasitic capacitance near IVR output aggravates HCI effects on IVR’s performance.

SESSION: Session 5: Advances in Hardware Security (Chair: Apoorva Amarnath, IBM)

RACE: RISC-V SoC for En/decryption Acceleration on the Edge for Homomorphic Computation

  • Zahra Azad
  • Guowei Yang
  • Rashmi Agrawal
  • Daniel Petrisko
  • Michael Taylor
  • Ajay Joshi

As more and more edge devices connect to the cloud to use its storage and compute capabilities, they bring security and data-privacy concerns. Homomorphic Encryption (HE) is a promising solution for maintaining data privacy by enabling computation on encrypted user data in the cloud. While there has been a lot of work on accelerating HE computation in the cloud, little attention has been paid to optimizing en/decryption on the edge. Therefore, in this paper, we present RACE, a custom-designed area- and energy-efficient SoC for en/decryption of data for HE. Owing to the similar operations in encryption and decryption, RACE unifies the en/decryption datapath to save area. RACE efficiently exploits techniques like memory reuse and data reordering to use a minimal amount of on-chip memory. We evaluate RACE using a complete RTL design containing a RISC-V processor and our unified accelerator. Our analysis shows that, for end-to-end en/decryption, RACE is, on average, 48× to 39,729× more energy efficient (across a wide range of security parameters) than a processor alone.

Sealer: In-SRAM AES for High-Performance and Low-Overhead Memory Encryption

  • Jingyao Zhang
  • Hoda Naghibijouybari
  • Elaheh Sadredini

To provide data and code confidentiality and reduce the risk of information leaking from memory or the memory bus, computing systems are enhanced with encryption and decryption engines. Despite massive efforts in designing hardware enhancements for data and code protection, existing solutions incur significant performance overhead because encryption/decryption sits on the critical path. In this paper, we present Sealer, a high-performance, low-overhead in-SRAM memory encryption engine that exploits the massive parallelism and bit-line computational capability of SRAM subarrays. Sealer encrypts data before sending it off-chip and decrypts it upon receiving memory blocks, thus providing data confidentiality. Our proposed solution requires only minimal modifications to the existing SRAM peripheral circuitry. Sealer achieves up to two orders of magnitude throughput-per-area improvement while consuming 3× less energy compared to prior solutions.

SESSION: Session 6: Novel Physical Design Methodologies (Chair: Marisa Lopez Vallejo, UPM)

Hier-3D: A Hierarchical Physical Design Methodology for Face-to-Face-Bonded 3D ICs

  • Anthony Agnesina
  • Moritz Brunion
  • Alberto Garcia-Ortiz
  • Francky Catthoor
  • Dragomir Milojevic
  • Manu Komalan
  • Matheus Cavalcante
  • Samuel Riedel
  • Luca Benini
  • Sung Kyu Lim

Hierarchical very-large-scale integration (VLSI) flows are an understudied yet critical approach to achieving design closure at giga-scale complexity and gigahertz frequency targets. This paper proposes a novel hierarchical physical design flow that enables building high-density, commercial-quality, two-tier face-to-face-bonded hierarchical 3D ICs. We significantly reduce the associated manufacturing cost compared to existing 3D implementation flows and, for the first time, achieve cost competitiveness against the 2D reference in large modern designs. Experimental results on complex industrial and open manycore processors demonstrate, in two advanced nodes, that the proposed flow provides major power, performance, and area/cost (PPAC) improvements of 1.2 to 2.2× compared with 2D, with all metrics, including power, improved simultaneously.

A Study on Optimizing Pin Accessibility of Standard Cells in the Post-3 nm Node

  • Jaehoon Jeong
  • Jonghyun Ko
  • Taigon Song

Nanosheet FETs (NSFETs) are expected to be the post-FinFET device of the 5 nm technology node and beyond. However, despite their high potential, few studies report the impact of NSFETs from the digital VLSI perspective. In this paper, we present a study of NSFETs for optimal standard cell (SDC) library design and pin-accessibility-aware layout for less routing congestion and low power consumption. To this end, we present five novel methodologies to tackle the pin accessibility issues that arise in SDC designs in extremely low routing-resource environments (4 tracks) and emphasize the importance of local trench contact (LTC) in them. Using our methodology, we reduce power consumption, total area, and wirelength by 11.0%, 13.2%, and 16.0%, respectively. Based on this study, we expect the additional routing congestion issues of advanced technology nodes to be manageable, enabling better full-chip designs at 3 nm and beyond.

Improving Performance and Power by Co-Optimizing Middle-of-Line Routing, Pin Pattern Generation, and Contact over Active Gates in Standard Cell Layout Synthesis

  • Sehyeon Chung
  • Jooyeon Jeong
  • Taewhan Kim

This paper addresses the combined problem of three core tasks in cell layout synthesis at 7nm and below: routing on the middle-of-line (MOL) layer, generating I/O pin patterns (PP), and allocating contacts over active gates (COAG). To date, existing cell layout generators have paid partial or little attention to these tasks, with no awareness of their synergistic effects. This work overcomes this limitation by proposing a systematic and tightly linked solution to the combined problem that boosts the synergistic effects on chip implementation. Precisely, we solve the problem in three steps: (1) fully utilizing the horizontal routing resource on the MOL layer by formulating in-cell routing as a weighted interval scheduling problem, (2) simultaneously performing the remaining horizontal in-cell routing and PP generation on the metal 1 layer through COAG exploitation while ensuring the pin accessibility constraint, and (3) completing in-cell routing by allocating vertical routing resources on the MOL layer. Experiments with benchmark designs show that our layout method generates standard cells with, on average, 34.2% shorter total metal 1 wirelength while retaining pin patterns that ensure pin accessibility, resulting in chip implementations with up to 72.5% timing-slack improvement and up to 15.6% power reduction compared with those produced using the best available conventional cells. In addition, by using less wire and fewer vias, our in-cell router consistently reduces the worst-case delay of cells, reducing the sum of setup time and clock-to-Q delay of flip-flops by 1.2%–3.0% on average relative to the existing best cells.
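Step (1)'s formulation can be illustrated with the textbook weighted-interval-scheduling DP: routes competing for one horizontal MOL track are intervals, and the scheduler keeps a maximum-weight non-overlapping subset. The route spans and weights below are hypothetical.

```python
import bisect

def weighted_interval_scheduling(intervals):
    """Max-weight set of non-overlapping intervals; each interval
    is (start, end, weight).  Classic O(n log n) DP: sort by end,
    binary-search the last compatible interval."""
    intervals = sorted(intervals, key=lambda iv: iv[1])
    ends = [iv[1] for iv in intervals]
    best = [0] * (len(intervals) + 1)
    for i, (s, e, w) in enumerate(intervals, 1):
        j = bisect.bisect_right(ends, s, 0, i - 1)   # last compatible
        best[i] = max(best[i - 1], best[j] + w)
    return best[-1]

# Candidate routes on one MOL track (hypothetical spans/weights).
routes = [(0, 3, 5), (2, 5, 6), (4, 7, 5), (6, 9, 4)]
print(weighted_interval_scheduling(routes))  # 10
```

Weights can encode routing benefit (e.g., wirelength saved on metal 1), so maximizing the scheduled weight directly maximizes the horizontal MOL resource actually exploited.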

SESSION: Session 7: Enablers for Energy-efficient Platforms (Chair: Xue Lin, Northeastern)

Neural Contextual Bandits Based Dynamic Sensor Selection for Low-Power Body-Area Networks

  • Berken Utku Demirel
  • Luke Chen
  • Mohammad Al Faruque

Providing health monitoring devices with machine intelligence is important for enabling automatic mobile healthcare applications. However, this brings additional challenges due to the resource scarcity of these devices. This work introduces a neural contextual bandits based dynamic sensor selection methodology for high-performance and resource-efficient body-area networks to realize next generation mobile health monitoring devices. The methodology utilizes contextual bandits to select the most informative sensor combinations during runtime and ignore redundant data for decreasing transmission and computing power in a body area network (BAN). The proposed method has been validated using one of the most common health monitoring applications: cardiac activity monitoring. Solutions from our proposed method are compared against those from related works in terms of classification performance and energy while considering the communication energy consumption. Our final solutions could reach 78.8% AU-PRC on the PTB-XL ECG dataset for cardiac abnormality detection while decreasing the overall energy consumption and computational energy by 3.7 × and 4.3 ×, respectively.
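The sensor-selection loop can be sketched with a minimal bandit: periodically explore a candidate sensor combination, otherwise exploit the combination with the best running mean reward. This is a deterministic stand-in for the paper's neural contextual bandit, and the sensor subsets and their accuracy-per-energy scores are hypothetical.

```python
def run_bandit(reward_fn, arms, steps=100, explore_every=10):
    """Minimal bandit loop: round-robin exploration every
    explore_every steps, greedy exploitation otherwise."""
    counts = {a: 0 for a in arms}
    means = {a: 0.0 for a in arms}
    for t in range(steps):
        if t % explore_every == 0:       # deterministic exploration
            arm = arms[(t // explore_every) % len(arms)]
        else:                            # exploit best mean so far
            arm = max(arms, key=lambda a: means[a])
        r = reward_fn(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]
    return max(counts, key=counts.get)

# Hypothetical accuracy-per-energy score of each sensor subset.
score = {"ecg": 0.6, "ppg": 0.5, "ecg+ppg": 0.8, "ecg+ppg+acc": 0.7}
print(run_bandit(lambda a: score[a], list(score)))  # ecg+ppg
```

The loop settles on the subset with the best reward and rarely samples the rest, which is how redundant sensor data gets dropped from the BAN at runtime; the neural contextual version additionally conditions the choice on the current signal context.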

3D IC Tier Partitioning of Memory Macros: PPA vs. Thermal Tradeoffs

  • Lingjun Zhu
  • Nesara Eranna Bethur
  • Yi-Chen Lu
  • Youngsang Cho
  • Yunhyeok Im
  • Sung Kyu Lim

Micro-bump and hybrid bonding technologies have enabled 3D ICs and provided remarkable performance gain, but the memory macro partitioning problem also becomes more complicated due to the limited 3D connection density. In this paper, we evaluate and quantify the impacts of various macro partitioning on the performance and temperature in commercial-grade 3D ICs. In addition, we propose a set of partitioning guidelines and a quick constraint-graph-based approach to create floorplans for logic-on-memory 3D ICs. Experimental results show that the optimized macro partitioning can help improve the performance of logic-on-memory 3D ICs by up to 15%, at the cost of 8°C temperature increase. Assuming air cooling, our simulation shows the 3D ICs are thermally sustainable with 97°C maximum temperature.

A Domain-Specific System-On-Chip Design for Energy Efficient Wearable Edge AI Applications

  • Yigit Tuncel
  • Anish Krishnakumar
  • Aishwarya Lekshmi Chithra
  • Younghyun Kim
  • Umit Ogras

Artificial intelligence (AI) based wearable applications collect and process a significant amount of streaming sensor data. Transmitting the raw data to cloud processors wastes scarce energy and threatens user privacy. Wearable edge AI devices should ideally balance two competing requirements: (1) maximizing the energy efficiency using targeted hardware accelerators and (2) providing versatility using general-purpose cores to support arbitrary applications. To this end, we present an open-source domain-specific programmable system-on-chip (SoC) that combines a RISC-V core with a meticulously determined set of accelerators targeting wearable applications. We apply the proposed design method to design an FPGA prototype and six real-life use cases to demonstrate the efficacy of the proposed SoC. Thorough experimental evaluations show that the proposed SoC provides up to 9.1 × faster execution and up to 8.9 × higher energy efficiency than software implementations in FPGA while maintaining programmability.

SESSION: Session 8: System Design for Energy-efficiency and Resiliency (Chair: Aatmesh Shrivastava, Northeastern)

SACS: A Self-Adaptive Checkpointing Strategy for Microkernel-Based Intermittent Systems

  • Yen-Ting Chen
  • Han-Xiang Liu
  • Yuan-Hao Chang
  • Yu-Pei Liang
  • Wei-Kuan Shih

Intermittent systems are usually energy-harvesting embedded systems that harvest energy from the ambient environment and perform computation intermittently. Because power is unreliable, these systems typically adopt checkpointing strategies to ensure data consistency and execution progress after resuming from unpredictable power failures. Existing checkpointing strategies are usually suited to bare-metal intermittent systems with short run times. As energy-harvesting techniques improve, intermittent systems gain longer run times and better computation power, so more and more of them run a microkernel to handle multiple tasks at the same time. However, existing checkpointing strategies were not designed for (or aware of) such microkernel-based intermittent systems and thus preserve execution progress poorly. To tackle this issue, we propose a self-adaptive checkpointing strategy (SACS) tailored to microkernel-based intermittent systems. Leveraging the time-slicing scheduler, the proposed design dynamically adjusts the checkpointing interval at both run time and reboot time, improving system performance by striking a good balance between execution progress and the number of performed checkpoints. A series of experiments was conducted on a Texas Instruments (TI) development board with well-known benchmarks. Compared to state-of-the-art designs, experimental results show that our design reduces execution time by at least 46.8% under different ambient-environment conditions while keeping the number of performed checkpoints at an acceptable scale.
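A toy version of interval adaptation: checkpoint less often while power holds, and shrink the interval after an outage so less progress is lost next time. The doubling/halving rule, bounds, and outage history below are illustrative, not SACS's actual policy.

```python
def adapt_interval(interval, survived, lo=1, hi=64):
    """One adaptation step (toy rule): if the last interval
    completed without a power failure, stretch it to checkpoint
    less often; after a failure, shrink it so less progress is
    lost on the next outage."""
    return min(interval * 2, hi) if survived else max(interval // 2, lo)

interval, history = 8, [True, True, False, True, False, False]
trace = []
for ok in history:                 # simulated run/outage history
    interval = adapt_interval(interval, ok)
    trace.append(interval)
print(trace)  # [16, 32, 16, 32, 16, 8]
```

The real design couples this adjustment to the microkernel's time-slicing scheduler and also re-evaluates the interval at reboot time, but the feedback loop, stretch on success, shrink on failure, is the core idea.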

Drift-tolerant Coding to Enhance the Energy Efficiency of Multi-Level-Cell Phase-Change Memory

  • Yi-Shen Chen
  • Yuan-Hao Chang
  • Tei-Wei Kuo

Phase-Change Memory (PCM) has emerged as a promising memory and storage technology in recent years, and Multi-Level-Cell (MLC) PCM further reduces the per-bit cost, improving its competitiveness by storing multiple bits in each PCM cell. However, MLC PCM suffers from high energy consumption in its write operations. In contrast to existing works that try to enhance the energy efficiency of the physical program-and-verify strategy for MLC PCM, this work proposes a drift-tolerant coding scheme that enables fast write operations on MLC PCM without sacrificing any data accuracy. By exploiting the resistance drift and asymmetric write characteristics of PCM cells, the proposed scheme significantly reduces the write energy consumption of MLC PCM. Meanwhile, a segmentation strategy is proposed to further improve the write performance of our coding scheme. A series of analyses and experiments was conducted to evaluate the capability of the proposed scheme. The results show that the proposed scheme reduces energy consumption by 6.2–17.1% and write latency by 3.2–11.3% across six representative benchmarks, compared with existing well-known schemes.

A Unified Forward Error Correction Accelerator for Multi-Mode Turbo, LDPC, and Polar Decoding

  • Yufan Yue
  • Tutu Ajayi
  • Xueyang Liu
  • Peiwen Xing
  • Zihan Wang
  • David Blaauw
  • Ronald Dreslinski
  • Hun Seok Kim

Forward error correction (FEC) is a critical component of communication systems, as the errors induced by noisy channels can be corrected using the redundancy in the coded message. This paper introduces a novel multi-mode FEC decoder accelerator that can decode Turbo, LDPC, and Polar codes using a unified architecture. The proposed design exploits the similarities among these codes to enable energy-efficient decoding with minimal overhead in the total area of the unified architecture. Moreover, the proposed design is highly reconfigurable to support various existing and future FEC standards, including 3GPP LTE/5G and IEEE 802.11n WiFi. Implemented in GF 12nm FinFET technology, the design occupies 8.47mm2 of chip area, attaining 25% logic and 49% memory area savings compared to a collection of single-mode designs. Running at 250MHz and 0.8V, the decoder achieves per-iteration throughput and energy efficiency of 690Mb/s and 44pJ/b for Turbo; 740Mb/s and 27.4pJ/b for LDPC; and 950Mb/s and 45.8pJ/b for Polar.

SESSION: Poster Session

Canopy: A CNFET-based Process Variation Aware Systolic DNN Accelerator

  • Cheng Chu
  • Dawen Xu
  • Ying Wang
  • Fan Chen

Although systolic accelerators have become the dominant method for executing Deep Neural Networks (DNNs), their performance efficiency (quantified as the Energy-Delay Product, or EDP) is limited by the capabilities of silicon Field-Effect Transistors (FETs). FETs constructed from Carbon Nanotubes (CNTs) have demonstrated >10x EDP benefits; however, the process variations inherent in carbon nanotube FET (CNFET) fabrication compromise these benefits, resulting in >40% performance degradation. In this work, we study the impact of CNT process variations and present Canopy, a process-variation-aware systolic DNN accelerator that leverages the spatial correlation in CNT variations. Canopy co-optimizes the architecture and dataflow to allow the computing engines in a systolic array to run at their best performance with non-uniform latency, minimizing the performance degradation incurred by CNT variations. Furthermore, we endow Canopy with dynamic reconfigurability, so that the microarchitectural capability and its associated flexibility achieve an extra degree of adaptability with regard to the DNN topology and processing hyper-parameters (e.g., batch size). Experimental results show that Canopy improves performance by 5.85x (4.66x) and reduces energy by 34% (90%) when inferencing a single input (a batch of inputs) compared to the baseline design under an iso-area comparison across seven DNN workloads.

Layerwise Disaggregated Evaluation of Spiking Neural Networks

  • Abinand Nallathambi
  • Sanchari Sen
  • Anand Raghunathan
  • Nitin Chandrachoodan

Spiking Neural Networks (SNNs) have attracted considerable attention due to their suitability to processing temporal input streams, as well as the emergence of highly power-efficient neuromorphic hardware platforms. The computational cost of evaluating a Spiking Neural Network (SNN) is strongly correlated with the number of timesteps for which it is evaluated. To improve the computational efficiency of SNN evaluation, we propose layerwise disaggregated SNNs (LD-SNNs), wherein the number of timesteps is independently optimized for each layer of the network. In effect, LD-SNNs allow for a better allocation of computational effort across layers in a network, resulting in an improved tradeoff between accuracy and efficiency. We propose a methodology to design optimized LD-SNNs from any given SNN. Across four benchmark networks, LD-SNNs achieve 1.67-3.84x reduction in synaptic updates and 1.2-2.56x reduction in neurons evaluated. These improvements translate to 1.25-3.45x faster inference on four different hardware platforms including two server-class platforms, a desktop platform and an edge SoC.
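The per-layer timestep allocation can be sketched as a greedy search. This is an illustrative toy of the idea, not the paper's methodology: starting from a uniform budget, repeatedly trim one timestep from the layer whose reduction costs the least accuracy, while staying above an accuracy floor. The `estimate_accuracy` callback is a hypothetical stand-in for validation runs.

```python
# Illustrative greedy search (our sketch, not the paper's exact method)
# for layerwise-disaggregated timestep counts.

def optimize_timesteps(num_layers, max_t, estimate_accuracy, acc_floor):
    steps = [max_t] * num_layers          # uniform-timestep SNN baseline
    improved = True
    while improved:
        improved = False
        best = None
        # try trimming one timestep from each layer; keep the best option
        for i in range(num_layers):
            if steps[i] <= 1:
                continue
            trial = steps[:]
            trial[i] -= 1
            acc = estimate_accuracy(trial)
            if acc >= acc_floor and (best is None or acc > best[1]):
                best = (i, acc)
        if best is not None:
            steps[best[0]] -= 1
            improved = True
    return steps
```

With a toy accuracy model in which early layers are insensitive to timestep reduction, the search trims those layers first and leaves sensitive layers at the full budget.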

Tightly Linking 3D Via Allocation Towards Routing Optimization for Monolithic 3D ICs

  • Suwan Kim
  • Sehyeon Chung
  • Taewhan Kim
  • Heechun Park

Monolithic 3D (M3D) is a revolutionary technology for high-density, high-performance chip design in the post-Moore era. However, it suffers from considerable thermal confinement due to transistor stacking and the insulating materials between layers. As a way of reducing power, and thereby mitigating the thermal problem, we propose a comprehensive physical design methodology that incorporates two important new elements, blockage-aware MIV (monolithic inter-tier via) placement and 3D net ordering for routing, both aimed at optimizing wire length. Precisely, we propose a three-step approach: (1) retrieving the MIV region candidates for each 3D net, (2) fine-tuning the placement to secure MIV spots in the presence of blockages, and (3) performing M3D routing with a net ordering that accounts for the fine-tuned placement result. We implement the proposed M3D design flow using commercial 2D IC EDA tools while providing seamless optimization for cross-tier connections. Our experiments confirm that the proposed M3D design flow saves wire length per cross-tier net by up to 41.42%, which corresponds to 7.68% less total net switching power and, equivalently, a 36.79% lower energy-delay product compared to the conventional state-of-the-art M3D design flow.

Enabling Capsule Networks at the Edge through Approximate Softmax and Squash Operations

  • Alberto Marchisio
  • Beatrice Bussolino
  • Edoardo Salvati
  • Maurizio Martina
  • Guido Masera
  • Muhammad Shafique

Complex Deep Neural Networks such as Capsule Networks (CapsNets) exhibit high learning capabilities at the cost of compute-intensive operations. To enable their deployment on edge devices, we propose to leverage approximate computing for designing approximate variants of the complex operations like softmax and squash. In our experiments, we evaluate tradeoffs between area, power consumption, and critical path delay of the designs implemented with the ASIC design flow, and the accuracy of the quantized CapsNets, compared to the exact functions.

Multi-Complexity-Loss DNAS for Energy-Efficient and Memory-Constrained Deep Neural Networks

  • Matteo Risso
  • Alessio Burrello
  • Luca Benini
  • Enrico Macii
  • Massimo Poncino
  • Daniele Jahier Pagliari

Neural Architecture Search (NAS) is increasingly popular for automatically exploring the accuracy versus computational complexity trade-off of Deep Learning (DL) architectures. When targeting tiny edge devices, the main challenge for DL deployment is meeting tight memory constraints, so most NAS algorithms consider model size as the complexity metric. Other methods reduce the energy or latency of DL models by trading off accuracy and the number of inference operations. Energy and memory are rarely considered simultaneously, in particular by low-search-cost Differentiable NAS (DNAS) solutions.

We overcome this limitation by proposing the first DNAS that directly addresses the most realistic scenario from a designer's perspective: the co-optimization of accuracy and energy (or latency) under a memory constraint determined by the target hardware. We do so by combining two complexity-dependent loss functions during training, each with an independent strength. Testing on three edge-relevant tasks from the MLPerf Tiny benchmark suite, we obtain rich Pareto sets of architectures in the energy-versus-accuracy space, with memory footprint constraints spanning from 75% to 6.25% of the baseline networks. When deployed on a commercial edge device, the STM NUCLEO-H743ZI2, our networks span a 2.18x range in energy consumption and a 4.04% range in accuracy for the same memory constraint, and reduce energy by up to 2.2x with a negligible accuracy drop with respect to the baseline.
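The combined training objective can be sketched in one function. This is an illustrative form under our own assumptions, not the paper's exact formulation: the energy term is a weighted objective traded off against accuracy, while the memory term only activates as a penalty once the architecture exceeds the target device's budget.

```python
# Sketch of a two-term complexity loss (illustrative, not the paper's
# formulation): co-optimize energy, constrain memory.

def dnas_loss(task_loss, energy, memory, mem_budget,
              energy_strength=1e-4, mem_strength=1e-2):
    energy_term = energy_strength * energy                    # co-optimized objective
    mem_term = mem_strength * max(0.0, memory - mem_budget)   # budget penalty
    return task_loss + energy_term + mem_term
```

In a DNAS, `energy` and `memory` would be differentiable estimates computed from the architecture parameters, so the whole expression can be minimized by gradient descent.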

Visible Light Synchronization for Time-Slotted Energy-Aware Transiently-Powered Communication

  • Alessandro Torrisi
  • Maria Doglioni
  • Kasim Sinan Yildirim
  • Davide Brunelli

Energy-harvesting IoT devices that operate without batteries have paved the way for sustainable sensing applications. These devices force applications to run intermittently, since ambient energy is sporadic and leads to frequent power failures. Unexpected power failures introduce several challenges for wireless communication, since nodes are not synchronized and stop operating during data transmission. This paper presents a novel self-powered autonomous circuit design to remedy this problem. The circuit uses visible-light communication (VLC) to enable synchronization for time-slotted, energy-aware, transiently powered communication: it aligns the activity phases of the batteryless sensors so that energy-status communication occurs when the nodes are active simultaneously. Evaluations showed that our circuit has ultra-low power consumption, can work at zero energy cost by relying only on harvested energy, and supports efficient communication between intermittently powered nodes.

Directed Acyclic Graph-based Neural Networks for Tunable Low-Power Computer Vision

  • Abhinav Goel
  • Caleb Tung
  • Nick Eliopoulos
  • Xiao Hu
  • George K. Thiruvathukal
  • James C. Davis
  • Yung-Hsiang Lu

Processing visual data on mobile devices has many applications, e.g., emergency response and tracking. State-of-the-art computer vision techniques rely on large Deep Neural Networks (DNNs) that are usually too power-hungry to be deployed on resource-constrained edge devices. Many techniques improve the efficiency of DNNs by compromising accuracy. However, the accuracy and efficiency of these techniques cannot be adapted to diverse edge applications with different hardware constraints and accuracy requirements. This paper demonstrates that a recent, efficient tree-based DNN architecture, called the hierarchical DNN, can be converted into a Directed Acyclic Graph (DAG) based architecture to provide tunable accuracy-efficiency tradeoff options. We propose a systematic method that identifies the connections that must be added to convert the tree into a DAG to improve accuracy. We conduct experiments on popular edge devices and show that increasing the connectivity of the DAG brings accuracy to within 1% of existing high-accuracy techniques. Our approach requires 93% less memory, 43% less energy, and 49% fewer operations than the high-accuracy techniques, thus providing more accuracy-efficiency configurations.

Energy Efficient Cache Design with Piezoelectric FETs

  • Reena Elangovan
  • Ashish Ranjan
  • Niharika Thakuria
  • Sumeet Gupta
  • Anand Raghunathan

Piezoelectric FETs (PeFETs) are a promising class of ferroelectric devices that use the piezoelectric effect to modulate strain in the channel. They present several desirable properties for on-chip memory, such as non-volatility, high-density, and low-power write capability. In this work, we present the first effort to design and evaluate cache architectures using PeFETs.

Two key goals in cache design are to maximize capacity and minimize latency. Accordingly, we consider two variants of PeFET bit-cells: a high-density variant (HD-PeFET) that does not use a separate access transistor, and a high-performance 1T-1PeFET variant (HP-PeFET) that sacrifices density for lower access latency. We note that, at the application level, there is significant heterogeneity in applications' sensitivity to cache capacity and latency. To enable a better tradeoff between these conflicting design goals, we propose a hybrid PeFET cache comprising both HP-PeFET and HD-PeFET regions at the granularity of cache ways. We make the key observation that frequently reused blocks residing in the HD-PeFET region are detrimental to overall cache performance due to its higher access latency. Hence, we also propose a cache management policy that identifies and migrates these blocks from the HD-PeFET region to the HP-PeFET region at runtime. We develop models of HD-PeFET and HP-PeFET caches using the CACTI framework and evaluate their benefits across a suite of PARSEC and SPLASH-2X benchmarks. We demonstrate 1.11x and 4.55x average improvements in performance and energy, respectively, using the proposed hybrid PeFET last-level cache against a baseline with a traditional SRAM cache at iso-area.
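The migration idea can be illustrated with a reuse-counter sketch. This is a hypothetical toy under our own assumptions (names, threshold, and eviction-free simplification are ours, not the paper's parameters): blocks in the slower HD region accumulate reuse counts and are promoted to the faster HP region once a threshold is crossed.

```python
# Hypothetical sketch of a hot-block migration policy for a hybrid cache:
# promote frequently reused blocks from high-density (HD, slow) ways to
# high-performance (HP, fast) ways.

class HybridCache:
    def __init__(self, hp_ways, threshold=4):
        self.hp = set()          # blocks resident in HP-PeFET ways
        self.hd = {}             # block -> reuse count in HD-PeFET ways
        self.hp_ways = hp_ways
        self.threshold = threshold

    def access(self, block):
        if block in self.hp:
            return "hp-hit"
        # simplified: treat any HD access (including the first) as an HD hit
        self.hd[block] = self.hd.get(block, 0) + 1
        if self.hd[block] >= self.threshold and len(self.hp) < self.hp_ways:
            # frequently reused block is detrimental in HD: migrate it
            del self.hd[block]
            self.hp.add(block)
            return "migrated"
        return "hd-hit"
```

A real policy would also handle HP eviction and misses; the sketch only shows the promotion decision.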

Predictive Model Attack for Embedded FPGA Logic Locking

  • Prattay Chowdhury
  • Chaitali Sathe
  • Benjamin Carrion Schaefer

With most VLSI design companies now being fabless, it is imperative to develop methods that protect their Intellectual Property (IP). One approach that has become very popular due to its relative simplicity and practicality is logic locking. One problem with traditional locking mechanisms is that the locking circuitry is built into the netlist that the VLSI design company delivers to the foundry, which then has access to the entire design, including the locking mechanism. This implies that the foundry could tamper with this circuitry or reverse engineer it to obtain the locking key. A relatively new approach, coined logic locking through omission or hardware redaction, maps a portion of the design onto an embedded FPGA (eFPGA), whose bitstream then acts as the locking key. This approach has been shown to be more secure, as the foundry has no access to the bitstream during the manufacturing stage. The obvious drawbacks are the increased design complexity and the area and performance overheads associated with the eFPGA. In this work we propose, to the best of our knowledge, the first attack on this type of locking mechanism: we substitute the exact logic mapped onto the eFPGA with a synthesizable predictive model that replicates its behavior. We show that this approach is applicable in the context of approximate computing, where hardware accelerators tolerate a certain degree of error at their outputs. Experimental results show that our proposed approach is very effective at finding suitable predictive models while simultaneously reducing overall power consumption.

2022 Student Research Forum at ASPDAC (SRF@ASPDAC)

The Student Research Forum at ASP-DAC evolved from a traditional poster session hosted by ACM SIGDA for Ph.D. students to present and discuss their dissertations with experts in the design automation community. Since 2015, the forum has included both Ph.D. and M.S. students, offering them a great opportunity to establish contacts for their future careers. In addition, the forum helps companies and academic institutions get an overview of the latest research and discover outstanding candidates for employment.

This Year’s Awards (2022)

Best Poster – Research
Intelligent Circuit Design and Implementation with Machine Learning in EDA
Zhiyao Xie, Duke University

Best Poster – Presentation
Algorithm-Hardware Co-design of Transformer on FPGA Devices
Xinyi Zhang, University of Pittsburgh 

Most Popular Poster
ASBP: Automatic Structured Bit-Pruning for RRAM-based NN Accelerator
Songyun Qu, Chinese Academy of Sciences


Call for Submission: SIGDA Student Research Forum at ASP-DAC 2021 (SRF@ASP-DAC 2021)

Since ASP-DAC 2021 is going virtual due to COVID-19, SRF@ASP-DAC 2021 will be held as a virtual forum. The forum welcomes all students, professors, and industry professionals from the relevant research community. The student author of each accepted submission must register for ASP-DAC 2021 at least at the full student rate. The forum will provide financial support equivalent to the full student rate for each accepted submission.

ELIGIBILITY

  • Students must be within 1 year (M.S.) or 2 years (Ph.D.) of dissertation completion or have completed their dissertation during the last 12 months.
  • Dissertation topic must be relevant to the ASP-DAC community.
  • Previous ASP-DAC forum presenters are not eligible.
  • Students who have presented previously at DAC/DATE PhD forums are eligible.
  • Only students with at least one published or accepted conference, symposium or journal “full” paper are eligible for awards.
  • Students must attend the virtual forum and present their posters themselves, without substitute presenters; otherwise, please contact the SRF Chair in advance.

SUBMISSION REQUIREMENTS
A two-page PDF abstract of the dissertation (in two-column format, using 10pt fonts and single-spaced lines), including name, institution, adviser, contact information, estimated (or actual) graduation date, whether the work has been presented at the DAC or DATE PhD Forum, as well as figures and a bibliography (if applicable). The two-page limit will be strictly enforced: any material beyond the second page will be ignored by the reviewers. The authors of each accepted abstract must prepare a poster and a short video presentation, and the student must attend the forum virtually for real-time interaction.

To be considered for awards, a student must explicitly indicate, in the title of the two-page abstract, the venues for which the work was published or accepted, and a list of all papers authored or co-authored by the student should be included in the bibliography of the two-page abstract. The papers must be related to the dissertation topic. Those on topics unrelated to the dissertation abstract will not be considered.

Submission website: https://easychair.org/conferences/?conf=aspdacsrf2021

IMPORTANT DATES
  Submission Deadline: November 30, 2020 (firm)
  Date of Acceptance Notification: December 14, 2020
  Poster and Short Video Submission Deadline: January 5, 2021
  Forum Date: January 19, 2021

CONTACT INFORMATION
For queries, please send an e-mail to Prof. Weichen Liu (liu [at] ntu.edu.sg). Please include “SRF@ASP-DAC 2021” in the subject of your email.

Organizers

Chair:
Weichen Liu, Nanyang Technological University, Singapore
Co-Chairs:
Lei Jiang, Indiana University Bloomington, US
Yaoyao Ye, Shanghai Jiao Tong University, China
Secretariat:
Jun Zhou, Nanyang Technological University, Singapore
Technical Committee:
Hiromitsu Awano, Kyoto University, Japan
Donkyu Baek, Chungbuk National University, Korea
Ateet Bhalla, Independent Technology Consultant, India
Yuan-Hao Chang, Academia Sinica, Taiwan
Wanli Chang, University of York, UK
Xianzhang Chen, Chongqing University, China
Yi-Jung Chen, National Chi Nan University, Taiwan
Xiang Chen, George Mason University, US
Haibao Chen, Shanghai Jiao Tong University, China
Sudipta Chattopadhyay, Singapore Univ. of Technology and Design, Singapore
Hsiang-Yun Cheng, Academia Sinica, Taiwan
Luan Huu Kinh Duong, Nanyang Technological University, Singapore
Shao-Yun Fang, National Taiwan University of Science and Technology, Taiwan
Ann Gordon-Ross, University of Florida, US
Chien-Chung Ho, National Chung Cheng University, Taiwan
Weiwen Jiang, University of Notre Dame, US
Yukihide Kohira, The University of Aizu, Japan
Hyung-Gyu Lee, Daegu University, Korea
Sicheng Li, Hewlett Packard Labs, US
Yongfu Li, Shanghai Jiao Tong University, China
Qingan Li, Wuhan University, China
Chun-Han Lin, National Taiwan Normal University, Taiwan
Ren-Shuo Liu, National Tsing Hua University, Taiwan
Jaehyun Park, University of Ulsan, Korea
Muhammad Shafique, New York University Abu Dhabi, UAE
Liang Shi, East China Normal University, China
Donghwa Shin, Soongsil University, Korea
Masashi Tawada, Waseda University, Japan
Hoeseok Yang, Ajou University, Korea
Ming-Chang Yang, The Chinese University of Hong Kong, China
Lei Yang, University of New Mexico, US
Bei Yu, The Chinese University of Hong Kong, China
Qian Zhang, University of California, Los Angeles, US
ASP-DAC liaison:
Masashi Tawada, Waseda University, Japan
Yukio Mitsuyama, Kochi University of Technology, Japan

Sponsors

ACM SIGDA
Cadence Design Systems, Inc.
Synopsys, Inc.

Ahmedullah Aziz

June 1st, 2022

Ahmedullah Aziz

Assistant Professor

University of Tennessee Knoxville

Email:

aziz@utk.edu

Personal webpage

https://nordic.eecs.utk.edu/

Research interests

Cryogenic Electronics, Beyond-CMOS Technologies, Neuromorphic Hardware, Superconducting Devices/Circuits, VLSI

Short bio

Dr. Ahmedullah Aziz is an Assistant Professor of Electrical Engineering & Computer Science at the University of Tennessee, Knoxville, USA. He earned his Ph.D. in Electrical & Computer Engineering from Purdue University in 2019, an M.S. degree in Electrical Engineering from the Pennsylvania State University (University Park) in 2016, and a B.S. degree in Electrical & Electronic Engineering from Bangladesh University of Engineering & Technology (BUET) in 2013. Before beginning his graduate studies, Dr. Aziz worked as a full-time engineer in the 'Tizen Lab' of the Samsung R&D Institute in Bangladesh. During his graduate studies, he worked as a co-op engineer (intern) in the Technology Research division of GlobalFoundries (Fab 8, NY, USA). He has received several awards and accolades for his research, including the ACM SIGDA Outstanding Ph.D. Dissertation Award (2021) from the Association for Computing Machinery, the EDAA Outstanding Ph.D. Dissertation Award (2020) from the European Design and Automation Association, the Outstanding Graduate Student Research Award (2019) from the College of Engineering, Purdue University, and the 'Icon' award from Samsung (2013). He was a co-recipient of two best publication awards (2015, 2016) from the SRC-DARPA STARnet Center and the best project award (2013) from CNSER. In addition, he has received several scholarships and recognitions for academic excellence, including the Dean's Award, the JB Gold Medal, and the Chairman's Award. He is a technical program committee (TPC) member for multiple flagship conferences (including DAC, ISCAS, GLSVLSI, and Nano) and a reviewer for several journals from reputed publishers (IEEE, AIP, Elsevier, Frontiers, IOP Science, Springer Nature). He has served as a review panelist for the US Department of Energy (DOE) and as a guest editor for 'Frontiers in Nanotechnology', 'Photonics', and 'Micromachines'.

Research highlights

Dr. Aziz is an expert in device-circuit co-design and electronic design automation (EDA). His research laid the foundation for physics-based and semi-physical compact modeling of multiple emerging device technologies, including Mott switches, oxide memristors, ferroelectric transistors, Josephson junctions, cryotrons, and topological memory/switches. His contributions to the field of low-power electronics have been internationally recognized through two prestigious distinguished dissertation awards, from (i) the Association for Computing Machinery (ACM) in 2021 and (ii) the European Design and Automation Association (EDAA) in 2020. His research portfolio comprises multiple avenues of exploratory nanoelectronics, spanning from device modeling to circuit/array design. In addition, Dr. Aziz has been a trailblazer in cryogenic memory technologies, facilitating critical advancements in quantum computing systems and space electronics. His work on memristive (room-temperature) and superconducting (cryogenic) neuromorphic systems has paved the way for dense, reconfigurable, and bio-plausible computing hardware.

Read More

Li Jiang

July 1st, 2022

Li Jiang

Assistant Professor

Shanghai Jiao Tong University

Email:

ljiang_cs@sjtu.edu.cn

Personal webpage

https://www.cs.sjtu.edu.cn/~jiangli/

Research interests

Compute-in-memory, Neuromorphic Computing, Domain Specific Architecture for AI, Database, networking etc.

Short bio

Li Jiang received the B.S. degree from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, in 2007, and the M.Phil. and Ph.D. degrees from the Department of Computer Science and Engineering, the Chinese University of Hong Kong, in 2010 and 2013, respectively. He has published more than 80 peer-reviewed papers in top-tier computer architecture, EDA, and AI/database conferences and journals, including ISCA, MICRO, DAC, ICCAD, AAAI, ICCV, SIGIR, TC, TCAD, and TPDS. He received the Best Paper Award at DATE'22 and Best Paper Nominations at ICCAD'10 and DATE'21. According to the IEEE digital library, five of his articles rank in the top five by citations among all papers collected from their respective conferences. Some of his achievements have been introduced into the IEEE P1838 standard, and several technologies have entered commercial use in cooperation with TSMC, Huawei, and Alibaba.

He received the Best Ph.D. Dissertation Award at ATS 2014 and was a finalist for the TTTC E. J. McCluskey Doctoral Thesis Award. He received the ACM Shanghai Rising Star Award and the CCF VLSI Early Career Award in 2019, as well as the second-class prize of the Wu Wenjun Award for Artificial Intelligence. He serves as a co-chair and TPC member of several international and national conferences, such as MICRO, DATE, ASP-DAC, ITC-Asia, ATS, CFTC, and CTC. He is an Associate Editor of IET Computers & Digital Techniques and Integration, the VLSI Journal. He is a co-founder of ChinaDA and the ACM/SIGDA East China Branch.

Research highlights

Prof. Li Jiang has been working on test and repair architectures for 3D ICs that can dramatically reduce costs, advocating mechanisms for sharing scarce test and repair resources. His group optimizes the 3D SoC test architecture under test-pin-count and thermal-dissipation constraints by sharing the test access mechanism (TAM) and test wires between pre-bond wafer-level and post-bond package-level tests. They further propose an inter-die spare-sharing technique and die-matching algorithms to improve the stack yield of 3D stacked memory; this work was nominated for the Best Paper Award at ICCAD 2010. Building on this method, they worked with TSMC to propose a novel BISR architecture that clusters and maps faulty rows/columns across dies to the same spare row/column to enhance repairability. This series of works has been widely accepted by the mainstream and introduced into the IEEE P1838 standard.

To improve assembly yield in the TSV fabrication process, they developed a fault model considering the TSV coupling effect, which had not been carefully investigated before. This drew their attention to a unique phenomenon: faulty TSVs tend to cluster. They therefore propose a novel spare-TSV sharing architecture composed of a lightweight switch design, two effective and efficient repair algorithms, and a TSV-grid mapping mechanism that can avoid catastrophic TSV clustering defects.

A ReRAM cell needs multiple programming pulses to cope with device programming variation and resistance drift. To overcome the resulting programming latency and energy overheads, they propose a Self-Terminating Write (STW) circuit that heavily reuses the inherent PIM peripherals (e.g., the ADC and trans-impedance amplifier) to obtain 2-bit precision with a single programming pulse. This work received the Best Paper Award at DATE 2022.

Read More

Fan Chen

July 1st, 2022

Fan Chen

Assistant Professor

Indiana University Bloomington

Email:

fc7@iu.edu

Personal webpage

https://homes.luddy.indiana.edu/fc7/

Research interests

Beyond-CMOS Computing, Quantum Machine Learning, Accelerator Architecture for Emerging Applications, Emerging Nonvolatile Memory

Short bio

Fan Chen is an assistant professor in the Department of Intelligent Systems Engineering at Indiana University Bloomington. Dr. Chen received her Ph.D. from the Department of Electrical and Computer Engineering at Duke University. She is a recipient of the 2022 NSF Faculty Early Career Development Program (CAREER) Award, the 2021 Service Recognition Award of the Great Lakes Symposium on VLSI (GLSVLSI), the 2019 Cadence Women in Technology Scholarship, and the Best Paper Award and Ph.D. Forum Best Poster Award at the 2018 Asia and South Pacific Design Automation Conference (ASP-DAC). Dr. Chen serves as publication chair of ISLPED 2022/2021, chair of the SIGDA University Booth at DAC 2022/2021, web and registration chair of GLSVLSI 2022, proceedings chair of ASAP 2021, and arrangement chair of GLSVLSI 2021. She also serves on the editorial board of IEEE Circuits and Systems Magazine (CAS-M). She is a technical reviewer for more than 30 international conferences and journals, such as IEEE TC, IEEE TCAS-I, IEEE TNNLS, IEEE D&T, IEEE IoT-J, ACM TACO, ACM TODAES, and ACM JETC.

Research highlights

Prof. Chen's research focuses on beyond-CMOS computing, quantum machine learning, and accelerator architectures for emerging applications. Her latest work on quantum machine learning investigates fundamentally novel quantum equivalents of deep learning frameworks, derived from the working principles of quantum computers, paving the way for general-purpose quantum algorithms on noisy intermediate-scale quantum devices. Another notable contribution is her accelerator architecture designs for emerging applications, including deep learning and bioinformatics. The memory and computing requirements of such big-data applications pose significant technical challenges to their adoption in a broader range of services; Prof. Chen investigates how system/architecture/algorithm co-designed domain-specific accelerators can improve performance and energy efficiency. Her work has been recognized by the academic community and has appeared in top conferences such as HPCA, DAC, ICCAD, DATE, ISLPED, ASP-DAC, and ESWEEK. Her research on "System Support for Scalable, Fast, and Power-Efficient Genome Sequencing" has been honored with the National Science Foundation Faculty Early Career Development (CAREER) Award.

Read More

Kuan-Hsun Chen

May 1st, 2022

Kuan-Hsun Chen

Assistant Professor

University of Twente, the Netherlands

Email:

k.h.chen@utwente.nl

Personal webpage

https://people.utwente.nl/k.h.chen

Research interests

Real-Time Embedded Systems, Non-Volatile Memories, Architecture-Aware Software Design, Resource-Constrained Machine Learning

Short bio

Dr.-Ing. Kuan-Hsun Chen is an assistant professor at the Chair of Computer Architecture and Embedded Systems (CAES) at the University of Twente in the Netherlands. He earned his Ph.D. (Dr.-Ing.) in Computer Science from TU Dortmund University, Germany, in May 2019 with distinction (summa cum laude), and his master's degree in Computer Science from National Tsing Hua University in Taiwan. He has published more than 40 scientific works in top peer-reviewed journals and international conferences. His key research interests are the design of real-time embedded systems, non-volatile memories, architecture-aware software design, and resource-constrained machine learning. Dr. Chen currently serves as an Associate Editor of AIMS Applied Computing and Intelligence (AIMS-ACI) and a Guest Editor of the Journal of Signal Processing Systems (JSPS). He is a Technical Program Committee (TPC) member for various leading international conferences in computer science, such as the Real-Time Systems Symposium (RTSS) and the International Conference on High Performance Computing, Data, & Analytics (HiPC), and a reviewer for many peer-reviewed journals and conferences (TC, TECS, TCPS, RTAS, IROS, ECML PKDD). Dr. Chen received a best student paper award at RTCSA'18, a best paper nomination at DATE'21, and a dissertation award from TU Dortmund University in 2019. The German Academic Exchange Service (DAAD) granted him a research project as Principal Investigator and a personal grant for a postdoctoral exchange in Japan in the summer of 2021. He is also a volunteer mentor for the European Space Agency (ESA) Summer of Code in Space (2017) and, since 2016, for Google Summer of Code, supporting open-source development of the popular real-time operating system RTEMS.

Research highlights

Embedded systems in various safety-critical domains, such as computing systems in automotive and avionic devices, are important for modern society. Due to their intensive interaction with the physical environment, where time naturally progresses, the correctness of the system depends not only on the functional correctness of the delivered results but also on the timeliness of the instants at which these results are delivered. Dr. Chen’s research covers a wide range of scientific issues in such areas; his two central research areas are as follows:

Dependable Real-Time Systems: With technology shrinking, hardware faults are increasingly present, threatening correct system behavior. Against such faults, software tolerance techniques are prominent due to their flexibility; however, their time overhead makes timeliness a pressing issue. In this context, three kinds of treatments are studied: 1) In soft real-time systems, occasional deadline misses are acceptable, and a series of analyses for the probability of deadline misses has been developed. The most effective one efficiently derives safe upper bounds on the deadline-miss probability with a speed-up of several orders of magnitude over conventional convolution-based approaches (https://ieeexplore.ieee.org/abstract/document/7993392). 2) By modeling the inherent safety margin in applications, soft errors can also be safely ignored in control applications. A runtime adaptive method is thus developed that compensates only when necessary while satisfying hard real-time constraints. This work was presented at LCTES’16 and published in ACM SIGPLAN Notices (https://dl.acm.org/doi/abs/10.1145/2980930.2907952). 3) On multi-core systems, several approaches are developed to optimize system reliability via the deployment of redundant multithreading.
A reliability-driven task mapping technique is developed for homogeneous multi-core architectures with reliability and performance heterogeneity, published in IEEE Transactions on Computers (https://ieeexplore.ieee.org/abstract/document/7422036).

Architecture-Aware Software Design: To unleash the scarce computational power of embedded systems, he focuses on how to exploit a given architecture, especially for data analysis applications such as data mining and machine learning. He develops code generators that automate the optimization of memory layouts for tree-based inference models. Given a trained model, optimized C++ code is generated that reduces cache misses on various CPU architectures and speeds up the runtime. This work was recently published in ACM Transactions on Embedded Computing Systems (https://dl.acm.org/doi/abs/10.1145/3508019). He also works on system design for non-volatile memories, which offer several advantages such as low leakage power, high density, and low unit cost, but also impose novel technical constraints, especially limited endurance. His results include software-based memory analyses, wear-leveling approaches, and more. One highlight is the exploration of energy-aware real-time scheduling for hybrid memory architectures: a multi-processor procrastination algorithm (HEART) is proposed, based on partitioned earliest-deadline-first (pEDF) scheduling, which reduces energy consumption by actively enlarging the hibernation time. This work was presented at EMSOFT’21 and published in ACM Transactions on Embedded Computing Systems (https://dl.acm.org/doi/abs/10.1145/3477019).
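For intuition, the conventional convolution-based deadline-miss computation that the analyses above outperform can be sketched in a few lines; the task model here is deliberately simplified (independent jobs with discrete execution-time distributions, and all numbers are hypothetical, not taken from the cited work):

```python
from itertools import product

def convolve(pmf_a, pmf_b):
    """Convolve two discrete PMFs given as {value: probability} dicts."""
    out = {}
    for (va, pa), (vb, pb) in product(pmf_a.items(), pmf_b.items()):
        out[va + vb] = out.get(va + vb, 0.0) + pa * pb
    return out

def deadline_miss_probability(job_pmfs, deadline):
    """Probability that the summed execution demand of all jobs
    exceeds the deadline (the convolution-based approach)."""
    total = {0: 1.0}
    for pmf in job_pmfs:
        total = convolve(total, pmf)
    return sum(p for demand, p in total.items() if demand > deadline)

# Two jobs, each running 1 time unit w.p. 0.9 or 3 time units w.p. 0.1:
# only the 3+3 outcome misses a deadline of 4.
jobs = [{1: 0.9, 3: 0.1}, {1: 0.9, 3: 0.1}]
print(deadline_miss_probability(jobs, deadline=4))  # ~0.01
```

Each convolution multiplies the distribution sizes, which is why exact convolution becomes expensive for many jobs and why the cited analytical bounds achieve large speed-ups.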

Read More

2022 International Symposium on Physical Design (ISPD) Table of Contents


SESSION: Session 1: Opening Session and First Keynote

Session details: Session 1: Opening Session and First Keynote

  • Laleh Behjat
  • Stephen Yang

The Need for Speed: From Electric Supercars to Cloud Bursting for Design

  • Dean Drako

Our industry has an insatiable need for speed. In addition to fast products for consumer electronics, medical, mil-aero, security, smart sensors, AI processing, robots, and more, we also continuously push for higher performance in the processing and communication infrastructure needed for true hyper-connectivity.

Dean Drako will compare our industry’s drive for speed to electric supercars. He will then drill down into four key elements that advanced design and verification teams deploy to speed the delivery of their innovative products to successfully meet market windows.

SESSION: Session 2: Placement, Clock Tree Synthesis, and Optimization

Session details: Session 2: Placement, Clock Tree Synthesis, and Optimization

  • Deepashree Sengupta

RTL-MP: Toward Practical, Human-Quality Chip Planning and Macro Placement

  • Andrew B. Kahng
  • Ravi Varadarajan
  • Zhiang Wang

In a typical RTL-to-GDSII flow, floorplanning plays an essential role in achieving decent quality of results (QoR). A good floorplan typically requires interaction between the frontend designer, who is responsible for the functionality of the RTL, and the backend physical design engineer. The increasing complexity of macro-dominated designs (especially machine learning accelerators with autogenerated RTL) has made the floorplanning task even more challenging and time-consuming. In this paper, we propose RTL-MP, a novel macro placer which utilizes RTL information and tries to “mimic” the interaction between the frontend RTL designer and the backend physical design engineer to produce human-quality floorplans. By exploiting the logical hierarchy and processing logical modules based on connection signatures, RTL-MP can capture the dataflow inherent in the RTL and use the dataflow information to guide macro placement. We also apply autotuning to optimize hyperparameter settings based on input designs. We have built RTL-MP on the OpenROAD infrastructure and applied RTL-MP to a set of industrial designs. RTL-MP outperforms state-of-the-art commercial macro placers and achieves QoR similar to that of handcrafted floorplans.

Clock Design Methodology for Energy and Computation Efficient Bitcoin Mining Machines

  • Chien-Pang Lu
  • Iris Hui-Ru Jiang
  • Chih-Wen Yang

Bitcoin mining machines have become a new driving force pushing the physical limits of semiconductor process technology. Instead of peak performance, mining machines pursue energy and computation efficiency in implementing cryptographic hash functions. Accordingly, the state-of-the-art ASIC design of mining machines adopts near-threshold computing, deep pipelines, and uni-directional data flow. Based on these design properties, in this paper, we propose a novel clock reversing tree design methodology for bitcoin mining machines. In the clock reversing tree, the clock of the global tree is fed from the last pipeline stage backward to the first one, and the clock latency difference between the local clock roots of two consecutive stages maintains a constant delay. The local tree of each stage is well balanced and keeps the same clock latency. This special clock topology naturally utilizes setup time slacks to gain hold time margins. Moreover, to alleviate the on-chip variations incurred by near-threshold computing, we maximize the common clock path shared by the flip-flops of each individual stage. Finally, we perform inverter pair swaps to maintain the duty cycle. Experimental results show that our methodology is promising for industrial bitcoin mining designs: compared with two variation-aware clock network synthesis approaches widely used in modern ASIC designs, our approach can reduce clock buffer/inverter usage by up to 64% and clock power by 12%, decrease hold time violating paths by 99%, and achieve 85% area saving for timing fixing. The proposed clock design methodology is general and applicable to blockchain and other ASICs with deep pipelines and strong data flow.

Kernel Mapping Techniques for Deep Learning Neural Network Accelerators

  • Sarp Özdemir
  • Mohammad Khasawneh
  • Smriti Rao
  • Patrick H. Madden

Deep learning applications are compute intensive and naturally parallel; this has spurred the development of new processor architectures tuned for the workload. In this paper, we consider structural differences between deep learning neural networks and more conventional circuits — highlighting how this impacts strategies for mapping neural network compute kernels onto available hardware. We present an efficient mapping approach based on dynamic programming, and also a method to establish performance bounds. We also propose an architectural approach to extend the practical lifetime of hardware accelerators, enabling the integration of a variety of heterogeneous processors into a high-performance system. Experimental results using benchmarks from a recent ISPD contest are also reported.
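The paper’s exact dynamic-programming formulation is not reproduced here, but the flavor of DP-based kernel mapping can be illustrated with a classic chain-partitioning sketch: split a chain of kernel costs into k contiguous groups (one per processor) so that the bottleneck group cost is minimized. The costs and k below are hypothetical:

```python
import functools

def chain_partition(costs, k):
    """Split a chain of kernel costs into k contiguous groups,
    minimizing the maximum (bottleneck) group cost."""
    n = len(costs)
    prefix = [0]
    for c in costs:
        prefix.append(prefix[-1] + c)

    @functools.lru_cache(maxsize=None)
    def best(i, parts):
        # Min bottleneck for costs[i:] split into `parts` groups.
        if parts == 1:
            return prefix[n] - prefix[i]
        return min(max(prefix[j] - prefix[i], best(j, parts - 1))
                   for j in range(i + 1, n - parts + 2))

    return best(0, k)

# Best split of [4, 2, 7, 1, 3] onto 2 processors is [4,2] | [7,1,3].
print(chain_partition([4, 2, 7, 1, 3], 2))  # -> 11
```

Memoization keeps the runtime polynomial in the chain length and processor count, which is the property that makes DP attractive for mapping problems of this shape.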

SESSION: Session 3: Design Flow Advances with Machine Learning and Lagrangian Relaxation

Session details: Session 3: Design Flow Advances with Machine Learning and Lagrangian Relaxation

  • Ulf Schlichtmann

Design Flow Parameter Optimization with Multi-Phase Positive Nondeterministic Tuning

  • Matthew M. Ziegler
  • Lakshmi N. Reddy
  • Robert L. Franch

Synthesis and place & route tools are highly leveraged for modern digital design. But despite continuous improvement in CAD tool performance, products in competitive markets often set PPA (performance, power, area) targets beyond what the tools can natively deliver. These aggressive targets lead circuit designers to tune a vast number of design flow parameters in search of near-optimal, design-specific flow recipes. Compounding the complex design flow parameter tuning problem is that many digital design tools exhibit nondeterminism, i.e., run-to-run variation. While CAD tool nondeterminism is typically considered undesirable behavior, this paper proposes design flow tuning methodologies that take advantage of nondeterminism. We propose techniques that run targeted scenarios multiple times to exploit the positive deviations nondeterminism can produce, and leverage the best observed runs as seeds for multi-phase tuning. We introduce three seed variants for multi-phase tuning with a spectrum of characteristics, trading off PPA improvement against reduced run-to-run variation. Our experimental analysis using high-performance industrial designs shows that the proposed novel techniques outperform an existing state-of-the-art industrial design flow tuning program across all PPA metrics. Furthermore, our proposed approaches reduce the run-to-run variation of the best scenarios, leading to a more predictable design flow.

Integrating LR Gate Sizing in an Industrial Place-and-Route Flow

  • David Chinnery
  • Ankur Sharma

Lagrangian relaxation (LR) based gate sizing is the state-of-the-art gate-sizing approach. Integrating it within a place-and-route (P&R) tool is difficult, as LR needs multiple iterations to converge, requiring very fast timing analysis. Gate sizing is invoked in many P&R flow steps, so it is also unclear where best to use LR sizing. We detail the development of an LR gate sizer for an industrial P&R flow. Software architecture and P&R flow needs are discussed. We summarize how we sped up the LR sizer by 3x to resize a million gates per hour, and how we ensure multi-threaded results are deterministic. LR sizing experiments at the fast WNS/TNS optimization steps in the flow stages before and after clock tree synthesis (CTS) show excellent results: 10% to 20% setup timing total negative slack (TNS) reduction with 11% to 14% less leakage power, or 1% to 3% lower total power (dynamic power + leakage) with a total power objective, and 1% to 3% lower cell area. Worst negative slack (WNS) also improved in 2/3 of designs pre-CTS. In the full flow, 5% lower leakage, 1% lower total power, and 0.6% lower cell area can be achieved, with roughly neutral impact on other metrics, compared to a high-effort low-power P&R flow baseline.
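As background on the general LR idea (a toy sketch, not the paper’s sizer): minimize power x1 + x2 for two identical gates subject to a path-delay constraint 1/x1 + 1/x2 <= T, relaxing the constraint into the objective with a multiplier that is updated by projected subgradient steps. All numbers are hypothetical:

```python
import math

def lr_size_two_gates(T=2.0, lam=0.25, step=0.1, iters=200):
    """Toy Lagrangian-relaxation sizing for two identical gates.

    The relaxed per-gate objective x + lam/x has the closed-form
    minimizer x = sqrt(lam); lam follows a projected subgradient
    update on the delay constraint 1/x1 + 1/x2 <= T.
    """
    for _ in range(iters):
        x = math.sqrt(lam)                         # inner problem, closed form
        delay = 2.0 / x
        lam = max(1e-9, lam + step * (delay - T))  # subgradient step on lam
    return x

# At the optimum the constraint is tight: x1 = x2 = 1, delay = T = 2.
print(lr_size_two_gates())
```

The expensive part in a real sizer is that every subgradient iteration needs fresh timing information, which is exactly why the paper emphasizes very fast timing analysis inside the loop.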

Machine-Learning Enabled PPA Closure for Next-Generation Designs

  • Vishal Khandelwal

Slowdown in process scaling is putting increasing pressure on EDA tools to bridge the power, performance and area (PPA) entitlement gap of Moore’s Law. State-of-the-art designs are pushing the PPA envelope to the limit, accompanied by increasing design size and complexity, and shrinking time-to-market constraints. AI/ML techniques provide a promising direction to address many of the modeling and convergence challenges seen in physical design flows. Further, the promise of intelligent design tools capable of exploring the solution space efficiently brings game-changing possibilities to next-generation design methodologies. In this talk we will discuss various challenges and opportunities in delivering best-in-class PPA closure with AI/ML augmented digital implementation tools. We will also talk about some aspects of large-scale industrial adoption of such a system and the AI capabilities needed to power these tools to minimize the need for an expert user, or endless tool iterations.

Improving Chip Design Performance and Productivity Using Machine Learning

  • Narender Hanchate

Engineering teams are always under pressure to deliver increasingly aggressive power, performance and area (PPA) goals, as fast as possible, on many concurrent projects. Chip designers often spend significant time tuning the implementation flow for each project to meet these goals. Cadence Cerebrus machine learning chip design flow optimization automates this whole process, delivering better PPA much more quickly. During this presentation Cadence will discuss Cerebrus machine learning and distributed computing techniques which enable RTL to GDS flow optimization, delivering better engineering productivity and design performance.

SESSION: Session 4: Panel on Traditional Algorithms Versus Machine Learning Approaches

Session details: Session 4: Panel on Traditional Algorithms Versus Machine Learning Approaches

  • Patrick Groeneveld

From Hard-Coded Heuristics to ML-Driven Optimization: New Frontiers for EDA

  • Patrick R. Groeneveld

The very first Design Automation Conference was held in 1964, when computers were programmed with punch cards. The initial topics were related to automated printed circuit board design, cell placement, and early attempts at transient circuit analysis. The next decades saw the introduction of key graph algorithms and numerical analysis methods. Optimal algorithms and more practical heuristic methods were published. The 1980s saw the advent of simulated annealing, a universal heuristic optimization method that found many applications. The next decade introduced powerful numerical placement methods for millions of cells. Soon after, physical synthesis was born by combining several incremental synthesis and analysis tools. Today’s commercial EDA tools run a very complex design flow that chains together hundreds of algorithms developed over six decades. Most effort goes into the careful fine-tuning of parameters and addressing the complex – and often surprising – algorithmic interactions. This is a difficult trial-and-error process, driven by a small set of benchmarks. Machine Learning methods will take some of the human tuning effort out of this loop. Some have already found their way into commercial tools. It will take a while before a Machine Learning method fully replaces a ‘traditional’ EDA algorithm. Each method in the flow has a limited sweet spot and is often run-time critical. On the other hand, conventional algorithms leave only insignificant opportunities for speed-up through parallelism. Machine Learning methods may provide the only viable way to unlock the potential of massive cloud computing resources.

Embracing Machine Learning in EDA

  • Haoxing Ren

The application of machine learning (ML) in EDA is a hot research trend. To use ML in EDA, it is natural to think from the ML-method point of view, i.e., supervised learning, reinforcement learning, and unsupervised learning. Based on this point of view, we can roughly classify the ML applications in EDA into three categories: prediction, optimization, and generation. The prediction category applies supervised learning methods to predict design quality of result (QoR) metrics. There are two kinds of QoR metrics that benefit from prediction. One kind of metrics are those that can be determined at the current design stage, but calculating them consumes a lot of computing resources. For example, [11] [12] leverage ML to predict circuit power consumption without expensive simulations. The other kind of metrics are those that depend on future design stages. For example, [8] predicts post-layout parasitics from the schematics of analog circuits. The optimization category applies Bayesian optimization (BO) and reinforcement learning (RL) to directly optimize EDA problems. BO treats the optimization objective as a blackbox function and tries to find optimal solutions by iteratively sampling the solution space. For example, [5] proposes to use BO with graph embedding and a neural network-based surrogate model to size analog circuits. RL treats the optimization objective as the reward from an environment, and trains agents to maximize the reward. [7] proposes to use RL to optimize macro placement, and [9] proposes to use RL to optimize parallel prefix circuit structures. The generation category applies generative models such as generative adversarial networks (GANs) to directly generate solutions to EDA problems. Generative models can learn from previously optimized data distributions and generate solutions for a new problem instance without going through iterative processes like BO or RL.
For example, [10] builds a conditional GAN model that learns to generate optical proximity correction (OPC) layout from the original mask.

What’s So Hard About (Mixed-Size) Placement?

  • Mohammad Khasawneh
  • Patrick H. Madden

For years, integrated circuit design has been a driver for algorithmic advances. The problems encountered in the design of modern circuits are often intractable — and with exponentially increasing size. Efficient heuristics and approximations have been essential to sustaining Moore’s Law growth, and now almost every aspect of the design process is heavily automated. There is, however, one notable exception: there is often substantial floorplanning effort from human designers to position large macro blocks. The lack of full automation on this step has motivated the exploration of novel optimization methods, most recently with reinforcement learning. In this paper, we argue that there are multiple forces which have prevented full automation — and a lack of algorithmic methods is not the only factor. If the time has come for automation, there are a number of “traditional” methods that should be considered again. We focus on recursive bisection, and highlight key ideas from partitioning algorithms that have broader impact than one might expect. We also stress the importance of benchmarking as a way to determine which approaches may be most effective.
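As a reminder of what recursive bisection looks like in its simplest form, here is a toy sketch that alternates vertical and horizontal cuts and splits cells evenly; a real flow would use a min-cut partitioner at each split, and the cell names and region bounds below are hypothetical:

```python
def bisect_place(cells, x0, x1, y0, y1, vertical=True):
    """Recursively bisect a region and split the cell list evenly
    between the halves (a stand-in for a min-cut partitioner),
    returning a cell -> (x, y) map of region-center positions."""
    if len(cells) <= 1:
        return {c: ((x0 + x1) / 2, (y0 + y1) / 2) for c in cells}
    half = len(cells) // 2
    a, b = cells[:half], cells[half:]   # real flows minimize cut nets here
    if vertical:
        xm = (x0 + x1) / 2
        return {**bisect_place(a, x0, xm, y0, y1, False),
                **bisect_place(b, xm, x1, y0, y1, False)}
    ym = (y0 + y1) / 2
    return {**bisect_place(a, x0, x1, y0, ym, True),
            **bisect_place(b, x0, x1, ym, y1, True)}

placement = bisect_place(["c0", "c1", "c2", "c3"], 0, 8, 0, 8)
print(placement)
```

The partitioning objective at each cut (not shown here) is what carries the "key ideas from partitioning algorithms" the paper refers to; the recursion itself is trivial.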

Scalability and Generalization of Circuit Training for Chip Floorplanning

  • Summer Yue
  • Ebrahim M. Songhori
  • Joe Wenjie Jiang
  • Toby Boyd
  • Anna Goldie
  • Azalia Mirhoseini
  • Sergio Guadarrama

Chip floorplanning is a complex task within the physical design process, with more than six decades of research dedicated to it. In a recent paper published in Nature (Mirhoseini et al., 2021), a new methodology based on deep reinforcement learning was proposed that solves the floorplanning problem for advanced chip technologies with production-quality results. The proposed method enables generalization, which means that the quality of placements improves as the policy is trained on a larger number of chip blocks. In this paper, we describe Circuit Training, an open-source distributed reinforcement learning framework that re-implements the proposed methodology in TensorFlow v2.x. We will explain the framework and discuss ways it can be extended to solve other important problems within physical design and more generally chip design. We also show new experimental results that demonstrate the scaling and generalization performance of Circuit Training.

SESSION: Session 5: Second Keynote

Session details: Session 5: Second Keynote

  • Louis K. Scheffer

The Cerebras CS-2: Designing an AI Accelerator around the World’s Largest 2.6 Trillion Transistor Chip

  • Jean-Philippe Fricker

The computing and memory demands from state-of-the-art neural networks have increased several orders of magnitude in just the last couple of years, and there’s no end in sight. Traditional forms of scaling chip performance are necessary but far from sufficient to run the machine learning models of the future. In this talk, Cerebras Co-Founder and Chief Systems Architect Jean-Philippe Fricker will explore the fundamental properties of neural networks and why they are not well served by traditional architectures. He will examine how co-design can relax the traditional boundaries between technologies and enable designs specialized for neural networks with new architectural capabilities and performance. Finally, Jean-Philippe will explore this rich new design space using the Cerebras architecture as a case study, highlighting design principles and tradeoffs that enable the machine learning models of the future.

SESSION: Session 6: Third Keynote

Session details: Session 6: Third Keynote

  • Chuck Alpert

Leveling Up: A Trajectory of OpenROAD, TILOS and Beyond

  • Andrew B. Kahng

Since June 2018, the OpenROAD project has developed an open-source, RTL-to-GDS EDA system within the DARPA IDEA program. The tool achieves no-human-in-loop generation of design-rule clean layout in 24 hours. This enables system innovation and design space exploration, while also democratizing hardware design by lowering barriers of cost, expertise and risk. Since November 2021, The Institute for Learning-enabled Optimization at Scale (TILOS), an NSF AI institute for advances in optimization partially supported by Intel, has begun its work toward a “new nexus” of AI, optimization, and the leading edge of practice for use domains that include IC design. This paper traces a trajectory of “leveling up” in the research enablement for IC physical design automation and EDA in general. This trajectory has OpenROAD and TILOS as waypoints, and advances themes of openness, infrastructure, and culture change.

SESSION: Session 7: Prototyping, Packaging, and Integration

Session details: Session 7: Prototyping, Packaging, and Integration

  • Tiago Reimann

3DIC Design: Challenges and Opportunities in System-of-Chips Integration

  • Ming Zhang

Technology scaling has enabled the semiconductor industry to successfully address application performance demands over the past three decades. However, the cost, complexity, and diminishing returns of classic Moore’s Law scaling are accelerating the migration from traditional system-on-chip design to systems-of-chips design consisting of 3D heterogeneous integration systems that open a new dimension to improve density, bandwidth, performance, power, and cost. Designing such 3D systems has its own challenges – to enable them, we need to look beyond piecemeal tooling to more hyperconvergent design systems that provide comprehensive technological solutions and productivity gains. This talk will outline the promise of 3D system-of-chips design and present key design and verification challenges faced by the engineering teams developing such systems. It will discuss how a holistic design solution consisting of end-to-end design automation, integrated tools, die-to-die IP, and methodologies can provide unique benefits in system-level design flow optimization and pave the way to achieving optimal power, performance, and transistor volume density to drive the next wave of transformative products.

Novel Methodology for Assessing Chip-Package Interaction Effects on Chip Performance

  • Armen Kteyan
  • Jun-Ho Choy
  • Valeriy Sukharev
  • Massimo Bertoletti
  • Carmelo Maiorca
  • Rossana Zadra
  • Massimo Inzaghi
  • Gabriele Gattere
  • Giancarlo Zinco
  • Paolo Valente
  • Roberto Bardelli
  • Alessandro Valerio
  • Pierluigi Rolandi
  • Mattia Monetti
  • Valentina Cuomo
  • Salvatore Santapà

The paper presents a multiscale simulation methodology and EDA tool that assess the effect of thermo-mechanical stresses arising after die assembly on chip performance. Existing non-uniformities of feature geometries and the composite nature of on-chip interconnect layers are addressed by a developed methodology of anisotropic effective thermomechanical material properties (EMP) that reduces the complexity of FEA simulations and enhances accuracy and performance. The physical nature of the calculated EMP makes it scalable with the simulation grid size, which enables resolution of stress/strain at different scales, from package to device channel. With feature-scale resolution, the tool enables accurate calculation of stress components in the active region of each device, where carrier mobility variation results in deviations of circuit performance. The tool’s capability of back-annotating the hierarchical SPICE netlist with stress values allows a user to perform circuit simulation in different stress environments, by placing the circuit block in different locations in the layout characterized by different distances from stress sources, such as die edges and C4 bumps. Both schematic and post-layout netlists can be employed for finding an optimal floorplan minimizing the stress impact at early design stages, as well as for the final design sign-off. Electrical measurements on a specially designed test package were used for validation of the methodology. Good agreement between measured and simulated variations of device characteristics has been demonstrated.

On Ensuring Congruency with Implementation During Emulation and Prototyping

  • Alex Rabinovitch

ASIC-style design implementation ensures a certain degree of determinism in design behavior when it comes to glitches in clock cones and hold violations. Emulation and prototyping products must follow the same deterministic rules of behavior in order to match the behavior of the real chip. Those techniques are surveyed and shown to be inherently rooted in modeling the timeline in a manner that creates an artificial common source of synchronization between different clocks in the design. The capability of low-skew clock lines provided by FPGA vendors is also leveraged. However, this overall approach can result in performance degradation, and techniques are presented to compensate for the degradation. It is an open question whether these methods could also benefit implementation, which presently uses a rather different method to solve similar problems.

SESSION: Session 8: 3D IC Design

Session details: Session 8: 3D IC Design

  • Lang Feng

Challenges and Solutions for 3D Fabric: A Foundry Perspective

  • Sandeep Kumar Goel

3D ICs have increasingly become popular as they provide a way to pack more functionality on a chip and reduce manufacturing cost. TSMC offers a number of packaging technologies under the umbrella of “3D Fabric” to suit different product requirements. Just like any new technology, 3D Fabric brings forward several challenges associated with system, design, thermal as well as testing that require effective and efficient solutions before 3D Fabric can be used in high volume production. In this presentation, we will give a brief introduction about various 3D Fabric offerings and discuss challenges from a semiconductor foundry perspective. Next, we present an overview of solutions along with what EDA needs to solve. Lastly, how various IEEE Standards such as 1838, and 1149.1 can help in streamlining and standardizing testing approaches for 3D Fabrics will be discussed.

Recent Advances and Future Challenges in 2.5D/3D Heterogeneous Integration

  • Tanay Karnik

In this presentation, we will review recent advances in chiplet-based commercial products and prototypes [2,3,4,5]. Most chiplet usage has been confined to integrating dies designed by the same organization and applied to building chips for the same product types. The right approach should reduce portfolio costs, scale innovation, and improve time to solution [1]. It is important to manage the associated trade-offs, such as thermal, power, I/O escapes, assembly, test, etc. We will conclude the talk by presenting the future 2.xD/3D integration opportunities becoming available [6].

ART-3D: Analytical 3D Placement with Reinforced Parameter Tuning for Monolithic 3D ICs

  • Gauthaman Murali
  • Sandra Maria Shaji
  • Anthony Agnesina
  • Guojie Luo
  • Sung Kyu Lim

In this paper, we show that true 3D placement approaches, enhanced with reinforcement learning, can offer further PPA improvements over pseudo-3D approaches. To accomplish this goal, we integrate an academic true 3D placement engine into a commercial-grade 3D physical design flow, creating ART-3D flow (Analytical 3D Placement with Reinforced Parameter Tuning-based 3D flow). We use a reinforcement learning (RL) framework to find optimized placement parameter settings of the true 3D placement engine for a given netlist and perform high-quality 3D placement. We then use an efficient 3D optimization and routing engine based on a commercial place and route (P&R) tool to maintain or improve the benefits reaped from true 3D placement till design signoff. We evaluate our 3D flow by designing several gate-only and processor benchmarks on a commercial 28nm technology node. Our proposed 3D flow involving true 3D placement offers the best PPA results compared to existing 3D P&R flows and reduces power consumption by up to 31%, improves effective frequency by up to 25%, and therefore reduces power-delay product by up to 43% compared with commercial 2D IC design flow. These improvements predominantly come from RL-based parameter tuning, as it improves the performance of the 3D placer by up to 12%.

Intelligent Design Automation for Heterogeneous Integration

  • Iris Hui-Ru Jiang
  • Yao-Wen Chang
  • Jiun-Lang Huang
  • Charlie Chung-Ping Chen

As design complexity grows dramatically in modern circuit designs, 2.5D/3D heterogeneous integration (HI) becomes effective for system performance, power, and cost optimization, providing promising solutions to the increasing cost of more-Moore scaling. In this talk, we investigate the chip, package, and board co-design methodology with advanced packages and optical communication, considering essential issues in physical design, electrical, thermal, and mechanical effects, timing, and testing, and suggest future research opportunities. Layout: A robust and vertically integrated physical design flow for HI design is needed. We address chip-, package-, and board-level component planning, package-level RDL routing, board-level routing, optical routing, and placement and routing considering warpage and thermal effects. Timing: New chip-level and cross-chip timing analysis techniques are desired. We address timing propagation under the current source delay model (CSM), timing analysis and optimization for optical-electrical routing, multi-corner multi-mode (MCMM) analysis for HI, and hierarchical MCMM analysis. Testing: The scope covers functional-like test generation, System-in-Package (SiP) online testing, photonic integrated circuit (PIC) testing and design-for-test (DfT), etc. Integration: We shall address chip, package, and board co-design considering multi-domain physics, including physical, electrical, thermal, mechanical, and optical effects and optimization.

SESSION: Session 9: Routing

Session details: Session 9: Routing

  • Jhih-Rong Gao

A Reinforcement Learning Agent for Obstacle-Avoiding Rectilinear Steiner Tree Construction

  • Po-Yan Chen
  • Bing-Ting Ke
  • Tai-Cheng Lee
  • I-Ching Tsai
  • Tai-Wei Kung
  • Li-Yi Lin
  • En-Cheng Liu
  • Yun-Chih Chang
  • Yih-Lang Li
  • Mango C.-T. Chao

This paper presents a router, which tackles a classic algorithmic problem in EDA, the obstacle-avoiding rectilinear Steiner minimum tree (OARSMT), with the help of an agent trained by our proposed policy-based reinforcement-learning (RL) framework. The job of the policy agent is to select an optimal set of Steiner points that can lead to an optimal OARSMT based on a given layout. Our RL framework can iteratively upgrade the policy agent by applying Monte-Carlo tree search to explore and evaluate various choices of Steiner points on various unseen layouts. As a result, our policy agent can be viewed as a self-designed OARSMT algorithm that can iteratively evolve by itself. The initial version of the agent is a sequential one, which selects one Steiner point at a time. Based on the sequential agent, a concurrent agent can then be derived to predict all required Steiner points with only one model inference. The overall training time can be further reduced by applying geometrically symmetric samples for training. The experimental results on single-layer 15×15 and 30×30 layouts demonstrate that our trained concurrent agent can outperform a state-of-the-art OARSMT router on both wire length and runtime.
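For context, the choice space the policy agent navigates is the classic one: Steiner-point candidates on the Hanan grid of the terminals. A brute-force, obstacle-free sketch of that search (not the paper's RL router) evaluates each candidate subset with a rectilinear minimum spanning tree:

```python
from itertools import combinations

def rect_dist(a, b):
    """Rectilinear (Manhattan) distance between two points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def mst_length(points):
    """Length of a rectilinear minimum spanning tree (Prim's algorithm)."""
    points = list(points)
    in_tree, rest = {points[0]}, set(points[1:])
    total = 0
    while rest:
        d, p = min((rect_dist(u, v), v) for u in in_tree for v in rest)
        total += d
        in_tree.add(p)
        rest.remove(p)
    return total

def hanan_candidates(terminals):
    """Hanan grid points (x of one terminal, y of another), minus the terminals."""
    xs = {x for x, _ in terminals}
    ys = {y for _, y in terminals}
    return [(x, y) for x in xs for y in ys if (x, y) not in terminals]

def brute_force_rsmt(terminals, max_steiner=2):
    """Best tree length over all Hanan Steiner-point subsets of size <= max_steiner."""
    best = mst_length(terminals)
    for k in range(1, max_steiner + 1):
        for extra in combinations(hanan_candidates(terminals), k):
            best = min(best, mst_length(list(terminals) + list(extra)))
    return best
```

For terminals (0,0), (4,0), (2,3), the plain MST has length 9, while adding the Hanan point (2,0) brings the tree down to 7 — exactly the kind of Steiner-point payoff the agent learns to find without exhaustive search.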

LEO: Line End Optimizer for Sub-7nm Technology Nodes

  • Diwesh Pandey
  • Gustavo E. Tellez
  • James Leland

Sub-7nm technology nodes have introduced new challenges, specifically in the lower metal layers. Extreme Ultraviolet Lithography (EUV) and multi-patterning-based lithography such as Self-Aligned Double Patterning (SADP) solutions have become key choices for the manufacturing of these layers. The demand for microprocessors has increased tremendously in the last few years and this imposes another challenge to the chip manufacturers to build their products at a very rapid rate. These days a mix of different lithography solutions for the manufacturing of metal layers is quite common. We propose a first-of-its-kind routing plugin which solves design rule violations for multiple lithography technologies without making any changes in the existing routers. Our plugin consists of a practical line-end optimization (LEO) algorithm, which solves most line-end problems in a few minutes, even for very large designs. Our solution is implemented in the development of a 7nm, industrial microprocessor design.

Routing Layer Sharing: A New Opportunity for Routing Optimization in Monolithic 3D ICs

  • Sai Pentapati
  • Sung Kyu Lim

A 3D Integrated Circuit consists of two or more dies bonded to each other in the vertical direction. This allows for a high transistor density without a need for shrinking the underlying transistor dimensions. While it has been shown to improve design power, performance, and area (PPA) due to the stacked Front End Of the Line (FEOL) layers, the Back End Of the Line (BEOL) structure of the stacked IC also allows for novel routing scenarios. With the split dies in 3D, nets would need to connect cells from different tiers, across many vertical layers and multiple FEOLs. More importantly, nets connecting cells in a single tier could still use metal layers from the BEOL of other tiers to complete routing. This is referred to as routing / metal layer sharing. While such sharing creates additional 3D connections, it can also be utilized to improve several aspects of the design such as cost, routing congestion, and performance. In this paper, we analyze the nets with metal layer sharing in 3D and provide ways to control the number of 3D connections. We show that the configuration of the 3D BEOL stack helps with metal layer cost reduction, with up to 1-2 fewer layers needed to complete routing without a noticeable timing impact. Sharing also allows for a better distribution of wirelength in the BEOL stack, which can significantly reduce congestion of the topmost metal layer, cutting its track usage by up to 50%. Finally, we also see performance benefits of up to 16% with the help of metal layer sharing in 3D IC design.

SESSION: Session 10: Fourth Keynote

Session details: Session 10: Fourth Keynote

  • Jens Lienig

Triple-play of Hyperconvergency, Analytics, and AI Innovations in the SysMoore Era

  • Aiqun Cao

The SysMoore Era can be characterized as the widening gap between classic Moore’s Law scaling and increasing system complexity. System-on-a-chip complexity has now given way to systems-of-chips, with the need for smaller process nodes and multi-die integration. With engineers now handling not just larger chip designs but systems comprised of multiple chips, the focus on user productivity and design robustness becomes a major factor in getting designs to market in the fastest time and with the best possible PPA. Combining a hyperconvergent design flow with smart data analytics and AI-based solution space exploration provides a huge benefit to the engineers tasked with completing these systems. This presentation outlines the challenges and the road to a triple-play solution that gets design engineers out of their late inning jams.

SESSION: Session 11: Lifetime Achievement Commemoration for Ricardo Reis

Session details: Session 11: Lifetime Achievement Commemoration for Ricardo Reis

  • Jose Luiz Guntzel

A Lifetime of Physical Design Automation and EDA Education: ISPD 2022 Lifetime Achievement Award Bio

  • Ricardo Augusto da Luz Reis

The 2022 International Symposium on Physical Design lifetime achievement award goes to Prof. Ricardo Reis for his instrumental impact on EDA research in South America and contributions to the physical design community.

Design and Optimization of Quantum Electronic Circuits

  • Giovanni De Micheli

Quantum electronic circuits where the logic information is processed and stored in single flux quanta promise efficient computation in a performance/power metric, and thus are of utmost interest as possible replacement or enhancement of CMOS. Several electronic device families leverage superconducting materials and transitions between resistive and superconducting states. Information is coded into bits with deterministic values – as opposed to qubits used in quantum computing. As an example, information can be coded into pulses. Logic gates can be modeled as finite-state machines, that emit logic outputs in response to inputs. The most natural realization of such circuits is through synchronous implementations, where a clock stimulus is transmitted to every logic gate and where logic depth is balanced at every input to achieve full synchrony. Novel superconducting realization families try to go beyond the limitations of synchronous logic with approaches reminiscent of asynchronous design style and leveraging information coding. Moreover, some superconducting families exploit adiabatic operation, in the search for minimizing energy consumption. Design automation for quantum electronic logic families is still in its infancy, but important results have been achieved in terms of automatic balancing and fanout management. The combination of these problems with logic restructuring poses new challenges, as the overall problem is more complex as compared to CMOS and algorithms and tools cannot be just adapted. This presentation will cover recent advancement in design automation for superconducting electronic circuits as well as address future developments in the field.

Physical Design at the Transistor Level Beyond Standard-Cell Methodology

  • Renato Hentschke

This talk offers a review of possibilities to explore on VLSI layout beyond traditional standard cell methodology. Existing Physical Design tools strictly avoid any modification to the contents of Standard Cells. Here, a post-processing step based on SAT solvers is proposed to obtain optimal solutions for local transistor level layout synthesis problems. This procedure can be constrained by metrics that ensure that quality is not degraded, and an acceptable and better-quality timing model can be rebuilt for the block. These problems and techniques are open research opportunities in Physical Design as they are not sufficiently explored in the literature and can bring significant improvements to the quality of a VLSI circuit.

Physical Design Optimization, From Past to Future

  • Ricardo Augusto da Luz Reis

By the end of the 1970s, microprocessors were designed by hand, showing excellent layout compaction. We will show some highlights of the reverse engineering of the Z8000, whose control part was designed by hand, revealing several layout optimization strategies. The observation of the Z8000 layout inspired research on methods for automatically generating the layout of any transistor network, allowing a reduction in the number of transistors needed to implement a circuit and, by consequence, in the leakage power. Some of the layout automation tools developed by our group are briefly presented.

SESSION: Session 12: Fifth Keynote

Session details: Session 12: Fifth Keynote

  • Bei Yu

Accelerating the Design and Performance of Next Generation Computing Systems with GPUs

  • Sameer Halepete

The last few years have seen an accelerating growth in the demand for new silicon designs, even as the size and complexity of those designs have increased. However, the gains in design productivity necessary to implement these designs efficiently have not kept up. We need more than an order of magnitude increase in design productivity by the end of the decade to keep up with demand. Traditional methods for improving physical design tool capabilities are running out of steam, and there is a strong need for new approaches. Over the last two decades, we have seen other areas of computer science such as computer vision, speech recognition, and natural language processing reach similar plateaus in performance, and each has been able to break out of the stall using GPU-accelerated computing and machine learning. There is a similar opportunity in EDA, but it will require a rethinking of the way these tools are implemented. The talk will cover where the demand for new silicon designs is coming from, what the productivity bottlenecks are, and then describe some advances in GPUs that could enable us to break through these bottlenecks, with some examples.

SESSION: Session 13: Advances in Analog and Full Custom Design Automation

Session details: Session 13: Advances in Analog and Full Custom Design Automation

  • Mark Po-Hung Lin

Optimized is Not Always Optimal – The Dilemma of Analog Design Automation

  • Juergen Scheible

The vast majority of state-of-the-art integrated circuits are mixed-signal chips. While the design of the digital parts of these ICs is highly automated, the design of the analog circuitry is largely done manually, which is very time-consuming and prone to error. Among the reasons generally listed for this is the attitude of the analog designer: many analog designers are convinced that human experience and intuition are needed for good analog design, which is why they distrust automated synthesis tools. This observation is quite correct, but it is only a symptom of the real problem. This paper shows that the phenomenon is caused by very concrete technical (and thus very rational) issues, which lie in the mode of operation of the typical optimization processes employed for the synthesis tasks. I will show that the dilemma that arises in analog design with these optimizers is the root cause of the low level of automation in analog design. The paper concludes with a review of proposals for automating analog design.

Analog/Mixed-Signal Layout Optimization using Optimal Well Taps

  • Ramprasath S
  • Meghna Madhusudan
  • Arvind K. Sharma
  • Jitesh Poojary
  • Soner Yaldiz
  • Ramesh Harjani
  • Steven M. Burns
  • Sachin S. Sapatnekar

Well island generation and well tap placement pose an important challenge in automated analog/mixed-signal (AMS) layout. Well taps prevent latchup within a radius of influence in a well island, and must cover all devices. Automated AMS layout flows typically perform well island generation and tap insertion as a postprocessing step after placement. However, this step is intrusive and potentially alters the placement, resulting in increased area, wire length, and performance degradation. This work develops a graph-based optimization that integrates well island generation, well tap insertion, and placement. Its efficacy is demonstrated within a stochastic placement engine. Experimental results show that this approach generates better area, wire length and performance metrics than traditional methods, at the cost of a marginal runtime degradation.
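The covering constraint described above — every device within a tap's radius of influence — is an instance of geometric set cover. As an illustration only (the paper integrates tap insertion into placement rather than solving it stand-alone), a greedy cover over hypothetical candidate tap sites:

```python
import math

def greedy_tap_selection(devices, candidate_taps, radius):
    """Pick tap sites until every device lies within `radius` of some tap.

    Greedy set cover: repeatedly choose the candidate covering the most
    still-uncovered devices (a log-factor approximation of the optimum).
    devices, candidate_taps: lists of (x, y) coordinates.
    """
    uncovered = set(range(len(devices)))
    chosen = []
    while uncovered:
        best_tap, best_cov = None, set()
        for tap in candidate_taps:
            cov = {i for i in uncovered if math.dist(tap, devices[i]) <= radius}
            if len(cov) > len(best_cov):
                best_tap, best_cov = tap, cov
        if not best_cov:
            raise ValueError("some device is outside every tap's radius")
        chosen.append(best_tap)
        uncovered -= best_cov
    return chosen
```

The intrusiveness the paper targets comes from the step this sketch omits: inserting the chosen taps shifts nearby devices, which is why integrating the cover into the placement engine itself pays off.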

Analog Synthesis – The Deterministic Way

  • Helmut Graeb

While the majority of research in design automation for analog circuits has been relying on statistical solution approaches, deterministic approaches are an attractive alternative. This paper gives a few examples of deterministic methods for sizing, structural synthesis and layout synthesis of analog circuits, which have been developed over the past decades. It starts from the so-called characteristic boundary curve for interactive parameter optimization, and ends at recent approaches for structural synthesis of operational amplifiers based on functional block composition. A deterministic approach to analog placement and to yield optimization will also be described. The central role of structural analysis of circuit netlists in these approaches will be explained. A summary of the underlying mindset of analog design automation and an outlook on future opportunities for deterministic sizing and layout synthesis concludes the paper.

AutoCRAFT: Layout Automation for Custom Circuits in Advanced FinFET Technologies

  • Hao Chen
  • Walker J. Turner
  • Sanquan Song
  • Keren Zhu
  • George F. Kokai
  • Brian Zimmer
  • C. Thomas Gray
  • Brucek Khailany
  • David Z. Pan
  • Haoxing Ren

Despite continuous efforts in layout automation for full-custom circuits, including analog/mixed-signal (AMS) designs, automated layout tools have not yet been widely adopted in current industrial full-custom design flows due to the high circuit complexity and sensitivity to layout parasitics. Nevertheless, the strict design rules and grid-based restrictions in nanometer-scale FinFET nodes limit the degree of freedom in full-custom layout design and thus reduce the gap between automation tools and human experts. This paper presents AutoCRAFT, an automatic layout generator targeting region-based layouts for advanced FinFET-based full-custom circuits. AutoCRAFT uses specialized place-and-route (P&R) algorithms to handle various design constraints while adhering to typical FinFET layout styles. Verified by comprehensive post-layout analyses, AutoCRAFT has achieved promising preliminary results in generating sign-off quality layouts for industrial benchmarks.

SESSION: Session 14: Panel on Challenges and Approaches in VLSI Routing

Session details: Session 14: Panel on Challenges and Approaches in VLSI Routing

  • Gracieli Posser

Challenges and Approaches in VLSI Routing

  • Gracieli Posser
  • Evangeline F.Y. Young
  • Stephan Held
  • Yih-Lang Li
  • David Z. Pan

In this paper, we will first have a brief review of the ISPD 2018 and 2019 Initial Detailed Routing Contests. We will then visit a few important and interesting topics in VLSI routing that include GPU-accelerated routing, signal speed optimization in routing, PCB routing, and AI-driven analog routing.

Challenges for Automating Package Routing

  • Wen-Hao Liu
  • Bing Chen
  • Hua-Yu Chang
  • Gary Lin
  • Zi-Shen Lin

Package routing is typically done in a semi-automatic or manual manner in order to meet customized requirements of different design styles. In recent years, however, the scale of package designs has grown rapidly and the routing rules have become increasingly complicated, so the engineering effort of the manual approach has increased dramatically. A fully automated solution has therefore become necessary and critical. In addition, full-auto package routing is one of the most important pieces of an automatic design flow for 3D-ICs. There are many challenges in realizing a fully automated package routing solution; some of them will be introduced in this paper.

SESSION: Session 15: Global Placement, Macro Placement, and Legalization

Session details: Session 15: Global Placement, Macro Placement, and Legalization

  • Joseph Shinnerl

Congestion and Timing Aware Macro Placement Using Machine Learning Predictions from Different Data Sources: Cross-design Model Applicability and the Discerning Ensemble

  • Xiang Gao
  • Yi-Min Jiang
  • Lixin Shao
  • Pedja Raspopovic
  • Menno E. Verbeek
  • Manish Sharma
  • Vineet Rashingkar
  • Amit Jalota

Modern very large-scale integration (VLSI) designs typically use a lot of macros (RAM, ROM, IP) that occupy a large portion of the core area. Also, macro placement being an early stage of the physical design flow, followed by standard cell placement, physical synthesis (place-opt), clock tree synthesis and routing, etc., has a big impact on the final quality of result (QoR). There is a need for Electronic Design Automation (EDA) physical design tools to provide predictions for congestion, timing, and power etc., with certainty for different macro placements before running time-consuming flows. However, the diversity of IC designs that commercial EDA tools must support and the limited number of similar designs that can provide training data, make such machine learning (ML) predictions extremely hard. Because of this, ML models usually need to be completely retrained for unseen designs to work properly. However, collecting full flow macro placement ML data is time consuming and impractical. To make things worse, common ML methods, such as regression, support vector machine (SVM), random forest (RF), neural network (NN) in general, lack a good estimation of prediction accuracy or confidence and lack debuggability for cross-design applications. In this paper, we present a novel discerning ensemble technique for cross-design ML prediction for macro placement. We developed our solution based on a large number of designs with different design styles and technology nodes, and tested the solution on 8 leading-edge industry designs and achieved comparable or even better results in a few hours (per design) than manual placement results that take many engineers weeks or even months to achieve. Our method shows great promise for many ML problems in EDA applications, or even in other areas.
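The key idea — trusting an ensemble only where its members agree — can be sketched independently of any specific model family. The agreement rule below is an assumption for illustration, not the paper's discerning criterion:

```python
from statistics import mean, pstdev

def ensemble_predict(models, features, agreement_tol=0.1):
    """Return (prediction, spread, trusted) for one sample.

    models: callables mapping a feature vector to a scalar QoR prediction
    (e.g., predicted congestion for a macro placement). The prediction is
    the ensemble mean; it is flagged `trusted` only when the members'
    spread is small relative to the prediction itself, so cross-design
    predictions outside the models' competence are discarded rather
    than silently wrong.
    """
    preds = [m(features) for m in models]
    center = mean(preds)
    spread = pstdev(preds)
    trusted = spread <= agreement_tol * max(abs(center), 1e-9)
    return center, spread, trusted
```

The spread doubles as the debuggability hook the paper says common ML methods lack: an untrusted prediction points directly at the disagreeing members.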

Global Placement Exploiting Soft 2D Regularity

  • Donghao Fang
  • Boyang Zhang
  • Hailiang Hu
  • Wuxi Li
  • Bo Yuan
  • Jiang Hu

Cell placement is a critical step in chip physical design and benefits from many kinds of improvement efforts. Recently, designs with 2D processing element arrays have become popular, primarily due to their deep neural network computing applications. The 2D array regularity is similar to, but different from, the regularity of conventional datapath designs. To exploit the 2D array regularity, this work develops a new global placement technique built upon RePlAce, a state-of-the-art placement framework. Experimental results from various designs show that the proposed technique can reduce half-perimeter wirelength and Steiner tree wirelength by about 6% and 12%, respectively.
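Half-perimeter wirelength (HPWL), one of the two metrics reduced above, is the standard placement proxy: per net, the half-perimeter of the bounding box of its pins. A minimal computation:

```python
def hpwl(pins):
    """Half-perimeter wirelength of one net:
    width + height of the bounding box of its (x, y) pin locations."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_hpwl(nets):
    """Sum of HPWL over all nets; each net is a list of (x, y) pin locations."""
    return sum(hpwl(net) for net in nets)
```

HPWL is exact for two-pin nets but underestimates multi-pin nets, which is why the paper reports Steiner tree wirelength alongside it.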

Linear-time Mixed-Cell-Height Legalization for Minimizing Maximum Displacement

  • Chung-Hsien Wu
  • Wai-Kei Mak
  • Chris Chu

Due to the aggressive scaling of advanced technology nodes, multiple-row-height cells have become more and more common in VLSI design. Consequently, the placement of cells is no longer independent among different rows, which makes the traditional row-based legalization techniques obsolete. In this work, we present a highly efficient linear-time mixed-cell-height legalization approach that optimizes both the total cell displacement and the maximum cell displacement. First, a fast window-based cell insertion technique introduced in [4] is applied to obtain a feasible initial row assignment and cell ordering which is known to be good for total displacement consideration. In the second stage, we use an iterative cell swapping algorithm to change the row assignment and the cell order of the critical cells for maximum displacement reduction. Then we develop an optimal linear time DAG-based fixed row and fixed order legalization algorithm to minimize the maximum cell displacement. Finally, we propose a cell shifting heuristic to reduce the total cell displacement without increasing the maximum cell displacement. Using the proposed approach, the quality provided by the global placement can be preserved as much as possible. Compared with the state-of-the-art work [4], experimental results show that our proposed algorithm can reduce the maximum cell displacement by more than 11% on average with similar average cell displacement.
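The fixed-row, fixed-order subproblem in the third stage can also be approached with a simple binary search over the allowed maximum displacement, checking feasibility greedily. This sketch is not the paper's optimal DAG-based algorithm; it handles a single row and assumes target positions lie inside it:

```python
def feasible(targets, widths, row_left, row_right, max_disp):
    """Can all cells be placed in order, without overlap, with every
    displacement |x_i - target_i| <= max_disp?  Greedy left-packing check."""
    frontier = row_left
    for t, w in zip(targets, widths):
        x = max(t - max_disp, frontier)            # leftmost legal position
        if x > t + max_disp or x + w > row_right:  # displacement or row bound violated
            return False
        frontier = x + w
    return True

def min_max_displacement(targets, widths, row_left, row_right, eps=1e-6):
    """Binary-search the smallest feasible maximum displacement."""
    lo, hi = 0.0, float(row_right - row_left)
    if not feasible(targets, widths, row_left, row_right, hi):
        raise ValueError("cells do not fit in the row")
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(targets, widths, row_left, row_right, mid):
            hi = mid
        else:
            lo = mid
    return hi
```

For two width-2 cells both targeting x = 0 in a row [0, 10], the second cell must move to x = 2, so the minimum maximum displacement is 2 — the greedy check is feasible at 2.0 and infeasible at 1.5.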

SESSION: Session 16: Sixth Keynote

Session details: Session 16: Sixth Keynote

  • Ajay Joshi

Hardware Security: Physical Design versus Side-Channel and Fault Attacks

  • Ingrid Verbauwhede

What is “hardware” security? How can we improve trustworthiness in hardware circuits? Is there a design method for secure hardware design? To answer these questions, note that different communities have different expectations of the trusted (i.e., expected trustworthy) hardware components upon which they start to build a secure system. At the same time, electronics shrink: sensor nodes, IoT devices, and smart electronics are becoming more and more available. In the past, adding security was only a concern for locked server rooms or, now, cloud servers. These days, however, our portable devices contain highly private and security-sensitive information. Adding security and cryptography to these often very resource-constrained devices is a challenge. Moreover, they can be subject to physical attacks, including side-channel and fault attacks [1][2]. This presentation aims at bringing some order to the chaos of expectations by introducing the importance of a design methodology for secure design [3][5]. We will illustrate the capabilities of current passive and active physical attacks, such as EM side-channel analysis and laser fault injection. In this context, we will also reflect on the role of physical design and place and route [4][6].

SESSION: Session 17: ISPD 2022 Contest Results and Closing Remarks

Session details: Session 17: ISPD 2022 Contest Results and Closing Remarks

  • David Chinnery

Benchmarking Security Closure of Physical Layouts: ISPD 2022 Contest

  • Johann Knechtel
  • Jayanth Gopinath
  • Mohammed Ashraf
  • Jitendra Bhandari
  • Ozgur Sinanoglu
  • Ramesh Karri

Computer-aided design (CAD) tools mainly optimize for power, performance, and area (PPA). However, given a large number of serious hardware-security threats that are emerging, future CAD flows must also incorporate techniques for designing secure integrated circuits (ICs). In fact, the stakes are quite high for IC vendors and design companies, as security risks that are not addressed during design time will inevitably be exploited in the field, where vulnerabilities are almost impossible to fix. However, there is currently little to no experience related to designing secure ICs available within the CAD community. For the very first time, this contest seeks to actively engage with the community to close this gap. The theme of this contest is security closure of physical layouts, that is, hardening the physical layouts at design time against threats that are executed post-design time. More specifically, this contest is focused on selected and seminal threats that, once taken in, are relatively simple to approach and mitigate through means of physical design: Trojan insertion and probing as well as fault injection. Acting as security engineers, contest participants will iteratively and proactively evaluate and fix the vulnerabilities of provided benchmark layouts. Benchmarks and submissions are based on the generic DEF format and related files. Thus, participants are free to use any physical-design tools of their choice, helping us to open up the contest to the community at large.

Can Li

April 1st, 2022

Can Li

Assistant Professor

The University of Hong Kong

Email:

canl@hku.hk

Personal webpage

http://canlab.hku.hk

Research interests

Neuromorphic computing, nanoelectronics devices, non-volatile memories, software-hardware co-optimization

Short bio

Dr. Can Li is currently an Assistant Professor at the Department of Electrical and Electronic Engineering of the University of Hong Kong, working on analog and neuromorphic computing accelerators based on post-CMOS emerging devices (e.g., memristors) for efficient machine/deep learning, network security, signal processing, etc. Before that, he spent two years at Hewlett Packard Labs in Palo Alto, California, and obtained his Ph.D. from the University of Massachusetts Amherst and his B.S./M.S. from Peking University. He is a recipient of the Early Career Award from the HKSAR RGC and the Excellent Young Scientist Fund Award from the NSFC.

Research highlights

Can Li has made contributions to the in-memory computing technology based on non-volatile memory devices. At the device level, he fabricated and characterized different resistive switching or memristive devices with different material stacks, including Cu/SiOx/Pt, Pt/SiOx/Pt, Si/SiOx/Si, Ta/HfOx/Pt, etc. The potential of this type of device was also demonstrated by Can Li and colleagues’ work on three-dimensional (3D) stacking and integration (up to eight layers), and ultimate scaling down to 2 nm×2 nm. At the array level, he integrated memristors (2 µm×2 µm and 50 nm×50 nm) with silicon transistors from commercial foundries and demonstrated high-yield and good analog programming ability. At the circuit level, he designed and developed analog circuits for analog content addressable memory in a 6-transistor 2-memristor (6T2M) configuration. Can Li was closely involved in designing, taping out, and evaluating peripheral circuits for matrix multiplication accelerators. At the systems level, he showcased the memristor-based system in potential applications such as artificial intelligence, analog signal/image processing, pattern matching, solving optimization problems, hardware security, etc. Those studies have been documented in many high-profile publications, including Nature Electronics, Nature Machine Intelligence, Nature Nanotechnology, Nature Communications, Advanced Materials, IEDM, etc.
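The matrix-multiplication role of such crossbars follows directly from Ohm's and Kirchhoff's laws: weights are programmed as conductances, inputs are applied as voltages, and each output line sums the resulting currents. The following is an idealized behavioral model (signed weights via a differential conductance pair, ignoring wire resistance and device non-idealities), not a description of any specific fabricated chip:

```python
def to_conductances(weights):
    """Split each signed weight into a non-negative (positive, negative) pair,
    one conductance per memristor in the differential pair."""
    g_pos = [[max(w, 0.0) for w in row] for row in weights]
    g_neg = [[max(-w, 0.0) for w in row] for row in weights]
    return g_pos, g_neg

def crossbar_mvm(weights, voltages):
    """One analog matrix-vector multiply: per output line, the Kirchhoff sum
    of per-device currents G * V; differential sensing of the positive and
    negative lines recovers the signed dot product."""
    g_pos, g_neg = to_conductances(weights)
    currents = []
    for row_p, row_n in zip(g_pos, g_neg):
        i_pos = sum(g * v for g, v in zip(row_p, voltages))
        i_neg = sum(g * v for g, v in zip(row_n, voltages))
        currents.append(i_pos - i_neg)
    return currents
```

In a physical array all of these multiply-accumulates happen in one read cycle, which is the source of the efficiency claims for in-memory computing; the peripheral circuits mentioned above (DACs, sense amplifiers, ADCs) handle the conversion at the array boundary.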

Read More

Johann Knechtel

April 1st, 2022

Johann Knechtel

Research Scientist

New York University Abu Dhabi, UAE

Email:

johann@nyu.edu

Personal webpage

https://wp.nyu.edu/johann/

Research interests

Hardware Security, Electronic Design Automation (EDA), 3D Integration, Emerging Technologies, Machine Learning

Short bio

Dr.-Ing. Johann Knechtel is a Research Scientist with the Design for Excellence Lab at New York University (NYU) Abu Dhabi, UAE. In this position, he is acting as Co-PI for multiple research projects and provides lecturing, training, and mentoring to PhD and undergraduate students. Johann received the Dipl.-Ing. degree (M.Sc.) in Information Systems Engineering in 2010 and the Dr.-Ing. degree (Ph.D.) in Computer Engineering (summa cum laude, with highest honors) in 2014, both from TU Dresden (TUD), Germany. Before joining NYU Abu Dhabi in 2016, Johann was a Postdoctoral Researcher in 2015 at the Masdar Institute of Science and Technology, UAE, where he was affiliated with the Twinlab on “3D Stacked Chips”, hosted by Masdar Institute and TUD and supported by industry and government partners. In 2012 he was with the Chinese University of Hong Kong, China, and in 2010 he was with the University of Michigan, USA. In 2006, he worked as a freelance software engineer for Siemens IT Solutions and Services, Germany; in 2006–08 he worked as a research assistant at the Fraunhofer IWS Institute, Dresden, Germany; and in 2008–09 he worked as an embedded systems intern at TraceTronic GmbH, Dresden, Germany. In 2017, Johann and his team won first place in the CSAW Applied Research Competition. Johann obtained scholarships from the German Academic Exchange Service (DAAD) in 2010, from the German Research Foundation (DFG) in 2010–14, and from the Graduate Academy of TU Dresden in 2014. Johann obtained an NYU Research Enhancement Fund in 2018–21. Johann has (co-)authored around 60 publications, including 15 highlighted and/or invited papers. Johann is an active member of the ACM, including ACM SIGDA, and IEEE. He is serving as a peer reviewer for various top-tier ACM and IEEE conferences and journals.

Research highlights

Johann is acting as Co-PI for multiple projects with the common goal of advancing hardware security. Johann’s work involves five PhD students and postdoctoral researchers at NYU Abu Dhabi and also covers collaborations with around 15 researchers and students at prestigious institutions worldwide. Johann’s work is currently focused on the following themes: 1. Security closure for physical design of integrated circuits (ICs); 2. Protection of IC design intellectual property, with advanced techniques proposed for split manufacturing and obfuscation utilizing interconnect fabrics as well as 2.5D and 3D integration; 3. Secure architectures and secure system integration based on chiplets and 2.5D integration; 4. Machine learning-driven security evaluation at design time of defense schemes like split manufacturing and logic locking; 5. Security evaluation of ICs and field-programmable gate arrays (FPGAs) using advanced electro-magnetic field and laser-assisted optical probing; 6. Design-time security evaluation of ICs against side-channel attacks; 7. Security promises and challenges of emerging technologies for various defense schemes; 8. Security-aware electronic design automation (EDA) flows for 2D, 2.5D, and 3D ICs. Johann has published successfully on these and other themes. Recent examples include two invited papers at ICCAD 2021 on security closure for physical design, two invited papers at ISPD 2020–21 on hardware security for and beyond CMOS devices, an invited paper at DATE 2020 on the role of EDA for secure composition of ICs, and invited papers at IOLTS and COINS 2019 on 3D integration as another dimension toward hardware security and on design IP protection, respectively. Furthermore, Johann and colleagues have recently compiled a book, “The Next Era in Hardware Security: A Perspective on Emerging Technologies for Secure Electronics” (Springer, 2022), which has already reached 1.7k full-text downloads as of this writing.
Currently, Johann is acting as lead organizer for the first-ever international competition (co-hosted with ISPD 2022) on security closure. For that, research teams from all over the world are hardening the physical layouts of ICs at design time against selected attacks that are executed post-design time. This notion of security closure is quite complex, and besides the community interest in the contest and the related invited papers, Johann and colleagues are also in active discussion with government agencies on that challenge. Earlier, Johann and colleagues from TU Dresden, Germany, Google Inc., and Masdar Institute published a survey paper, “Large-Scale 3D Chips: Challenges and Solutions for Design Automation, Testing, and Trustworthy Integration,” in IPSJ Transactions on System LSI Design Methodology. Since its appearance in 2017, i.e., for five years already, this paper has consistently been the most viewed article of that journal. In 2012, Johann published his first journal article in IEEE TCAD with colleagues from the University of Michigan, USA; at the time of its appearance, this paper was the most popular article across all of that journal.

Read More