Mini Tutorials (Wednesday, October 19, 2022)
This section will be updated soon.
Main Tutorial (Wednesday, October 19, 2022)
[Main Tutorial] 15:00~16:30
Fault and Soft-Error Tolerant DLL Design for Heterogeneous Multi-Die Clock Synchronization
Shi-Yu Huang received his B.S. and M.S. degrees from the Department of Electrical Engineering, National Taiwan University, in 1988 and 1992, respectively, and his Ph.D. degree in Electrical and Computer Engineering from the University of California, Santa Barbara, in 1997. He has been with National Tsing Hua University, Taiwan, since 1999. His recent research concentrates on all-digital timing circuit designs, such as the all-digital phase-locked loop (PLL), all-digital delay-locked loop (DLL), and time-to-digital converter (TDC), and their applications to parametric fault testing and reliability enhancement for 3D-ICs. He has published more than 160 technical papers (including 46 IEEE journal papers). Dr. Huang co-founded TinnoTek Inc. (2007-2012), a company specializing in a cell-based PLL compiler and system-level power estimation tools. He has co-authored papers that received best-presentation or best-paper awards five times (e.g., VLSI-DAT’2006, VLSI-DAT’2013, ATS’2014, WRTLT’2017, ISOCC’2018).
Prof. Huang is a senior member of IEEE. He has been a tutorial speaker at a number of prior IEEE conferences (e.g., ATS’20, ITC-Asia’20, ITC-India’20, ISOCC’21, ITC’21, ATS’21). The topics include “Testing Clock and Power Networks”, “Testing and Monitoring of Die-to-Die Interconnects in a 2.5D/3D IC”, and “Designing a DLL Easily Using Only Standard Cells for Clock Synchronization in A Heterogeneous Multi-Die IC”.
When we design an SoC or a multi-die IC consisting of 3rd-party IPs, heterogeneous components, or functional dice, synchronizing the clock signals across all of them can be a headache. Fortunately, the Delay-Locked Loop (DLL) comes to the rescue. However, a DLL is traditionally built with some analog circuitry inside, which makes the design process complicated, if not mysterious, for system integrators. The emergence of the cell-based DLL design style over the past two decades has greatly alleviated this problem. A cell-based DLL design is not only small but also robust to process and temperature variation. It also lends itself to automation as a DLL compiler, so one can generate a DLL instance at the push of a button.
In this tutorial, we will take a step-by-step journey to show you how to make your own robust and testable fault-tolerant DLL using only standard cells. In the first part, specific topics for the design of a basic DLL, such as the phase detector, tunable delay line, and phase-locking procedure, will be briefly reviewed. In the second part, we will present a Fault- and soft-Error-Tolerant (FET) DLL architecture, featuring static and dynamic timing-correction schemes that keep the phase error small while withstanding run-time faults or soft errors. Finally, in the third part, we will touch upon online DLL monitoring schemes, which are often necessary to make an FET DLL truly trustworthy throughout its entire lifecycle.
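The basic phase-locking procedure covered in the first part can be illustrated with a toy simulation (an illustrative sketch only, not the tutorial's actual design; the stage delay, reference period, and all function names are assumptions): a bang-bang phase detector compares the reference clock edge with the delay-line output edge and nudges the delay-line control code one step at a time until the two edges align.

```python
# Illustrative sketch of bang-bang DLL locking (assumed parameters).
STAGE_DELAY = 0.02   # ns of delay per tunable-delay-line stage (assumed)
REF_PERIOD = 1.0     # ns reference clock period (assumed)

def phase_detector(ref_edge, fb_edge):
    # Bang-bang phase detector: +1 if the feedback edge leads (arrives
    # early, delay too short), -1 if it lags (delay too long).
    return 1 if fb_edge < ref_edge else -1

def lock_dll(code=0, num_iters=200):
    # Iteratively adjust the delay-line control code; once near lock,
    # the code dithers by one step around the ideal value.
    for _ in range(num_iters):
        fb_edge = code * STAGE_DELAY          # delay-line output edge
        code += phase_detector(REF_PERIOD, fb_edge)
    return code

code = lock_dll()
# At lock, code * STAGE_DELAY is within one stage delay of REF_PERIOD.
```

The one-step dithering at lock is inherent to a bang-bang detector; this is one reason real designs add the timing-correction schemes the tutorial's second part describes.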
Short Tutorials (Friday, October 21, 2022)
[Short Tutorial 1] 12:40~13:20
The Turn of Moore’s Law from Space to Time – The Crisis, The Perspective and The Strategy
Liming Xiu earned B.S. and M.S. degrees in physics from Tsinghua University, China, in 1986 and 1988, respectively. He earned an MEEE degree from Texas A&M University, USA, in 1995. From 1995 to 2009, he worked for Texas Instruments, Dallas, USA, as a senior member of the Technical Staff. From 2009 to 2012, he was the chief clock architect of Novatek Microelectronics, Taiwan. From 2012 to 2015, he was VP for research at Kairos Microsystems, Dallas, USA. Since 2015, he has worked for BOE Technology Group, Beijing, China, as chief scientist of IC technology and VP for research. He served as VP of IEEE CASS from 2009 to 2010. He is the inventor of the Flying-Adder frequency synthesis architecture and an advocate of the time-average-frequency concept and theory. He has 36 US patents. He has published numerous IEEE journal and conference papers, four books as the sole author, and three book chapters as an invited author.
A space-induced crisis is recognized as the cause of the trouble that Moore’s Law is currently facing. The contemporary practice of this empirical law is considered to take place within a space-dominant paradigm. An alternative, exploiting the untapped potential in the dimension of time, is identified as an emerging paradigm in microelectronics. The new practice is termed the time-oriented paradigm, and it is justified as the turn of Moore’s Law from space to time. The resultant Time-Moore strategy is envisioned as the next-generation enabler for continuing Moore’s Law’s pursuit of ever-higher information-processing power and efficiency. It also perpetuates the spirit that Moore’s Law is nothing but a collective storied history of innovations. In the first part of this tutorial, following Thomas Kuhn’s seminal work on the concepts of paradigm and scientific revolution, the argument for the Time-Moore strategy (Time-Moore: to use time more) and the paradigm shift from space to time is carried out through philosophical persuasion rather than technical proof, owing to the difficult challenge of a change of mindset. The second part provides solid technical material supporting this transition from the old paradigm to the new one. The goal of this tutorial is to reevaluate the contemporary practice of microelectronics, identify the cause of the current crisis, advocate a change of mindset to circumvent the crisis, and ultimately point out a new route for advancing. After achieving so many unprecedented accomplishments through several decades of relentless endeavor, it is time for the big ship of Moore’s Law to make a turn.
[Short Tutorial 2] 12:40~13:20
Designing Efficient Deep Neural Network Training Processor
2009, B.S. in electrical engineering, Seoul National University
2014, Ph.D. in electrical engineering, University of Michigan, Ann Arbor
2014 – 2015, Postdoctoral Associate, MIT
2016 – Present, Assistant/Associate Professor, Seoul National University
Deep learning algorithms have gathered serious attention due to their outstanding performance on various tasks. Their application areas are rapidly expanding from computer vision and speech recognition to multi-modal understanding. While power-saving techniques such as quantization, network compression, and pruning have been successfully applied to pre-trained models, they often become next to useless when applied to the training process. This talk will discuss various algorithmic and hardware optimization techniques that enable energy-efficient training processors.
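To make the contrast concrete, here is a minimal sketch (illustrative only, not the speaker's method; all names are assumptions) of symmetric per-tensor int8 quantization, the kind of inference-time power-saving technique the abstract mentions. It works well on frozen weights, but training additionally needs gradients and small weight updates that such coarse rounding would destroy, which is why training-specific techniques are required.

```python
# Illustrative sketch: symmetric per-tensor int8 weight quantization.

def quantize_int8(weights):
    # Scale so the largest-magnitude weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.8, -1.27, 0.05, 0.33]
q, s = quantize_int8(weights)
restored = dequantize(q, s)
# Each restored weight is within half a quantization step of the original.
```

The worst-case error per weight is half a quantization step (scale/2); a typical training update can be orders of magnitude smaller than that, which is the intuition behind the abstract's claim that inference-time tricks become "next to useless" during training.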