Mini Tutorials (13:00~15:00, Wednesday, October 19, 2022)
[Mini Tutorial 1]
Memory-based Hardware Neural System for High-density and Low-power Applications

Prof. Min-Hwi Kim
(Assistant Professor, School of Electrical and Electronics Engineering, Chung-Ang University, Korea)

Min-Hwi Kim received the B.S. and Ph.D. degrees in electrical engineering from Seoul National University (SNU) in 2013 and 2020, respectively. From 2020 to 2022, he was a staff engineer with Samsung Electronics, Hwaseong-si, South Korea, where he worked on the design of 3D NAND Flash memory. In 2022, he joined Chung-Ang University (CAU) as an Assistant Professor in the School of Electrical and Electronics Engineering (SoEEE). His research interests include next-generation semiconductor memory devices and energy-efficient neuromorphic electronics.
The semiconductor industry and academia are facing the limitations of existing computing systems as device scaling and process integration slow down. Driven by this trend, new computing systems such as in-memory computing and neuro-inspired computing are emerging, and their applications are expanding into new fields. This presentation first examines what is required to implement memory-based, high-density, low-power hardware neural systems, and then introduces recent research achievements.
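As brief background on the in-memory computing mentioned above (an illustration with made-up numbers, not part of the speaker's materials): a memory crossbar array that stores weights as conductances computes a matrix-vector multiply directly in place, because the current summed on each column is the dot product of the input voltages and that column's conductances.

```python
import numpy as np

# Weights stored as conductances G (siemens). Applying input voltages V to the
# rows produces column currents I = G.T @ V by Ohm's and Kirchhoff's laws, so
# the memory array itself performs the multiply-accumulate.
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.3, 0.1]]) * 1e-6   # 3 inputs x 2 outputs, microsiemens (illustrative)
V = np.array([0.1, 0.2, 0.05])      # input voltages (illustrative)

I = G.T @ V                         # column currents = in-memory dot products
assert np.allclose(I, [1.55e-7, 2.15e-7])
```

Because the weights never leave the array, the data movement that dominates energy in conventional architectures is avoided, which is the motivation for the high-density, low-power systems the talk describes.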
[Mini Tutorial 2]
Multi-carrier modulation for ultra-high-speed ADC-based SerDes

Prof. Gain Kim
(Assistant Professor, Electrical Engineering and Computer Science, DGIST, Korea)

Gain Kim received the B.S., M.S., and Ph.D. degrees in electrical engineering from the École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, in 2013, 2015, and 2018, respectively. From 2016 to 2018, he was with IBM Research Zurich, working on ADC-based wireline receiver designs. From 2018 to 2020, he was with KAIST as a postdoctoral fellow, and from Nov. 2020 to Jan. 2022 he was with Samsung Research, Seoul, South Korea, as a staff engineer working on a baseband modem for 6G wireless communications. In Jan. 2022, he joined the Daegu Gyeongbuk Institute of Science & Technology (DGIST), Daegu, South Korea, where he is currently an Assistant Professor. His current research interests include high-speed ADC design, ultra-high-speed SerDes design, modulation techniques for ADC-based serial links, and multi-chip computing systems with energy-efficient interfaces.
With per-lane data rates increasing to 112 Gb/s, PAM-4 with an ADC-based RX has become the most commonly employed modulation for ultra-high-speed serial links. To push data rates beyond 200 Gb/s/lane, modulation techniques with high bandwidth efficiency have been investigated for multiple reasons, such as reduced channel attenuation and lower required DAC/ADC conversion rates. With particular emphasis on orthogonal frequency-division multiplexing (OFDM), this talk covers link modeling with OFDM, design-space exploration, and implementation challenges for enabling data rates of 200 Gb/s and beyond in wireline transceivers.
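For readers unfamiliar with OFDM in this context, the sketch below (illustrative parameters, not from the talk) shows the core modulation step an OFDM-based transceiver performs: frequency-domain symbols are mapped onto orthogonal subcarriers with an IFFT, and a cyclic prefix is prepended so channel dispersion can be equalized per subcarrier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: 64 subcarriers, QPSK on each, 8-sample cyclic prefix.
N_SC = 64
CP_LEN = 8

def ofdm_modulate(symbols):
    """Map one block of frequency-domain symbols to a time-domain OFDM symbol."""
    time_domain = np.fft.ifft(symbols)                            # orthogonal subcarriers
    return np.concatenate([time_domain[-CP_LEN:], time_domain])   # prepend cyclic prefix

def ofdm_demodulate(samples):
    """Strip the cyclic prefix and recover the per-subcarrier symbols."""
    return np.fft.fft(samples[CP_LEN:])

# Random QPSK symbols on each subcarrier
bits = rng.integers(0, 2, size=(N_SC, 2))
tx = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

rx = ofdm_demodulate(ofdm_modulate(tx))
assert np.allclose(rx, tx)   # lossless over an ideal channel
```

The bandwidth-efficiency argument in the abstract comes from packing many low-rate subcarriers into the channel, which relaxes the per-sample conversion rate required of the DAC and ADC relative to a single wideband PAM signal.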
[Mini Tutorial 3]
Trends of Modern Processors for AI Acceleration

Prof. Kyuho Lee
(Assistant Professor, Dept. of Electrical Engineering / Graduate School of AI, UNIST, Korea)

Kyuho Lee received the B.S., M.S., and Ph.D. degrees from the School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2012, 2014, and 2017, respectively. He is now an Associate Professor at the Department of Electrical Engineering and the Graduate School of Artificial Intelligence, Ulsan National Institute of Science and Technology (UNIST). He has served as a TPC member of the IEEE Asian Solid-State Circuits Conference and ACM/IEEE Design, Automation and Test in Europe since 2018. Before joining UNIST as a faculty member, he worked for Samsung Research America, Richardson, TX, USA, as a hardware designer in 2016. From 2017 to 2018, he was a postdoctoral researcher in the Information Engineering and Electronics Research Institute, KAIST, Daejeon, Korea. His research interests include mixed-mode neuromorphic SoCs, deep learning processors, Network-on-Chip architectures, and intelligent computer vision processors for mobile devices and autonomous vehicles.
Machine learning and artificial intelligence are playing a key role in the fourth industrial revolution, and a tremendous amount of research is being conducted to blend these technologies into our daily lives through practical applications such as autonomous vehicles/robots/drones, AI speakers, smart surveillance, etc. Most current work relies on GPUs, which are not a practical solution for embedded systems and mobile platforms due to their large form factor and power consumption. Instead, low-power hardware accelerators are essential for feasible implementation, and they have recently been investigated from different aspects and with different architectures. In this talk, I will review the technological challenges and trends in the latest AI accelerators and introduce practical systems for AI applications.
[Mini Tutorial 4]
A miniaturized wireless neural implant with body-coupled power delivery and data transmission

Prof. Joonsung Bae
(Assistant Professor, Electrical and Electronics Engineering, Kangwon National University, Korea)

Joonsung Bae received the B.S. degree from the Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2007, and the M.S. and Ph.D. degrees in electrical engineering from KAIST in 2009 and 2013, respectively. His Ph.D. work concerned Wireless Body Area Network (WBAN) circuits and systems.
Since 2017, he has been with the Department of Electrical and Electronics Engineering, Kangwon National University, where he is currently an Associate Professor. Before joining Kangwon National University, he was an Analog Circuit Designer with IMEC, Belgium, where he investigated ultra-low-power biomedical circuits. His current research interests are energy-efficient mixed-signal circuits and systems, wireless neural interfaces, bio-medical integrated sensors, and body area networks.
This talk presents the design, implementation, and validation of a wireless neural implant that uses body-coupled power delivery and data transmission, targeting closed-loop multichannel wireless neural interfaces. The scheme is applicable to the central nervous system because: 1) it combines bidirectional communication with wireless power reception without recourse to customized, dedicated antennas or transducers; and 2) it exploits an undemanding electrode interface and the conductive properties of the body. The talk covers body-coupled channel characteristics, the implementation of the power receiver and data transceivers, and the prototyped integrated-circuit system, demonstrating its feasibility for miniaturized wireless neural implant applications.
[Mini Tutorial 5]
Augmented Reality 3D Head-up Display Systems

Prof. Dongwoo Kang
(Assistant Professor, Electronic and Electrical Engineering, Hongik University, Korea)

Dongwoo Kang received the B.S. degree in electrical engineering from Seoul National University, Seoul, South Korea, in 2007, and the M.S. and Ph.D. degrees in electrical engineering from the University of Southern California, Los Angeles, CA, in 2009 and 2013, respectively. He was a Senior Researcher at the Samsung Advanced Institute of Technology, Suwon, South Korea, from 2013 to 2021. In 2021, he joined the faculty of the Department of Electronic and Electrical Engineering at Hongik University, Seoul, South Korea, where he is currently an Assistant Professor. His research interests include image processing and computer vision algorithms, including detection, tracking, segmentation, and image enhancement, for augmented reality 3D displays and medical images.
Eye pupil tracking is important for augmented reality (AR) three-dimensional (3D) head-up displays (HUDs). Accurate and fast eye tracking remains challenging under real driving conditions with eye occlusions, such as when the driver wears sunglasses. We present an AR 3D HUD system for commercial use that can handle practical driving conditions. Our system classifies human faces into bare faces and sunglasses faces, which are treated differently. Experiments show that our method achieves high accuracy and speed: approximately 1.5 mm and 6.5 mm error for bare and sunglasses faces, respectively, in less than 10 ms on a 2.0 GHz CPU. The proposed method, combined with AR 3D HUDs, shows promising results for commercialization with low-crosstalk 3D images.


Main Tutorial (Wednesday, October 19, 2022)
[Main Tutorial] 15:00~16:30
Fault and Soft-Error Tolerant DLL Design for Heterogeneous Multi-Die Clock Synchronization

Prof. Shi-Yu Huang
(Electrical Engineering, National Tsing Hua University, Taiwan)

Shi-Yu Huang received his B.S. and M.S. degrees from the Electrical Engineering Department, National Taiwan University, in 1988 and 1992, respectively, and his Ph.D. degree in Electrical and Computer Engineering from the University of California, Santa Barbara, in 1997. Since 1999, he has been with National Tsing Hua University, Taiwan. His recent research concentrates on all-digital timing circuit designs, such as the all-digital phase-locked loop (PLL), all-digital delay-locked loop (DLL), and time-to-digital converter (TDC), and their applications to parametric fault testing and reliability enhancement for 3D-ICs. He has published more than 160 technical papers (including 46 IEEE journal papers). Dr. Huang co-founded TinnoTek Inc. (2007-2012), a company specializing in a cell-based PLL compiler and system-level power estimation tools. He has co-authored papers that received best presentation or best paper awards five times (e.g., VLSI-DAT'2006, VLSI-DAT'2013, ATS'2014, WRTLT'2017, ISOCC'2018).
Prof. Huang is a senior member of IEEE. He has been a tutorial speaker at a number of prior IEEE conferences (e.g., ATS'20, ITC-Asia'20, ITC-India'20, ISOCC'21, ITC'21, ATS'21). The topics include "Testing Clock and Power Networks", "Testing and Monitoring of Die-to-Die Interconnects in a 2.5D/3D IC", and "Designing a DLL Easily Using Only Standard Cells for Clock Synchronization in a Heterogeneous Multi-Die IC".

When we design an SoC or a multi-die IC consisting of third-party IPs, heterogeneous components, or functional dice, synchronizing the clock signals across all of them can be a headache. Fortunately, the Delay-Locked Loop (DLL) comes to the rescue. However, a DLL is traditionally built with some analog circuitry inside, which makes the design process complicated, if not mysterious, for system integrators. The emergence of the cell-based DLL design style over the past two decades has greatly alleviated this problem. A cell-based DLL design is not only small but also robust to process and temperature variations. It also lends itself to automation as a DLL compiler, so one can generate a DLL instance at the push of a button.
In this tutorial, we will take a step-by-step journey showing how to build your own robust and testable fault-tolerant DLL using only standard cells. In the first part, specific topics in the design of a basic DLL, such as the phase detector, the tunable delay line, and the phase-locking procedure, will be briefly reviewed. In the second part, we present a Fault and soft-Error Tolerant (FET) DLL architecture featuring static and dynamic timing correction schemes that keep the phase error small while withstanding run-time faults or soft errors. Finally, in the third part, we will touch upon online DLL monitoring schemes, which are often necessary to make an FET DLL truly trustworthy throughout its entire lifecycle.
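To make the phase-locking procedure concrete for readers new to DLLs, here is a toy behavioral model (illustrative numbers, not the tutorial's design): a bang-bang phase detector compares the delayed clock edge against the reference and nudges a digitally tunable delay line one code per cycle, so the code ramps toward lock and then dithers around it.

```python
# Toy DLL locking model. STEP and TARGET are illustrative values only.
STEP = 5e-12       # delay added per delay-line code (5 ps per tap)
TARGET = 400e-12   # phase error to be cancelled (400 ps)

code = 0           # delay-line control code
for _ in range(200):
    delay = code * STEP
    early = delay < TARGET       # bang-bang phase detector: early/late decision only
    code += 1 if early else -1   # charge-pump-like up/down step on the code

# Once locked, the code dithers within one tap of the ideal setting.
assert abs(code * STEP - TARGET) <= STEP
```

The one-tap dither at lock is exactly why the cell-based designs discussed in the tutorial care about fine delay-line resolution: the residual phase error of a bang-bang loop is bounded by the unit delay step.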

Short Tutorials (Friday, October 21, 2022)
[Short Tutorial 1] 12:40~13:20
The Turn of Moore’s Law from Space to Time – The Crisis, The Perspective and The Strategy.

Liming Xiu
(Chief Scientist of IC Technology and VP of Research, BOE Technology Group, China)

Liming Xiu earned B.S. and M.S. degrees in physics from Tsinghua University, China, in 1986 and 1988, respectively. He earned an MEEE degree from Texas A&M University, USA, in 1995. From 1995 to 2009, he worked for Texas Instruments, Dallas, USA, as a senior member of the Technical Staff. From 2009 to 2012, he was the chief clock architect of Novatek Microelectronics, Taiwan. From 2012 to 2015, he was VP for research at Kairos Microsystems, Dallas, USA. Since 2015, he has worked for BOE Technology Group, Beijing, China, as chief scientist of IC technology and VP for research. He served as VP of IEEE CASS from 2009 to 2010. He is the inventor of the Flying-Adder frequency synthesis architecture and an advocate of the time-average-frequency concept and theory. He has 36 US patents. He has published numerous IEEE journal and conference papers, four books as the sole author, and three book chapters as an invited author.
A space-induced crisis is recognized as the cause of the trouble that Moore's Law currently faces. The contemporary practice of this empirical law is seen as happening within a space-dominant paradigm. An alternative that exploits potential in the dimension of time is identified as an emerging paradigm in microelectronics. This new practice is termed the time-oriented paradigm, and it is justified as the turn of Moore's Law from space to time. The resulting Time-Moore strategy is envisioned as the next-generation enabler for continuing Moore's Law's pursuit of ever-higher information processing power and efficiency. It also perpetuates the spirit that Moore's Law is nothing but a collective storied history of innovations. In the first part of this tutorial, following Thomas Kuhn's seminal work on the concepts of paradigm and scientific revolution, the argument for the Time-Moore strategy (Time-Moore: to use time more) and the paradigm shift from space to time is carried out through philosophical persuasion rather than technical proof, owing to the difficult challenge of a change of mindset. The second part provides solid technical material supporting the transition from the old paradigm to the new one. The goal of this tutorial is to reevaluate the contemporary practice of microelectronics, identify the cause of the current crisis, advocate a change of mindset to circumvent the crisis, and ultimately point out a new route for advancement. After so many unprecedented accomplishments achieved through several decades of relentless endeavor, it is time for the big ship of Moore's Law to make a turn.
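As context for the time-average-frequency concept mentioned in the speaker's biography (a simplified numerical illustration, not material from the tutorial): a clock generator can interleave two base periods so that the average period, and hence the average frequency, lies anywhere between the two, giving fine frequency resolution without finer delay elements.

```python
# Time-average-frequency: interleave two base periods T_A and T_B so the
# average period is r*T_B + (1 - r)*T_A. All values below are illustrative.
T_A, T_B = 10.0, 11.0   # the two available clock periods (arbitrary units)
r = 0.25                # fraction of cycles that use T_B

# Use T_B on every 4th cycle (fraction r = 1/4) and T_A otherwise.
periods = [T_B if i % 4 == 0 else T_A for i in range(1000)]
avg_period = sum(periods) / len(periods)

assert abs(avg_period - (r * T_B + (1 - r) * T_A)) < 1e-9   # average period 10.25
```

Varying the fraction r varies the average frequency continuously between the two endpoints, which is the "exploiting the dimension of time" idea underlying the Time-Moore argument.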
[Short Tutorial 2] 12:40~13:20
Designing Efficient Deep Neural Network Training Processor

Prof. Dongsuk Jeon
(Seoul National University, Korea)

2009, B.S. in electrical engineering, Seoul National University
2014, Ph.D. in electrical engineering, University of Michigan, Ann Arbor
2014 – 2015, Postdoctoral Associate, MIT
2016 – Present, Assistant/Associate Professor, Seoul National University
Deep learning algorithms have gathered serious attention due to their outstanding performance in various tasks, and their application areas are rapidly expanding from computer vision and speech recognition to multi-modal understanding. While power-saving techniques such as quantization, network compression, and pruning have been successfully adopted for pre-trained models, they often become next to useless when applied to the training process. This talk will discuss various algorithmic and hardware optimization techniques that enable energy-efficient training processors.
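As a concrete instance of the quantization mentioned above (a generic inference-time technique, not the speaker's method): symmetric uniform quantization maps float weights to int8 with a single scale factor. It is cheap for a pre-trained model, where weights are static, but becomes problematic during training, where gradients and weight updates demand far more dynamic range than one fixed scale can cover.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(0.0, 0.1, size=(4, 4)).astype(np.float32)  # illustrative tensor

def quantize_int8(w):
    """Symmetric uniform quantization to int8 with one per-tensor scale."""
    scale = np.abs(w).max() / 127.0        # map the largest magnitude to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

q, scale = quantize_int8(weights)
dequant = q.astype(np.float32) * scale

# Round-to-nearest bounds the error by half a quantization step.
assert np.max(np.abs(dequant - weights)) <= scale / 2 + 1e-8
```

A training processor cannot simply reuse this scheme, since small gradient values would round to zero under a scale sized for the weights; handling that mismatch is one of the optimization problems the talk addresses.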