Only the ISOCC 2024 Tutorials will be conducted in a hybrid format with online access.
They will be streamed through Zoom, and the access link will be sent to your registered email address one week in advance.
If you have not received the access link or have any inquiries, please contact the ISOCC2024 secretary (secretary@isocc.org).
Review and Comparisons of Recent SRAM-based In-Memory Computing Hardware
Prof. Mingoo Seok
Biography
Mingoo Seok is an Associate Professor of Electrical Engineering at Columbia University. He received a B.S. (summa cum laude) from Seoul National University, South Korea, in 2005, and an M.S. in 2007 and a Ph.D. degree in 2011 from the University of Michigan, all in electrical engineering. His research interest covers various aspects of VLSI computing hardware, including low-power variation-tolerant hardware, machine-learning hardware, and on-chip power management. He won the 2015 NSF CAREER award and the 2019 Qualcomm Faculty Award. He is a technical program committee member for several conferences, including the IEEE International Solid-State Circuits Conference and the ACM/IEEE Design Automation Conference. He serves/served as a (guest) Associate Editor for IEEE Transactions on Circuits and Systems I: Regular Papers (2013-2015), IEEE Transactions on Very Large Scale Integration Systems (2015-2023), IEEE Solid-State Circuits Letters (2017-2022), and the IEEE Journal of Solid-State Circuits (2021). He was selected as a Solid-State Circuits Society (SSCS) Distinguished Lecturer (2023-2025).
Abstract
In the last decade, SRAM-based in-memory computing (IMC) hardware has received significant research attention for its massive gains in energy efficiency and performance. This tutorial introduces the fundamentals and recent design cases of SRAM-based IMC hardware. After a brief overview of SRAM circuit design, we will review several recent macro prototypes that employ analog-mixed-signal (AMS) computing mechanisms: resistive division, capacitive division, and charge sharing. We will then review several other macro prototypes that employ digital computing techniques, namely fully parallel architecture, approximate arithmetic, and hardware reuse. We will also provide high-level comparisons among those different macros. Finally, we will present a recent microprocessor prototype that employs IMC-based accelerators, which can perform on-chip inferences at high energy efficiency and low latency.
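The charge-sharing mechanism mentioned above can be illustrated with a tiny behavioral model. The sketch below models one idealized IMC column in which each bit cell conditionally charges a local capacitor and the caps are then shorted together; all names and parameters are illustrative, not from the tutorial material.

```python
# Behavioral sketch (idealized, no noise or mismatch) of a charge-sharing
# analog IMC column computing a binary dot product.
def imc_column_mac(inputs, weights, vdd=0.9):
    """Each bit cell charges its local cap to VDD only when input AND stored
    weight are both 1; shorting all caps averages the charge, so the column
    voltage is proportional to the dot product."""
    charged = [vdd if x and w else 0.0 for x, w in zip(inputs, weights)]
    return sum(charged) / len(charged)  # shared-charge column voltage

v = imc_column_mac([1, 0, 1, 1], [1, 1, 0, 1], vdd=0.9)
# dot product = 2 out of 4 cells -> column voltage 0.45 V
```

In a real macro this voltage is digitized by a column ADC, and non-idealities (cap mismatch, ADC resolution) bound the achievable precision.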
Neuromorphic processing at the sensor edge: System-on-Chip and application perspective
Prof. Amir Zjajo
Biography
Amir Zjajo is co-founder of Innatera Nanosystems B.V., and serves as its Chief Scientist. Prior to that, he was a member of research staff in the Mixed Signal Circuits and Systems Group at Philips Research Laboratories between 2000 and 2006, and subsequently with Corporate Research at NXP Semiconductors until 2009. He joined the Delft University of Technology the same year, and was responsible for leading research into intelligent systems within a range of EU-funded research projects. Dr. Zjajo has published 3 books and more than 90 papers in refereed journals and conference proceedings in the areas of mixed-signal VLSI design and neuromorphic circuits and systems, and holds more than 20 US patents granted or pending. He served as a TPC member of ISQED, DATE, VLSI Symposium, ISCAS, and BioCAS, among others. He received the M.Sc. and DIC degrees from Imperial College London, London, U.K., in 2000, and the Ph.D. degree from Eindhoven University of Technology, Eindhoven, The Netherlands, in 2010, all in electrical engineering. His research interests include energy-efficient circuit and system design for on-chip machine learning and inference, and bionic electronic circuits for autonomous cognitive systems. Dr. Zjajo won best/excellence paper awards at BioDevices’15, LifeTech’19, and AICAS’23. He is a senior member of IEEE.
Abstract
Brain-inspired, neuromorphic spiking neural network (SNN) accelerators enable sensor systems to deliver actionable, domain-specific information instead of raw data, bringing intelligence capabilities to resource-constrained devices. Key to these capabilities is the inherent notion of time built into the SNN computational elements, i.e., the time-varying states of the neurosynaptic fabric enable powerful temporal processing even with small models, with sparse and efficient event-based communication between computing elements. In this tutorial, we formulate requirements for a modular neuromorphic SNN framework that enables optimal hardware-software co-design in next-generation smart sensing systems-on-chip. In particular, i) we assess similarities between biological, and artificially reconstructed and silicon-proven macro- and micro-circuits in terms of temporal characteristics and information transfer from a computational perspective (10 mins). In addition, we highlight the performance of the world’s first ultra-low-power neuromorphic MCU for sensor data processing, Innatera’s Spiking Neural Processor T1. The overall system incorporates a spiking compute engine for SNNs, an accelerator for CNNs, and a lightweight RISC-V CPU, enabling the capabilities of SNNs to be combined with conventional non-spiking neural networks to realize a broad range of application capabilities within the same device. As a comprehensive companion to sensors, in addition to pattern recognition/data inference capabilities, the featured MCU facilitates the handling of multiple sensors, ordering of data, and conditioning and pre-/post-processing of sensor data and inference results. (10 mins)
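The "inherent notion of time" in SNN elements can be made concrete with a minimal leaky integrate-and-fire (LIF) neuron model, the standard time-varying building block of such fabrics. This is a generic textbook sketch with illustrative parameters, not a model of the T1 hardware.

```python
# Minimal discrete-time leaky integrate-and-fire (LIF) neuron sketch.
def lif_run(spike_train, leak=0.9, weight=0.4, threshold=1.0):
    """Integrate weighted input spikes into a leaky membrane potential;
    emit an output spike and reset when the threshold is crossed."""
    v, out = 0.0, []
    for s in spike_train:
        v = leak * v + weight * s   # decay the state, then integrate input
        if v >= threshold:
            out.append(1)           # fire...
            v = 0.0                 # ...and reset the membrane potential
        else:
            out.append(0)
    return out

print(lif_run([1, 1, 1, 1, 0, 1]))  # -> [0, 0, 1, 0, 0, 0]
```

Because the membrane potential decays between events, the neuron responds to the *timing* of inputs, not just their count, which is what enables temporal processing with small models.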
Basics and Trends in ReRAM-based Compute-In-Memory for Edge Computing
Prof. Tony Tae-Hyoung Kim
Biography
Tony Tae-Hyoung Kim (Senior Member, IEEE) received the B.S. and M.S. degrees in electrical engineering from Korea University, Seoul, South Korea, in 1999 and 2001, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Minnesota, Minneapolis, MN, USA, in 2009. From 2001 to 2005, he was with Samsung Electronics, Hwasung, South Korea. In 2009, he joined Nanyang Technological University, Singapore, where he is currently an Associate Professor. He has published over 200 papers in journals and conferences and holds 20 registered U.S. and Korean patents. His current research interests include computing-in-memory for machine learning, ultra-low-power circuits and systems for smart edge computing, low-power and high-performance digital, mixed-mode, and memory circuit design, variation-tolerant circuits and systems, and emerging memory circuits for neural networks. Dr. Kim received the IEEE ISSCC Student Travel Grant Award in 2019 and 2022, the Best Paper Award (Gold Prize) at IEEE/IEIE ICCE-Asia 2021, the Korean Federation of Science and Technology (KOFST) Award in 2021, the Best Demo Award at APCCAS 2016, the Low Power Design Contest Award at ISLPED 2016, Best Paper Awards at ISOCC 2011 and 2014, the AMD/CICC Student Scholarship Award at IEEE CICC 2008, the DAC/ISSCC Student Design Contest Award in 2008, the Samsung Humantech Thesis Award in 1999, 2001, and 2008, and the ETRI Journal Paper of the Year Award in 2005. He was the Chair of the IEEE Solid-State Circuits Society Singapore Chapter in 2015-2016 and is the Chair-Elect/Secretary of the IEEE Circuits and Systems Society VSATC. He has served on numerous IEEE conferences as a Committee Member. He serves as a Corresponding Guest Editor for the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS), a Guest Editor for the IEEE Transactions on Biomedical Circuits and Systems (TBioCAS), and an Associate Editor for the IEEE Transactions on Very Large Scale Integration (VLSI) Systems and IEEE Access.
Abstract
Recently, artificial intelligence (AI) and machine learning (ML) have emerged, opening up a new domain of integrated circuit design. However, these applications face unprecedented challenges in executing the required computing tasks with high energy efficiency. The traditional von Neumann computing architecture suffers from high energy consumption and large latency because of heavily repeated data transfers between memory and arithmetic-logic units (ALUs). Compute-In-Memory (CIM) has gained huge attention from the research community for tackling the above issues by merging memory and ALUs in energy-efficient ways. The minimization of data transfer in CIM improves the overall computing energy efficiency substantially (e.g., >100×). However, CIM faces various challenging issues such as PVT variations, linearity, and precision. In this tutorial, the speaker will discuss the basics and trends in ReRAM-based CIM for edge computing. The first part of the tutorial will cover ReRAM devices and design basics. Challenges and limitations in ReRAM CIM will be introduced in the second part, and several state-of-the-art ReRAM CIM works will be presented in the third part.
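The core idea of merging memory and ALUs in a ReRAM array can be sketched as an ideal crossbar: weights are stored as cell conductances G, input voltages V drive the rows, and each column current is I_j = Σ_i G[i][j]·V[i] by Ohm's and Kirchhoff's laws, i.e., a matrix-vector multiply in one step. The values below are illustrative, and the model deliberately omits the PVT-variation, linearity, and precision effects the tutorial addresses.

```python
# Ideal ReRAM crossbar MAC sketch: column currents from stored conductances.
def crossbar_mac(G, V):
    """G[i][j]: conductance of the cell at row i, column j (siemens);
    V[i]: voltage applied to row i (volts). Returns column currents (amperes),
    each the analog dot product of V with one stored weight column."""
    cols = len(G[0])
    return [sum(G[i][j] * V[i] for i in range(len(V))) for j in range(cols)]

G = [[1e-6, 2e-6],
     [3e-6, 0.0]]   # conductances encoding a 2x2 weight matrix
V = [0.2, 0.1]      # input voltages
print(crossbar_mac(G, V))  # column currents: [5e-07, 4e-07]
```

Since every row-column pair contributes simultaneously, the entire matrix-vector product costs one read operation instead of O(rows × cols) memory fetches — the source of the large energy-efficiency gain cited above.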
Introduction on VLSI and Electronic Design Automation
Prof. Heechun Park
Biography
Prof. Heechun Park is an Assistant Professor with the Department of Electrical Engineering, Ulsan National Institute of Science and Technology (UNIST), South Korea. He received the B.S. degree from the Department of Electrical Engineering, Seoul National University, Seoul, South Korea, in 2011, and the Ph.D. degree from the Department of Electrical and Computer Engineering, Seoul National University in 2018. Prior to joining UNIST, he was a Postdoctoral Fellow with the School of Electrical Engineering, Georgia Institute of Technology, a Senior Researcher with the Inter-university Semiconductor Research Center (ISRC), Seoul National University, a BK Assistant Professor with Seoul National University, and an Assistant Professor with Kookmin University. He has contributed to over 40 publications in international journals and conferences. His research interests cover the physical design of VLSI and SoC with computer-aided design (CAD), including vertically stacked 3-D and 2.5-D ICs, design under advanced technology nodes, machine learning for CAD, and physical design for AI.
Abstract
This tutorial will first present the basics of very-large-scale integration (VLSI) circuits and a common electronic design automation (EDA) flow for designing VLSI. It will focus on the RTL-to-GDSII design flow, starting from a language-based register-transfer-level (RTL) description to generate the Graphic Design System II (GDSII) layout, which mainly consists of logic synthesis and place-and-route (P&R). This tutorial will introduce some basic computer-aided design (CAD) algorithms applied at each design step (e.g., synthesis, floorplan, placement, clock tree synthesis (CTS), routing, and timing closure), as well as the commercial tools used in industry for the fully automated VLSI design flow.
Recent Research and Development Trends of High-Bandwidth Memory Interfaces
Prof. Joo-Hyung Chae
Biography
Joo-Hyung Chae received his B.S. and Ph.D. degrees in Electrical Engineering from Seoul National University, Seoul, South Korea, in 2012 and 2019, respectively. In 2013, he joined SK hynix, Icheon, South Korea, as an intern in the LPDDR Memory Design Department. From 2019 to 2021, he was with SK hynix, where his work focused on GDDR memory design. In 2021, he joined Kwangwoon University, Seoul, South Korea, where he is currently an Assistant Professor of Electronics and Communications Engineering. His research interests include the design of high-speed and low-power I/O circuits, clocking circuits, memory interfaces, and mixed-signal in-memory computing. Dr. Chae received the Doyeon Academic Paper Award from the Inter-university Semiconductor Research Center (ISRC), Seoul National University, in 2020.
Abstract
The demand for large amounts of data communication has increased across various data-centric applications. However, it necessitates frequent data transfers between processing units and off-chip memories, which are constrained by memory I/O bandwidth limitations. Such limitations lead to decreased system throughput and deteriorated energy efficiency. To solve these problems, a high-bandwidth memory interface is essential. There are two ways to increase the memory bandwidth. The first is raising the number of input and output (I/O) pins. A representative device of this method is high-bandwidth memory (HBM), which uses a relatively low data rate but dramatically increases the I/O pin count, up to 1,024. The second method is raising the data rate per pin. Recent double-data-rate (DDR), low-power DDR (LPDDR), and graphics DDR (GDDR) memories have continued to increase the data rate per pin. To break through the data-rate-per-pin limitations, recent memory devices have adopted PAM-4 and PAM-3 signaling as a standard. In this tutorial, to provide valuable insights, we will focus on the necessity of high-bandwidth memory interfaces in the recent data-centric era and address design challenges and future prospects.
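The two bandwidth levers above reduce to one piece of arithmetic: aggregate bandwidth = I/O pin count × data rate per pin. The sketch below uses the 1,024-pin figure from the abstract; the per-pin rates are illustrative round numbers, not any specific product's datasheet values.

```python
# Aggregate memory interface bandwidth: pins x per-pin data rate.
def bandwidth_gbps(pins, gbps_per_pin):
    """Total interface bandwidth in Gb/s for a given pin count and rate."""
    return pins * gbps_per_pin

# HBM-style: many pins at a modest per-pin rate (rates are illustrative).
hbm_style = bandwidth_gbps(1024, 2.0)
# GDDR-style: far fewer pins driven at a very high per-pin rate.
gddr_style = bandwidth_gbps(32, 20.0)
print(hbm_style, gddr_style)  # -> 2048.0 640.0  (Gb/s)
```

The same arithmetic explains the appeal of PAM-4/PAM-3: encoding 2 bits (or ~1.58 bits) per symbol raises the effective data rate per pin without raising the symbol rate, sidestepping channel-bandwidth limits.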