

Only the ISOCC 2024 tutorials will be conducted in a hybrid (online) format.
They will be streamed through Zoom, and the access link will be sent to your registered email one week in advance.
If you have not received the access link or have any inquiries, please contact the ISOCC 2024 secretariat (
Main Tutorials (Monday, August 19, 2024)
[Main Tutorial 1] 13:00~14:30
Review and Comparisons of Recent SRAM-based In-Memory Computing Hardware

Prof. Mingoo Seok
(Associate Professor, Department of Electrical Engineering, Columbia University, USA)

Mingoo Seok is an Associate Professor of Electrical Engineering at Columbia University. He received a B.S. (summa cum laude) from Seoul National University, South Korea, in 2005, and an M.S. in 2007 and a Ph.D. in 2011 from the University of Michigan, all in electrical engineering. His research covers various aspects of VLSI computing hardware, including low-power variation-tolerant hardware, machine-learning hardware, and on-chip power management. He won the 2015 NSF CAREER Award and the 2019 Qualcomm Faculty Award. He has served on the technical program committees of several conferences, including the IEEE International Solid-State Circuits Conference and the ACM/IEEE Design Automation Conference. He serves/served as a (guest) Associate Editor for IEEE Transactions on Circuits and Systems I: Regular Papers (2013-2015), IEEE Transactions on Very Large Scale Integration Systems (2015-2023), IEEE Solid-State Circuits Letters (2017-2022), and the IEEE Journal of Solid-State Circuits (2021). He was selected as an IEEE Solid-State Circuits Society (SSCS) Distinguished Lecturer (2023-2025).
In the last decade, SRAM-based in-memory computing (IMC) hardware has received significant research attention for its massive energy efficiency and performance boost. This tutorial introduces the fundamentals and recent design cases of SRAM-based IMC hardware. After a brief overview of SRAM circuit design, we will review several recent macro prototypes that employ analog-mixed-signal (AMS) computing mechanisms: resistive division, capacitive division, and charge sharing. We will then review several other macro prototypes that employ digital computing techniques, namely fully parallel architecture, approximate arithmetic, and hardware reuse. We will also provide high-level comparisons among those different macros. Finally, we will present a recent microprocessor prototype that employs IMC-based accelerators, which can perform on-chip inference at high energy efficiency and low latency.
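To make the AMS computing mechanisms mentioned above more concrete, the following is a hedged numeric sketch of the capacitive-division idea: each bitcell in a column conditionally switches a unit capacitor, so the column's analog output voltage is proportional to the popcount of (input AND weight). The function name, the binary-only operands, and the idealized model (noise-free, perfectly matched capacitors) are illustrative assumptions, not the circuits covered in the tutorial.

```python
# Idealized behavioral model of a capacitive-division IMC column.
# Each of the N unit capacitors is charged to VDD when its stored weight
# bit AND its input bit are both 1; charge redistribution then yields
# V_out = VDD * popcount(input AND weight) / N.

VDD = 0.9  # supply voltage in volts (assumed value)

def capacitive_division_readout(inputs, weights):
    """Return the idealized column output voltage for binary operands."""
    assert len(inputs) == len(weights), "operands must be the same length"
    popcount = sum(i & w for i, w in zip(inputs, weights))  # binary dot product
    return VDD * popcount / len(inputs)

x = [1, 0, 1, 1, 0, 1, 0, 1]   # input activations (binary)
w = [1, 1, 1, 0, 0, 1, 1, 1]   # stored weights (binary)
print(capacitive_division_readout(x, w))  # 4 matches out of 8 -> 0.45 V
```

In a real macro, this analog voltage would then be digitized by a column ADC, whose resolution and nonlinearity are among the design trade-offs the tutorial compares across mechanisms.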

Motivation and focus
In the past decade, SRAM-based in-memory computing (IMC) hardware has received significant attention for its massive energy efficiency and performance boost. Researchers have proposed various circuit techniques and architectures, but the pros and cons of each approach are neither well articulated nor compared. The existing literature also provides limited know-how for extending macro-level design to accelerator- and processor-level design. This tutorial aims to close those knowledge gaps in SRAM-based IMC hardware design.

[Main Tutorial 2] 14:30~15:30
Neuromorphic processing at the sensor edge: System-on-Chip and application perspective

Prof. Amir Zjajo
(Chief Scientific Officer, Innatera Nanosystems B.V., The Netherlands)

Amir Zjajo is co-founder of Innatera Nanosystems B.V., and serves as its Chief Scientist. Prior to that, he was a member of research staff in the Mixed Signal Circuits and Systems Group at Philips Research Laboratories between 2000 and 2006, and subsequently with Corporate Research at NXP Semiconductors until 2009. He joined the Delft University of Technology the same year, where he was responsible for leading research into intelligent systems within a range of EU-funded research projects. Dr. Zjajo has published 3 books and more than 90 papers in refereed journals and conference proceedings in the areas of mixed-signal VLSI design and neuromorphic circuits and systems, and holds more than 20 US patents granted or pending. He served as a TPC member of ISQED, DATE, the VLSI Symposium, ISCAS, and BioCAS, among others. He received the M.Sc. and DIC degrees from Imperial College London, London, U.K., in 2000, and the Ph.D. degree from Eindhoven University of Technology, Eindhoven, The Netherlands, in 2010, all in electrical engineering. His research interests include energy-efficient circuit and system design for on-chip machine learning and inference, and bionic electronic circuits for autonomous cognitive systems. Dr. Zjajo won best/excellence paper awards at BioDevices’15, LifeTech’19, and AICAS’23. He is a senior member of IEEE.
Brain-inspired, neuromorphic spiking neural network (SNN) accelerators enable sensor systems to deliver actionable, domain-specific information instead of raw data, bringing intelligence capabilities to resource-constrained devices. Key to these capabilities is the inherent notion of time built into the SNN computational elements, i.e. the time-varying states of neurosynaptic fabric enable powerful temporal processing to be carried out even with small models, with sparse and efficient event-based communication between computing elements.

In this tutorial, we formulate requirements for a modular neuromorphic SNN framework that enables optimal hardware-software co-design in next-generation smart sensing systems-on-chip. In particular,

i) we assess similarities between biological and artificially reconstructed, silicon-proven macro- and micro-circuits in terms of temporal characteristics and information transfer from a computational perspective, (10 mins)
ii) we focus on the methodology to leverage the SNN advantages at different levels of time granularity and hierarchy, and consequently optimize energy efficiency, latency, flexibility, and scalability, (10 mins)
iii) we postulate the need for a dedicated sensor data-handling engine enhanced with SNN accelerators, most pronounced in power-limited and latency-critical devices, (10 mins)
iv) we examine software tool requirements for seamless interaction with SNN accelerators, and validate their programmability and ease-of-use (10 mins), and
v) we provide quantitative validation and survey competitive performance of SNN accelerators across several application cases. (10 mins)

In addition, we highlight the performance of the world’s first ultra-low power neuromorphic MCU for sensor data processing, Innatera’s Spiking Neural Processor T1. The overall system incorporates a spiking compute engine for SNNs, an accelerator for CNNs, and a lightweight RISC-V CPU, enabling the capabilities of SNNs to be combined with conventional non-spiking neural networks to realize a broad range of application capabilities within the same device. As a comprehensive companion to sensors, in addition to pattern recognition and data inference capabilities, the featured MCU facilitates handling of multiple sensors, ordering of data, and conditioning and pre-/post-processing of sensor data and inference results. (10 mins)

[Main Tutorial 3] 13:00~14:30
Basics and Trends in ReRAM-based Compute-In-Memory for Edge Computing

Prof. Tony Tae-Hyoung Kim
(Associate Professor, Nanyang Technological University, Singapore)

Tony Tae-Hyoung Kim (Senior Member, IEEE) received the B.S. and M.S. degrees in electrical engineering from Korea University, Seoul, South Korea, in 1999 and 2001, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Minnesota, Minneapolis, MN, USA, in 2009. From 2001 to 2005, he was with Samsung Electronics, Hwasung, South Korea. In 2009, he joined Nanyang Technological University, Singapore, where he is currently an Associate Professor.
He has published over 200 papers in journals and conferences and holds 20 registered U.S. and Korean patents. His current research interests include computing-in-memory for machine learning, ultra-low power circuits and systems for smart edge computing, low-power and high-performance digital, mixed-mode, and memory circuit design, variation-tolerant circuits and systems, and emerging memory circuits for neural networks.
Dr. Kim received the IEEE ISSCC Student Travel Grant Award in 2019 and 2022, the Best Paper Award (Gold Prize) at IEEE/IEIE ICCE-Asia 2021, the Korean Federation of Science and Technology Societies (KOFST) Award in 2021, the Best Demo Award at APCCAS 2016, the Low Power Design Contest Award at ISLPED 2016, Best Paper Awards at ISOCC 2011 and 2014, the AMD/CICC Student Scholarship Award at IEEE CICC 2008, the DAC/ISSCC Student Design Contest Award in 2008, the Samsung Humantech Thesis Award in 1999, 2001, and 2008, and the ETRI Journal Paper of the Year Award in 2005. He was the Chair of the IEEE Solid-State Circuits Society Singapore Chapter in 2015-2016 and is Chair-Elect/Secretary of the IEEE Circuits and Systems Society VSATC. He has served on numerous IEEE conference committees. He serves as a Corresponding Guest Editor for the IEEE Journal on Emerging and Selected Topics in Circuits and Systems (JETCAS), a Guest Editor for the IEEE Transactions on Biomedical Circuits and Systems (TBioCAS), and an Associate Editor for the IEEE Transactions on Very Large Scale Integration (VLSI) Systems and IEEE Access.
Recently, artificial intelligence (AI) and machine learning (ML) have emerged, opening up a new domain of integrated circuit design. However, these applications face unprecedented challenges in executing the required computing tasks with high energy efficiency. The traditional von Neumann computing architecture suffers from high energy consumption and large latency because of heavily repeated data transfers between memory and arithmetic-logic units (ALUs). Compute-In-Memory (CIM) has gained huge attention from the research community for tackling these issues by merging memory and ALUs in energy-efficient ways. The minimization of data transfer in CIM improves overall computing energy efficiency substantially (e.g., >100×). However, CIM faces various challenging issues such as PVT variations, linearity, and precision.
In this tutorial, the speaker will discuss the basics and trends in ReRAM-based CIM for edge computing. The first part of the tutorial will discuss ReRAM devices and design basics. Challenges and limitations in ReRAM CIM will be introduced in the second part. Several state-of-the-art ReRAM CIM works will be presented in the third part.

Motivation and Focus
Compute-In-Memory (CIM) has attracted researchers’ attention because it can dramatically improve energy efficiency in next-generation non-von Neumann computing architectures. While the benefit of CIM is well accepted in the research community, its limitations and challenges have not been discussed thoroughly. Moreover, the wide variety of proposed CIM architectures indicates that CIM is still immature and needs more thorough investigation. This tutorial will explain recent CIM developments utilizing SRAM and RRAM. Important design challenges for CIM will also be discussed so that circuit designers can clearly understand the key CIM design aspects compared to normal SRAM and RRAM. The tutorial will also cover various state-of-the-art design techniques so that attendees can understand the major design challenges and their solutions. In addition, it will introduce emerging applications of CIM focusing on neural networks and machine learning.
The target audience of this tutorial is those who understand the basic operation of memories such as SRAM, DRAM, and RRAM. Anyone who has taken undergraduate-level digital electronics and is interested in memory, CIM, neural networks, and machine learning is welcome to take this tutorial.


Short Tutorials (Monday, August 19, 2024)
[Short Tutorial 1] 14:30~15:10
Introduction on VLSI and Electronic Design Automation

Prof. Heechun Park
(Associate Professor, Ulsan National Institute of Science and Technology (UNIST), Korea)

This tutorial will first present the basics of very large scale integrated circuits (VLSI) and a common electronic design automation (EDA) flow for designing VLSI. It will focus on the RTL-to-GDSII design flow, which starts from a language-based register-transfer level (RTL) description and generates a Graphic Design System (GDSII) layout, and mainly consists of logic synthesis and place-and-route (P&R). The tutorial will introduce basic computer-aided design (CAD) algorithms applied at each design step (e.g., synthesis, floorplan, placement, clock tree synthesis (CTS), routing, and timing closure), as well as the commercial tools used in industry for the fully automated VLSI design flow.
[Short Tutorial 2] 15:10~15:50
Recent Research and Development Trends of High-Bandwidth Memory Interfaces

Prof. Joo-Hyung Chae
(Associate Professor, Kwangwoon University, Korea)

The demand for large amounts of data communication has increased across various data-centric applications. However, this demand necessitates frequent data transfers between processing units and off-chip memories, which are constrained by memory I/O bandwidth limitations. Such limitations lead to decreased system throughput and deteriorated energy efficiency. To solve these problems, a high-bandwidth memory interface is essential. There are two ways to increase memory bandwidth. The first method is raising the number of input/output (I/O) pins. A representative memory device using this method is high-bandwidth memory (HBM), which uses a relatively low data rate but dramatically increases the I/O pin count, up to 1,024. The second method is raising the data rate per pin. Recent double data rate (DDR), low-power DDR (LPDDR), and graphics DDR (GDDR) memories have continued to increase the per-pin data rate. To break through per-pin data rate limitations, recent memory devices have adopted PAM-4 and PAM-3 signaling as standards. In this tutorial, to provide valuable insights, we will focus on the necessity of high-bandwidth memory interfaces in the recent data-centric era and address design challenges and future prospects.
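The two bandwidth-scaling approaches described above can be compared with simple arithmetic: peak bandwidth is the I/O pin count times the per-pin data rate. The sketch below uses assumed, representative numbers (1,024 pins at 6.4 Gb/s for an HBM-style stack, 32 pins at 21 Gb/s for a GDDR-style device); they are illustrative, not figures from the tutorial.

```python
# Back-of-the-envelope comparison of the two ways to raise memory bandwidth:
# many pins at a moderate rate (HBM-style) vs. few pins at a very high
# per-pin rate (GDDR-style). All device numbers are assumed examples.

def peak_bandwidth_gbps(io_pins: int, gbits_per_pin: float) -> float:
    """Peak bandwidth in GB/s = pins * (Gb/s per pin) / 8 bits per byte."""
    return io_pins * gbits_per_pin / 8

# Approach 1: raise the pin count (HBM-style: 1,024 I/Os at 6.4 Gb/s/pin).
hbm = peak_bandwidth_gbps(io_pins=1024, gbits_per_pin=6.4)

# Approach 2: raise the per-pin rate (GDDR-style: 32 I/Os at 21 Gb/s/pin).
gddr = peak_bandwidth_gbps(io_pins=32, gbits_per_pin=21.0)

print(f"HBM-style:  {hbm:.0f} GB/s")   # 1024 * 6.4 / 8 = 819 GB/s
print(f"GDDR-style: {gddr:.0f} GB/s")  # 32 * 21 / 8 = 84 GB/s
```

The same arithmetic shows why multi-level signaling helps: PAM-4 carries two bits per symbol, doubling the per-pin data rate at a given symbol rate without adding pins.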