Keynote Speakers

Thursday, October 20, 2022
[Keynote Speech 1] 09:45~10:35
Ferroelectric-based Logic and Memory Architectures

Prof. Vijaykrishnan Narayanan
(Professor, Electrical Engineering and Computer Science, The Pennsylvania State University, USA)

Biography

Vijaykrishnan Narayanan is the Robert Noll Chair of Computer Science and Engineering at The Pennsylvania State University. His research interests are in computer architecture, design using emerging technologies, and embedded systems. He is a recipient of the 2021 IEEE Computer Society Edward J. McCluskey Technical Achievement Award and the 2021 IEEE CS TCVLSI Distinguished Research Award. He serves as the Associate Director of the DoE 3DFeM Center and as a thrust leader for the DARPA/SRC Center for Brain-Inspired Computing. He is a Fellow of the IEEE, the ACM, and the National Academy of Inventors.
Abstract

In the last decade, there have been major changes in the families of ferroelectric materials available for integration with CMOS electronics. This talk will discuss the possibility of exploiting the 3rd dimension in microelectronics for functions beyond interconnect optimization, enabling 3D non-von Neumann computer architectures that exploit ferroelectrics for local memory, logic in memory, digital/analog computation, and neuromorphic/reconfigurable functionality. This approach circumvents the end of Moore's law in 2D scaling, while simultaneously overcoming the "von Neumann bottleneck" of moving instructions and data between separate logic and memory circuits. The talk will cover circuit and architectural features that leverage the non-volatile properties of ferroelectric FETs for hardware obfuscation, accelerator designs, and in-memory compute structures.
[Keynote Speech 2] 10:35~11:25
Emerging Trends and Opportunities in Automotive Semiconductors

Dr. Haechang Lee
(Executive Vice President, Automotive Sensor Team, System LSI Business, Samsung Electronics, Korea)

Biography

Haechang Lee is currently EVP of Engineering at Samsung Electronics, where he oversees automotive sensor development and business. He received the B.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA. He is an expert in semiconductor design, and his experience spans sensors, MEMS, high-speed data communications, and precision mixed-signal systems. Prior to Samsung, he held leadership positions at Google, Altera, and SiTime.
Abstract

Four major trends, namely electrification, autonomous driving, connectivity, and centralized compute, are transforming automobile design and the semiconductors that enable it. As a result, the automotive semiconductor market is expected to grow by more than 10% annually, from $56 billion in 2022 to more than $140 billion in 2030. We will survey the industry and highlight the areas ripe for strong growth driven by these trends. The second part of the talk will focus on automotive sensors, which are critical to autonomous driving, and the key technologies behind them.

Friday, October 21, 2022
[Keynote Speech 3] 10:00~10:50

Details will be updated soon.

[Keynote Speech 4] 10:50~11:40
Memory-based Accelerator Solutions in the AI Era

Dr. Eui-cheol Lim
(Fellow, Memory Solution Product Design Team, SK Hynix, Korea)

Biography

Eui-cheol Lim is a Research Fellow and leader of the Memory Solution Product Design team at SK Hynix. He received the B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and the Ph.D. degree from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, leading the architecture of most Exynos mobile SoCs. His recent research interests include memory and storage system architectures based on new memory media, as well as new memory solutions such as CXL memory and processing-in-memory.
Abstract

The AI era demands ever-greater computing performance and memory capacity. As the Go match between AlphaGo and Lee Sedol illustrated, the energy efficiency of AI computing systems is fairly poor compared with that of the human brain. As a countermeasure, this talk presents Processing in Memory (PIM) as one possible solution. A PIM architecture fundamentally enables higher performance and lower energy consumption on memory-intensive workloads. Today's trending transformer-based generative deep learning models, such as GPT-2/3, exhibit memory-intensive characteristics, and the data analytics pipeline that pre-processes and supplies data to the AI model is memory intensive as well. PIM technology is therefore expected to be applicable across the overall AI service computing system. This talk introduces not only SK hynix's first PIM product, GDDR6-AiM, but also a CXL memory card based PIM solution and a storage-level PIM solution. Finally, the concept of a 'data hierarchy' that applies PIM to all memory layers will be introduced as well.