Kisaco Research

As AI adoption accelerates, securing sensitive data during inference has become a critical challenge, particularly in regulated and privacy-sensitive environments. Fully Homomorphic Encryption (FHE) enables computation on encrypted data without ever exposing the underlying information, making it a breakthrough for secure, private LLM deployments. This session explores how FHE can be practically applied to scale large language models (LLMs) while maintaining strict data confidentiality. Dr. Walden "Wally" Rhines will cover recent advancements, performance innovations, and real-world use cases demonstrating how FHE unlocks scalable, privacy-preserving AI.

Data Privacy & Governance

Author:

Dr. Walden “Wally” Rhines

Board of Directors
Cornami

WALDEN C. RHINES is President & CEO of Cornami. He is also CEO Emeritus of Mentor, a Siemens business, focusing on external communications and customer relations. He was previously CEO of Mentor Graphics for 23 years and Chairman of the Board for 17 years. During his tenure at Mentor, revenue nearly quadrupled and market value of the company increased 10X.

Prior to joining Mentor Graphics, Dr. Rhines was Executive Vice President of the Semiconductor Group at Texas Instruments (TI), responsible for TI's worldwide semiconductor business. During his 21 years at TI, he was President of the Data Systems Group and held numerous other semiconductor executive management positions.

Dr. Rhines has served on the boards of Cirrus Logic, QORVO, TriQuint Semiconductor and Global Logic, and as Chairman of the Electronic Design Automation Consortium (five two-year terms), where he is currently a director. He is also a board member of the Semiconductor Research Corporation and First Growth Children & Family Charities. He is a Lifetime Fellow of the IEEE and has served on the Board of Trustees of Lewis and Clark College, the National Advisory Board of the University of Michigan, and industrial committees advising Stanford University and the University of Florida.

Dr. Rhines holds a Bachelor of Science degree in engineering from the University of Michigan, a Master of Science and a PhD in materials science and engineering from Stanford University, a Master of Business Administration from Southern Methodist University, and honorary Doctor of Technology degrees from the University of Florida and Nottingham Trent University.


As the adoption of generative and agentic AI accelerates, the challenges for memory as a key enabler of AI/ML processing architectures continue to grow. Balancing the demands for ever greater bandwidth and capacity with the needs of power efficiency, thermal management and increased reliability is increasingly difficult. Continued advances in high-performance HBM and GDDR memories, as well as mainstream DDR and LPDDR memories, remain a strategic industry imperative. In addition, a suite of new technologies, including multiplexed-rank modules (MRDIMM), CXL and processing-in-memory, is needed to meet upcoming AI requirements. In this panel, we'll discuss the evolution of memory technologies and the challenges the industry faces on the road ahead for future AI chips and systems.

 

Memory

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College


Author:

Taeksang Song

CVP
Samsung Electronics

Taeksang is a Corporate Vice President at Samsung Electronics, where he leads a team pioneering cutting-edge technologies including CAMM, MRDIMM, the CXL memory expander, fabric-attached memory solutions and processing-near-memory to meet the evolving demands of next-generation data-centric AI architectures. He has 20 years of professional experience in memory and sub-system architecture, interconnect protocols, and system-on-chip design, and in collaborating with CSPs to enable heterogeneous computing infrastructure. Prior to joining Samsung Electronics, he held lead architect roles for emerging memory controllers and systems at Rambus Inc., Micron Technology and SK hynix.

Taeksang received his Ph.D. degree from KAIST, South Korea, in 2006. Dr. Song has authored and co-authored over 20 technical papers and holds over 50 U.S. patents.

 


Author:

Shreya Singhal

Applied Generative AI Research Scientist
Claritev

Shreya Singhal is an Applied Generative AI Research Scientist at Claritev, where she works on building and optimizing large-scale AI systems with a focus on LLMs, multimodal models, and AI agents. She holds a Master’s in Computer Science from the University of Texas at Austin and has prior experience across industry and research roles at organizations such as Dell Technologies, Charles Schwab, and IIIT Hyderabad. Her work spans retrieval-augmented generation, prompt engineering, and deploying production-grade AI pipelines, with a passion for advancing the infrastructure that powers generative AI.


Software Infra

Author:

Aswini Atibudhi

Distinguished Architect
Walmart

Aswini is a Distinguished Architect at Walmart Global Tech with over 22 years of IT experience designing scalable AI/ML, micro-frontend, microservices, and cloud applications. His expertise spans diverse domains including e-commerce, finance, telecom and healthcare, developed through his tenure at Walmart, Cisco, Equinix, Finastra, and TCS. Over seven years at Walmart, he has been a founding member of critical platforms including Last Mile Delivery, Fleet Management, MerchOne, the Supplier Portal and several others. As a recognized expert in generative AI, Aswini specializes in leveraging machine learning and large language models to create transformative digital experiences, including personalized content generation and AI-driven customer engagement.

He has received numerous awards including Walmart’s Innovation Award, Equinix’s Top Performer Award, and Cisco’s Group Race Award. With many certifications in AI, machine learning, and cloud technologies, he stays at the forefront of innovation. Known for his strategic insights, Aswini has a proven ability to deliver transformative AI solutions across industries. 


As AI infrastructure outgrows tightly coupled systems, we're witnessing a shift toward openness and modularity in designing full-stack solutions for AI. In this session, we'll examine the rise of Software-Driven Fabrics (SDF), a programmable, vendor-neutral control plane for modern AI networking. SDF makes real-time data coordination across heterogeneous accelerators and fabrics possible, offering a new, democratized model for GPU scalability and network resiliency.

Software Infra

Author:

Prashanth Thinakaran

Distinguished AI Infrastructure Engineer
Clockwork Systems

Prashanth Thinakaran is a Distinguished AI Infrastructure Engineer at Clockwork Systems, a pioneer in nanosecond-precise network telemetry and software-driven resilience that addresses the unprecedented scale, performance, and reliability modern AI workloads demand from GPU clusters. In this role, he partners with AI infrastructure teams at enterprises, hyperscalers and neoclouds to increase their visibility into issues impacting cluster uptime and to optimize availability and utilization with Clockwork's solution.

Previously, he helped AI-native companies build on cloud-based GPU platforms, providing deep technical guidance on distributed training and inference, multi-node scaling, and performance tuning across complex infrastructure stacks. His neocloud experience bridged the gap between product engineering and customer enablement, helping fast-moving teams adopt best practices in massive-scale model deployment and operations. Prior to that, Prashanth played a pivotal role at Cerebras Systems, a market leader in high-speed inference, in the design and deployment of Condor Galaxy 1, a wafer-scale supercomputer. His work enabled rapid deployment timelines and seamless scaling of AI infrastructure across globally distributed data centers designed for both inference and training.

Prashanth also holds a Ph.D. in Computer Science and Engineering from Penn State, where his research focused on high-performance computing and cloud systems. His academic work has been published in top-tier venues including USENIX NSDI, ACM Middleware, ICDCS, and ACM SoCC. He has authored over a dozen peer-reviewed papers and a book chapter, and has served as a reviewer for journals such as IEEE TPDS and TCC. During his Ph.D., he held teaching roles in systems programming and computer architecture, and collaborated with Intel, VMware, and Adobe Research through internships, solving systems challenges at the intersection of academia and industry.




Systems Optimization
Memory

Author:

Puneet Kumar

CEO
Rivos

Puneet Kumar is CEO and co-founder of Rivos Inc., which he established in May 2021. His impressive career includes leading the ChromeOS Platform Engineering team at Google as a Senior Director, where he oversaw the program's growth to over 100 million users. Puneet's entrepreneurial journey includes co-founding Agnilux (acquired by Google), where he served as VP of Engineering, and PASemi Inc. (acquired by Apple), where he was an Engineering Director in the Platform Architecture team. He also held senior leadership positions at SiByte Inc. (acquired by Broadcom) and was a Systems Researcher at Digital Equipment Corporation's Systems Research Center. Puneet holds a PhD in Computer Science from Carnegie Mellon University.
