AI Hardware & Edge AI Summit 2023 Agenda | Kisaco Research

The 2024 agenda will be released soon. To be the first to receive a copy, register your interest here


Tuesday, 12 Sep, 2023
DAY 1 - SERVER TO EDGE: TRAINING + HARDWARE & SYSTEMS DESIGN
09:00 AM
PRE-EVENT WORKSHOP

Running generative AI models on AI-enabled devices offers several advantages over running them in the cloud. Local processing reduces latency, ensures real-time responsiveness, and enhances privacy by keeping sensitive data on-device, mitigating concerns associated with transmitting potentially confidential information to external servers. In this workshop, you will learn how to optimize and run an LLM on the Ryzen AI-enabled processor without cloud connectivity.
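
To give a sense of the pattern the workshop covers, here is a minimal, purely illustrative sketch of fully local LLM inference in Python. It is not the workshop material: the model name is a placeholder, and the session presumably uses AMD's Ryzen AI toolchain (model quantization plus an ONNX Runtime execution provider targeting the NPU) rather than this generic CPU-only path.

    # Illustrative only: local text generation with no cloud round-trip.
    # The Ryzen AI flow (quantization + NPU execution provider) is not shown;
    # this generic Hugging Face snippet just demonstrates that prompts and
    # outputs never leave the machine once the weights are cached locally.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder small model

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    prompt = "In one sentence, why does on-device inference help privacy?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))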

If you'd like to attend, you can indicate your interest upon registration. All workshop attendees must be registered for the event. 

09:00 AM
REGISTRATION & MORNING NETWORKING
10:00 AM
LUMINARY KEYNOTE

Abstract coming soon...

Author:

Andrew Ng

Founder & CEO
LandingAI

Dr. Andrew Ng is a globally recognized leader in AI (Artificial Intelligence). He is Founder of DeepLearning.AI, Founder & CEO of Landing AI, General Partner at AI Fund, Chairman & Co-Founder of Coursera and an Adjunct Professor at Stanford University’s Computer Science Department.

In 2011, he led the development of Stanford University's main MOOC (Massive Open Online Courses) platform and taught an online Machine Learning course that was offered to over 100,000 students, leading to the founding of Coursera, where he is currently Chairman and Co-founder.

Previously, he was Chief Scientist at Baidu, where he led the company’s ~1300 person AI Group and was responsible for driving the company’s global AI strategy and infrastructure. He was also the founding lead of the Google Brain team.

As a pioneer in machine learning and online education, Dr. Ng has changed countless lives through his work in AI, and has authored or co-authored over 200 research papers in machine learning, robotics and related fields. In 2013, he was named to the Time 100 list of the most influential persons in the world. He holds degrees from Carnegie Mellon University, MIT and the University of California, Berkeley.

10:45 AM
KEYNOTE

Abstract coming soon...

Author:

Marc Tremblay

Technical Fellow
Microsoft

Marc is a Distinguished Engineer and VP in the Office of the CTO (OCTO) at Microsoft. His current role is to drive the strategic and technical direction of the company on silicon and hardware systems from a cross-divisional standpoint. This includes Artificial Intelligence, from supercomputer to client devices to Xbox, etc., and general-purpose computing. Throughout his career, Marc has demonstrated a passion for translating high-level application requirements into optimizations up and down the stack, all the way to silicon. AI has been his focus for the past several years, but his interests also encompass accelerators for the cloud, scale-out systems, and process technology. He has given multiple keynotes on AI Hardware, published many papers on throughput computing, multi-cores, multithreading, transactional memory, speculative multi-threading, Java computing, etc. and he is an inventor of over 300 patents on those topics.

Prior to Microsoft, Marc was the CTO of Microelectronics at Sun Microsystems. As a Sun Fellow and SVP, he was responsible for the technical leadership of 1200 engineers. Throughout his career, he has started, architected, led, defined and shipped a variety of microprocessors such as superscalar RISC processors (UltraSPARC I/II), bytecode engines (picoJava), VLIW, media and Java-focused (MAJC), and the first processor to implement speculative multithreading and transactional memory (ROCK – first silicon). He received his M.S. and Ph.D. degrees in Computer Sciences from UCLA and his Physics Engineering degree from Laval University in Canada. Marc is on the board of directors of QuantalRF.

11:15 AM
HEADLINE PARTNER KEYNOTE

The world today is experiencing an AI revolution. We haven’t seen productivity transformations like this since the dawn of the computer age, and the industrial revolution before that. Companies from every market segment, including the semiconductor industry, are feeling the effects that AI brings to the table. McKinsey & Company reports that design complexity and process complexity will double with every new process node generation, leading to a dramatic increase in design and labor costs. Add to this the engineering shortfall hitting the semiconductor industry, and it becomes clear that how chips are designed needs to change dramatically.

 

Artificial intelligence has inherent traits that make it the perfect solution to embrace these challenges and infuse automation throughout the chip design and development flow. But what limitations does AI have, and how is it evolving to keep pace with the productivity and quality demands of the market? The application of AI also goes far beyond the scope of design. The massive amounts of data that AI engines harvest, and AI itself, can be used to understand design trends, monitor silicon life cycles, and improve yield. Data will play a key role in the next evolution that enables chip design using generative AI. In this keynote, Thomas Andersen will explore how this transformative technology impacts innovation and optimization for chip design and beyond.

Author:

Thomas Andersen

VP, AI and Machine Learning
Synopsys

Dr. Andersen heads the artificial intelligence and machine learning design group at Synopsys, where he focuses on developing new technologies in the AI and ML space to automate the future of chip design. He has more than 20 years of experience in the semiconductor and EDA industry. Dr. Andersen started his career at IBM’s TJ Watson Research Center in Yorktown Heights, New York, followed by managing synthesis/place-and-route engineering at Magma Design Automation and Synopsys. He holds a Master’s degree from the University of Stuttgart and a Ph.D. in Computer Engineering from the University of Kaiserslautern in Germany.

11:45 AM
COMFORT BREAK
11:55 AM
LUMINARY KEYNOTE

Abstract coming soon...

Author:

Jim Keller

CEO
Tenstorrent

Jim Keller is the CEO of Tenstorrent and a veteran hardware engineer. Prior to joining Tenstorrent, he served two years as Senior Vice President of Intel's Silicon Engineering Group. He has held roles as Tesla's Vice President of Autopilot and Low Voltage Hardware, Corporate Vice President and Chief Cores Architect at AMD, and Vice President of Engineering and Chief Architect at P.A. Semi, which was acquired by Apple Inc. Jim has led multiple successful silicon designs over the decades, from the DEC Alpha processors, to AMD K7/K8/K12, HyperTransport and the AMD Zen family, the Apple A4/A5 processors, and Tesla's self-driving car chip.

12:25 PM
KEYNOTE

AI is defining the next era of computing, and this is just the beginning. Innovation is occurring at an exponential rate by large players, startups, and especially in the open-source community. From extremely large-language-model processing in the cloud, to low-precision inferencing on endpoints, to diverse, often real-time, constraints in the middle, AI is driving unique and demanding compute requirements. In this session, Vamsi will provide insights into these challenges and discuss how AMD is enabling AI adoption pervasively, across cloud to edge and endpoints -- with compelling, scalable, compute architectures that deliver exceptional power efficiency and TCO -- and with open software and deep ecosystem partnerships to empower innovators.

Author:

Vamsi Boppana

SVP, AI
AMD

Vamsi Boppana is responsible for AMD’s AI strategy, driving the AI roadmap across the client, edge and cloud for AMD’s AI software stack and ecosystem efforts. Until 2022, he was Senior Vice President of the Central Products Group (CPG), responsible for developing and marketing Xilinx’s Adaptive and AI product portfolio. He also served as executive sponsor for the Xilinx integration into AMD. 

At Xilinx, Boppana led the silicon development of leading products such as Versal™ and Zynq™ UltraScale™+ MPSoC. Before joining the company in 2008, he held engineering management roles at Open-Silicon and Zenasis Technologies, a company he co-founded. Boppana began his career at Fujitsu Laboratories. Caring deeply about the benefits of the technology he creates, Boppana aspires both to achieve commercial success and improve lives through the products he builds. 

12:55 PM
NETWORKING LUNCH
2:10 PM
PRESENTATION

Abstract coming soon...

Author:

Soojung Ryu

CEO
SAPEON

As a well-known expert in AI processors, Soojung Ryu leads SAPEON, working to accelerate the company’s growth in the global AI market. She brings more than 25 years of extensive experience leading various projects related to NPUs and GPUs.

Before joining SK Telecom as head of the AI accelerator office, Ryu was a University-Industry Collaboration Professor at Seoul National University, where she conducted R&D on NPUs and processing-in-memory (PIM). As Vice President of Samsung Group's R&D hub, she undertook diverse projects related to GPUs. Ryu received her Ph.D. in Electrical & Computer Engineering from the Georgia Institute of Technology.

Abstract coming soon...

Author:

Jeremy Roberson

Director of Inference Software
FlexLogix

Jeremy Roberson is Director of Inference Software at Flex Logix. He earned his BSEE, MSEE, and PhD EE degrees from UC Davis, specializing in Signal Processing Algorithms. Jeremy has worked on algorithms and hardware accelerator architectures for machine learning and signal processing in domains such as automatic speech recognition, object detection for biomedicine, and capacitive sensing systems, and he holds several patents and publications in these areas. He has spent the last 6 years working on inference software for AI accelerators, first at Intel and now at Flex Logix.

2:35 PM
PRESENTATION

Abstract coming soon...

Author:

Jia Li

Co-Founder, Chief AI Officer & President
LiveX AI

Jia is Co-founder, Chief AI Officer and President of LiveX AI. She was elected an IEEE Fellow for Leadership in Large Scale AI. She is co-teaching the inaugural course on Generative AI and Medicine at Stanford University, where she has served in multiple roles in the past, including Advisory Board Committee to Nourish, Chief AI Fellow, RWE for Sleep Health, and Adjunct Professor at the School of Medicine. She was the Founding Head of R&D at Google Cloud AI, where she oversaw the development of the full stack of AI products on Google Cloud to power solutions for diverse industries. With a passion for making a greater impact on everyday life, she later became an entrepreneur, building and advising companies with award-winning platforms to solve today's greatest challenges. She has served as Mentor and Professor-in-Residence at StartX, advising founders and companies from the Stanford community. She is Co-founder and Chairperson of HealthUnity Corporation, a 501(c)(3) nonprofit organization, served briefly at Accenture as a part-time Chief AI Fellow for Generative AI strategy, serves as an advisor to the United Nations Children's Fund (UNICEF), and is a board member of the Children's Discovery Museum of San Jose. In 2018 she was selected as a World Economic Forum Young Global Leader, a recognition bestowed on 100 of the world’s most promising business leaders, artists, public servants, technologists, and social entrepreneurs. Before joining Google, she was Head of Research at Snap, leading the AI/AR innovation effort. She received her Ph.D. from the Computer Science Department at Stanford University.

Author:

Krishna Rangasayee

Founder & CEO
SiMa.ai

Krishna is founder and CEO of SiMa.ai™, a machine learning company enabling effortless ML for the Embedded Edge.

Previously, he was the COO of Groq, a machine learning startup. He was with Xilinx for 18 years, where he was Senior Vice President and GM of Xilinx’s overall business prior to his most recent role as Executive Vice President, Global Sales. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents. He has also served on the board of directors of public and private companies.

Author:

Marshall Choy

SVP, Product
SambaNova Systems

Marshall Choy is Senior Vice President of Product at SambaNova Systems and is responsible for product management and go-to-market operations.  Marshall has extensive experience leading global organizations to bring breakthrough products to market, establish new market presences, and grow new and existing lines of business.  Marshall was previously Vice President of Product Management at Oracle until 2018.  He was responsible for the portfolio and strategy for Oracle Systems products and solutions.  He led teams that delivered comprehensive end-to-end hardware and software solutions and product management operations.  Prior to joining Oracle in 2010 when it acquired Sun Microsystems, he served as Director of Engineered Solutions at Sun.  During his 11 years there, Marshall held various positions in development, information technology, and marketing. 

3:00 PM
PANEL

Pre-training Foundation Models is prohibitively expensive and therefore out of reach for many companies. This is especially true if the models are Large Language Models (LLMs). However, people hope that Foundation Models will live up to the promise of learning more generally than classical Artificial Intelligence (AI) models. The dream is that if you provide just a few examples to a Foundation Model, it could extrapolate the high-level, abstract representation of the problem and learn how to accomplish tasks it has never been trained to execute before. So the question is: how can you lower the cost of fine-tuning pre-trained Foundation Models for your needs? That is what we will discuss in this panel. We will share our personal experience, synthesized into a set of principles, so that you can discover how we found ways to lower the cost of fine-tuning pre-trained Foundation Models across multiple domains.
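
As a concrete illustration of one widely used cost-reduction lever (not necessarily the panellists' own method), parameter-efficient fine-tuning such as LoRA trains only small adapter matrices on top of a frozen pre-trained model. The sketch below uses the Hugging Face peft library; the base model and hyperparameters are placeholders.

    # Hypothetical sketch: low-cost adaptation of a pre-trained model with LoRA
    # adapters. Only the small adapter weights are trained; the base model stays
    # frozen, which is what keeps fine-tuning affordable.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder base model

    lora_cfg = LoraConfig(
        r=8,                                   # rank of the adapter matrices
        lora_alpha=16,                         # adapter scaling factor
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()  # typically well under 1% of the base weights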

Moderator

Author:

Fausto Artico

Head of Innovation and Data Science
GSK

Fausto has two PhDs (Information & Computer Science respectively), earning his second master’s and PhD at the University of California, Irvine. Fausto also holds multiple certifications from MIT, Columbia University, London School of Economics and Political Science, Kellogg School of Management, University of Cambridge and soon also from the University of California, Berkeley. He has worked in multi-disciplinary teams and has over 20 years of experience in academia and industry.

As a Physicist, Mathematician, Engineer, Computer Scientist, and High-Performance Computing (HPC) and Data Science expert, Fausto has worked on key projects at European and American government institutions and with key individuals, like Nobel Prize winner Michael J. Prather. After his time at NVIDIA Corporation in Silicon Valley, Fausto worked at the IBM T. J. Watson Research Center in New York on Exascale Supercomputing Systems for the US government (e.g., Livermore and Oak Ridge Labs).

Panellists

Author:

Lisa Cohen

Director of Data Science for Gemini, Google Assistant, and Search Platforms
Google

Lisa Cohen is Director of Data Science for Gemini (formerly "Bard"), Google Assistant, and Search Platforms. She leads an organization of data scientists at Google responsible for using data to create excellent user experiences across these products, partnering closely with Product, Engineering, and User Experience Research. Formerly, Lisa was Head of Data Science and Engineering for Twitter, helping drive the strategy and direction of the Twitter product through machine learning, metric development, experimentation, and causal analyses. Before Twitter, Lisa led the Azure Customer Growth Analytics organization as part of Microsoft Cloud Data Sciences. Her team was responsible for analyzing OKRs, informing data-driven decisions, and developing data science models to help customers be successful on Azure. Lisa worked at Microsoft for 17 years and also helped develop multiple versions of Visual Studio. She holds bachelor's and master's degrees in Applied Mathematics from Harvard. You can follow Lisa on LinkedIn and Medium.

Author:

Jeff Boudier

Product Director
Hugging Face

Jeff Boudier is a product director at Hugging Face, creator of Transformers, the leading open-source NLP library. Previously Jeff was a co-founder of Stupeflix, acquired by GoPro, where he served as director of Product Management, Product Marketing, Business Development and Corporate Development.

Author:

Helen Byrne

VP, Solution Architect
Graphcore

Helen leads the Solution Architects team at Graphcore, helping innovators build their AI solutions using Graphcore’s Intelligence Processing Units (IPUs). She has been at Graphcore for more than 5 years, previously leading AI Field Engineering and working in AI Research on problems in Distributed Machine Learning. Before landing in the technology industry, she worked in Investment Banking. Her background is in Mathematics and she has an MSc in Artificial Intelligence.

 Memory continues to be a critical bottleneck for AI/ML systems, and keeping the processing pipeline in balance requires continued advances in high performance memories like HBM and GDDR, as well as mainstream memories like DDR. Emerging memories and new technologies like CXL offer additional possibilities for improving the memory hierarchy. In this panel, we’ll discuss important enabling technologies and key challenges the industry needs to address for memory systems going forward.

Moderator

Author:

Steven Woo

Fellow and Distinguished Inventor
Rambus

I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.

As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.

For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.

I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.

After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.

Education

  • Ph.D., Electrical Engineering, Stanford University
  • M.S. Electrical Engineering, Stanford University
  • Master of Engineering, Harvey Mudd College
  • B.S. Engineering, Harvey Mudd College

Panellists

Author:

David Kanter

Founder & Executive Director
MLCommons

David co-founded and is the Head of MLPerf for MLCommons, the world leader in building benchmarks for AI. MLCommons is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members, global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI Safety. Our software projects are generally available under the Apache 2.0 license and our datasets generally use CC-BY 4.0.

Author:

Brett Dodds

Senior Director, Azure Memory Devices
Microsoft

Author:

Nuwan Jayasena

Fellow
AMD

Nuwan Jayasena is a Fellow at AMD Research, and leads a team exploring hardware support, software enablement, and application adaptation for processing in memory. His broader interests include memory system architecture, accelerator-based computing, and machine learning. Nuwan holds an M.S. and a Ph.D. in Electrical Engineering from Stanford University and a B.S. from the University of Southern California. He is an inventor of over 70 US patents, an author of over 30 peer-reviewed publications, and a Senior Member of the IEEE. Prior to AMD, Nuwan was a processor architect at Nvidia Corp. and at Stream Processors, Inc.

3:40 PM
PANEL

Abstract coming soon...

Moderator

Author:

Karl Freund

Founder & Principal Analyst
Cambrian AI Research

Karl Freund is the founder and principal analyst of Cambrian AI Research. Prior to this, he was Moor Insights & Strategy’s consulting lead for HPC and Deep Learning. His recent experience as the VP of Marketing at AMD and Calxeda, as well as his previous positions at Cray and IBM, positions him as a leading industry expert in these rapidly evolving industries. Karl works with investment and technology customers to help them understand the emerging Deep Learning opportunity in data centers, from competitive landscape to ecosystem to strategy.

 

Karl has worked directly with datacenter end users, OEMs, ODMs, and the industry ecosystem, enabling him to help his clients define the appropriate business, product, and go-to-market strategies. He is also a recognized expert on the subject of low-power servers and the emergence of ARM in the datacenter, and has been a featured speaker at scores of investment and industry conferences on this topic.

Accomplishments during his career include:

  • Led the revived HPC initiative at AMD, targeting APUs at deep learning and other HPC workloads
  • Created an industry-wide thought leadership position for Calxeda in the ARM Server market
  • Helped forge the early relationship between HP and Calxeda leading to the surprise announcement of HP Moonshot with Calxeda in 2011
  • Built the IBM Power Server brand from 14% market share to over 50% share
  • Integrated the Tivoli brand into the IBM company’s branding and marketing organization
  • Co-Led the integration of HP and Apollo Marketing after the Boston-based desktop company’s acquisition

 

Karl’s background includes RISC and Mainframe servers, as well as HPC (Supercomputing). He has extensive experience as a global marketing executive at IBM where he was VP Marketing (2000-2010), Cray where he was VP Marketing (1995-1998), and HP where he was a Division Marketing Manager (1979-1995).

 

Panellists

Author:

Zairah Mustahsan

Senior Data Scientist
You.com

Zairah Mustahsan is a Staff Data Scientist at You.com, an AI chatbot for search, where she leverages her expertise in statistical and machine-learning techniques to build analytics and experimentation platforms. Previously, Zairah was a Data Scientist at IBM Research, researching Natural Language Processing (NLP) and AI Fairness topics. Zairah obtained her M.S. in Computer Science from the University of Pennsylvania, where she researched scikit-learn model performance; her findings have since been used as guidelines for machine learning. Zairah is a regular speaker at AI conferences such as NeurIPS, AI4, the AI Hardware & Edge AI Summit, and ODSC. She has published her work in top AI conferences such as AAAI and has over 300 citations. Aside from work, Zairah enjoys adventure sports and poetry.

Author:

Sravanthi Rajanala

Director, Machine Learning & Search
Walmart Tech

Sravanthi Rajanala is the Director of Data Science and Machine Learning in Walmart's Search Technologies. She began her career in telecom and worked for Microsoft and Nokia before joining Bing Search in 2011 to work in machine learning and search. Sravanthi has led initiatives in query and document understanding, ranking, and question answering. In 2021, she joined Walmart and now leads the Search Core Algorithms, Machine Translation, and Metrics Science. Sravanthi holds a Master's degree in Computational Science from the Indian Institute of Science and a Bachelor's degree in Computer Science from Osmania University.

Author:

Selcuk Kopru

Director, Engineering & Research, Search
eBay

Selcuk Kopru is Head of ML & NLP at eBay and an experienced AI leader with proven expertise in creating and deploying cutting-edge NLP and AI technologies and systems. He is experienced in developing scalable Machine Learning solutions to solve big data problems that involve text and multimodal data, and is skilled in Python, Java, C++, Machine Translation, and Pattern Recognition. Selcuk is also a strong research professional with a PhD in Computer Science, focused on NLP, from Middle East Technical University.

As the era of high-performance computing (HPC) and artificial intelligence (AI) ushers in unprecedented advancements, robust cloud strategies become vital. As cloud infrastructure becomes increasingly integral to supporting demanding computational workloads, maintaining the availability and robustness of these systems becomes paramount.

This panel will delve into the critical intersection of HPC/AI and cloud technology, spotlighting strategies for ensuring uninterrupted operations in the face of emerging challenges. The session brings together leading experts to examine architectural design paradigms that foster robustness, redundancy trade-offs, load balancing, and intelligent fault detection and predictive monitoring mechanisms. Experts will share insights on best practices for optimizing resource allocation, orchestrating seamless workload migrations, and deploying resilient cloud-native solutions. By exploring real-world cases, emerging trends, and practical insights, this discussion aims to equip data center and cloud professionals with insights to elevate their resiliency strategies amidst evolving computational demands.

Moderator

Author:

Alam Akbar

Director, Product Marketing
proteanTecs

Alam Akbar is a veteran of the semiconductor industry with experience spanning multiple engineering, product management, and product marketing roles. He holds a Bachelor of Science degree in Electrical Engineering from Texas A&M and an MBA from Santa Clara University.

 

Alam began his career at Synopsys as an Application Consultant, where he helped grow their market share in the signoff domain. He then joined the business management team at Cadence, where he helped launch a new physical verification solution. After Cadence, Alam joined Intel Foundry Services as a design kit program manager, and then moved into the client compute group as director of product marketing. There, he helped scale Intel's storage business and developed product strategy for new memory solutions for the PC market.

At proteanTecs, he's part of a team that’s bringing greater insight into the health and performance of semiconductors across the value chain, from the design stage to in-field operation, and all the steps in between.

Panellists

Author:

Venkat Ramesh

Hardware Systems Engineer
Meta

Venkat Ramesh is a Hardware Systems Engineer in Meta's Infrastructure Org. 

 

As a Technical Lead in the Release-to-Production team, Venkat has been at the helm of pivotal initiatives aimed at bringing various AI/ML Accelerator, Compute and Storage platforms into the Meta fleet. His multifaceted technical background spans roles across software development, performance engineering, NPI and hardware health telemetry across hyper-scalers and hardware providers.

 

Deeply passionate about the topic of AI hardware resiliency, Venkat's current focus is on building tools and methodologies to enhance hardware reliability, performance and efficiencies for the rapidly evolving AI workloads and technologies.

Author:

Yun Jin

Engineering Director
Meta

Yun Jin is an Engineering Director of Infrastructure at Meta, where he leads Meta's strategy for private cloud capacity and efficiency. Before Meta, Yun held engineering leadership roles at PPLive, Alibaba Cloud, and Microsoft. He has worked on large-scale distributed systems, cloud, and big data for 20 years.

Author:

Paolo Faraboschi

Vice President and HPE Fellow; Director, AI Research Lab
Hewlett Packard Labs, HPE

Paolo Faraboschi is a Vice President and HPE Fellow and directs the Artificial Intelligence Research Lab at Hewlett Packard Labs. Paolo has been at HP/HPE for three decades, and worked on a broad range of technologies, from embedded printer processors to exascale supercomputers. He previously led exascale computing research (2017-2020), and the hardware architecture of “The Machine” project (2014-2016), pioneered low-energy servers with HP’s project Moonshot (2010-2014), drove scalable system-level simulation research (2004-2009), and was the principal architect of a family of embedded VLIW cores (1994-2003), widely used in video SoCs and HP’s printers. Paolo is an IEEE Fellow (2014) for “contributions to embedded processor architecture and system-on-chip technology”, author of over 100 publications, 70 granted patents, and the book “Embedded Computing: a VLIW approach”. He received a Ph.D. in EECS from the University of Genoa, Italy.

4:20 PM
PRESENTATION

Author:

Tony Chan Carusone

CTO
Alphawave Semi

Tony Chan Carusone was appointed Chief Technology Officer in January 2022. Tony has been a professor of Electrical and Computer Engineering at the University of Toronto since 2001. He has well over 100 publications, including 8 award-winning best papers, focused on integrated circuits for digital communication. Tony has served as a Distinguished Lecturer for the IEEE Solid-State Circuits Society and on the Technical Program Committees of the world’s leading circuits conferences. He co-authored the classic textbooks “Analog Integrated Circuit Design” and “Microelectronic Circuits”, and he is a Fellow of the IEEE. Tony has also been a consultant to the semiconductor industry for over 20 years, working with both startups and some of the largest technology companies in the world.

Tony holds a B.A.Sc. in Engineering Science and a Ph.D. in Electrical Engineering from the University of Toronto.

4:45 PM
NETWORKING BREAK
5:15 PM
PRESENTATION

Abstract coming soon...

Author:

Martin Ruskowski

Chairman, Department of Machine Tools and Control Systems
RPTU Kaiserslautern-Landau

Professor Dr. Martin Ruskowski took over the position as Head of the renamed Institute of Machine Tools and System Controls (WSKL) on June 1, 2017. His major research focus is on industrial robots as machine tools, artificial intelligence in automation technology, and the development of innovative control concepts for automation.

All equipment and machinery in the factories of tomorrow will be networked: machines will have the ability to communicate and exchange data among themselves. Robots will continue to play an ever greater role in the world of Industrie 4.0. In the future, they may even replace traditional machine tools in some applications, for example, in the milling of special components. "A priority of my work at TU Kaiserslautern and DFKI will be to improve the fitness of robots for demanding mechanical processing tasks. The new technologies that result from our research will provide more flexibility to companies and, ultimately, serve as a jobs motor in Germany," said Ruskowski in describing his new responsibilities.

Ruskowski is an expert in the fields of robotics and Industry 4.0. At DFKI and RPTU, his aim will be to develop solutions for the digitalization of production plants while also working on new control systems and robot mechanics to increase the efficiency of future generations of industrial robots. He will also study the question of how to make self-optimizing machines. A major focus is on Human-Machine Interaction in automated production plants. "In the context of the digitalization of production, we need new engineering techniques that will allow humans to be more closely integrated into the production processes," he added. "We can achieve this in cooperation with Technologie-Initiative SmartFactory KL e.V." This unique research lab located at DFKI provides ideal conditions for the practical evaluation of ambitious research projects. In addition, Ruskowski will hold a series of lectures at the department of Mechanical and Process Engineering on the subjects of machine tools and industrial robotics.

He studied electrical engineering at Leibniz University Hannover and also received his doctorate in mechanical engineering there. His doctoral thesis was a study of the dynamics of machine tools and the use of active magnet guides for damping vibrations. Prior to his relocation to Kaiserslautern, Ruskowski held several management positions at industrial firms, most recently, since 2015, as Vice President for Global Research and Development at KUKA Industries.

 

Author:

Tatjana Legler

Deputy Head of Department
RPTU Kaiserslautern-Landau

Tatjana Legler studied mechanical engineering at the Technical University of Kaiserslautern. She wrote her master's thesis on "Optimization of automated visual inspection of common rails using neural networks". She has been working at the Department of Machine Tools and Control Systems since November 2017.

Research Fields

Tatjana Legler works on the use of artificial intelligence in production environments. This includes, for example, the analysis of process data to predict product quality, as well as federated learning.

Abstract coming soon...

Author:

Matthew Burns

Technical Marketing Manager
Samtec

Matthew Burns develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 20+ years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. Mr. Burns holds a B.S. in Electrical Engineering from Penn State University.

5:40 PM
PRESENTATION / PANEL

Author:

Akhil Vaid

Instructor, Division of Data-Driven and Digital Medicine
Icahn School of Medicine Mt. Sinai

Akhil Vaid, MD, is an Instructor in the Division of Data-Driven and Digital Medicine (D3M), Department of Medicine at the Icahn School of Medicine at Mount Sinai. A renowned physician-scientist, Dr. Vaid works at the intersection of medicine and technology, with a resolute commitment to fostering democratized healthcare through the power of machine learning.

 

After obtaining his medical degree from one of India's eminent medical colleges, Dr. Vaid served patients across diverse socio-economic landscapes. This experience catalyzed his conviction that true healthcare equity could only be achieved through machine learning and artificial intelligence. Consequently, he ventured into multi-modal machine learning, specializing in deep learning with ECGs, federated learning, natural language processing, and deriving insights from the Electronic Health Record.

 

Before his current role at the Icahn School of Medicine at Mount Sinai, Dr. Vaid honed his clinical skills and amassed a wealth of experience in the Indian healthcare system. His medical journey is punctuated by his relentless quest for innovation, illustrated by his extensive contributions to the rapidly evolving field of digital medicine.

 

Dr. Vaid is the author of 54 scientific publications, including contributions to esteemed medical journals such as Nature Medicine, the Annals of Internal Medicine, and NPJ Digital Medicine. His work reflects a deep understanding of medicine and technology and their potential to transform patient care. His projects, backed by significant grants, encompass multiple facets of informatics, data science, and machine learning in medicine.

Trends in cloud and HPC systems design are converging in the field of ML. As demand for ML compute performance continues to grow, several trends are dictating systems design choices. Increasing server and rack density is a tried-and-tested way to drive performance, but it generates extreme heat, and packing more GPUs and ASICs into AI servers is an inefficient long-term solution when memory bandwidth limits the FLOPS that can actually be sustained at any moment. Some fairly fundamental re-designs are needed in the ML systems space, and this panel will examine what the next generation of systems will look like, what benefits they will bring, and how to get there.
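
To make the bandwidth point concrete, here is a minimal roofline-style sketch (the figures below are illustrative assumptions, not numbers from the panel): sustained throughput is capped by the smaller of peak compute and memory bandwidth times arithmetic intensity.

```python
# Illustrative roofline-style bound (assumed numbers, not from the panel):
# sustained FLOPS <= min(peak compute, memory bandwidth x FLOPs-per-byte).

def attainable_tflops(peak_tflops: float, mem_bw_tb_s: float, flops_per_byte: float) -> float:
    """Upper bound on sustained TFLOPS for a workload with the given arithmetic intensity."""
    return min(peak_tflops, mem_bw_tb_s * flops_per_byte)

# A hypothetical accelerator: 300 TFLOPS peak, 3 TB/s of memory bandwidth.
# A data-heavy kernel doing ~50 FLOPs per byte moved is capped at half of peak:
print(attainable_tflops(peak_tflops=300.0, mem_bw_tb_s=3.0, flops_per_byte=50.0))  # -> 150.0
# Adding more peak FLOPS per server does not raise this bound; only more bandwidth
# or higher arithmetic intensity does.
```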

Moderator

Author:

Drew Matter

President & CEO
Mikros Technologies

Drew Matter leads Mikros Technologies, a designer and manufacturer of best-in-class direct liquid cold plates for AI/HPC, semiconductor testing, laser & optics, and power electronics.  Mikros provides leading microchannel thermal solutions in single-phase, 2-phase, DLC and immersion systems to leading companies around the world. 

Panellists

Author:

Greg Stover

Global Director, Hi-Tech Development
Vertiv

With more than 30 years of experience in data center efficiency optimization with large enterprise data center operators and industry-leading VARs/resellers, Greg champions the adoption of Vertiv’s constantly evolving portfolio of thermal, power, and monitoring & management solutions for the hyperscale, colocation, on-prem, DR and edge IoT ecosystems.

As a data center efficiency optimization enthusiast, Greg has a proven track record of taking leading and bleeding-edge cooling, power, monitoring and DCIM solutions and tools from introduction through implementation to successful execution, while staying keenly focused on and aligned with client, enterprise and edge operators’ goals and objectives. Greg is a frequent presenter at industry conferences, trade shows and integrator/VAR/partner training events.

Author:

Dudy Cohen

VP Product Marketing
Drivenets

Dudy is a qualified manager and technology expert with more than 30 years of experience in the networking industry. As a senior AI networking expert, he partners closely with the product and engineering teams to shape DriveNets’ vision for AI networking, helping to deliver the performance of a proprietary solution through a standards-based Ethernet implementation. Previously, Dudy served as VP of Product Marketing at Ceragon and as a Director of Solutions Engineering at Alvarion Ltd. Dudy holds an M.Sc. in Electrical Engineering from Tel Aviv University.

Author:

Albert Chen

Solutions Architect
Amphenol

6:10 PM
PANEL / PRESENTATION

Abstract coming soon...

Moderator

Author:

Sally Ward-Foxton

Senior Reporter
EETimes

Sally Ward-Foxton has been writing about the electronics industry for more than a decade. As a freelance journalist she has published articles in EE Times, Electronic Design Europe, Microwaves & RF, ECN, Electronic Specifier: Design, IoT Embedded Systems, Electropages, Components in Electronics and many more. She also supplies technical writing and ghostwriting services to several of Europe's leading PR agencies. She holds a master's degree in Electrical and Electronic Engineering from the University of Cambridge, UK.

Panellists:

Author:

Jim Keller

CEO
Tenstorrent

Jim Keller is the CEO of Tenstorrent and a veteran hardware engineer. Prior to joining Tenstorrent, he served two years as Senior Vice President of Intel's Silicon Engineering Group. He has held roles as Tesla's Vice President of Autopilot and Low Voltage Hardware, Corporate Vice President and Chief Cores Architect at AMD, and Vice President of Engineering and Chief Architect at P.A. Semi, which was acquired by Apple Inc. Jim has led multiple successful silicon designs over the decades, from the DEC Alpha processors, to AMD K7/K8/K12, HyperTransport and the AMD Zen family, the Apple A4/A5 processors, and Tesla's self-driving car chip.

Author:

Raja Koduri

Board Member
Tenstorrent

Author:

Bing Yu

Senior Technical Director
Andes Technology

Bing Yu is a Sr. Technical Director at Andes Technology. He has over 30 years of experience in technical leadership and management, specializing in machine learning hardware, high performance CPUs and system architecture. In his current role, he is responsible for processor roadmap, architecture, and product design. Bing received his BS degree in Electrical Engineering from San Jose State University and completed the Stanford Executive Program (SEP) at the Stanford Graduate School of Business.

Author:

Laurent Moll

Chief Operating Officer
Arteris

Dr. Laurent Moll most recently served as Vice President of Engineering at Qualcomm, where he led a 500-person team creating infrastructure IP for Qualcomm’s chips, including NoC interconnects, memory subsystems, cache coherency subsystems and more. Over a career spanning more than two decades, Laurent has held key technical roles at industry leaders such as Digital Equipment Corporation, Compaq Computer Corporation, SiByte, Broadcom, Montalvo Systems and NVIDIA. Prior to his nearly 8-year tenure at Qualcomm, he was the Chief Technology Officer at Arteris Inc., a predecessor company of Arteris. Throughout his career, he has played an influential role in inventing the system-on-chip architectures, IP subsystems, and methodologies that are today the foundation of modern semiconductor design. Laurent earned his PhD in Computer Science at École Polytechnique and holds over 60 patents on various aspects of SoC technology.

 

AI and security workloads are clearly driving next-generation SoC architecture innovations. These architectures need higher performance and more memory per processing element as process nodes advance. However, memory is scaling more slowly than the processing elements, while workloads demand more memory per processing element, leading to a memory wall; technology disruptions are required. Off-chip memory offers performance gains, but AI workloads require more efficient, higher-density memory per processing element. One clear solution has been multi-die systems, which leverage more on-chip memory at higher bandwidth and improved density. This presentation will explore these memory and I/O innovations and showcase several real-world case studies on the development of multi-die systems that meet AI performance and memory challenges.
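
As a back-of-the-envelope illustration of the memory wall described above (all numbers are hypothetical, chosen only to show the trend, not taken from the presentation):

```python
# Hypothetical scaling across three process generations: compute (processing elements)
# roughly doubles per generation while on-chip SRAM grows much more slowly,
# so the memory available per processing element shrinks -- the "memory wall".

generations = [
    # (label, processing_elements, on_chip_sram_mb) -- illustrative values only
    ("gen N",   1_000,  64),
    ("gen N+1", 2_000,  80),
    ("gen N+2", 4_000, 100),
]

for label, num_pes, sram_mb in generations:
    kb_per_pe = sram_mb * 1024 / num_pes
    print(f"{label}: {kb_per_pe:.1f} KB of on-chip SRAM per processing element")

# gen N: 65.5 KB/PE -> gen N+1: 41.0 KB/PE -> gen N+2: 25.6 KB/PE
# Multi-die integration is one way to add memory capacity and bandwidth faster
# than a single die can, which is the direction the presentation examines.
```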

 

Author:

Ron Lowman

AI Strategic Marketing Manager
Synopsys

Ron Lowman joined Synopsys in 2014 and is currently the AI Strategic Marketing Manager for the Solutions Group. Ron is responsible for driving Synopsys’ Artificial Intelligence market IP initiatives, including strategic business and market trend analysis.

Prior to joining Synopsys, Lowman spent 16 years at Motorola/Freescale in Controls Engineering, Automotive Product & Test Engineering, Product Management, Business Development, Operations, and Strategy Roles.

Ron holds a Bachelor of Science in Electrical Engineering from Colorado School of Mines and an MBA from the University of Texas at Austin.

6:30 PM
PANEL / PRESENTATION

LLMs are driving the frontiers in computer performance today. This talk explores the MLPerf LLM benchmark landscape, the unique challenges of building LLM benchmarks for training and inference, and the challenges for submitters.

Author:

David Kanter

Founder & Executive Director
MLCommons

David co-founded and is the Head of MLPerf for MLCommons, the world leader in building benchmarks for AI. MLCommons is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members, global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire AI industry through benchmarks and metrics, public datasets, and measurements for AI safety. MLCommons' software projects are generally available under the Apache 2.0 license, and its datasets generally use CC-BY 4.0.

6:50 PM
CLOSING KEYNOTE

In his forthcoming keynote, Lip-Bu Tan delves into the transformative impact of Generative AI on today's rapidly evolving intelligent systems. This critical conversation sheds light on the symbiotic relationship between AI semiconductors and systemic hardware growth drivers, showcasing how they are steering the future of purpose-built intelligent systems.

 

As we navigate this unprecedented era of technological advancement, Mr. Tan will explore how Generative AI platforms are accelerating design productivity and enabling innovative AI chip design tools, and define what’s needed to deliver a full generative AI system stack. The keynote will also touch upon the extension of these advancements, highlighting how computational software initially developed for AI can be adapted and applied to other domains, thereby broadening the impact and utility of intelligent systems.

 

Don't miss this enlightening session that promises to redefine our understanding of AI's role in shaping intelligent systems and expanding the frontiers of what's possible across multiple sectors.

Author:

Lip-Bu Tan

Chairman & Founder
Walden International

Lip-Bu Tan is Founder and Chairman of Walden International (“WI”), and Founding Managing Partner of Celesta Capital and Walden Catalyst Ventures, with over $5 billion under management.  He formerly served as Chief Executive Officer and Executive Chairman of Cadence Design Systems, Inc.  He currently serves on the Board of Schneider Electric SE (SU: FP), Intel Corporation (NASDAQ: INTC), and Credo Semiconductor (NASDAQ: CRDO).

 

Lip-Bu focuses on semiconductor/components, cloud/edge infrastructure, data management and security, and AI/machine learning.

 

Lip-Bu received his B.S. from Nanyang University in Singapore, his M.S. in Nuclear Engineering from the Massachusetts Institute of Technology, and his MBA from the University of San Francisco. He also received his honorary degree for Doctor of Humane Letters from the University of San Francisco.  Lip-Bu currently serves on Carnegie Mellon University (CMU)’s Board of Trustees and the School of Engineering Dean’s Council, Massachusetts Institute of Technology (MIT)’s School of Engineering Dean’s Advisory Council, University of California Berkeley (UCB)’s College of Engineering Advisory Board and their Computing, Data Science, and Society Advisory Board, and University of California San Francisco (UCSF)’s Executive Council. He’s also a member of the Global Advisory Board of METI Japan, The Business Council, and Committee 100. He also served on the board of the Board of Global Semiconductor Alliance (GSA) from 2009 to 2021, and as a Trustee of Nanyang Technological University (NTU) in Singapore from 2006 to 2011.  Lip-Bu has been named one of the Top 10 Venture Capitalists in China by Zero2ipo and was listed as one of the Top 50 Venture Capitalists on the Forbes Midas List. He’s the recipient of imec’s 2023 Lifetime of Innovation Award, the Semiconductor Industry Association (SIA) 2022 Robert N. Noyce Award, and GSA’s 2016 Dr. Morris Chang's Exemplary Leadership Award.  In 2017, he was ranked #1 of the most well-connected executives in the technology industry by the analytics firm Relationship Science. 

7:15 PM
Wednesday, 13 Sep, 2023
DAY 2 - SERVER TO EDGE: DEPLOYMENT AND INFERENCE/SERVING
REGISTRATION & MORNING NETWORKING
10:00 AM
KEYNOTE

Abstract coming soon...

Author:

Alexis Black Bjorlin

VP, Infrastructure Hardware
Meta

Dr. Alexis Black Bjorlin is VP, Infrastructure Hardware Engineering at Meta. She also serves on the board of directors at Digital Realty and Celestial AI. Prior to Meta, Dr. Bjorlin was Senior Vice President and General Manager of Broadcom’s Optical Systems Division and previously Corporate Vice President of the Data Center Group and General Manager of the Connectivity Group at Intel. Prior to Intel, she spent eight years as President of Source Photonics, where she also served on the board of directors. She earned a B.S. in Materials Science and Engineering from Massachusetts Institute of Technology and a Ph.D. in Materials Science from the University of California at Santa Barbara.

Author:

Petr Lapukhov

Network Engineer
NVIDIA

Petr Lapukhov is a Network Engineer at NVIDIA, previously at Meta. He has 20+ years in the networking industry, designing and operating large-scale networks, and deep experience in developing and operating software for network control and monitoring. His past experience includes CCIE/CCDE training and UNIX system administration.

10:30 AM
KEYNOTE

The hybrid AI approach is applicable to virtually all generative AI applications and device segments. This approach is crucial for generative AI to meet enterprise and consumer needs globally.

This keynote shows how, over time, advances in model optimization combined with increased on-device AI processing capabilities will allow many generative AI applications to run on the edge. In a hybrid AI solution, distributing AI processing between the cloud and devices will allow generative AI to scale and reach its full potential.

As the on-device AI leader, Qualcomm Technologies is uniquely positioned to scale hybrid AI with industry-leading hardware and software solutions for edge devices, spanning across billions of phones, vehicles, extended reality headsets and glasses, personal computers, the internet of things, and more.
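
As a rough illustration of the hybrid split described above, the sketch below routes a request to on-device or cloud inference based on whether the (quantized) model fits in device memory, the request's latency budget, and privacy sensitivity; the thresholds, names, and policy are the author's own assumptions for illustration, not Qualcomm's implementation.

```python
# Minimal, illustrative hybrid-AI routing policy (assumed logic, not Qualcomm's):
# run on-device when the model fits and latency or privacy favors local execution,
# otherwise fall back to the cloud.

from dataclasses import dataclass

@dataclass
class Request:
    model_size_gb: float      # memory footprint of the (quantized) model
    latency_budget_ms: float  # end-to-end budget for this request
    privacy_sensitive: bool   # e.g. personal data that should stay on-device

DEVICE_MEMORY_GB = 8.0   # hypothetical memory available to the on-device AI runtime
NETWORK_RTT_MS = 150.0   # hypothetical round trip to the cloud endpoint

def route(req: Request) -> str:
    fits_on_device = req.model_size_gb <= DEVICE_MEMORY_GB
    if fits_on_device and (req.privacy_sensitive or req.latency_budget_ms < NETWORK_RTT_MS):
        return "device"
    return "cloud"

print(route(Request(model_size_gb=3.5, latency_budget_ms=100, privacy_sensitive=False)))   # device
print(route(Request(model_size_gb=70.0, latency_budget_ms=100, privacy_sensitive=False)))  # cloud
```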

Author:

Vinesh Sukumar

Head of AI Product Management
Qualcomm

Vinesh Sukumar currently serves as Senior Director – Head of AI/ML product management at Qualcomm Technologies, Inc (QTI).  In this role, he leads AI product definition, strategy and solution deployment across multiple business units.

He has about 20 years of industry experience spread across research, engineering and application deployment. He holds a doctorate specializing in imaging and vision systems and has also completed a business degree focused on strategy and marketing. He is a regular speaker at many AI industry forums and has authored several journal papers and two technical books.

10:55 AM
KEYNOTE

Every electronic system you know is either going to get smarter or get replaced. AI is allowing us to solve a new set of problems just recently thought of as impossible. The challenge that we have is to make AI work, not just in the data center, but in all the systems we use and interact with daily. These systems vary from a Falcon Heavy Rocket to a smart contact lens. Some have kilowatts of power available, others not even a microwatt. The AI systems we deliver must meet a vast range of requirements and work in all kinds of environments.   

 

Because AI is computationally very complex, using an average off-the-shelf MPU just isn’t going to get the job done. Russell Klein will describe how you can design the next generation of intelligent systems to surpass these challenges.  

 

At the same time, these systems are often placed in situations where they must work all the time, with no disruptions in service. Ankur Gupta will describe how you can use embedded analytics to design AI systems at the edge that operate reliably, safely, and securely.  

Author:

Ankur Gupta

Senior Vice President and General Manager
Siemens EDA

Ankur Gupta is Senior Vice President and General Manager of Digital Design Creation at Siemens EDA. This includes Test, Embedded Analytics, Digital IC Design, Power Optimization, and Power Integrity Analysis. Formerly, he was head of Product Management and Applications for Ansys' semiconductor business and Head of Applications Engineering for Digital Implementation & Signoff at Cadence Design Systems.

Ankur has 20+ years of experience in EDA, working on some of the industry’s most innovative Test, Digital Design, Implementation and Signoff products. He holds a master's degree in Electrical and Computer Engineering from Iowa State University.

Author:

Russell Klein

Program Director, CSD Division
Siemens

Russell Klein is a Program Director at Siemens EDA’s (formerly Mentor Graphics) High-Level Synthesis Division focused on processor platforms. He is currently working on algorithm acceleration through the offloading of complex algorithms running as software on embedded CPUs into hardware accelerators using High-Level Synthesis. He has been with Mentor for over 25 years, holding a variety of engineering, marketing and management positions, primarily focused on the boundary between hardware and software. He holds six patents in the area of hardware/software verification and optimization. Prior to joining Mentor he worked for Synopsys, Logic Modeling, and Fairchild Semiconductor. 

11:20 AM
KEYNOTE

Enabling a solution for on-device and edge AI processing is about more than providing raw TOPS in an SoC. In the fast-evolving world of AI, solutions must provide both high performance and high utilization while handling many more “irregular” operations and not just matrix multiplies (transformers, LSTM, etc.), do so within a low-power and small-area profile with minimal accesses to memory, and be easy to use by developers for the networks of today and of the future.

 

In this presentation, we will discuss Cadence’s AI IP products, which span ultra-low-energy, battery-powered devices up to high-end applications requiring many hundreds of TOPS, supported by powerful software tools that enable a no-code environment for mapping networks to target executables.

Author:

Sriraman Chari

Fellow & Head of AI Accelerator IP Solution
Cadence Design Systems

11:45 AM
KEYNOTE

Throughout the AI Hardware & Edge AI Summit, we ALL will be talking about the wonders of large language models and why not! They offer tremendous advantage to enterprises and organizations, offloading work and resources to deliver enhanced value, efficiencies and end-user experiences. During this session, Sree Ganesan, head of software product with Habana, an Intel company, and Vasudev Lal, AI/ML research scientist with Intel Labs, will share their first-hand experiences in training and deploying large language models on Gaudi2 accelerators. They’ll introduce a variety of approaches they’ve taken to tame the LLM process, from training to fine-tuning to inference. To make it all real, we’ll focus on high-value ecosystem partners and share LLM demos that show our latest innovations.  

Author:

Sree Ganesan

VP of Product
d-Matrix

Sree Ganesan, VP of Product, d-Matrix: Sree is responsible for product management functions and business development efforts across the company. She manages the product lifecycle, definition and translation of customer needs to the product development function, acting as the voice of the customer. Prior, Sree led the Software Product Management effort at Habana Labs/Intel, delivering state-of-the-art deep learning capabilities of the Habana SynapseAI® software suite to the market. Previously, she was Engineering Director in Intel’s AI Products Group, where she was responsible for AI software strategy and deep learning framework integration for Nervana NNP AI accelerators. Sree earned a bachelor’s degree in electrical engineering from the Indian Institute of Technology Madras and a PhD in computer engineering from the University of Cincinnati, Ohio.

Author:

Vasudev Lal

AI/ML Research Scientist
Intel Labs

Vasudev Lal is an AI Research Scientist at Intel Labs where he leads the Multimodal Cognitive AI team. His team develops AI systems that can synthesize concept-level understanding from multiple modalities: vision, language, video and audio. His current research interests include equipping deep learning with mechanisms to inject external knowledge; self-supervised training at scale for continuous and high dimensional modalities like images, video and audio; mechanisms to combine deep learning with symbolic compute.  Prior to joining Intel, Vasudev obtained his PhD in Electrical and Computer Engineering from the University of Michigan, Ann Arbor.

12:10 PM
KEYNOTE
12:30 PM
NETWORKING LUNCH
1:45 PM
PRESENTATION

AI/ML has become an integral part of today's technology landscape, but what often goes unnoticed is the underlying Machine Learning Infrastructure. 

This 25-minute talk will peel back the curtain on this critical yet overlooked component and elucidate the evolution of Machine Learning Infrastructure considering the new GenAI wave. 

We'll start by highlighting the 'hidden' effort and technical debt involved in transitioning machine learning models from prototype to production, referencing the rise of machine learning infrastructure at frontier tech companies.

Then, we'll introduce the evolving concept of 'GenAI', the next frontier of AI, emphasizing the increasing role of foundation models, the value proposition across the landscape, and the challenges facing domain-specific fine-tuners.

After a comparative lens between traditional machine learning and emerging Generative AI technologies, we'll explore the early thoughts on Generative AI infrastructure and how it's setting the stage for the future of AI.

Take the chance to understand the infrastructure that makes AI possible.

Author:

Suqiang Song

Engineering Director, Data Platform & ML Infrastructure
Airbnb

As engineering director, Suqiang leads multiple teams of ML infrastructure engineers, driving machine learning platforms and infrastructure solutions for all product and engineering teams in Airbnb.

As a senior AI leader, he works closely with senior partners in product and engineering to shape Airbnb’s vision in AI and ML, streamline innovations, and ensure Airbnb has a complete set of AI infrastructure that meets long-term needs.

Previously, Suqiang served as Vice President, Data Platforms and Engineering Services at Mastercard, where he was one of the Data/AI committee board members identifying strategies and directions for data enablement and for data and ML platforms across multiple product lines and deployment infrastructures. He led worldwide engineering teams of data engineers, machine learning engineers, and data analysts to build unified data and ML platforms both on-premises and in the cloud for Mastercard.

Author:

Prasad Saripalli

Distinguished Engineer
Capital One

Prasad Saripalli serves as a Distinguished Engineer at Capital One, a technology-driven bank on the Fortune 100 list, redefining fintech and banking using data, technology, AI and ML in unprecedented ways. Most recently, Prasad served as Vice President of AI/ML and Distinguished Engineer at MindBody Inc., a portfolio company of Vista, which manages the world's fourth-largest enterprise software company after Microsoft, Oracle, and SAP. Earlier, he served as VP Data Science at Edifecs, an industry-premier healthcare information technology partnership platform and software provider, building the Smart Decisions ML & AI Platform with ML Apps Front. Prior to this, Prasad served as CTO and VP Engineering at Secrata.com, provider of military-grade security and privacy solutions developed and deployed over the past 15 years at Topia Technology for the federal government and the enterprise, and as CTO & EVP at ClipCard, a SaaS-based hierarchical analytics and visualization platform.

At IBM, Prasad served as the Chief Architect for IBM's SmartCloud Enterprise (http://www.ibm.com/cloud-computing/us/en/). At Runaware, he served as the Vice President of Product Development. As a Principal Group Manager at Microsoft, Prasad co-led the development of the virtualization stack on Windows 7, responsible for shipping Virtual PC 7 and Windows XP Mode on Windows 7.


Prasad teaches Machine Learning, AI, NLP, Distributed Systems, Cloud Engineering and Robotics at Northeastern University and the University of Washington Continuum College.

2:10 PM
PRESENTATION

Abstract coming soon...

Author:

Pushpak Pujari

Head of Product - Camera Software and Video Products
Verkada

Pushpak leads Product Management at Verkada, where he runs their Cloud Connected Security Camera product lines. He is responsible for using AI and computer vision on the camera to improve video and analytics capabilities and reduce incident response time by surfacing only meaningful events in real time, with minimal impact on bandwidth.

Before Verkada, Pushpak led Product Management at Amazon, where he built the end-to-end privacy-preserving ML platform for Amazon Alexa and launched a no-code platform to design and deploy IoT automation workflows on edge devices at Amazon Web Services (AWS). Prior to Amazon, he spent 4 years at Sony in Japan building Sony’s flagship mirrorless cameras.

Pushpak has extensive experience starting, running and growing multi-million-dollar products used by millions of users at some of the fastest-growing companies in the US and the world. He holds an MBA from Wharton and a bachelor's degree in Electrical Engineering from IIT Delhi, India.

Author:

Sakyasingha Dasgupta

Founder & CEO
EdgeCortix

Sakya is the founder and Chief Executive Officer of EdgeCortix. He is an artificial intelligence (AI) and machine learning technologist, entrepreneur, and engineer with over a decade of experience taking cutting-edge AI research from the ideation stage to scalable products across different industry verticals. He has led teams at global companies like Microsoft and IBM Research / IBM Japan, along with national research labs like RIKEN Japan and the Max Planck Institute in Germany. Previously, he helped establish and lead the technology division at lean startups in Japan and Singapore in the semiconductor technology, robotics and fintech sectors. Sakya is the inventor of over 20 patents and has published widely on machine learning and AI, with over 1,000 citations.

Sakya holds a PhD in Physics of Complex Systems from the Max Planck Institute in Germany, along with a master's in Artificial Intelligence from the University of Edinburgh and a bachelor's degree in Computer Engineering. Prior to founding EdgeCortix, he completed entrepreneurship studies at the MIT Sloan School of Management.

2:35 PM
PRESENTATION

Author:

Gopal Hegde

SVP, Engineering & Operations
SiMa.ai

3:00 PM

Author:

Fabrizio Del Maffeo

Co-Founder & CEO
Axelera AI

Fabrizio Del Maffeo is the CEO and co-founder of Axelera AI, the Netherlands-based startup building game-changing, scalable hardware for AI at the edge. Axelera AI was incubated by the Bitfury Group, a globally recognised emerging technologies company, where Fabrizio previously served as Head of AI. In his role at Axelera AI, Fabrizio leads a world-class executive team, board of directors and advisors from top AI Fortune 500 companies.

 

Prior to joining Bitfury, Fabrizio was Vice President and Managing Director of AAEON Technology Europe, the AI and internet of things (IoT) computing company within the ASUS Group. During his time at AAEON, Fabrizio founded “UP Bridge the Gap,” a product line for professionals and innovators, now regarded as a leading reference solution in AI and IoT for Intel. In 2018, Fabrizio, alongside Intel, launched AAEON’s “AI in Production” program. He also previously served as Country Manager for France and Sales Director for Northern, Southern and Eastern Europe at Advantech, the largest industrial IoT computing company, where he also led the intelligent retail division. Fabrizio graduated with a master’s degree in telecommunication engineering from the Politecnico di Milano.

 

Author:

Arun Iyengar

CEO
Untether AI

Arun Iyengar is the CEO of Untether AI. He brings to Untether AI extensive operational and general management experience across a variety of markets from automotive, cloud, to wired and wireless infrastructure. Prior to Untether AI, Iyengar held leadership roles at Xilinx, AMD, and Altera where he set and executed strategies to grow revenues in each of the targeted markets.

3:25 PM
PRESENTATION

Unleashing the transformative power of AI across industries requires overcoming critical barriers to adoption. While AI has shown immense potential for various edge applications, widespread deployment of vision AI solutions on edge devices remains elusive. Concerns around privacy, reliability, cost, and processing latency demand edge AI solutions purpose-built for real-world applications.

 

In this engaging session, join DEEPX’s CEO Lokwon Kim as he charts a path towards scalable edge AI deployment. Dr. Kim will dive into the challenges hindering widespread adoption and reveal his company’s groundbreaking strategies that empower industries to harness the full potential of Edge AI. Discover how DEEPX revolutionizes affordability, usability, and performance, delivering unrivaled accuracy and flexibility. From cutting-edge algorithms to optimized Edge AI processors, DEEPX’s vision AI processing solutions pave the way for seamless integration and exceptional outcomes across diverse sectors.

 

Prepare to be inspired as Lokwon Kim presents the future of AI, where the power of edge computing converges with industry demands. Gain actionable insights and join the Edge AI revolution with DEEPX, as we propel your organization into a new era of innovation and success.

 

Author:

Lokwon Kim

CEO
DeepX


Dr. Lokwon Kim is CEO of DEEPX. Prior to DEEPX, Dr. Kim spent over a decade in leading research labs and companies such as Apple, Cisco Systems, the IBM Thomas J. Watson Research Center, Broadcom, the Korea Electronics Technology Institute (KETI) and Hynix Semiconductor (now SK hynix), where he played a key role in the design and verification of computer hardware systems used widely around the world. At Apple, Lokwon led design and verification of application processors used in the iPhone, iPad, Apple Watch and more. At Cisco, he played a key role in modeling, design and verification of network router core chipsets. He was the recipient of the Cisco Achievement Program Award in recognition of outstanding employee efforts and achievements. Lokwon earned a PhD in Electrical Engineering at UCLA.

AI chatbot services have opened up the mainstream market for AI, but they come with considerably higher operating costs and substantially longer service latency. As generative AI model sizes continue to grow, memory-intensive functions dominate service operation, which is why even the latest GPU systems do not provide sufficient performance and energy efficiency. To address this, we are introducing a lower-latency, more cost-effective generative AI accelerator built on AiM (SK hynix's PIM). We will show how AiM reduces service latency and energy consumption, and explain the architecture of AiMX, an accelerator based on AiM. Come and see for yourself that AiM is no longer a future technology, but something that can be deployed in existing systems today.
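
As a rough, hypothetical illustration of why autoregressive decoding tends to be memory-bandwidth bound, the sketch below treats per-token latency as the time to stream a 30B-parameter FP16 model's weights from memory once; the model size and bandwidth figures are assumptions for illustration only, not vendor data.

    # Back-of-the-envelope estimate (illustrative numbers only): each generated token
    # must stream all model weights from memory, so per-token decode latency is
    # roughly lower-bounded by (weight bytes) / (memory bandwidth).

    def per_token_latency_ms(params_billion, bytes_per_param, bandwidth_gb_s):
        """Lower-bound decode latency per token when weight reads dominate."""
        weight_bytes = params_billion * 1e9 * bytes_per_param
        return weight_bytes / (bandwidth_gb_s * 1e9) * 1e3

    model_size_b = 30   # hypothetical 30B-parameter model
    fp16_bytes = 2      # bytes per parameter in FP16

    for label, bw_gb_s in [("~3 TB/s (HBM-class GPU)", 3000),
                           ("~10 TB/s (higher effective bandwidth, e.g. near-memory compute)", 10000)]:
        print(f"{label}: ~{per_token_latency_ms(model_size_b, fp16_bytes, bw_gb_s):.0f} ms/token")

Under these assumptions, raising effective memory bandwidth is what shortens per-token latency, which is the case this session makes for processing-in-memory.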

Author:

Euicheol Lim

Research Fellow, System Architect
SK Hynix

Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received his B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and his Ph.D. from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, leading the architecture of most Exynos mobile SoCs. His recent research interests include memory and storage system architecture with new media memory, and new memory solutions such as CXL memory and processing-in-memory (PIM). In particular, he is proposing a new PIM-based computing architecture, more efficient and flexible than existing AI accelerators, for generative AI and the large language models (LLMs) that are currently causing a sensation.

3:50 PM
NETWORKING BREAK
4:20 PM
PANEL

Conway's Law suggests that a system's architecture mirrors the communication structure of the organization that produced it. In this panel we will dive into the collaborative process of bringing AI from experimental POCs to real-world applications, involving data scientists, DevOps/MLOps engineers, frontend engineers, and product managers. This session explores challenges spanning infrastructure, model accuracy, user interface design, and business alignment, highlighting successful strategies and fostering interdisciplinary communication. Together, we will try to uncover insights that enhance AI implementation, resulting in technically sound solutions that align with user needs and business goals.

Moderator

Author:

Uri Rosenberg

Specialist Technical Manager, AI/ML
Amazon Web Services

Uri Rosenberg is the Specialist Technical Manager of AI & ML services within enterprise support at Amazon Web Services (AWS) EMEA. Uri works to empower enterprise customers on all things ML: from underwater computer vision models that monitor fish to training models on satellite images in space; from optimizing costs to strategic discussions on deep learning and ethics. Uri brings his extensive experience to drive success of customers at all stages of ML adoption.

Before AWS, Uri led the ML projects at AT&T innovation center in Israel, working on deep learning models with extreme security and privacy constraints.

Uri is also an AWS-certified Lead Machine Learning subject matter expert and holds an MSc in Computer Science from Tel-Aviv Academic College, where his research focused on large-scale deep learning models.

Panellists

Author:

Lior Khermosh

CTO
NeuReality

Lior is passionate about and experienced in ML, AI, DNNs and MLOps.

Lior was co-founder and Chief Scientist of ParallelM, a leading MLOps company.

Prior to that, he held a Distinguished Fellow role at PMC-Sierra and was on the founding team of Passave, an FTTH silicon company.

He holds MSEE and BSEE degrees from Tel Aviv University, both cum laude.

Author:

Nikhil Gulati

VP, Engineering & Product Strategy
Baker Hughes

Author:

Jaya Kawale

VP of Engineering, AI/ML
Tubi

Jaya Kawale is the head of Machine Learning at Tubi, a Fox Corporation content platform. Jaya's team works on solving various ML problems for Tubi's product, ranging from recommendations and content understanding to content acquisition and ads ML. Her team also works on applying cutting-edge machine learning technologies such as contextual bandits, deep learning, computer vision and NLP to improve the user experience at Tubi.

Real-time personalized recommendations (RTPRec) have become increasingly prevalent in the digital realm, particularly as more users have become accustomed to using mobile apps and consuming larger amounts of digital data, videos, and engaging in e-commerce activities online following the Covid-19 pandemic.

DL-based recommenders are known for their superior accuracy in handling unstructured data through learned representations, often referred to as embeddings. This characteristic makes them ideal candidates for personalized recommendations. However, it's important to note that DL-based models can involve an extensive number of parameters, ranging into the billions or even trillions, which can pose significant challenges when real-time processing is crucial.

To address this challenge, various strategies such as inference optimization, model compression, and the utilization of hardware accelerators have been introduced to enhance performance and meet the stringent latency requirements of real-time applications. Additionally, this session will delve into accelerator-based distributed systems, offering insights into memory management and performance scalability from an infrastructure perspective.
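
As one minimal, generic sketch of the model-compression strategies mentioned above (not any particular speaker's system), the NumPy snippet below applies symmetric per-row int8 quantization to a toy embedding table, cutting its memory footprint roughly 4x versus float32 while keeping lookup error small.

    import numpy as np

    def quantize_rows_int8(table):
        """Symmetric per-row int8 quantization of a float32 embedding table."""
        scale = np.abs(table).max(axis=1, keepdims=True) / 127.0
        scale = np.where(scale == 0, 1.0, scale)            # guard all-zero rows
        q = np.clip(np.round(table / scale), -127, 127).astype(np.int8)
        return q, scale.astype(np.float32)

    def lookup(q, scale, ids):
        """Dequantize only the rows needed at inference time."""
        return q[ids].astype(np.float32) * scale[ids]

    rng = np.random.default_rng(0)
    emb = rng.standard_normal((100_000, 64)).astype(np.float32)   # toy embedding table
    q, s = quantize_rows_int8(emb)
    ids = np.array([3, 42, 99_999])
    err = np.abs(lookup(q, s, ids) - emb[ids]).max()
    print(f"int8 bytes: {q.nbytes + s.nbytes}, float32 bytes: {emb.nbytes}, max abs error: {err:.4f}")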

Author:

CL Chen

COO
NEUCHIPS

CL Chen is an accomplished leader in the IC design industry with a remarkable career spanning over 27 years, including roles as CTO at AMTC Corp., Programme Manager at TSMC, and Director at Global Unichip Corp., a publicly listed TSMC subsidiary specializing in SoC design services. His wealth of experience and expertise has contributed significantly to the growth and innovation of the field.

As the Chief Operating Officer of NEUCHIPS, he continues to drive excellence and foster partnerships within the industry. He possesses a wealth of experience in domain-specific inference accelerators, particularly within the burgeoning field of e-commerce. His insights and contributions have enhanced the customer experience within Taiwan's e-commerce sector, underscoring his commitment to leveraging technology for real-world impact. At NEUCHIPS, CL's role as COO signifies his dedication to advancing the company's operations, growth, and strategic partnerships. His extensive network within the industry, coupled with his proven track record of connecting ecosystem partners, has been instrumental in propelling NEUCHIPS to new heights.

Author:

Puja Das

Senior Director, Personalization
Warner Bros. Entertainment

Dr. Puja Das leads the Personalization team at Warner Bros. Discovery (WBD), which includes offerings on Max, HBO, Discovery+ and many more.

Prior to WBD, she led a team of Applied ML researchers at Apple, who focused on building large scale recommendation systems to serve personalized content on the App Store, Arcade and Apple Books. Her areas of expertise include user modeling, content modeling, recommendation systems, multi-task learning, sequential learning and online convex optimization. She also led the Ads prediction team at Twitter (now X), where she focused on relevance modeling to improve App Ads personalization and monetization across all of Twitter surfaces.

She obtained her Ph.D. in Machine Learning from the University of Minnesota, where the focus of her dissertation was online learning algorithms that work on streaming data. Her dissertation received the prestigious IBM Ph.D. Fellowship Award.

She is active in the research community and serves on program committees at ML and recommendation systems conferences. She has mentored several undergraduate and graduate students and participated in various roundtable discussions through the Grace Hopper Conference, the Women in Machine Learning program co-located with NeurIPS and AAAI, and the Computing Research Association's Women's chapter.

Author:

Xinghai Hu

Head of US Algorithm
TikTok

Xinghai Hu is currently the head of the TikTok US recommendation team. His team works on responsible recommendation systems, improving the general safety and trustworthiness of content recommendations.

Author:

Anlu Xing

Senior Data Scientist
Meta

Anlu Xing is a Senior Research Scientist/Machine Learning Engineer at Meta, working on LLM applications for business products (GenAI for Monetization) and leading projects on the company's top-priority product: short-form video (Reels) recommendation, ranking and creator relevance.

4:55 PM

Author:

Rochan Sankar

Co-Founder & CEO
Enfabrica

Rochan is Founder, President and CEO of Enfabrica. Prior to founding Enfabrica, he was Senior Director and leader of the Data Center Ethernet switch silicon business at Broadcom, where he defined and brought to market multiple generations of Tomahawk/Trident chips and helped build industry-wide ecosystems including 25G Ethernet and disaggregated whitebox networking.

Prior, he held roles in product management, chip architecture, and applications engineering across startup and public semiconductor companies. Rochan holds a B.A.Sc. in Electrical Engineering from the University of Toronto and an MBA from the Wharton School, and has 6 issued patents.

As AI services and applications grow exponentially, including the latest burgeoning GenAI trend, the industry is rapidly transitioning to a Hybrid AI model that splits the required compute between the Cloud and an Edge device. Specialized AI processing on edge devices is becoming almost mandatory. This is driven by a multitude of factors: speech and visual processing for improved human-machine interfaces; time-series and signal processing for applications like vital-signs prediction in healthcare and preventative maintenance in factory automation; and the general need for privacy and security. To meet portability, responsiveness, and cost requirements, future-proofed AIoT devices will need efficient compute for advanced sequence prediction, semantic segmentation, multimedia processing, and multi-dimensional time-series processing within a very low power budget and silicon footprint.

Author:

Nandan Nayampally

Chief Marketing Officer
Brainchip

Nandan is an entrepreneurial executive with over 25 years of success in building or growing disruptive businesses with industry-wide impact. Nandan was most recently at Amazon, leading the delivery of Alexa AI tools for Echo, Fire TV and other consumer devices. Prior to that he spent more than 15 years at Arm Inc., including roles as GM, developing Arm's CPU and broader IP portfolio into an industry leader built into over 100B chips. He started his career at AMD on its very successful Athlon processor program. He also helped grow product lines at startups such as Silicon Metrics and Denali Software, both of which were later acquired.

5:25 PM
PANEL
Moderator

Author:

Mike Demler

Semiconductor Industry Analyst
Independent

Mike Demler is a longtime semiconductor industry veteran, technology analyst and strategic consultant. Over the last 10 years, Mike has authored numerous in-depth analyses of the innovative technologies driving advances in AI, ADAS, and autonomous vehicles. He co-authored five editions of the Linley Group Guide to Processors for Deep Learning, along with the Guide to Processors for Advanced Automotive.  He now offers his insights as an advisor to clients across a broad spectrum of the technology industry.

Panellists

Author:

Gayathri Radhakrishnan

Partner
Hitachi Ventures

Gayathri is currently a Partner at Hitachi Ventures. Prior to that, she was with Micron Ventures, actively investing in startups that apply AI to solve critical problems in manufacturing, healthcare and automotive. She brings over 20 years of multi-disciplinary experience across product management, product marketing, corporate strategy, M&A and venture investments, in large Fortune 500 companies such as Dell and Corning and in startups. She has also worked as an early-stage investor at Earlybird Venture Capital, a premier European venture capital fund based in Germany. She has a Master's in EE from The Ohio State University and an MBA from INSEAD in France. She is also a Kauffman Fellow - Class 16.

Author:

Sakyasingha Dasgupta

Founder & CEO
EdgeCortix

Sakya is the founder and Chief Executive Officer of EdgeCortix. He is an artificial intelligence (AI) and machine learning technologist, entrepreneur, and engineer with over a decade of experience in taking cutting-edge AI research from the ideation stage to scalable products across different industry verticals. He has led teams at global companies like Microsoft and IBM Research / IBM Japan, along with national research labs like RIKEN Japan and the Max Planck Institute in Germany. Previously, he helped establish and lead the technology divisions at lean startups in Japan and Singapore in the semiconductor technology, robotics and fintech sectors. Sakya is the inventor of over 20 patents and has published widely on machine learning and AI, with over 1,000 citations.

Sakya holds a PhD in Physics of Complex Systems from the Max Planck Institute in Germany, along with a Master's in Artificial Intelligence from The University of Edinburgh and a Bachelor's in Computer Engineering. Prior to founding EdgeCortix, he completed entrepreneurship studies at the MIT Sloan School of Management.

Author:

Sailesh Chittipeddi

EVP, GM & President, Renesas USA
Renesas

Dr. Sailesh Chittipeddi became the Executive Vice President and the General Manager of the Embedded Processing, Digital Power and Signal Chain Solutions Group of Renesas in July 2019. He joined Renesas in March 2019.

Before joining Renesas, he served as IDT’s Executive Vice President of Global Operations and CTO, with an additional focus on corporate growth and differentiation. In this role, he was responsible for the company’s operations, procurement, quality, supply chain, foundry engineering, assembly engineering, product & test engineering, facilities, Design Automation, and Information Technology groups. From a product line perspective, he had responsibility for the IoT Systems Group, RapidWave Interconnect Systems, PCIe and Standard Products Group. Additionally, Dr. Chittipeddi helped IDT leverage its existing strengths to increase corporate value while driving the rapid delivery of new products.

Prior to joining IDT, Dr. Chittipeddi was President and CEO of Conexant Systems and served on its Board of Directors. He led the company in its transition from a public company to private ownership and through its debt restructuring efforts. Before that, he held several executive roles at Conexant Systems, including COO, co-President, EVP for Operations and Chief Technical Officer, with responsibility for global engineering, product development, operations, quality, facilities, IT and associated infrastructure support. Dr. Chittipeddi started his career in technology with AT&T Bell Labs and progressively managed larger engineering and operations groups with AT&T Microelectronics/Lucent and Agere Systems. Dr. Chittipeddi serves on the Board of Directors for Avalanche Technology (USA) and Tessolve (Division of Hero Electronix, India). He also serves as a Board Observer in Blu Wireless Technology (Bristol, UK), Peraso Technologies (Canada) and Anagog (Israel).

Dr. Chittipeddi holds five degrees, including an MBA from the University of Texas at Austin and a Ph.D. in physics from The Ohio State University. Dr. Chittipeddi has earned 64 U.S. patents related to semiconductor process, package and design, and has had nearly 40 technical articles published.

Deploying interactive, optimized models on the Edge while maintaining the user experience is complex and challenging. This panel explores how solving real-world problems such as wayfinding, hyper-personalized advertising, and interactive troubleshooting with Generative AI comes with its own set of challenges that necessitate the Edge: the required multi-stage processing and inference, including multimodal inputs and outputs, must be delivered in seconds to achieve the needed user experience.

The panel examines how these challenges can be addressed in the context of smart places, retail and industrial settings.

Moderator

Author:

Matthias Huber

Sr. Director, Solutions Manager, IoT/Embedded & Edge Computing
Supermicro

Matthias Huber is a seasoned technology executive with over 25 years of experience in IoT and Edge computing. As Sr. Director of Solutions Management, he drives innovation and growth in several markets for Supermicro, focusing on manufacturing, medical, retail and smart city. With a deep understanding of customer needs and a passion for innovation, Matthias helps global customers leverage edge computing to innovate and transform. Prior to joining Supermicro, Matthias held senior roles at Kontron and ADLINK. He holds a degree in Engineering and an MBA from Warwick University.

Panellists

Author:

Zihan Wang

Global BD Manager, Manufacturing & Industrials
NVIDIA

Zihan is a Global Industry Business Development (IBD) Manager in the Manufacturing and Industrials vertical at NVIDIA. He works with ecosystem partners and customers to see and realize the value of AI and HPC along the value chain.   

 

Zihan started his career as a mechanical engineer at the U.S. Department of Energy's Argonne National Laboratory, where he worked with automakers on advanced powertrain system R&D. He has gradually transitioned to roles that combine business and technology. Prior to NVIDIA, Zihan worked for Ernst & Young as a management consultant, advising Fortune 500 technology and manufacturing clients on P&L improvement and M&A activities.

Author:

Gordon Cooper

Product Marketing Manager
Synopsys

Gordon Cooper is a Product Manager for Synopsys' AI/ML processor products. Gordon brings more than 20 years of experience in digital design, field applications and marketing at Raytheon, Analog Devices, and NXP to the role. Gordon also served as a Commanding Officer in the US Army Reserve, including a tour in Kosovo. Gordon holds a Bachelor of Science degree in Electrical Engineering from Clarkson University.

6:00 PM
PRESENTATION

AI is revolutionizing the way we interact with our environment, even in the most extreme settings.
In this talk, we will explore two real-world case studies of how AI is being deployed to push the boundaries of what is possible. The first comes from satellites in orbit, which leverage deep learning to improve sensor readings and Federated Learning to share knowledge across a satellite constellation; the second explores autonomous drones that sail the ocean, opening new possibilities for scientific research and commercial use.
Together, these two examples show how to design and deploy AI applications that unlock the potential of extreme environments.
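
As a minimal, generic sketch of the federated-learning step mentioned above (not the speaker's actual system), the snippet below shows FedAvg-style aggregation: each node trains locally, and only its model weights, weighted by its local sample count, are averaged by the aggregator, so raw sensor data never leaves the satellite.

    import numpy as np

    def fedavg(client_weights, client_sizes):
        """Average per-client weight vectors, weighted by local dataset size (FedAvg)."""
        sizes = np.asarray(client_sizes, dtype=np.float64)
        coeffs = sizes / sizes.sum()
        stacked = np.stack(client_weights)          # shape: (num_clients, num_params)
        return (coeffs[:, None] * stacked).sum(axis=0)

    # Toy example: three satellites with locally trained weights and local dataset sizes.
    weights = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
    sizes = [500, 300, 200]
    print("aggregated global weights:", fedavg(weights, sizes))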

Author:

Uri Rosenberg

Specialist Technical Manager, AI/ML
Amazon Web Services

Uri Rosenberg is the Specialist Technical Manager of AI & ML services within enterprise support at Amazon Web Services (AWS) EMEA. Uri works to empower enterprise customers on all things ML: from underwater computer vision models that monitor fish to training models on satellite images in space; from optimizing costs to strategic discussions on deep learning and ethics. Uri brings his extensive experience to drive success of customers at all stages of ML adoption.

Before AWS, Uri led the ML projects at AT&T innovation center in Israel, working on deep learning models with extreme security and privacy constraints.

Uri is also an AWS-certified Lead Machine Learning subject matter expert and holds an MSc in Computer Science from Tel-Aviv Academic College, where his research focused on large-scale deep learning models.


This talk delves into the application of a hybrid cascaded architecture for optimized wakeword detection, focusing on its implementation in Roku's Voice Remote Pro. The importance of accurate wakeword detection for hands-free operation is introduced, followed by a discussion of how the hybrid cascaded architecture addresses the challenges in wakeword detection: accuracy, low latency, low power consumption, noisy environments, and different pronunciations. A hybrid approach, which combines edge and cloud models, is presented as a solution to effectively manage these challenges. The cascaded architecture, a two-stage process involving a remote keyword spotter and a cloud-based validation model, is explained, highlighting how it reduces false rejects while managing false accepts. The talk concludes by discussing the effectiveness of this approach and its successful application in Roku's Voice Remote Pro. A Q&A session follows for further discussion.
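
As a minimal sketch of the two-stage cascade described above (the thresholds and helper functions are hypothetical placeholders, not Roku's implementation), a permissive on-device spotter gates a stricter cloud-side validation model, so the expensive second stage only runs on audio that is likely to contain the wakeword.

    # Hypothetical sketch of a cascaded wakeword detector.

    EDGE_THRESHOLD = 0.6    # permissive on-device gate: keeps false rejects low
    CLOUD_THRESHOLD = 0.9   # stricter cloud-side check: keeps false accepts low

    def edge_keyword_score(audio_frame) -> float:
        """Placeholder for a small on-device keyword spotter returning a confidence in [0, 1]."""
        ...

    def cloud_verify_score(audio_frame) -> float:
        """Placeholder for a larger cloud-based validation model."""
        ...

    def is_wakeword(audio_frame) -> bool:
        score = edge_keyword_score(audio_frame)
        if score < EDGE_THRESHOLD:
            return False                          # rejected cheaply, no network round-trip
        return cloud_verify_score(audio_frame) >= CLOUD_THRESHOLD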

Author:

Frank Maker

Director, Software - Remotes, Voice and EdgeML
Roku

Frank Maker is Director of Software at Roku; his team is responsible for Voice, EdgeML, and Remote software. He is the engineering owner for remotes and develops innovative EdgeML models for Roku's new products.

In his role, Frank is responsible for:

* Embedded machine learning (TinyML / EdgeML)
* Microcontroller firmware development
* Embedded Linux firmware development (RokuOS)
* Embedded machine learning model development and deployment
* Automated EdgeML testing
* Wi-Fi remote development

 

6:30 PM
END OF DAY
Thursday, 14 Sep, 2023
DAY 3 - SERVER TO EDGE: ML Use Cases - Case Studies & Tutorials
9:00 AM
REGISTRATION & MORNING NETWORKING
10:00 AM
INVITED KEYNOTE
  • Generative AI technology is influencing the evolution of video game content creation, but the question remains whether it offers unique advantages over traditional methods.
  • In particular, the use of generative AI in game development presents specific challenges to the creative process in terms of maintaining artistic control and ensuring consistency with the game's overall design.
  • One area in which Generative AI has been utilized is to enhance procedural content generation, such as creating dynamic landscapes, quests, or NPC behaviors. If the need for variety can be balanced with maintaining a coherent player experience, we will see a positive impact on the gaming industry.
  • Given that rapid advancements continue in Generative AI, the role of AI-generated content in video games will continue to evolve over the next few years. Every company must establish a strategic plan to stay at the forefront of these developments while also prioritizing potential ethical concerns.
Interviewee

Author:

Luc Barthelet

CTO
Unity

Interviewer

Author:

Paresh Dave

Senior Writer
WIRED

Paresh Dave is a senior writer for WIRED, covering the inner workings of big tech companies. He writes about how apps and gadgets are built and about their impacts, while giving voice to the stories of the underappreciated and disadvantaged. He was previously a reporter for Reuters and the Los Angeles Times, and an investigative reporting fellow at the Maynard Institute for Journalism Education. His work has tackled topics as varied as esports, Olympics baseball, and diversity and ethics in the tech industry. His team's reporting on Snapchat's IPO was recognized by the Society for Advancing Business Editing and Writing. He is a lifelong Californian.

10:30 AM
INVITED KEYNOTE

Author:

Amin Vahdat

Fellow & VP of ML, Systems & Cloud AI
Google

Amin Vahdat is a Fellow and Vice President of Engineering at Google, where his team is responsible for delivering industry-best Machine Learning software and hardware that serves all of Google and the world, now and in the future, and Artificial Intelligence technologies that solve customers' most pressing business challenges. He has led the Systems and Services Infrastructure organization since 2021, and led the Systems Infrastructure organization from 2019 to 2021. Until 2019, he was the area Technical Lead for the Networking organization at Google, responsible for Google's technical infrastructure roadmap in collaboration with the Compute, Storage, and Hardware organizations.

Before joining Google, Amin was the Science Applications International Corporation (SAIC) Professor of Computer Science and Engineering at UC San Diego (UCSD) and the Director of UCSD’s Center for Networked Systems. He received his doctorate from the University of California Berkeley in computer science, and is a member of the National Academy of Engineering (NAE) and an Association for Computing Machinery (ACM) Fellow. 

Amin has been recognized with a number of awards, including the National Science Foundation (NSF) CAREER award, the UC Berkeley Distinguished EECS Alumni Award, the Alfred P. Sloan Fellowship, the Association for Computing Machinery's SIGCOMM Networking Systems Award, and the Duke University David and Janet Vaughn Teaching Award. Most recently, Amin was awarded the SIGCOMM lifetime achievement award for his contributions to data center and wide area networks.

11:00 AM
INVITED KEYNOTE

Machine learning and Generative AI are among the most transformational technologies, opening up new opportunities for innovation in every domain across software, finance, healthcare, manufacturing, media, entertainment, and others. This talk will discuss the key trends driving AI/ML innovation, how enterprises are using AI/ML today to innovate in how they run their business, the key technology challenges in scaling out ML and Generative AI across the enterprise, some of the key innovations from Amazon, and how this field is likely to evolve in the future.

Author:

Bratin Saha

VP & GM, ML & AI Services
Amazon

Dr. Bratin Saha is the Vice President of Machine Learning and AI services at AWS where he leads all the ML and AI services and helped build one of the fastest growing businesses in AWS history. He is an alumnus of Harvard Business School (General Management Program), Yale University (PhD Computer Science), and Indian Institute of Technology (BS Computer Science). He has more than 70 patents granted (with another 50+ pending) and more than 30 papers in conferences/journals. Prior to Amazon he worked at Nvidia and Intel leading different product groups spanning imaging, analytics, media processing, high performance computing, machine learning, and software infrastructure.

11:30 AM
INVITED KEYNOTE

Author:

Animesh Singh

Executive Director, AI & Machine Learning
LinkedIn

Executive Director, AI and ML Platform at LinkedIn | Ex IBM Senior Director and Distinguished Engineer, Watson AI and Data | Founder at Kubeflow | Ex LFAI Trusted AI NA Chair

Animesh is the Executive Director leading the next-generation AI and ML Platform at LinkedIn, enabling the creation of an AI Foundation Models Platform that serves the needs of LinkedIn's 930+ million members. His team builds the Distributed Training Platform, Machine Learning Pipelines, Feature Pipelines, Metadata engine and more, and leads the creation of LinkedIn's GAI platform for fine-tuning, experimentation and inference needs. Animesh has more than 20 patents and 50+ publications.

He was previously IBM Watson AI and Data Open Tech CTO, Senior Director and Distinguished Engineer, with 20+ years of experience in the software industry and 15+ years in AI, Data and Cloud Platforms. He has led globally dispersed teams, managed globally distributed projects, and served as a trusted adviser to Fortune 500 firms. He played a leadership role in creating, designing and implementing Data and AI engines for AI and ML platforms, led Trusted AI efforts, and drove the strategy and execution for Kubeflow and OpenDataHub as well as their execution in products like Watson OpenScale and Watson Machine Learning.

12:00 PM
NETWORKING LUNCH & MODERATED ROUNDTABLES
1:30 PM
KEYNOTE

Generative AI workloads are breaking every aspect of the data center. As the capabilities of AI increase, so does its demand. The conventional path of performance improvement in legacy processors has stagnated, providing diminishing returns. This raises concerns about whether we may ever fully realize the potential of AI. In this talk we will share how combining our software-first methodology and novel computer arithmetic leads to breakthrough performance gains in general-purpose accelerated computing, pushing developers to the limit of physics.

Author:

Jay Dawani

CEO
Lemurian Labs

Jay Dawani is co-founder & CEO of Lemurian Labs, a startup at the forefront of general purpose accelerated computing for making AI development affordable and generally available for all companies and people to equally benefit. Author of the influential book "Mathematics for Deep Learning", he has held leadership positions at companies such as BlocPlay and Geometric Energy Corporation, spearheading projects involving quantum computing, metaverse, blockchain, AI, space robotics, and more. Jay has also served as an advisor to NASA Frontier Development Lab, SiaClassic, and many leading AI firms.

1:55 PM
PRESENTATION

Author:

Becky Soltanian

VP, Research & Development
Sanborn

Dr. Soltanian’s career in the field of AI has been extensive. With a global reach, her endeavors in AI, engineering, and academia span over 20 years. Her background includes a considerable amount of hands-on experience in a variety of roles in automation, AI, robotics, and computer vision. Most recently, Dr. Soltanian worked as a principal artificial intelligence and machine learning engineer, developing algorithms that improved perception and localization. She has also held leadership and management positions where she successfully directed teams in developing and applying advanced technologies in the use of lidar and other data types.

Dr. Soltanian holds a PhD in Electrical, Electronics and Communications Engineering; a Master of Technology in Digital Signal Processing; and a Bachelor of Science in Electrical Engineering. She’s worked for a variety of different companies such as Byton, Daqri and Velodyne Lidar, and has six (6) patents in the field of Autonomous Driving and automation.

2:20 PM
NETWORKING BREAK
2:50 PM
PRESENTATION / PANEL

Author:

Lisa Cohen

Director of Data Science for Gemini, Google Assistant, and Search Platforms
Google

Lisa Cohen is Director of Data Science for Gemini (formerly "Bard"), Google Assistant, and Search Platforms. She leads an organization of data scientists at Google, responsible for using data to create excellent user experiences across these products, and partnering closely with Product, Engineering, and User Experience Research. Formerly, Lisa was Head of Data Science and Engineering for Twitter, helping drive the strategy and direction of the Twitter product through machine learning, metric development, experimentation and causal analyses. Before Twitter, Lisa led the Azure Customer Growth Analytics organization as part of Microsoft Cloud Data Sciences. Her team was responsible for analyzing OKRs, informing data-driven decisions, and developing data science models to help customers be successful on Azure. Lisa worked at Microsoft for 17 years, and also helped develop multiple versions of Visual Studio. She holds Bachelor's and Master's degrees in Applied Mathematics from Harvard. You can follow Lisa on LinkedIn and Medium.

When bridging the computational gap between edge devices and deep learning algorithms, it is necessary to optimize across the hardware, model architecture and software layers in order to achieve reliable, low-latency and energy-efficient performance. For vision specifically, where accuracy is usually critical, trade-offs between model performance and energy efficiency can dictate many engineering choices. This panel will look into how accelerators, if properly integrated, can solve some of these problems, and will cover progress at the model architecture level (e.g. compression, pruning, multimodal inferencing) that is helping to deliver high-performance, low-latency, and energy-efficient inference at the edge.
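
To make the model-level levers concrete, here is a minimal sketch of magnitude-based weight pruning in PyTorch. The toy model, the layers chosen, and the 50% sparsity target are illustrative assumptions for this page, not anything prescribed by the panellists.

    # Minimal sketch: magnitude-based weight pruning with PyTorch.
    # The model and the 50% sparsity target are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Toy convolutional classifier standing in for an edge vision model.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 10),
    )

    # Zero out the 50% smallest-magnitude weights in each Conv2d/Linear layer.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")  # bake the zeros into the weight tensor

    # Sanity check: report the overall fraction of zeroed parameters.
    total = sum(p.numel() for p in model.parameters())
    zeros = sum((p == 0).sum().item() for p in model.parameters())
    print(f"Overall sparsity: {zeros / total:.1%}")

Note that unstructured sparsity of this kind only translates into latency and energy savings when the target accelerator or runtime provides sparse kernels; in practice it is usually combined with structured pruning and quantization to realize gains on edge hardware.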

Moderator

Author:

Jeff White

CTO, Edge
Dell Technologies

Jeff is the Industry CTO for the Automotive sector at Dell Technologies, focusing on Connected and Autonomous Vehicles, and is the overall Edge Technology strategy lead. He is responsible for leading the team that develops Dell Technologies' technology strategy, architectural direction and product requirements for the Intelligent Connected Vehicle platform.

He is also the Chairman of the Dell Automotive Design Authority Council, responsible for the technical solution design. In his role as Edge Technology Lead, he is driving the development of a Dell Technologies-wide Edge platform covering the physical edge systems, heterogeneous compute, memory/storage, environment, security, data management, control plane stack and automation/orchestration.

Previously, Jeff held senior roles at an early-stage artificial intelligence/machine reasoning-based process automation technology provider, and served as CTO of Elefante Group, a stratospheric wireless communications platform company. He also held senior positions at Hewlett Packard Enterprise, Ericsson and Alcatel-Lucent, where he led technology initiatives, solutions development, business development and services delivery.

Earlier in his career, White served in leadership roles at BellSouth and Cingular Wireless (now AT&T). At Cingular, he led National Transport Infrastructure Engineering, with responsibility for national transport, VoIP and IMS engineering. At BellSouth (now AT&T), he led the Broadband Internet Operations & Support organization, which included broadband access tier-two technical support, the customer networking equipment business, broadband OSS and end-to-end process.

White holds a Bachelor of Science degree in Electrical Engineering from Southern Polytechnic University. He has served as Chairman of the Tech Titans Technology Association of North Texas, representing over 300 technology companies in the greater North Texas community, and on the North Texas Regional committee of the Texas Emerging Technology Fund under Governor Rick Perry.

Panellists

Author:

Keith Kressin

CEO
MemryX

I currently serve as CEO of MemryX, Inc., an innovative Edge AI startup founded by professors at the University of Michigan in Ann Arbor. Dr Wei Lu is our co-founder and current CTO, and an expert in memories and neuromorphic computing. We plan to enter production in 2023 with the world's best Edge AI solution.

Prior to MemryX, I worked at Qualcomm for 13+ wonderful years, leading a variety of activities including the Snapdragon roadmap, all technologies for application processing, company strategy, competitive analysis, and more. I joined just prior to the first Android smartphone being launched. My most recent role was as SVP/GM helping Qualcomm diversify from smartphones, and leading several growth businesses including AR/VR, AI, PCs, and gaming. I very much enjoyed working with US partners like Facebook, Microsoft and Google in addition to international customers like Vivo, Oppo, Xiaomi, Sony, Samsung, and so many more.

Prior to Qualcomm, I worked at Intel for 8+ years in a number of technology, product, and planning roles across market segments including laptops, desktops and servers. Together we accomplished a number of industry firsts.

During my time at Intel and Qualcomm, I was deeply engaged in shipping well over one billion advanced chips for mobile personal computing devices. I look forward to helping grow the use of Edge AI across a broad range of clients as the world becomes ever more intelligent and connected.

Beyond the world of technology, working with smart/dedicated/honest/ethical people is what keeps me enthused to work each day.

Author:

Vinesh Sukumar

Head of AI Product Management
Qualcomm

Vinesh Sukumar currently serves as Senior Director – Head of AI/ML Product Management at Qualcomm Technologies, Inc. (QTI). In this role, he leads AI product definition, strategy and solution deployment across multiple business units.

He has about 20 years of industry experience spread across research, engineering and application deployment. He holds a doctorate degree specializing in imaging and vision systems and has also completed a business degree focused on strategy and marketing. He is a regular speaker at many AI industry forums and has authored several journal papers and two technical books.

Author:

Pushpak Pujari

Head of Product - Camera Software and Video Products
Verkada

Pushpak leads Product Management at Verkada, where he runs its Cloud Connected Security Camera product lines. He is responsible for using AI and computer vision on the camera to improve video and analytics capabilities and to reduce incident response time by surfacing only meaningful events in real time, with minimal impact on bandwidth.

Before Verkada, Pushpak led Product Management at Amazon, where he built the end-to-end privacy-preserving ML platform at Amazon Alexa and launched a no-code platform to design and deploy IoT automation workflows on edge devices at Amazon Web Services (AWS). Prior to Amazon, he spent four years at Sony in Japan building Sony's flagship mirrorless cameras.

Pushpak has extensive experience starting, running and growing multi-million-dollar products used by millions of users at some of the fastest-growing companies in the US and worldwide. He holds an MBA from Wharton and a Bachelor's in Electrical Engineering from IIT Delhi, India.

Author:

Sandeep Garg

Director, Engineering
Siemens EDA

Sandeep Garg is the Director of Engineering for Catapult at Siemens EDA, responsible for the company's High-Level Synthesis (HLS) and verification products.

Sandeep has more than 26 years of experience in the EDA industry, 18+ of which have been in HLS and verification at Siemens, Mentor Graphics and Calypto in various engineering leadership roles. Catapult's bottom-up and low-power solutions were launched under Sandeep's leadership. He has been instrumental in building an extensive network of partnerships and an ecosystem with academia and research institutes around Catapult HLS, helping evangelize HLS tools and methodologies and collaborating with cutting-edge research initiatives. Before HLS, Sandeep was an architect and a technology and program manager for Mentor Graphics' Precision FPGA synthesis product line, helping replace its old HDL frontend with a modern alternative and contributing significantly to both its frontend and backend optimizations.

Sandeep came to Mentor Graphics via the IKOS Systems acquisition, where he worked on hardware-assisted co-simulation products and RTL compiler technology that have since been adopted by many Siemens EDA products.

3:30 PM

Author:

Sutanay Choudhury

Chief Scientist, Data Science
Pacific Northwest National Laboratory

Sutanay Choudhury is Chief Scientist for Data Sciences in the Advanced Computing, Mathematics and Data division at Pacific Northwest National Laboratory, and co-director of the Computational and Theoretical Chemistry Institute. His current research focuses on scalable graph representation learning and neural-symbolic reasoning, with applications to chemistry, medical informatics and the power grid. Sutanay has more than a decade's experience developing artificial intelligence and data analytics systems that extract, learn and search for patterns in the "complex web of things" - webs that emerge from atomistic interactions in molecular networks, from interactions between diseases, drugs and genes, or from the web of human knowledge captured in knowledge bases such as Wikipedia, PubChem and SNOMED. His research has been supported by the US Department of Energy, the US Department of Homeland Security, DARPA and the US Department of Veterans Affairs.

Model compression is paramount in the world of Edge AI, as it is key to enhancing the performance and efficiency of AI models on edge devices. This talk will highlight the key drivers behind the increasing need for model compression, the essential evaluation metrics, and a range of vital techniques involved in the compression process. We will then delve into notable innovations in the field, illustrated by a case study on the Jabra PanaCast 20, to demonstrate the real-world applications and benefits of these techniques. The session will wrap up with a summary and a Q&A segment, equipping attendees with the knowledge and tools needed to optimize AI models for edge deployment.
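
As a flavor of one such technique, here is a minimal sketch of post-training dynamic quantization in PyTorch; the toy model below is an assumption for this page and is not taken from the talk or from the PanaCast 20 pipeline.

    # Minimal sketch: post-training dynamic quantization with PyTorch.
    # The model is a stand-in, not the speaker's production network.
    import io
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )
    model.eval()

    # Convert Linear weights to int8; activations are quantized on the fly at
    # inference time, so no calibration dataset is needed.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

    # Compare serialized sizes as a rough proxy for memory-footprint reduction.
    def serialized_size(m):
        buf = io.BytesIO()
        torch.save(m.state_dict(), buf)
        return buf.getbuffer().nbytes

    print(f"fp32: {serialized_size(model)} bytes, int8: {serialized_size(quantized)} bytes")

Dynamic quantization mainly shrinks weight storage and accelerates linear layers on CPUs; static quantization, pruning and distillation are the other levers a session like this typically weighs against accuracy and latency budgets.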

Author:

Anuj Dutt

Senior Software Engineer, AI Systems
Jabra

4:00 PM
CONFERENCE CLOSE

Jump to: Day 1 | Day 2 | Day 3

Download the Community Brochure

Learn more about the network and community we’ve built over 6 years

Download