| Page 285 | Kisaco Research
Systems Infrastructure/Architecture
AI/ML Compute
Market Analysis
Moderator

Author:

Mike Howard

Vice President of DRAM and Memory Markets
TechInsights

Mike has over 15 years of experience tracking the DRAM and memory markets. Prior to TechInsights, he built the DRAM research service at Yole. Before Yole, Mike spent time at IHS covering DRAM, and at Micron Technology, where he held roles in engineering, marketing, and corporate development. Mike holds an MBA from The Ohio State University and a BS in Chemical Engineering and a BA in Finance from the University of Washington.


Speakers

Author:

Murali Emani

Computer Scientist
Argonne National Lab

Murali Emani is a Computer Scientist in the Data Science group at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. At ALCF, he co-leads the AI Testbed, which explores the performance and efficiency of novel AI accelerators for scientific machine learning applications. He also co-chairs the MLPerf HPC group at MLCommons to benchmark large-scale ML on HPC systems. His research interests are in scalable machine learning, AI accelerators, AI for science, and emerging HPC architectures. His current work includes:

- Developing performance models to identify and address bottlenecks while scaling machine learning and deep learning frameworks on emerging supercomputers for scientific applications.

- Co-designing emerging hardware architectures to scale up machine learning workloads.

- Benchmarking ML/DL frameworks and methods on HPC systems.


Author:

Nuwan Jayasena

Fellow
AMD

Nuwan Jayasena is a Fellow at AMD Research, and leads a team exploring hardware support, software enablement, and application adaptation for processing in memory. His broader interests include memory system architecture, accelerator-based computing, and machine learning. Nuwan holds an M.S. and a Ph.D. in Electrical Engineering from Stanford University and a B.S. from the University of Southern California. He is an inventor of over 70 US patents, an author of over 30 peer-reviewed publications, and a Senior Member of the IEEE. Prior to AMD, Nuwan was a processor architect at Nvidia Corp. and at Stream Processors, Inc.


Author:

Simone Bertolazzi

Principal Analyst, Memory
Yole Group

Simone Bertolazzi, PhD, is a Senior Technology & Market Analyst, Memory, at Yole Intelligence, part of Yole Group, working in the Semiconductor, Memory & Computing division. As a member of Yole's memory team, he contributes on a day-to-day basis to the analysis of memory markets and technologies, their related materials, device architectures, and fabrication processes. Simone obtained a PhD in physics in 2015 from École Polytechnique Fédérale de Lausanne (Switzerland) and a double M.A.Sc. degree from Polytechnique de Montréal (Canada) and Politecnico di Milano (Italy), graduating cum laude.


Oracle AI Vector Search enables enterprises to leverage their own business data to build cutting-edge generative AI solutions. AI vectors are data structures that encode the key features, or essence, of unstructured entities such as images or documents. The more similar two entities are, the shorter the mathematical distance between their corresponding AI vectors. With AI Vector Search, Oracle Database is introducing a new vector datatype, new vector indexes (in-memory neighbor graph indexes and neighbor partitioned indexes), and new vector SQL operators for highly efficient and powerful similarity search queries. Oracle AI Vector Search enables applications to combine their business data with large language models (LLMs) using a technique called Retrieval-Augmented Generation (RAG) to deliver remarkably accurate responses to natural language questions. With AI Vector Search in Oracle Database, users can easily build AI applications that combine relational search with similarity search, without moving data to a separate vector database and without any loss of security, data integrity, consistency, or performance. At the heart of this are the hardware and system requirements needed to facilitate scale-up and scale-out AI vector search.
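The distance intuition behind similarity search can be sketched in a few lines of plain Python. This is a toy NumPy illustration, not Oracle's implementation; the documents and embedding values below are made up for the example:

```python
import numpy as np

# Hypothetical embeddings: each row encodes the "essence" of one document.
docs = ["quarterly sales report", "annual revenue summary", "cafeteria menu"]
vectors = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.1, 0.9],
])

def nearest(query_vec, vectors, k=2):
    # Euclidean distance: more similar entities have shorter distances.
    dists = np.linalg.norm(vectors - query_vec, axis=1)
    return np.argsort(dists)[:k]

# Hypothetical embedding of the query "finance report".
query = np.array([0.88, 0.12, 0.02])
for idx in nearest(query, vectors):
    print(docs[idx])  # the two finance documents rank ahead of the menu
```

A production system replaces the brute-force distance scan with an approximate index (such as a neighbor graph), but the ranking principle is the same.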

Emerging Memory Innovations
Systems Infrastructure/Architecture

Author:

Tirthankar Lahiri

SVP, Data & In-Memory Technologies
Oracle

Tirthankar Lahiri is Vice President of the Data and In-Memory Technologies group for Oracle Database and is responsible for the Oracle Database Engine (including Database In-Memory, Data and Indexes, Space Management, Transactions, and the Database File System), the Oracle TimesTen In-Memory Database, and Oracle NoSQL Database. Tirthankar has 22 years of experience in the database industry and has worked extensively in a variety of areas including manageability, performance, scalability, high availability, caching, distributed concurrency control, in-memory data management, and NoSQL architectures. He has 27 issued patents and several pending in these areas. Tirthankar has a B.Tech in Computer Science from the Indian Institute of Technology (Kharagpur) and an MS in Electrical Engineering from Stanford University.


Data Movement/Demands
Systems Infrastructure/Architecture
AI/ML Compute
Moderator

Author:

Mahesh Wagh

Senior Fellow & Server System Architect
AMD

Mahesh Wagh is an AMD Senior Fellow and Server System Architect in the AMD Datacenter System Architecture and Engineering team, developing world-class products and solutions around EPYC processors.

Prior to joining AMD, Mahesh was a Senior Principal Engineer at Intel Corporation, focusing on IO and SoC architecture and related technology developments. He has broad experience in chipset and IO architecture, design, and validation on both server and client platforms.

Some of Mahesh's significant achievements include enhancements to the PCI Express architecture and specification, leading CPU IO domain architecture and IO IP architecture and interfaces, and leading AMD's Compute Express Link (CXL) efforts.


Speakers

Author:

Paul Crumley

Senior Technical Staff Member
IBM Research

Paul G Crumley, a Senior Technical Staff Member at IBM Research, enjoys creating systems to solve problems beyond the reach of current technology.

 

Paul’s current project integrates secure, compliant AI capabilities with enterprise Hybrid Cloud allowing clients to extract new business value from their data.

 

Paul’s previous work includes the design and construction of distributed and high-performance computing systems at CMU, Transarc, and IBM Research. Projects include the Andrew Project at CMU, ASCI White, IBM Global Storage Architecture, Blue Gene supercomputers, IBM Cloud, and IBM Cognitive Systems. Paul has managed data centers, and he brings first-hand knowledge of these environments, combined with experience in automation and robustness, to the design of AI for Hybrid Cloud infrastructure.


Author:

Debendra Das Sharma

TTF Co-Chair: CXL Consortium & Senior Fellow: Intel
CXL Consortium

Debendra Das Sharma (Senior Member, IEEE) was born in Odisha, India, in 1967. He received the B.Tech. degree (Hons.) in computer science and engineering from IIT Kharagpur, Kharagpur, India, in 1989, and the Ph.D. degree in computer systems engineering from the University of Massachusetts, Amherst, MA, USA, in 1995. He joined Hewlett-Packard, Roseville, CA, USA, in 1994, and Intel, Santa Clara, CA, USA, in 2001. He is currently a Senior Fellow with Intel, responsible for delivering Intel-wide critical interconnect technologies in Peripheral Component Interconnect Express (PCI Express), Compute Express Link (CXL), Universal Chiplet Interconnect Express (UCIe), coherency interconnect, multi-chip package interconnect, and rack-scale architecture. He has been leading the development of PCI Express, CXL, and UCIe inside Intel and across the industry since their inception. He holds 160+ U.S. patents and more than 400 patents worldwide. Dr. Das Sharma was awarded the Distinguished Alumnus Award by IIT in 2019, the 2021 IEEE Region 6 Engineer of the Year Award, the PCI-SIG Lifetime Contribution Award in 2022, and the 2022 IEEE CAS Industrial Pioneer Award. He is currently the Chair of the UCIe Board, a Director of the PCI-SIG Board, and the Chair of the CXL Board.


Author:

Manoj Wadekar

AI Systems Technologist
Meta


Compute performance demand has been growing exponentially in recent years, and with the advent of generative AI, it is growing even faster. The end of Moore's Law and the Memory Wall (bandwidth and capacity) are the main performance bottlenecks. The chiplet system-in-package (SiP) is the industry's solution to these bottlenecks. Silicon interposers are the industry's main technology for connecting chiplets in SiPs, but they introduce several new bottlenecks of their own. The largest interposer going to production is 2700 mm², roughly a quarter the size of the largest standard package substrate; a SiP built on a silicon interposer can therefore hold only a limited number of compute and memory chiplets, which limits performance.
This presentation introduces the Universal Memory Interface (UMI), a high-bandwidth, efficient die-to-die (D2D) connectivity technology between compute and memory chiplets. A UMI PHY on standard packaging provides bandwidth and power comparable to D2D PHYs on silicon interposers, enabling the large, powerful SiPs required for generative AI applications.

Emerging Memory Innovations
Interconnects

Author:

Ramin Farjadrad

Co-Founder & CEO
Eliyan

Ramin Farjadrad is the inventor of over 130 granted and pending patents in communications and networking. He has a successful track record of creating differentiating connectivity technologies adopted by the industry as international standards (two Ethernet standards at IEEE, one chiplet connectivity standard at OCP). Ramin co-founded Velio Communications, which led to a Rambus/LSI Logic acquisition, and Aquantia, which went public and was acquired by Marvell Technologies. Ramin holds a Ph.D. in Electrical Engineering from Stanford.


Shell Upstream has been processing large subsurface datasets for multiple decades, driving significant business value. Many of the state-of-the-art algorithms for this work have been developed using deep domain knowledge and have benefitted from hardware technology improvements over the years. However, the demand for more efficient processing is ever-growing as datasets get bigger and algorithms become even more complex. This talk will focus on the memory and data management challenges of a variety of traditional HPC workflows in the energy industry. It will also cover the unique challenges of accelerating modern AI-based workflows, which require new innovations.

AI/ML Compute
Enterprise Workloads
HPC

Author:

Dr. Vibhor Aggarwal

Manager: Digital & Scientific HPC
Shell

Vibhor is an R&D leader with 14 years of experience and expertise in HPC software, scientific visualization, cloud computing, and AI technologies. He and his team at Shell currently work on optimizing HPC software for simulation, large-scale and generative AI, combinations of physics and AI models, and platforms and products for HPC-AI solutions, as well as emerging HPC areas for the energy transition, at the forefront of digital innovation. He has two patents and several research publications. Vibhor has a BEng in Computer Engineering from the University of Delhi and a PhD in Engineering from the University of Warwick.


A set of challenges emanates from memory issues in enterprise GenAI deployments:
• Poor tooling for performance issues arising from the interconnectedness of GPUs and memory
• Latency issues resulting from data movement and poor memory capacity planning
• AI training failures under tight memory constraints

Memory, a foundational piece of GenAI infrastructure, suffers from both opacity and immature tooling. AI teams experience this when they need to drill down into the infrastructure and improve these foundations to deploy AI at scale.
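One concrete slice of the capacity-planning problem is estimating how much accelerator memory a training run will need before it launches. The sketch below is a hypothetical back-of-the-envelope illustration, not a substitute for real profiling: it assumes fp16 weights and gradients plus fp32 Adam optimizer state and master weights (a common ~16 bytes-per-parameter rule of thumb) and ignores activation memory, which can dominate at large batch sizes:

```python
def training_memory_gib(params_billion, bytes_per_param=16):
    """Rough training footprint per parameter: fp16 weights (2 B)
    + fp16 gradients (2 B) + fp32 Adam moments (8 B)
    + fp32 master weights (4 B) = 16 B. Activations excluded."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# A hypothetical 7B-parameter model needs ~104 GiB of training state
# alone, already beyond a single 80 GB accelerator without sharding.
for size in (1, 7, 70):
    print(f"{size}B params -> {training_memory_gib(size):.0f} GiB")
```

Even this crude arithmetic makes the failure mode above concrete: capacity plans that only count the weights (2 bytes per parameter) underestimate the training footprint by roughly 8x.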

Data Movement/Demands
AI/ML Compute
Enterprise Workloads

Author:

Rodrigo Madanes

Global AI Innovation Officer
EY

Rodrigo Madanes is EY's Global AI Innovation Leader. Rodrigo has a computer science degree from MIT and a PhD from UC Berkeley. Testament to his technical expertise includes three patents and novel AI products created at both the MIT Media Lab and Apple's Advanced Technologies Group.

Prior to EY, Rodrigo ran the European business incubator at eBay which launched new ventures including eBay Hire. At Skype, he was the C-suite executive leading product design globally during its hyper-growth phase, where the team scaled the userbase, revenue, and profits 100% YoY for 3 consecutive years.


Author:

Angela Yeung

VP of Product Management
Cerebras Systems

Market Analysis
Hyperscaler
Emerging Memory Innovations
Moderator

Author:

Jim Handy

General Director
Objective Analysis

Jim Handy of Objective Analysis has over 35 years in the electronics industry including 20 years as a leading semiconductor and SSD industry analyst. Early in his career he held marketing and design positions at leading semiconductor suppliers including Intel, National Semiconductor, and Infineon. A frequent presenter at trade shows, Mr. Handy is highly respected for his technical depth, accurate forecasts, widespread industry presence and volume of publication. He has written hundreds of market reports, articles for trade journals, and white papers, and is frequently interviewed and quoted in the electronics trade press and other media.


Speakers

Author:

Siddarth Krishnan

MD, Engineering Management
Applied Materials

Siddarth Krishnan is a Managing Director at Applied Materials, with an R&D focus on materials engineering for heterogeneous integration, power devices, and alternative memories (ReRAM, FeRAM, etc.). In this role, Siddarth and his team research ways of building modules that help connect memory chips (such as high-bandwidth memory) with logic chips and chips with other functionality, using 2D, 2.5D, and 3D integration. Prior to working on heterogeneous integration, Siddarth worked on various other materials engineering areas, such as MicroLED and analog in-memory compute. Previously, Siddarth was an engineering manager at IBM, working on High-K/Metal Gate and FinFET devices.


Author:

John Overton

CEO
Kove

John Overton is the CEO of Kove IO, Inc. In the late 1980s, while at the Open Software Foundation, Dr. Overton wrote software that went on to be used by approximately two thirds of the world’s workstation market. In the 1990s, he co-invented and patented technology utilizing distributed hash tables for locality management, now widely used in storage, database, and numerous other markets. In the 2000s, he led development of the first truly capable Software-Defined Memory offering, Kove:SDM™. Kove:SDM™ enables new Artificial Intelligence and Machine Learning capabilities, while also reducing power by up to 50%. Dr. Overton has more than 65 issued patents world-wide and has peer-reviewed publications across numerous academic disciplines. He holds post-graduate and doctoral degrees from Harvard and the University of Chicago.


Author:

Brett Dodds

Senior Director, Azure Memory Devices
Microsoft


Author:

David McIntyre

Director, Product Planning: Samsung & Board Member: SNIA
SNIA


As the cost of sequencing drops and the quantity of data produced by sequencing grows, the amount of processing dedicated to genomics is increasing at a rapid pace. Genomics is evolving in a number of directions simultaneously. Complex pipelines are written to be portable to either clusters or clouds. Key kernels are also being ported to GPUs as drop-in replacements for their non-accelerated counterparts. These techniques help address the challenges of scaling up genomics computations and porting validated pipelines to new systems. However, all of these computations strain the bandwidth and capacity of available resources. In this talk, Roche's Tom Sheffler will share an overview of the memory-bound challenges present in genomics and venture some possible solutions.

Emerging Memory Innovations
AI/ML Compute
Systems Infrastructure/Architecture

Author:

Tom Sheffler

Solution Architect, Next Generation Sequencing
Former Roche

Tom earned his PhD from Carnegie Mellon in Computer Engineering with a focus on parallel computing architectures and programming models. His interest in high-performance computing took him to NASA Ames, and then to Rambus, where he worked on accelerated memory interfaces for providing high bandwidth. Following that, he co-founded the cloud video analytics company Sensr.net, which applied scalable cloud computing to analyzing large streams of video data. He later joined Roche to work on next-generation sequencing and scalable genomics analysis platforms. Throughout his career, Tom has focused on the application of high-performance computer systems to real-world problems.


 

Kim Swanson

Head of Commercial
Dennemeyer
