"I will absolutely recommend that my colleagues attend in the coming years. I had a lot of questions on the application and deployment of AI before I came; now they have all been answered. There were many great speakers, and many of the presentations offered very insightful information that I can take back to my work."
Sr. Director of Engineering, Oshkosh Corporation
REGISTER YOUR PLACE HERE
Filter by:
Check out the first release agenda below. Look out for new speakers and session summaries being added soon…
Use the top filter to find the sessions most suitable to your role.
September 9: Efficient Generative AI Summit (ticket upgrade required) - Jump to day here
September 10: Jump to day here
- Opening Keynotes (full conference ticket required)
- Expo Opens
- Closing Keynotes (full conference ticket required)
- Opening Gala
September 11: Jump to day here
- Opening Keynotes (full conference ticket required)
- Expo Opens and Workshops
September 12: Jump to day here
- Keynotes and Tracked Sessions (full conference ticket required)
Neeraj Kumar
As the Chief Data Scientist at Pacific Northwest National Laboratory (PNNL), Neeraj leads a talented team of scientists and professionals in addressing critical challenges in energy, artificial intelligence, health, and biothreat sectors. With over 15 years of experience in quantitative research and data science, he specializes in developing innovative solutions and managing multidisciplinary teams focused on multimillion-dollar programs at the intersection of fundamental discovery and transformative AI-driven product development.
His expertise spans Applied Math, High-Performance Computing, Computational Chemistry and Biology, Health Science, and Medical Therapeutics, enabling him to guide his team in exploring new frontiers. He has a deep understanding and hands-on application of Generative AI, AI Safety and Trustworthiness, Natural Language Processing, Applied Mathematics, Software Engineering, Modeling and Simulations, Quantum Mechanics, Data Integration, Causal Inference/Reasoning, and Reinforcement Learning. These competencies are crucial in developing scalable AI/ML models and computing infrastructures that accelerate scientific discoveries, enhance computer-aided design, and refine autonomous decision-making.
Arun Nandi
Arun is a visionary AI and analytics expert recognized as one of the Top 100 Influential AI & Analytics leaders. He is currently the Head of Data & Analytics at Unilever. With over 15 years of experience driving analytics-driven value in organizations, he has built AI practices from the ground up on several occasions. Arun advocates the adoption of AI to overcome enterprise-wide challenges and create growth. Beyond his professional achievements, Arun loves to travel, having explored over 40 countries, and is passionate about adventure motorbiking.
Prasad Saripalli
Prasad Saripalli serves as a Distinguished Engineer at Capital One, a technology-driven bank on the Fortune 100 list that is redefining fintech and banking using data, technology, AI, and ML in unprecedented ways. Most recently, Prasad served as Vice President of AI/ML and Distinguished Engineer at MindBody Inc., a portfolio company of Vista, which manages the world's fourth-largest enterprise software portfolio after Microsoft, Oracle, and SAP. Earlier, he served as VP of Data Science at Edifecs, a premier healthcare information technology partnership platform and software provider, where he built the Smart Decisions ML & AI Platform with ML Apps front. Prior to this, Prasad served as CTO and VP of Engineering at Secrata.com, a provider of military-grade security and privacy solutions developed and deployed over the past 15 years at Topia Technology for the federal government and the enterprise, and as CTO & EVP at ClipCard, a SaaS-based hierarchical analytics and visualization platform.
At IBM, Prasad served as the Chief Architect for IBM's SmartCloud Enterprise (http://www.ibm.com/cloud-computing/us/en/). At Runaware, he served as the Vice President of Product Development. As a Principal Group Manager at Microsoft, Prasad co-led the development of the virtualization stack on Windows 7, responsible for shipping Virtual PC 7 and Windows XP Mode on Windows 7.
Prasad teaches Machine Learning, AI, NLP, Distributed Systems, Cloud Engineering and Robotics at Northeastern University and the University of Washington Continuum College.
Dr. Walden “Wally” Rhines
Walden C. Rhines is President & CEO of Cornami. He is also CEO Emeritus of Mentor, a Siemens business, focusing on external communications and customer relations. He was previously CEO of Mentor Graphics for 23 years and Chairman of the Board for 17 years. During his tenure at Mentor, revenue nearly quadrupled and the company's market value increased tenfold.
Prior to joining Mentor Graphics, Dr. Rhines was Executive Vice President, Semiconductor Group, responsible for TI’s worldwide semiconductor business. During his 21 years at TI, he was President of the Data Systems Group and held numerous other semiconductor executive management positions.
Dr. Rhines has served on the boards of Cirrus Logic, QORVO, TriQuint Semiconductor, Global Logic and as Chairman of the Electronic Design Automation Consortium (five two-year terms) and is currently a director. He is also a board member of the Semiconductor Research Corporation and First Growth Children & Family Charities. He is a Lifetime Fellow of the IEEE and has served on the Board of Trustees of Lewis and Clark College, the National Advisory Board of the University of Michigan and Industrial Committees advising Stanford University and the University of Florida.
Dr. Rhines holds a Bachelor of Science degree in engineering from the University of Michigan, a Master of Science and a PhD in materials science and engineering from Stanford University, a Master of Business Administration from Southern Methodist University, and honorary Doctor of Technology degrees from the University of Florida and Nottingham Trent University.
John Almasan
Dr. John Almasan is an accomplished technology executive with over 20 years of experience leading global tech teams and building large-scale data, AI, and cloud platforms for prominent companies such as TIAA, McKinsey & Co., American Express, Bank of America, and Nationwide Insurance. With deep expertise in multi-cloud big data engineering, machine learning, and data science, John is a hands-on practitioner and passionate about enabling the acceleration of AI adoption.
As an adjunct professor at various universities and a member of Arizona State University's Executive Board of Advisors, John is committed to preparing the next generation to meet the skillset needs and demands of the future. He focuses on employee cross-training and actively engages in teaching and mentoring students in the field.
John holds two master's degrees, in Engineering and Statistics, and a Doctor of Business Administration, and has more than 20 patents credited to his name. He has received several awards throughout his career for his contributions to the technology industry.
Abhijeet Gulati
Abhijeet Gulati is the Head of AI & Senior Director of Engineering at Mitchell’s Auto Physical Damage (APD) business unit. Abhijeet is an accomplished technologist with over two decades of experience across the semiconductor, wireless, software, and technology industries, focusing on AI, Machine Learning, NLP, Generative AI, and SaaS solutions. He is a driven Artificial Intelligence, Advanced Analytics, and business intelligence leader.
As the Head of AI at Mitchell International, Abhijeet has spent the past five years reducing InsureTech inefficiencies, minimizing decision biases, and developing proprietary, differentiated enterprise-scale AI products and an intelligent open platform that democratizes the adoption of AI in enterprise business workflows. Abhijeet has extensive experience directing large-scale initiatives involving R&D, business and product strategy, operations, and advanced video, image, and data analytics. He has authored several patents on the application of AI in the InsureTech industry, and he sits on the boards of several AI standards and Ethical & Responsible AI committees.
Krishna Rangasayee
Krishna Rangasayee is Founder and CEO of SiMa.ai. Previously, Krishna was COO of Groq and at Xilinx for 18 years, where he held multiple senior leadership roles including Senior Vice President and GM of the overall business, and Executive Vice President of global sales. While at Xilinx, Krishna grew the business to $2.5B in revenue at 70% gross margin while creating the foundation for 10+ quarters of sustained sequential growth and market share expansion. Prior to Xilinx, he held various engineering and business roles at Altera Corporation and Cypress Semiconductor. He holds 25+ international patents and has served on the board of directors of public and private companies.
SiMa
Website: https://sima.ai/
SiMa.ai is the software-centric, embedded edge machine learning system-on-chip (MLSoC) company. SiMa.ai delivers one platform for all edge AI that flexibly adjusts to any framework, network, model, sensor, or modality. Edge ML applications that run completely on the SiMa.ai MLSoC see a tenfold increase in performance and energy efficiency, bringing higher fidelity intelligence to ML use cases spanning computer vision to generative AI, in minutes. With SiMa.ai, customers unlock new paths to revenue and significant cost savings to innovate at the edge across industrial manufacturing, retail, aerospace, defense, agriculture, and healthcare.
Rashmi Gopinath
Rashmi Gopinath is a General Partner at B Capital Group, where she leads the fund’s enterprise software practice across cloud infrastructure, cybersecurity, DevOps, and AI/ML. She brings over two decades of experience investing and operating in cutting-edge enterprise technologies. She has led B Capital’s investments in over 24 companies, including DataRobot, FalconX, Clari, Phenom People, Synack, Innovaccer, Labelbox, Fabric, 6Sense, Highspot, Pendo, Starburst, OwnBackup, Figment, Perimeter81, and Zesty.
Rashmi was previously a Managing Director at M12, Microsoft’s venture fund, where she led investments globally in enterprise software and sat on several boards including Synack, Innovaccer, Contrast Security, Frame, UnravelData, Incorta, among others.
Prior to M12, Rashmi was an Investment Director with Intel Capital where she was involved in the firm’s investments in startups including MongoDB (Nasdaq: MDB), ForeScout (Nasdaq: FSCT), Maginatics (acq. by EMC), BlueData (acq. by HPE), among others. Rashmi held operating roles at high-growth startups such as BlueData (acq. by HPE) and Couchbase (Nasdaq: BASE) where she led global business development, product and marketing roles. She began her career in engineering and product roles at Oracle and GE Healthcare. She earned an M.B.A. from Northwestern University, and a B.S. in Electrical Engineering from University of Mumbai in India.
Gayathri Radhakrishnan
Gayathri is currently a Partner at Hitachi Ventures. Prior to that, she was with Micron Ventures, actively investing in startups that apply AI to solve critical problems in manufacturing, healthcare, and automotive. She brings over 20 years of multi-disciplinary experience across product management, product marketing, corporate strategy, M&A, and venture investments, both in large Fortune 500 companies such as Dell and Corning and in startups. She has also worked as an early-stage investor at Earlybird Venture Capital, a premier European venture capital fund based in Germany. She holds a Master's in EE from The Ohio State University and an MBA from INSEAD in France. She is also a Kauffman Fellow, Class 16.
Partha Ranganathan
Parthasarathy (Partha) Ranganathan is currently a VP and Engineering Fellow at Google, where he is the area technical lead for hardware and datacenters, designing systems at scale. Prior to this, he was an HP Fellow and Chief Technologist at Hewlett Packard Labs, where he led research on systems and data centers. Partha has worked on several interdisciplinary systems projects with broad impact on both academia and industry, including widely used innovations in energy-aware user interfaces, heterogeneous multi-cores, power-efficient servers, accelerators, and disaggregated and data-centric data centers. He has published extensively (including co-authoring the popular "Datacenter as a Computer" textbook), is a co-inventor on more than 100 patents, and has been recognized with numerous awards. He has been named a top-15 enterprise technology rock star by Business Insider and one of the top 35 young innovators in the world by MIT Tech Review, and is a recipient of the ACM SIGARCH Maurice Wilkes Award, Rice University's Outstanding Young Engineering Alumni Award, and the IIT Madras Distinguished Alumni Award. He is one of the few computer scientists to have their work recognized with an Emmy Award. He is also a Fellow of the IEEE and ACM and has served on the board of directors of OpenCompute.
Jia Li
Jia is Co-founder, Chief AI Officer, and President of a stealth generative AI startup. She was elected an IEEE Fellow for leadership in large-scale AI. She is co-teaching the inaugural course on Generative AI and Medicine at Stanford University, where she has previously served in multiple roles, including on the Advisory Board Committee to Nourish, as Chief AI Fellow, on RWE for Sleep Health, and as Adjunct Professor at the School of Medicine. She was the Founding Head of R&D at Google Cloud AI, where she oversaw the development of the full stack of AI products on Google Cloud to power solutions for diverse industries. With a passion for making a greater impact on everyday life, she later became an entrepreneur, building and advising companies with award-winning platforms that tackle today's greatest challenges. She has served as Mentor and Professor-in-Residence at StartX, advising founders and companies from the Stanford community and its alumni. She is the Co-founder and Chairperson of HealthUnity Corporation, a 501(c)(3) nonprofit organization, and served briefly at Accenture as a part-time Chief AI Fellow for generative AI strategy. She also serves as an advisor to the United Nations Children's Fund (UNICEF) and is a board member of the Children's Discovery Museum of San Jose. In 2018, she was selected as a World Economic Forum Young Global Leader, a recognition bestowed on 100 of the world’s most promising business leaders, artists, public servants, technologists, and social entrepreneurs. Before joining Google, she was the Head of Research at Snap, leading the AI/AR innovation effort. She received her Ph.D. from the Computer Science Department at Stanford University.
Thomas Sohmers
Thomas Sohmers is an innovative technologist and entrepreneur, renowned for his pioneering work in the field of advanced computing and artificial intelligence. Thomas began programming at a very early age, which led him to MIT as a high school student where he worked on cutting-edge research. By the age of 18, he had become a Thiel Fellow, marking the beginning of his remarkable journey in technology and innovation. In 2013, Thomas founded Rex Computing, where he designed energy-efficient processors for high-performance computing applications. His groundbreaking work earned him numerous accolades, including a feature in Forbes' 30 Under 30. After a stint exploring the AI industry, working on scaling out GPU clouds and large language models, Thomas founded and became CEO of Positron in 2023. Positron develops highly efficient transformer inferencing systems, and under Thomas's leadership, it has quickly become one of the most creative and promising startups in the AI industry.
Positron AI
Website: https://www.positron.ai/
Positron delivers vendor freedom and faster inference for both enterprises and research teams, by allowing them to use hardware and software explicitly designed from the ground up for generative and large language models (LLMs).
Through lower power usage and drastically lower total cost of ownership (TCO), Positron enables you to run popular open source LLMs to serve multiple users at high token rates and long context lengths. Positron is also designing its own ASIC to expand from inference and fine tuning to also support training and other parallel compute workloads.
The exponential growth in the compute demands of AI, and the move to software-defined products, means that workloads are defining semiconductor requirements more than ever. The need to hit the restrictive power, performance, area, and cost constraints of edge designs means that every element of the design must be optimized and co-designed with the workloads in mind. Additionally, the design needs to evolve even after the semiconductor design is complete, as it adapts to new demands.
In this presentation, we will look at how semiconductor design is changing to enable rapid development and deployment of custom, application-optimized system-on-chip designs, from concept through to in-life operation, as we chart the path to a sustainable compute future.
Ankur Gupta
Ankur Gupta is vice president and general manager of Tessent Silicon Lifecycle Solutions business unit for Siemens EDA. He and his global team are responsible for developing and marketing best-in-class DFT and Lifecycle solutions for the Semiconductor industry.
Ankur brings 20+ years of experience in digital design, implementation, and signoff. While at Cadence, he oversaw the first five deployments of Innovus to the market. Later, at Ansys, he oversaw the launch and deployment of RedHawk-SC, a market leader in power-grid signoff.
Lip-Bu Tan
Lip-Bu Tan is Founder and Chairman of Walden International (“WI”), and Founding Managing Partner of Celesta Capital and Walden Catalyst Ventures, with over $5 billion under management. He formerly served as Chief Executive Officer and Executive Chairman of Cadence Design Systems, Inc. He currently serves on the Board of Schneider Electric SE (SU: FP), Intel Corporation (NASDAQ: INTC), and Credo Semiconductor (NASDAQ: CRDO).
Lip-Bu focuses on semiconductor/components, cloud/edge infrastructure, data management and security, and AI/machine learning.
Lip-Bu received his B.S. from Nanyang University in Singapore, his M.S. in Nuclear Engineering from the Massachusetts Institute of Technology, and his MBA from the University of San Francisco, which also awarded him an honorary Doctor of Humane Letters degree. Lip-Bu currently serves on Carnegie Mellon University (CMU)’s Board of Trustees and the School of Engineering Dean’s Council, Massachusetts Institute of Technology (MIT)’s School of Engineering Dean’s Advisory Council, University of California Berkeley (UCB)’s College of Engineering Advisory Board and its Computing, Data Science, and Society Advisory Board, and University of California San Francisco (UCSF)’s Executive Council. He is also a member of the Global Advisory Board of METI Japan, The Business Council, and the Committee of 100. He served on the board of the Global Semiconductor Alliance (GSA) from 2009 to 2021 and as a Trustee of Nanyang Technological University (NTU) in Singapore from 2006 to 2011. Lip-Bu has been named one of the Top 10 Venture Capitalists in China by Zero2IPO and was listed among the Top 50 Venture Capitalists on the Forbes Midas List. He is the recipient of imec’s 2023 Lifetime of Innovation Award, the Semiconductor Industry Association (SIA) 2022 Robert N. Noyce Award, and GSA’s 2016 Dr. Morris Chang Exemplary Leadership Award. In 2017, he was ranked #1 among the most well-connected executives in the technology industry by the analytics firm Relationship Science.
Kunle Olukotun
Kunle Olukotun is the Cadence Design Professor of Electrical Engineering and Computer Science at Stanford University. Olukotun is a renowned pioneer in multi-core processor design and the leader of the Stanford Hydra chip multiprocessor (CMP) research project.
Prior to SambaNova Systems, Olukotun founded Afara Websystems to develop high-throughput, low-power multi-core processors for server systems. Afara was acquired by Sun Microsystems, and its multi-core processor, called Niagara, now powers Oracle’s SPARC-based servers.
Olukotun is the Director of the Pervasive Parallel Lab and a member of the Data Analytics for What’s Next (DAWN) Lab, developing infrastructure for usable machine learning.
Olukotun is an ACM Fellow and IEEE Fellow for contributions to multiprocessors on a chip and multi-threaded processor design. Olukotun recently won the prestigious IEEE Computer Society’s Harry H. Goode Memorial Award and was also elected to the National Academy of Engineering—one of the highest professional distinctions accorded to an engineer.
Kunle received his Ph.D. in Computer Engineering from The University of Michigan.
SambaNova
Website: https://www.sambanovasystems.com/
We are a computing startup focused on building the industry’s most advanced systems platform to run AI applications from the datacenter to the edge.
Daniel Wu
Daniel Wu is a technical leader who brings more than 20 years of expertise in software engineering, AI/ML, and high-impact team development. He is the Head of Commercial Banking AI and Machine Learning at JPMorgan Chase where he drives financial service transformation through AI innovation. His diverse professional background also includes building point of care expert systems for physicians to improve quality of care, co-founding an online personal finance marketplace, and building an online real estate brokerage platform.
Daniel is passionate about the democratization of technology and the ethical use of AI - a philosophy he shares in the computer science and AI/ML education programs he has contributed to over the years.
Karl Freund
Karl Freund is the founder and principal analyst of Cambrian AI Research. Prior to this, he was Moor Insights & Strategy’s consulting lead for HPC and Deep Learning. His recent experience as VP of Marketing at AMD and Calxeda, along with his previous positions at Cray and IBM, positions him as a leading industry expert in these rapidly evolving fields. Karl works with investment and technology customers to help them understand the emerging deep learning opportunity in data centers, from the competitive landscape to the ecosystem to strategy.
Karl has worked directly with datacenter end users, OEMs, ODMs, and the industry ecosystem, enabling him to help his clients define the appropriate business, product, and go-to-market strategies. He is also a recognized expert on low-power servers and the emergence of ARM in the datacenter, and he has been a featured speaker at scores of investment and industry conferences on this topic.
Accomplishments during his career include:
- Led the revived HPC initiative at AMD, targeting APUs at deep learning and other HPC workloads
- Created an industry-wide thought leadership position for Calxeda in the ARM Server market
- Helped forge the early relationship between HP and Calxeda leading to the surprise announcement of HP Moonshot with Calxeda in 2011
- Built the IBM Power Server brand from 14% market share to over 50% share
- Integrated the Tivoli brand into the IBM company’s branding and marketing organization
- Co-led the integration of HP and Apollo marketing after the Boston-based desktop company’s acquisition
Karl’s background includes RISC and Mainframe servers, as well as HPC (Supercomputing). He has extensive experience as a global marketing executive at IBM where he was VP Marketing (2000-2010), Cray where he was VP Marketing (1995-1998), and HP where he was a Division Marketing Manager (1979-1995).
Paolo Faraboschi
Paolo Faraboschi leads research in the Systems Research Lab at HP Labs. His technical interests lie at the intersection of hardware and software and include low power servers and systems-on-a-chip, workload-optimized, highly-parallel and distributed systems, ILP and VLIW processor architectures, compilers, and embedded systems. Faraboschi’s current research focuses on next-generation data-centric systems. His work on system-level integration for low energy servers and scale-out architectures is a key element of the HP Moonshot System, HP’s new class of software-defined servers built to address the energy efficiency challenges of hyperscale datacenters.
Previously, Faraboschi led HP Labs research in system-level modeling and simulation, an effort that resulted in the COTSon open-source simulation platform. He is also the founder of HP’s Barcelona Research Office, which pioneered research in content-processing systems. Before that, Faraboschi was technical lead for the Custom-Fit Processors project at HP Labs, Cambridge (MA), building highly optimized, software-defined CPU cores. In that role, he was the principal architect of the instruction set architecture for the Lx/ST200 family of VLIW embedded processor cores (developed with STMicroelectronics), which have been used for over a decade in a variety of audio, video, and imaging consumer products, including HP's printers and scanners.
A regular keynote speaker at conferences and industry events, Faraboschi is an IEEE Fellow for "contributions to embedded processor architecture & system-on-chip technology." An active member of the computer architecture community, he serves regularly on IEEE program and organizational committees, was guest editor of the 2012 edition of IEEE Micro Top Picks, and is co-author (with Josh Fisher and Cliff Young) of the book “Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools.” A co-holder of 24 granted patents and several other patent applications, and co-author of over 65 scientific publications, Faraboschi received his M.S. and Ph.D. (Dottorato).
Albert Chen
Syona Sarma
Syona Sarma is the Senior Director and Head of Hardware Systems at Cloudflare, where she runs the engineering team that builds Cloudflare's infrastructure. Since joining Cloudflare in 2022, she has led the design of the next-generation servers that are foundational to all of Cloudflare's services, including compute and storage. More recently, she spearheaded the introduction of specialized accelerator designs at the edge, which enabled the launch of Cloudflare's inference-as-a-service product and is integral to its rapidly expanding suite of AI product offerings. Before coming to Cloudflare, Syona was at Intel, where she started her career in CPU design and held several roles across hardware, product, and business development in cloud computing.
She holds a Master's in Electrical and Computer Engineering from the University of Texas at Austin and a business degree from the University of Washington.
Nitza Basoco
Donald Thompson
Donald is currently a Distinguished Engineer at LinkedIn, primarily overseeing the company's generative AI strategy, architecture, and technology. He has more than 35 years of hands-on experience as a technical architect and CTO, with an extensive background in designing and delivering innovative software and services on a large scale. In 2013, Donald co-founded Maana, which pioneered computational knowledge graphs and visual no-code/low-code authoring environments to address complex AI-based digital transformation challenges in Fortune 50 companies. During his 15 years at Microsoft, Donald started the Knowledge and Reasoning group within Microsoft's Bing division, where he innovated "Satori", an internet-scale knowledge graph constructed automatically from the web crawl. He co-founded a semantic computing incubation funded directly by Bill Gates, portions of which shipped as the SQL Server Semantic Engine. Additionally, he created Microsoft's first internet display ad delivery system and led numerous AI/ML initiatives in Microsoft Research across embedded systems, robotics, wearable computing, and privacy-preserving personal data services.
Jay Dawani
Jay Dawani is co-founder & CEO of Lemurian Labs, a startup at the forefront of general-purpose accelerated computing, on a mission to make AI development affordable and broadly available so that all companies and people can benefit equally. Author of the influential book "Mathematics for Deep Learning," he has held leadership positions at companies such as BlocPlay and Geometric Energy Corporation, spearheading projects involving quantum computing, the metaverse, blockchain, AI, space robotics, and more. Jay has also served as an advisor to the NASA Frontier Development Lab, SiaClassic, and many leading AI firms.
Lemurian Labs
Website: https://www.lemurianlabs.com/
At Lemurian Labs, our mission is to deliver affordable, accessible, and efficient AI computers because we believe in a world where AI isn't a luxury but a tool for everyone. Our founding team brings together expertise in AI, compilers, numerical algorithms, and computer architecture to reimagine accelerated computing. Our approach makes it possible for organizations of any size to equally benefit from the transformative potential of AI. Lemurian Labs is located in Menlo Park and Toronto.
June Paik
June Paik is the founder and CEO of FuriosaAI, a company specializing in AI accelerator design for data centers, with operations in Santa Clara and Seoul. With over 20 years of experience in both academia and the industry, June worked on the design and engineering of CPUs, GPUs, and memory systems at AMD and Samsung. He holds a master’s degree in Electrical Engineering from the Georgia Institute of Technology. At FuriosaAI, June leads the company in innovating the core AI accelerator product and technology, aiming to enhance data center performance and efficiency.
FuriosaAI
Website: https://furiosa.ai/
FuriosaAI, founded in 2017, specializes in high-performance data center AI chips targeting the most capable AI models and applications. Its Gen 1 product, WARBOY (Samsung 14nm), targeting advanced computer vision applications, has successfully entered volume production and is now deployed in public clouds and on-prem data centers. Its Gen 2 product, RNGD (TSMC 5nm; pronounced like "Renegade"), equipped with HBM3, is set to launch this year to address the growing demand for more energy-efficient and powerful computing for LLM and multimodal deployment. More information can be found on the official website.
Chris Jones
Chris Jones, Director of Product Management at BrainChip, brings over 25 years of experience in technology solutions. He began his career at AT&T in data center solutions, later co-founding a pioneering video streaming platform. His entrepreneurial journey includes ventures in manufacturing/ecommerce, AI, SaaS, and robotics. Recently, he led Software Development at 3DEO and served as a Sr. Product Manager at Meta/Facebook, focusing on ML platform development. Chris holds dual BAs in Psychology and Computer Science from Thomas Edison State University, an MBA from Georgia Tech, and a Master of Computer Science from the University of Illinois at Urbana-Champaign.
BrainChip
Website: https://brainchip.com/
BrainChip is a leader in edge AI on-chip processing and learning. The company’s first-to-market convolutional, neuromorphic processor, Akida™, mimics the event-based processing of the human brain in digital technology to classify sensor data at the point of acquisition, processing data with unparalleled energy efficiency and high precision, independent of the CPU or MCU. On-device learning local to the chip, without the need to access the cloud, dramatically reduces latency while improving privacy and data security. In enabling effective edge computing to be universally deployable across real-world applications, such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI is the future for customers’ products, the planet, and beyond.
Matthew Burns
Matthew Burns develops go-to-market strategies for Samtec’s Silicon-to-Silicon solutions. Over the course of 20+ years, he has been a leader in design, applications engineering, technical sales and marketing in the telecommunications, medical and electronic components industries. Mr. Burns holds a B.S. in Electrical Engineering from Penn State University.
Samtec
Website: http://www.samtec.com/AI
Founded in 1976, Samtec is a privately held, $822 MM global manufacturer of a broad line of electronic interconnect solutions, including High-Speed Board-to-Board, High-Speed Cables, Mid-Board and Panel Optics, Precision RF, Flexible Stacking, and Micro/Rugged components and cables. With 40+ locations serving approximately 125 countries, Samtec’s global presence enables its unmatched customer service.
Prasad Saripalli
Prasad Saripalli serves as a Distinguished Engineer at Capital One, a technology-driven bank on the Fortune 100 list that is redefining fintech and banking using data, technology, AI, and ML in unprecedented ways. Most recently, Prasad served as Vice President of AI/ML and Distinguished Engineer at MindBody Inc., a portfolio company of Vista, which manages the world's fourth-largest enterprise software portfolio after Microsoft, Oracle, and SAP. Earlier, he served as VP of Data Science at Edifecs, a premier healthcare information technology partnership platform and software provider, building the Smart Decisions ML & AI Platform and its ML apps front end. Prior to this, Prasad served as CTO and VP of Engineering at Secrata.com, a provider of military-grade security and privacy solutions developed and deployed over the past 15 years at Topia Technology for the Federal Government and the enterprise, and as CTO & EVP at ClipCard, a SaaS-based hierarchical analytics and visualization platform.
At IBM, Prasad served as the Chief Architect for IBM's SmartCloud Enterprise (http://www.ibm.com/cloud-computing/us/en/). At Runaware, he served as the Vice President of Product Development. As a Principal Group Manager at Microsoft, Prasad co-led the development of virtualization stack on Windows 7 responsible for shipping Virtual PC7 and Windows XP Mode on Windows 7.
Prasad teaches Machine Learning, AI, NLP, Distributed Systems, Cloud Engineering and Robotics at Northeastern University and the University of Washington Continuum College.
Rochan Sankar
Rochan is Founder, President and CEO of Enfabrica. Prior to founding Enfabrica, he was Senior Director and leader of the Data Center Ethernet switch silicon business at Broadcom, where he defined and brought to market multiple generations of Tomahawk/Trident chips and helped build industry-wide ecosystems including 25G Ethernet and disaggregated whitebox networking.
Prior, he held roles in product management, chip architecture, and applications engineering across startup and public semiconductor companies. Rochan holds a B.A.Sc. in Electrical Engineering from the University of Toronto and an MBA from the Wharton School, and has 6 issued patents.
Rashmi Gopinath
Rashmi Gopinath is a General Partner at B Capital Group where she leads the fund’s enterprise software practice in cloud infrastructure, cybersecurity, devops, and AI/ML sectors. She brings over two decades of experience investing and operating in cutting-edge enterprise technologies. She led B Capital’s investments in over 24 companies such as DataRobot, FalconX, Clari, Phenom People, Synack, Innovaccer, Labelbox, Fabric, 6Sense, Highspot, Pendo, Starburst, OwnBackup, Figment, Perimeter81, Zesty, among others.
Rashmi was previously a Managing Director at M12, Microsoft’s venture fund, where she led investments globally in enterprise software and sat on several boards including Synack, Innovaccer, Contrast Security, Frame, UnravelData, Incorta, among others.
Prior to M12, Rashmi was an Investment Director with Intel Capital where she was involved in the firm’s investments in startups including MongoDB (Nasdaq: MDB), ForeScout (Nasdaq: FSCT), Maginatics (acq. by EMC), BlueData (acq. by HPE), among others. Rashmi held operating roles at high-growth startups such as BlueData (acq. by HPE) and Couchbase (Nasdaq: BASE) where she led global business development, product and marketing roles. She began her career in engineering and product roles at Oracle and GE Healthcare. She earned an M.B.A. from Northwestern University, and a B.S. in Electrical Engineering from University of Mumbai in India.
Gaia Bellone
Gaia is a dynamic and accomplished leader in the field of Data Science and Artificial Intelligence. In her current role at Prudential Financial, she leads Global Data and AI Governance and serves as Chief Data Officer (CDO) for Emerging Markets.
Her contributions to Prudential Financial have been significant and impactful. As the former Chief Data Scientist at Prudential, she led the Data Science team in creating innovative solutions for Digital, Marketing, Sales, and Distribution, the AI/ML Platform team, and the GenAI Enterprise Program. Her leadership and strategic vision have been instrumental in driving business growth and enhancing operational efficiency.
Prior to her tenure at Prudential, she held prominent positions at Key Bank and JPMorgan Chase. At Key Bank, she served as the Head of Data Science for the Community Bank. Her leadership and expertise in data science were crucial in optimizing the bank's operations and improving customer experience. At JPMorgan Chase, she led the data science teams for Home Lending and Auto Finance. Her strategic insights and data-driven solutions significantly improved the business performance in these sectors, contributing to the overall success of the enterprise.
Throughout her career, she has consistently demonstrated her ability to leverage data and AI to drive business growth and improve operational efficiency. Her contributions to the businesses and the enterprise have been substantial and transformative.
John Almasan
Dr. John Almasan is an accomplished technology executive with over 20 years of experience leading global tech teams and building large-scale data, AI, and cloud platforms for prominent companies such as TIAA, McKinsey & Co., American Express, Bank of America, and Nationwide Insurance. With deep expertise in multi-cloud big data engineering, machine learning, and data science, John is a hands-on practitioner and passionate about enabling the acceleration of AI adoption.
As an adjunct professor at various universities and a member of Arizona State University's Executive Board of Advisors, John is committed to preparing the next generation to meet the future's skillset needs and demands. He focuses on employee cross-training and actively engages in teaching and mentoring students in the field.
John holds two master's degrees, in Engineering and Statistics, and a Doctor of Business Administration, and has more than 20 patents credited to his name. He has received several awards throughout his career for his contributions to the technology industry.
Sree Ganesan
Sree Ganesan, VP of Product, d-Matrix: Sree is responsible for product management functions and business development efforts across the company. She manages the product lifecycle, definition and translation of customer needs to the product development function, acting as the voice of the customer. Prior, Sree led the Software Product Management effort at Habana Labs/Intel, delivering state-of-the-art deep learning capabilities of the Habana SynapseAI® software suite to the market. Previously, she was Engineering Director in Intel’s AI Products Group, where she was responsible for AI software strategy and deep learning framework integration for Nervana NNP AI accelerators. Sree earned a bachelor’s degree in electrical engineering from the Indian Institute of Technology Madras and a PhD in computer engineering from the University of Cincinnati, Ohio.
Manoj Wadekar
Steven Woo
I was drawn to Rambus to focus on cutting edge computing technologies. Throughout my 15+ year career, I’ve helped invent, create and develop means of driving and extending performance in both hardware and software solutions. At Rambus, we are solving challenges that are completely new to the industry and occur as a response to deployments that are highly sophisticated and advanced.
As an inventor, I find myself approaching a challenge like a room filled with 100,000 pieces of a puzzle where it is my job to figure out how they all go together – without knowing what it is supposed to look like in the end. For me, the job of finishing the puzzle is as enjoyable as the actual process of coming up with a new, innovative solution.
For example, RDRAM®, our first mainstream memory architecture, was implemented in hundreds of millions of consumer, computing, and networking products from leading electronics companies including Cisco, Dell, Hitachi, HP, and Intel. We did a lot of novel things that required inventiveness – we pushed the envelope and created state-of-the-art performance without making actual changes to the infrastructure.
I’m excited about the new opportunities as computing is becoming more and more pervasive in our everyday lives. With a world full of data, my job and my fellow inventors’ job will be to stay curious, maintain an inquisitive approach and create solutions that are technologically superior and that seamlessly intertwine with our daily lives.
After an inspiring work day at Rambus, I enjoy spending time with my family, being outdoors, swimming, and reading.
Education
- Ph.D., Electrical Engineering, Stanford University
- M.S. Electrical Engineering, Stanford University
- Master of Engineering, Harvey Mudd College
- B.S. Engineering, Harvey Mudd College
Taeksang Song
Taeksang is a Corporate VP at Samsung Electronics, where he leads a team dedicated to pioneering cutting-edge technologies including the CXL memory expander, fabric-attached memory solutions, and processing near memory to meet the evolving demands of next-generation data-centric AI architecture. He has almost 20 years of professional experience in memory and sub-system architecture, interconnect protocols, system-on-chip design, and collaborating with CSPs to enable heterogeneous computing infrastructure. Prior to joining Samsung Electronics, he worked at Rambus Inc., SK hynix, and Micron Technology in lead architect roles for emerging memory controllers and systems.
Taeksang received his Ph.D. from KAIST, South Korea, in 2006. Dr. Song has authored and co-authored over 20 technical papers and holds over 50 U.S. patents.
Rambus
Website: https://www.rambus.com/
Rambus is a provider of industry-leading chips and silicon IP making data faster and safer. With over 30 years of advanced semiconductor experience, we are a pioneer in high-performance memory subsystems that solve the bottleneck between memory and processing for data-intensive systems. Whether in the cloud, at the edge or in your hand, real-time and immersive applications depend on data throughput and integrity. Rambus products and innovations deliver the increased bandwidth, capacity and security required to meet the world’s data needs and drive ever-greater end-user experiences. For more information, visit rambus.com.
Sanchit Juneja
Sanchit Juneja has 18+ years of tech leadership experience in technology and product roles across the US, South-east Asia, Africa, and Europe with organizations such as Booking.com, AppsFlyer, GoJek, Rocket Internet, and National Instruments. He is currently Director of Product (Big Data & ML/AI) at Booking.com.
Phil Pokorny
Phil Pokorny is the Chief Technology Officer (CTO) for SGH / Penguin Solutions. He brings a wealth of engineering experience and customer insight to the design, development, support, and vision for our technology solutions.
Phil joined Penguin in February of 2001 as an engineer, and steadily progressed through the organization, taking on more responsibility and influencing the direction of key technology and design decisions. Prior to joining Penguin, he spent 14 years in various engineering and system administration roles with Cummins, Inc. and Cummins Electronics. At Cummins, Phil participated in the development of internal network standards, deployed and managed a multisite network of multiprotocol routers, and supported a diverse mix of office and engineering workers with a variety of server and desktop operating systems.
He has contributed code to Open Source projects, including the Linux kernel, lm_sensors, and LCDproc.
Phil graduated from Rose-Hulman Institute of Technology with Bachelor of Science degrees in math and electrical engineering, with a second major in computer science.
Penguin Solutions
Website: https://www.penguinsolutions.com/
Penguin Solutions designs, builds, deploys, and manages AI and accelerated computing infrastructures at scale. With 25+ years of HPC experience – and more than 75,000 GPUs deployed and managed to date – Penguin is a trusted strategic partner for AI and HPC solutions and services for leading organizations around the world.
Designing, deploying, and operating “AI factories” is an incredibly complex endeavor and Penguin has successfully been delivering AI factories at scale since 2017. The company’s OriginAI infrastructure, which is backed by Penguin's specialized intelligent cluster management software and expert services, streamlines AI implementation and management, and enables predictable AI cluster performance that supports customers’ business needs and return on investment goals for clusters small or large, ranging in size from hundreds to thousands of GPUs.
The OriginAI solution builds on Penguin’s extensive AI infrastructure expertise to reduce complexity and accelerate return on investment, providing CEOs and CIOs alike the essential and reliable infrastructure they need to deploy and manage demanding AI workloads at scale in the data center and at the edge. To learn more visit their website at: https://www.penguinsolutions.com. Follow Penguin Solutions on LinkedIn, Twitter, YouTube, and Facebook.
Drew Matter
Drew Matter leads Mikros Technologies, a designer and manufacturer of best-in-class direct liquid cold plates for AI/HPC, semiconductor testing, laser & optics, and power electronics. Mikros provides leading microchannel thermal solutions in single-phase, 2-phase, DLC and immersion systems to leading companies around the world.
Manoj Wadekar
Join Vamsi Boppana, AMD Senior Vice President of AI, as he unveils the latest breakthroughs in AI technology, driving advancements across cloud, HPC, embedded, and client segments. Discover the impact of strategic partnerships and open-source innovation in accelerating AI adoption. Through real-world examples, see how AI is being developed and deployed, reshaping the global compute landscape from the cloud to the client.
Vamsi Boppana
Vamsi Boppana is responsible for AMD’s AI strategy, driving the AI roadmap across the client, edge and cloud for AMD’s AI software stack and ecosystem efforts. Until 2022, he was Senior Vice President of the Central Products Group (CPG), responsible for developing and marketing Xilinx’s Adaptive and AI product portfolio. He also served as executive sponsor for the Xilinx integration into AMD.
At Xilinx, Boppana led the silicon development of leading products such as Versal™ and Zynq™ UltraScale™+ MPSoC. Before joining the company in 2008, he held engineering management roles at Open-Silicon and Zenasis Technologies, a company he co-founded. Boppana began his career at Fujitsu Laboratories. Caring deeply about the benefits of the technology he creates, Boppana aspires both to achieve commercial success and improve lives through the products he builds.
Mo Elshenawy
With more than 25 years of engineering and leadership expertise, Mo is the President and CTO at Cruise, a self-driving car company. Over the last six years, he has played a pivotal role in driving Cruise's engineering advancements while scaling the team from hundreds to thousands of engineers. Mo currently leads Cruise’s engineering, operations, and product teams – those responsible for all aspects of its autonomous vehicle development and deployment, including AI, robotics, simulation, product, program, data and machine learning platforms, infrastructure, security, safety, operations, and hardware.
Prior to Cruise, Mo led global technologies for Amazon ReCommerce Platform, Warehouse Deals, and Liquidations: a massive scale global business that enables Amazon to evaluate, price, sell, liquidate, and donate millions of used products daily. In addition, over the past decade, Mo was a technical co-founder and CTO for three tech startups, the latest of which is a cloud-based financial services development platform used by top financial institutions.
Salam Al Mosawi
Nscale
Website: https://www.nscale.com/
Nscale's GPU cloud platform is engineered for the demands of AI, offering high-performance compute optimised for training, fine-tuning, and intensive workloads. From our data centres to software stack, we are vertically integrated in Europe to provide unparalleled performance, efficiency and sustainability.
Arne Stoschek
Arne is the Vice President of AI, Autonomy & Digital Information and oversees the company’s development of autonomous flight and machine learning solutions to enable future, self-piloted aircraft. In his role, he also leads the advancement of large-scale data-driven processes to develop novel aircraft functions. He is passionate about robotics, autonomy and the impact these technologies will have on future mobility. After holding engineering leadership positions at global companies such as Volkswagen/Audi and Infineon, and at aspiring Silicon Valley startups, namely Lucid Motors/Atieva, Knightscope and Better Place, Arne dared to take his unique skill set to altitude above ground inside Airbus. Arne earned a Doctor of Philosophy in Electrical and Computer Engineering from the Technical University of Munich and held a computer vision and data analysis research position at Stanford University.
Zaid Kahn
Zaid is currently GM in Cloud Hardware Infrastructure Engineering where he leads a team focusing on advanced architecture and engineering efforts for AI. He is passionate about building balanced teams of artists and soldiers that solve incredibly difficult problems at scale.
Prior to Microsoft, Zaid was head of infrastructure engineering at LinkedIn, responsible for all aspects of engineering for data centers, compute, networking, storage, and hardware. He also led several software development teams spanning BMC, network operating systems, server and network fleet automation, and SDN efforts inside the data center and across the global backbone, including the edge. He introduced the concept of disaggregation inside LinkedIn and pioneered JDM with multiple vendors through key initiatives like OpenSwitch and Open19, essentially controlling the destiny of hardware development at LinkedIn. During his 9-year tenure at LinkedIn, his team scaled networks and systems 150X while membership grew from 50M to 675M, with someone hired every 7 seconds on the LinkedIn platform.
Prior to LinkedIn, Zaid was a Network Architect at WebEx, responsible for building the MediaTone network, and later built a startup that developed a pattern recognition security chip using an NPU/FPGA. Zaid holds several patents in networking and SDN and is a recognized industry leader. He previously served as a board member of the Open19 Foundation and the San Francisco chapter of the Internet Society. He currently serves on the DE-CIX and Pensando advisory boards.
David Lazovsky
In 2004 Mr. Lazovsky founded Intermolecular, a semiconductor and clean energy research, development and Intellectual Property licensing company. He served as the company’s Chief Executive Officer, President and as a member of the board of directors from September 2004 through October 2014.
As President and CEO, Mr. Lazovsky led all aspects of the business through its lifecycle from early-stage start-up to public company. Intermolecular (IMI) went public on the NASDAQ in 2011. Mr. Lazovsky has an in-depth knowledge of semiconductor, data/telecommunications, photonics and clean energy industries, as well as extensive international business experience.
Mr. Lazovsky previously served as Chairman of Energy Storage Systems (ESS), an industry leader in low-cost, long-duration energy storage, from 2017 to 2018. Prior to Intermolecular, Mr. Lazovsky held several senior management positions at Applied Materials, where he was responsible for managing more than $1 billion in Applied Materials’ semiconductor manufacturing equipment business in the Metal Deposition and Thin Films Product Business Groups.
Mr. Lazovsky was Ernst & Young Entrepreneur of the Year 2011, Northern California finalist. He holds a Bachelor of Science in mechanical engineering from Ohio University. He currently has over 50 issued and 5 pending U.S. patents.
Mark Wade
Mark is the Chief Executive Officer and Co-Founder of Ayar Labs. His prior roles at Ayar Labs include Chief Technology Officer and Senior Vice President of Engineering. He is recognized as a pioneer in photonics technologies and, before founding the company, led the team that designed the optics in the world's first processor to communicate using light. He and his co-founders invented breakthrough technology at MIT and UC Berkeley from 2010-2015, which led to the formation of Ayar Labs. He holds a PhD from the University of Colorado.
We at Positron set out to build a cost-effective alternative to NVIDIA for LLM inference, and after 12 months, our Florida-based head of sales made our first sale. He taught us the value of chasing our largest competitive advantages, across industries and around the globe. We also managed to build an FPGA-based hardware-and-software inference platform capable of serving monolithic and mixture-of-experts models at very competitive token rates. It wasn't easy, because the LLM landscape changes meaningfully every two weeks. Yet today we have customers both evaluating and in production, with both our physical servers and our hosted cloud service. We'll share a few of the hairy workarounds and engineering heroics that achieved equivalence with NVIDIA so quickly, and tamed the complexity of building a dedicated LLM computer from FPGAs.
Barrett Woodside
Barrett has spent the past decade of his career working on AI inference in developer-oriented, marketing, and product roles, first at NVIDIA, running and profiling computer vision workloads on Jetson. After three years shoehorning models onto embedded systems powering drones, robots, and surveillance systems, he joined Google Cloud, where he experienced first-hand the incredible power of Transformer models running accurate translation workloads on third-generation TPUs. He helped launch Cloud AutoML Vision with Fei-Fei Li and announced the TPU Pod's first entry into the MLPerf benchmark. Most recently, he spent two years at Scale AI working on product strategy and go-to-market for Scale Spellbook, its first LLM inference and fine-tuning product. Today, he is Positron's co-founder and VP of Product.
Positron AI
Website: https://www.positron.ai/
Positron delivers vendor freedom and faster inference for both enterprises and research teams, by allowing them to use hardware and software explicitly designed from the ground up for generative and large language models (LLMs).
Through lower power usage and drastically lower total cost of ownership (TCO), Positron enables you to run popular open source LLMs to serve multiple users at high token rates and long context lengths. Positron is also designing its own ASIC to expand from inference and fine tuning to also support training and other parallel compute workloads.
Join Mark Russinovich, Azure CTO and Technical Fellow, for an in-depth exploration of Microsoft's AI architecture. Discover the technology behind our sustainable datacenter design, massive supercomputers used for foundational model training, efficient infrastructure for serving models, workload management and optimizations, AI safety, and advancements in confidential computing to safeguard data during processing.
Mark Russinovich
Mark Russinovich is Chief Technology Officer and Technical Fellow for Microsoft Azure, Microsoft’s global enterprise-grade cloud platform. A widely recognized expert in distributed systems, operating systems and cybersecurity, Mark earned a Ph.D. in computer engineering from Carnegie Mellon University. He later co-founded Winternals Software, joining Microsoft in 2006 when the company was acquired. Mark is a popular speaker at industry conferences such as Microsoft Ignite, Microsoft Build, and RSA Conference. He has authored several nonfiction and fiction books, including the Microsoft Press Windows Internals book series, Troubleshooting with the Sysinternals Tools, as well as fictional cyber security thrillers Zero Day, Trojan Horse and Rogue Code.
Bratin Saha
Dr. Bratin Saha is the Vice President of Machine Learning and AI services at AWS where he leads all the ML and AI services and helped build one of the fastest growing businesses in AWS history. He is an alumnus of Harvard Business School (General Management Program), Yale University (PhD Computer Science), and Indian Institute of Technology (BS Computer Science). He has more than 70 patents granted (with another 50+ pending) and more than 30 papers in conferences/journals. Prior to Amazon he worked at Nvidia and Intel leading different product groups spanning imaging, analytics, media processing, high performance computing, machine learning, and software infrastructure.
Andrew Ng
Dr. Andrew Ng is a globally recognized leader in AI (Artificial Intelligence). He is Founder of DeepLearning.AI, Founder & CEO of Landing AI, General Partner at AI Fund, Chairman & Co-Founder of Coursera and an Adjunct Professor at Stanford University’s Computer Science Department.
In 2011, he led the development of Stanford University's main MOOC (Massive Open Online Courses) platform and taught an online Machine Learning course that was offered to over 100,000 students leading to the founding of Coursera where he is currently Chairman and Co-founder.
Previously, he was Chief Scientist at Baidu, where he led the company’s ~1300 person AI Group and was responsible for driving the company’s global AI strategy and infrastructure. He was also the founding lead of the Google Brain team.
As a pioneer in machine learning and online education, Dr. Ng has changed countless lives through his work in AI, and has authored or co-authored over 200 research papers in machine learning, robotics and related fields. In 2013, he was named to the Time 100 list of the most influential persons in the world. He holds degrees from Carnegie Mellon University, MIT and the University of California, Berkeley.
Flexible and programmable solutions are the key to delivering high performance, high efficiency AI at the edge. As semiconductor technologies experience the biggest shift in decades in order to meet the requirements of the latest generation of AI models, software is set to be the true enabler of success. Optimised libraries and toolkits empower all stakeholders in the developer journey to follow the “functional to performant to optimal” workflow typical of today’s edge compute application development cycle.
This presentation will take a software-first approach to enabling AI at the edge, touching on the importance of community-wide initiatives such as The UXL Foundation, of which Imagination is a founding member.
Imagination is a global leader in innovative edge technology, delivering landmark GPU, CPU and AI semiconductor solutions across automotive, mobile, consumer and desktop markets for over thirty years.
Tim Mamtora
Gerald Friedland
Dr. Gerald Friedland is a Principal Scientist at AWS working on Low-Code, No-Code Machine Learning. Before that he was CTO and founder of Brainome, a no-code machine learning service for miniature models. Other posts include UC Berkeley, Lawrence Livermore National Lab, and the International Computer Science Institute. He was the lead figure behind the Multimedia Commons initiative, a collection of 100M images and 1M videos for research, and has published more than 200 peer-reviewed articles in conferences, journals, and books. His latest book, "Information-Driven Machine Learning", was released by Springer Nature in December 2023. He also co-authored a textbook on Multimedia Computing with Cambridge University Press. Dr. Friedland received his master's degree and doctorate (summa cum laude) in computer science from Freie Universitaet Berlin, Germany, in 2002 and 2006, respectively.
Hira Dangol
Hira has industry experience in AI/ML, engineering, architecture, and executive roles at leading technology companies, service providers, and prominent Silicon Valley organizations. Hira currently focuses on innovation, disruption, and cutting-edge technologies, working through startups and technology-driven corporations to solve pressing problems for industry and the world.
Puja Das
Dr. Puja Das leads the Personalization team at Warner Brothers Discovery (WBD), which includes offerings on Max, HBO, Discovery+, and many more.
Prior to WBD, she led a team of Applied ML researchers at Apple, who focused on building large scale recommendation systems to serve personalized content on the App Store, Arcade and Apple Books. Her areas of expertise include user modeling, content modeling, recommendation systems, multi-task learning, sequential learning and online convex optimization. She also led the Ads prediction team at Twitter (now X), where she focused on relevance modeling to improve App Ads personalization and monetization across all of Twitter surfaces.
She obtained her Ph.D. in Machine Learning from the University of Minnesota, where her dissertation focused on online learning algorithms that work on streaming data. Her dissertation was the recipient of the prestigious IBM Ph.D. Fellowship Award.
She is active in the research community and serves on the program committees of ML and recommendation systems conferences. She has mentored several undergraduate and graduate students and participated in various round-table discussions through the Grace Hopper Conference, the Women in Machine Learning program co-located with NeurIPS, AAAI, and the Computing Research Association's Women's chapter.
Logan Grasby
Logan Grasby is a Senior Machine Learning Engineer at Cloudflare, based in Calgary, Alberta. As part of Cloudflare's Workers AI team he works on developing, deploying and scaling AI inference servers across Cloudflare's edge network. In recent work he has designed services for multi-tenant LLM LoRA inference and dynamic diffusion model pipeline servers. Prior to Cloudflare, Logan founded Azule, an LLM driven customer service and product recommendation platform for ecommerce. He also co-founded Conversion Pages and served as Director of Product at Appstle, a Shopify app development firm.
Sadasivan Shankar
Sadasivan (Sadas) Shankar is Research Technology Manager at SLAC National Laboratory, adjunct Professor in Stanford Materials Science and Engineering, and Lecturer in the Stanford Graduate School of Business. He was an Associate in the Department of Physics at Harvard University, the first Margaret and Will Hearst Visiting Lecturer at Harvard, and the first Distinguished Scientist in Residence at the Harvard Institute of Applied Computational Sciences. He has co-instructed classes on the design of materials, computing, and sustainability in materials, and received the Excellence in Teaching Award from Harvard University. He is co-instructing a class at Stanford University on Translation for Innovations. He is a co-founder of and the Chief Scientist at Material Alchemy, a "last mile" translational and independent venture recently founded to accelerate the path from materials discovery to adoption, with environmental sustainability as a key goal. In addition to research on the fundamentals of materials design, his current research on new architectures for specialized AI methods explores ways of bringing machine intelligence to system-level challenges in inorganic/biochemistry, materials, and physics, and new frameworks for computing as information processing inspired by lessons from nature.
Dr. Shankar’s current research and analysis on Sustainable Computing is helping provide directions for the US Department of Energy’s EES2 scaling initiatives (energy reduction in computing every generation for 1000X reduction in 2 decades) as part of the White House Plan to Revitalize American Manufacturing and Secure Critical Supply Chains in 2022 for investment in research, development, demonstration, and commercial application (RDD&CA) in conventional semiconductors.
In addition, his analysis is helping identify pathways for energy-efficient computing. While in industry, Dr. Shankar and his team enabled several critical technology decisions in semiconductor applications of chemistry, materials, processing, packaging, manufacturing, and design rules over nine generations of Moore's law, including the first advanced process control application in 300 mm wafer technology; the introduction of flip-chip packaging using electrodeposition; 100% Pb elimination in microprocessors; and the design of new materials, devices (including nano wrap-around devices for advanced semiconductor technology manufacturing), processing methods, and reactors. Dr. Shankar managed team members distributed across multiple sites in the US, with collaborations in Europe. The teams won several awards from Executive Management and technology organizations.
He is a co-inventor on over twenty patent filings covering new chemical reactor designs, semiconductor processes, bulk and nano materials for the sub-10-nanometer generation of transistors, device structures, and algorithms. He is also a co-author of over a hundred publications and presentations on measurements and multi-scale, multi-physics methods spanning quantum to macroscopic scales, in the areas of chemical synthesis; plasma chemistry and processing; non-equilibrium electronic, ionic, and atomic transport; energy efficiency of information processing; and machine learning methods for bridging across scales, estimating complex materials properties, and advanced process control.
Dr. Shankar was an invited speaker at the Clean-IT Conference in Germany on Revolutionize Digital Systems and AI (2023) and at the Telluride Quantum Inspired Neuromorphic Computing Workshop (2023) on Limiting Energy Estimates for Classical and Quantum Information Processing; gave the Argonne National Laboratory Director's Special Colloquium on the Future of Computing (2022); was a panelist in the Carnegie Science series on Brain and Computing (2020); lectured in the Winter Course on Computational Brain Research at IIT-M, India (2020); was an invited participant in the Kavli Institute for Theoretical Physics program on Cellular Energetics at UCSB (2019); was an invited speaker at the Camille and Henry Dreyfus Foundation meeting on Machine Learning for Problems in Chemistry and Materials Science (2019); was a Senior Fellow at the UCLA Institute for Pure and Applied Mathematics during the program on Machine Learning and Many-Body Physics (2016); was invited to the White House event for the start of the Materials Genome Initiative (2012); was an invited speaker at the Erwin Schrödinger International Institute for Mathematical Physics, Vienna (2007); and was Intel's first Distinguished Lecturer at Caltech (1998) and MIT (1999). He has also given several colloquia and lectures at universities all over the world, and his research has been featured in Science (2012), TED (2013), Nature Machine Intelligence (2022), and Nature Physics (2022).
In this presentation, we will explore the advanced integration of Digital In-Memory Computing (D-IMC) and RISC-V technology by Axelera AI to accelerate AI inference workloads. Our approach uniquely combines the high energy efficiency and throughput of D-IMC with the versatility of RISC-V technology, creating a powerful and scalable platform. This platform is designed to handle a wide range of AI tasks, from advanced computer vision at the edge to emerging AI challenges.
We will demonstrate how our scalable architecture not only meets but exceeds the demands of modern AI applications. Our platform enhances performance while significantly reducing energy use and operational costs. By pushing the boundaries of Edge AI and venturing into new AI domains, Axelera AI is setting new benchmarks in AI processing efficiency and deployment capabilities.
Evangelos Eleftheriou
Evangelos Eleftheriou, an IEEE and IBM Fellow, is the Chief Technology Officer and co-founder of Axelera AI, a best-in-class performance company that develops a game-changing hardware and software platform for AI.
As CTO, Evangelos oversees the development and dissemination of technology for external customers, vendors, and other clients to help improve and grow Axelera AI's business.
Before his current role, Evangelos worked for IBM Research – Zurich, where he held various management positions for over 35 years. His outstanding achievements led him to become an IBM Fellow, which is IBM’s highest technical honour.
In 2002, Evangelos became a Fellow of the IEEE, and in 2003 he was co-recipient of the IEEE Communications Society Leonard G. Abraham Prize Paper Award. He was also co-recipient of the 2005 Technology Award of the Eduard Rhein Foundation. In 2005, he was appointed an IBM Fellow and inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE Control Systems Technology Award and the IEEE Transactions on Control Systems Technology Outstanding Paper Award. In 2016, Evangelos received an honoris causa professorship from the University of Patras, Greece. In 2018, he was inducted into the US National Academy of Engineering as a Foreign Member. Evangelos has authored or co-authored over 250 publications and holds over 160 patents (granted and pending applications).
His primary interests lie in AI and machine learning, including emerging computing paradigms such as neuromorphic and in-memory computing.
Evangelos holds a PhD and a Master of Engineering degree in Electrical Engineering from Carleton University, Canada, and a BSc in Electrical & Computer Engineering from the University of Patras, Greece.
Axelera
Website: https://www.axelera.ai/
Axelera AI is delivering the world’s most powerful and advanced solutions for AI at the Edge. Its industry-defining Metis™ AI platform – a complete hardware and software solution for AI inference at the edge – makes computer vision applications more accessible, powerful and user friendly than ever before. Based in the AI Innovation Center of the High Tech Campus in Eindhoven, The Netherlands, Axelera AI has R&D offices in Belgium, Switzerland, Italy and the UK, with over 170 employees in 18 countries. Its team of experts in AI software and hardware come from top AI firms and Fortune 500 companies.
Sakyasingha Dasgupta
Sakya is the founder and Chief Executive Officer of EdgeCortix. He is an artificial intelligence (AI) and machine learning technologist, entrepreneur, and engineer with over a decade of experience taking cutting-edge AI research from the ideation stage to scalable products across different industry verticals. He has led teams at global companies like Microsoft and IBM Research / IBM Japan, along with national research labs like RIKEN in Japan and the Max Planck Institute in Germany. Previously, he helped establish and lead the technology divisions at lean startups in Japan and Singapore in the semiconductor technology, robotics, and fintech sectors. Sakya is the inventor of over 20 patents and has published widely on machine learning and AI, with over 1,000 citations.
Sakya holds a PhD in Physics of Complex Systems from the Max Planck Institute in Germany, a Master's in Artificial Intelligence from The University of Edinburgh, and a Bachelor's in Computer Engineering. Prior to founding EdgeCortix, he completed entrepreneurship studies at the MIT Sloan School of Management.
EdgeCortix
Website: https://www.edgecortix.com/
EdgeCortix is a fabless semiconductor design company focused on enabling energy-efficient edge intelligence. Founded in 2019, the company started with the radical idea of taking a software-first approach while designing an artificial-intelligence-specific, runtime-reconfigurable processor from the ground up, using a technique called "hardware & software co-exploration". Targeting advanced computer vision applications first, using proprietary hardware and software IP on existing processors like Field Programmable Gate Arrays (FPGAs) and custom-designed Application Specific Integrated Circuits (ASICs), the company is geared towards positively disrupting the rapidly growing AI hardware space across defense, aerospace, smart cities, Industry 4.0, autonomous vehicles, and robotics.
With the ubiquitous and increasing use of computing, the talk will quantitatively demonstrate unsustainable energy and complexity trends in computing and AI, from hardware, algorithms, and software. Our discussion of the unsustainability of these trends will motivate a few exciting directions for computing, especially for applications to AI/ML. Specifically, we will touch upon the evolution of hardware in terms of energy used following Dennard scaling and the challenges posed by continuing these current trends. We will illustrate opportunities suggested by a few of these unsustainable trends of computing, specifically on applications to Machine Learning and Artificial Intelligence including at the edge. Given the goals of achieving AGI promised by current technologies, we will propose a modified form of Turing’s test that points to a new conceptualization of computing for application beyond the current paradigms.
Sadasivan Shankar
Sunghyun Park
Sunghyun Park is the Founder and Chief Executive Officer of Rebellions, a leading AI chip company based in South Korea. After gaining experience at prominent organizations like Morgan Stanley, Intel, and SpaceX, Park drew upon his expertise in chip design to co-found Rebellions in 2020, choosing to leverage South Korea's robust semiconductor ecosystem. As an MIT alumnus, Park has guided the company's release of two innovative AI chips and secured over $210 million in funding within just three and a half years, solidifying Rebellions' position as a pioneer in the AI infrastructure space.
Rebellions
Website: https://rebellions.ai/
The founding team of Rebellions relocated to Korea from New York and elsewhere in 2020 to revolutionize the AI chip industry.
At the heart of the Korean Silicon Eco-system, Rebellions has built a cutting-edge AI inference accelerator and full-stack software optimized for generative AI.
Within just three years of its inception, the company has introduced two groundbreaking chips: the finance-market-focused ION, released in 2021, and the datacenter-focused ATOM, taped out in 2023. ATOM has demonstrated superior performance in the MLPerf benchmarks and has been commercialized in a data center through a strategic partnership with KT (Korea Telecom), the biggest IDC company in South Korea.
Currently, Rebellions is developing its next-generation AI chip, REBEL, equipped with HBM3E in collaboration with Samsung Electronics, paving the way for advanced technology in the era of generative AI.
Sakyasingha Dasgupta
Gayathri Radhakrishnan
Gayathri is currently a Partner at Hitachi Ventures. Prior to that, she was with Micron Ventures, actively investing in startups that apply AI to solve critical problems in manufacturing, healthcare, and automotive. She brings over 20 years of multi-disciplinary experience across product management, product marketing, corporate strategy, M&A, and venture investments, both in large Fortune 500 companies such as Dell and Corning and in startups. She has also worked as an early-stage investor at Earlybird Venture Capital, a premier European venture capital fund based in Germany. She has a Master's in EE from The Ohio State University and an MBA from INSEAD in France. She is also a Kauffman Fellow (Class 16).
Roberto Mijat
Roberto leads product marketing and strategy at Blaize. He is an AI technology and product leader with an engineering background and over 20 years of experience in developing and taking to market advanced semiconductor hardware and software solutions.
Roberto spent over 15 years at Arm, holding several senior product and business leadership positions and leading multiple global product teams. He was a member of the company’s Product Line Board and Steering board for AI on CPU. He created and architected the Compute Libraries framework, a key component of Arm’s AI software stack, deployed in billions of devices today. Roberto established the Arm GPU Compute ecosystem from scratch and led collaborations with dozens of industry leaders, including Facebook, Google, Huawei, MediaTek, and Samsung.
At Graphcore, Roberto led the launch of the Bow IPU AI accelerator, promoted the standardization of FP8, and led collaborations with storage partners.
Roberto is an advisor at Silicon Catalyst and a Mentor at London Business School. He holds a first degree in Artificial Intelligence and Quantum Computing and an Executive MBA from London Business School.
Zaid Kahn
Zaid is currently a GM in Cloud Hardware Infrastructure Engineering at Microsoft, where he leads a team focusing on advanced architecture and engineering efforts for AI. He is passionate about building balanced teams of artists and soldiers that solve incredibly difficult problems at scale.
Prior to Microsoft, Zaid was head of infrastructure engineering at LinkedIn, responsible for all aspects of engineering for datacenters, compute, networking, storage, and hardware. He also led several software development teams, spanning from BMC and network operating systems to server and network fleet automation, as well as SDN efforts inside the datacenter and across the global backbone, including the edge. He introduced the concept of disaggregation inside LinkedIn and pioneered JDM with multiple vendors through key initiatives like OpenSwitch and Open19, essentially controlling the destiny of hardware development at LinkedIn. During his 9-year tenure at LinkedIn, his team scaled the network and systems 150X as membership grew from 50M to 675M, with someone hired every 7 seconds on the LinkedIn platform.
Prior to LinkedIn, Zaid was Network Architect at WebEx, responsible for building the MediaTone network; he later built a startup that developed a pattern-recognition security chip using NPU/FPGA. Zaid holds several patents in networking and SDN and is a recognized industry leader. He previously served as a board member of the Open19 Foundation and the San Francisco chapter of the Internet Society. Currently he serves on the DE-CIX and Pensando advisory boards.
Bing Yu
Bing Yu is a Sr. Technical Director at Andes Technology. He has over 30 years of experience in technical leadership and management, specializing in machine learning hardware, high performance CPUs and system architecture. In his current role, he is responsible for processor roadmap, architecture, and product design. Bing received his BS degree in Electrical Engineering from San Jose State University and completed the Stanford Executive Program (SEP) at the Stanford Graduate School of Business.
Edward Kmett
Edward spent most of his adult life trying to build reusable code in imperative languages. He converted to Haskell in 2006 while searching for better building materials. Edward served as the founding chair of the Haskell Core Libraries Committee and continues to collaborate with hundreds of other developers on over three hundred functional programming projects on GitHub. He previously wrote software for stock exchanges at Standard & Poor's, served as a researcher at MIRI, and ran Groq's Software Engineering team. As Positron's CTO, today he sets long-term software, architecture, and optimization strategy.
Tom Sheffler
Tom earned his PhD from Carnegie Mellon in Computer Engineering with a focus on parallel computing architectures and programming models. His interest in high-performance computing took him to NASA Ames, and then to Rambus, where he worked on accelerated memory interfaces for providing high bandwidth. Following that, he co-founded the cloud video analytics company Sensr.net, which applied scalable cloud computing to analyzing large streams of video data. He later joined Roche to work on next-generation sequencing and scalable genomics analysis platforms. Throughout his career, Tom has focused on the application of high-performance computer systems to real-world problems.
Euicheol Lim
Eui-cheol Lim is a Research Fellow and leader of the Solution Advanced Technology team at SK Hynix. He received his B.S. and M.S. degrees from Yonsei University, Seoul, Korea, in 1993 and 1995, and his Ph.D. from Sungkyunkwan University, Suwon, Korea, in 2006. Dr. Lim joined SK Hynix in 2016 as a system architect in memory system R&D. Before joining SK Hynix, he worked as an SoC architect at Samsung Electronics, leading the architecture of most Exynos mobile SoCs. His recent interests are memory and storage system architectures with new media memory and new memory solutions such as CXL memory and processing-in-memory (PIM). In particular, he is proposing a new computing architecture based on PIM, more efficient and flexible than existing AI accelerators, to process generative AI and LLMs (large language models), which are currently causing a sensation.
Girish Venkataramani
Sanchit Juneja
Sanchit Juneja has 18+ years of tech leadership experience in tech and product roles across the US, South-east Asia, Africa, and Europe, with organizations such as Booking.com, AppsFlyer, GoJek, Rocket Internet, and National Instruments. He is currently Director, Product (Big Data & ML/AI) at Booking.com.
Mo Haghighi
Dr Mo Haghighi is a director of engineering/distinguished engineer at Discover Financial Services. His current focus is hybrid and multi-cloud strategy, application modernisation and automating application/workload migration across public and private clouds. Previously, he held various leadership positions as a program director at IBM, where he led Developer Ecosystem and Cloud Engineering teams in 27 countries across Europe, Middle East and Africa. Prior to IBM, he was a research scientist at Intel and an open source advocate at Sun Microsystems/Oracle.
Mo obtained a PhD in computer science, and his primary areas of expertise are distributed and edge computing, cloud native, IoT and AI, with several publications and patents in those areas.
Mo is a regular keynote/speaker at major developer conferences including Devoxx, DevOpsCon, Java/Code One, Codemotion, DevRelCon, O’Reilly, The Next Web, DevNexus, IEEE/ACM, ODSC, AiWorld, CloudConf and Pycon.
Prasad Jogalekar
Paul Karazuba
Paul is Vice President of Marketing at Expedera, a leading provider of AI Inference NPU semiconductor IP. He brings a talent for transforming new technology into products that excite customers. Previously, Paul was VP Marketing at PLDA, specializing in high-speed interconnect IP, until its acquisition by Rambus. Before PLDA, he was Senior Director of Marketing at Rambus. Paul brings more than 25 years of marketing experience including Quicklogic, Aptina (Micron), and others. He holds a BS in Management and Marketing from Manhattan College.
Expedera
Website: https://www.expedera.com/
Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI-inference applications. Third-party silicon-validated, Expedera’s solutions produce superior performance and are scalable to a wide range of applications from edge nodes and smartphones to automotive and data centers. Expedera’s Origin™ deep learning accelerator products are easily integrated, readily scalable, and can be customized to application requirements. The company is headquartered in Santa Clara, California.
Hooman Sedghamiz
Hooman Sedghamiz is Director of AI & ML at Bayer. He has led algorithm development and generated valuable insights to improve medical products, ranging from implantable, wearable, and imaging medical devices to bioinformatics and pharmaceutical products, for a variety of multinational medical companies.
He has led projects and data science teams and developed algorithms for closed-loop active medical implants (e.g., pacemakers, cochlear and retinal implants), as well as advanced computational biology to study the time evolution of cellular networks associated with cancer, depression, and other illnesses.
His experience in healthcare also extends to image processing for Computed Tomography (CT) and iX-Ray (Interventional X-Ray), as well as signal processing of physiological signals such as ECG, EMG, EEG, and ACC.
Recently, his team has been working on cutting-edge natural language processing, developing models to address healthcare challenges involving textual data.