Eiman Kanjo
Professor Eiman Kanjo is the Provost's Visiting Professor in tinyML at Imperial College London. She is recognized among the Top 50 Women in Engineering and is a recipient of the Turing Network Development award. She is actively involved with the tinyML Foundation, serving on the steering committees for the tinyML Research Symposium and tinyML EMEA, as a publication chair, and as the tinyML UK academic lead. Additionally, she is an editorial board member of the Data-Centric Engineering journal and an Associate Director at Health Data Research UK.
With over 140 publications and grants from prestigious funders such as DCMS and EPSRC, she collaborates extensively with industry, charities, and local authorities.
Prior to her current roles, she conducted significant research at the Computer Science departments at both the University of Cambridge and the University of Nottingham.
Currently, she is also a Professor of Pervasive Computing at Nottingham Trent University, leading the Smart Sensing Lab.
Anton Abrarov
In this case study I provide a deep dive into food production (bakery goods) at Europe's largest bakery factory. The case study describes how we successfully implemented Computer Vision to monitor our production process, evaluating 3.5 million bakery goods every day. I will show how this reduces food waste during production and how it aligns with environmental and sustainability goals in food production. I will explain why AI on the edge was identified as a crucial success factor and which considerations we, as the Schwarz Group, weighed when getting started in this area three years ago. Based on the case study, I will discuss how we selected and benchmarked hardware chips (Nvidia, Intel, GPU vs CPU…), how we had to change our software engineering to accommodate Edge AI, how we integrated edge AI models into our cloud-based infrastructure, how we handle deployment, and how and why we still use cloud-based GPUs for model training.
Maximilian Stauder
Sakyasingha Dasgupta
Sakya is the founder and Chief Executive Officer of EdgeCortix. He is an artificial intelligence (AI) and machine learning technologist, entrepreneur, and engineer with over a decade of experience taking cutting-edge AI research from the ideation stage to scalable products across different industry verticals. He has led teams at global companies such as Microsoft and IBM Research / IBM Japan, as well as at national research labs including RIKEN Japan and the Max Planck Institute in Germany. Previously, he helped establish and lead the technology divisions of lean startups in Japan and Singapore in the semiconductor, robotics, and fintech sectors. Sakya is the inventor of over 20 patents and has published widely on machine learning and AI, with over 1,000 citations.
Sakya holds a PhD in Physics of Complex Systems from the Max Planck Institute in Germany, along with a Master's in Artificial Intelligence from The University of Edinburgh and a Bachelor's in Computer Engineering. Prior to founding EdgeCortix, he completed entrepreneurship studies at the MIT Sloan School of Management.
As billions of IoT devices permeate our daily lives, their resource limitations and connectivity constraints have confined them to only a narrow subset of machine learning capabilities within the broader scope of AI. Edge processing stands as a pivotal solution, optimizing large-scale data mining and aggregation by relocating the data-processing segment of an application to more resourceful edge devices within the local network. The integration of GPU-accelerated ML inference on edge devices opens new avenues for harnessing the full potential of AI, fostering a future where intelligent decision-making in IoT is no longer accessible only via the cloud.
Mo will commence the talk by outlining the current landscape of IoT networks, emphasizing the increasing demand for intelligent decision-making at the edge. Traditional challenges associated with centralized cloud-based ML models will be highlighted, setting the stage for the exploration of decentralized solutions.
We will delve into the technical considerations for integrating GPUs, including model optimization, compatibility with popular ML frameworks, and the advantages of parallel processing. Real-world examples will be presented to showcase the transformative impact of GPU-accelerated ML inference on edge devices, enabling the deployment of pre-trained models on the latest hardware without compromising performance. Practical considerations, such as power efficiency and scalability, will also be addressed to provide a comprehensive understanding of the benefits and challenges associated with this approach.
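As a flavor of what running a pre-trained model on an edge GPU looks like in practice, the sketch below uses PyTorch to select the device's GPU when one is present (falling back to CPU otherwise) and runs gradient-free inference. The tiny network here is a hypothetical stand-in for a real pre-trained model; it is not part of the talk's material.

```python
import torch
import torch.nn as nn

# Use the edge device's GPU when available; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-in for a pre-trained vision model. In practice you would
# load exported weights, e.g. model.load_state_dict(torch.load("model.pt")).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
).to(device).eval()

def classify(batch: torch.Tensor) -> torch.Tensor:
    """Run inference without gradient tracking, as is typical at the edge."""
    with torch.inference_mode():
        return model(batch.to(device)).softmax(dim=1).cpu()

# One 224x224 RGB frame, e.g. captured from an on-device camera.
probs = classify(torch.rand(1, 3, 224, 224))
print(probs.shape)
```

The same device-selection pattern carries over to other frameworks (for example, choosing a GPU execution provider in ONNX Runtime), which is one of the compatibility considerations the talk covers.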
Mo Haghighi
Dr Mo Haghighi is a director of engineering/distinguished engineer at Discover Financial Services. His current focus is hybrid and multi-cloud strategy, application modernisation, and automating application and workload migration across public and private clouds. Previously, he held various leadership positions as a program director at IBM, where he led Developer Ecosystem and Cloud Engineering teams in 27 countries across Europe, the Middle East, and Africa. Prior to IBM, he was a research scientist at Intel and an open source advocate at Sun Microsystems/Oracle.
Mo obtained a PhD in computer science, and his primary areas of expertise are distributed and edge computing, cloud native, IoT and AI, with several publications and patents in those areas.
Mo is a regular keynote/speaker at major developer conferences including Devoxx, DevOpsCon, Java/Code One, Codemotion, DevRelCon, O’Reilly, The Next Web, DevNexus, IEEE/ACM, ODSC, AiWorld, CloudConf and Pycon.