The Future of Consumer Edge-AI Computing | Kisaco Research

Over the last decade, Deep Learning has rapidly spread to consumer devices, largely thanks to hardware acceleration. However, looking to the future, it is evident that isolated hardware will be insufficient: increasingly complex AI tasks demand shared resources, cross-device collaboration, and multiple data modalities, all without compromising user privacy or quality of experience.

To address this, we introduce a novel paradigm centered around EdgeAI-Hub devices, designed to reorganise and optimise compute resources and data access at the consumer edge.

To this end, we lay a holistic foundation for the transition from on-device to Edge-AI serving systems in consumer environments, detailing their components, structure, challenges and opportunities.

Since their advent, Deep Neural Networks (DNNs) have grown ever larger in the pursuit of higher accuracy without loss of generality. Higher accuracies have also come from combining multiple models (ensembles or cascades) or from more exotic architectures, designed manually or automatically, that offer higher capacity, better generalisation, or fewer inductive biases.
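To make the cascade idea concrete, here is a minimal, illustrative sketch (not from the paper): a cheap model answers first, and a larger model is invoked only when the cheap model's confidence falls below a threshold. The two lambda "models" and their logits are hypothetical stand-ins.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cascade_predict(x, small_model, large_model, threshold=0.9):
    """Run the cheap model first; escalate to the large model
    only when the small model's top-class probability is below threshold."""
    probs = softmax(small_model(x))
    if probs.max() >= threshold:
        return int(probs.argmax())                 # confident: exit early
    return int(softmax(large_model(x)).argmax())   # escalate to large model

# Hypothetical stand-ins for the two models (fixed logits for illustration)
small = lambda x: np.array([4.0, 0.1, 0.1])   # highly confident on class 0
large = lambda x: np.array([0.0, 3.0, 0.0])   # would predict class 1

print(cascade_predict(None, small, large))
```

Raising the threshold (e.g. `threshold=0.99`) forces escalation to the large model, which is exactly the accuracy/compute trade-off cascades exploit.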

More recently, emerging trends in Artificial Intelligence (AI), both generative and discriminative, have been changing the computational landscape quite significantly. On the one hand, cloud AI computation has been dominated by the training of hyperscale models that act as foundations in latent spaces for solving a multitude of downstream tasks across one or multiple modalities. Prominent examples include Large Language Models (LLMs), text-to-image generation (out-painting) and generative image composition (in-painting). On the other hand, as devices become more capable, an increasing number of DNNs are deployed on-device, oftentimes required to run simultaneously.

Furthermore, the advent of fields like Federated Learning (FL) and personalisation introduces on-device training workloads. Despite their forward-looking use-cases, such workloads have been pushing compute and memory requirements, along with their data-ingestion needs, to unprecedented scales.
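As a rough illustration of why FL shifts training work onto devices, here is a minimal sketch of the Federated Averaging (FedAvg) aggregation step: each device trains locally, and a server merges the resulting weights, weighted by each client's data size. The client weight vectors and sizes below are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging: combine locally trained model weights,
    weighting each client by its share of the total training data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Hypothetical weight vectors from three devices after local training
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([1.0, 1.0])]
sizes   = [100, 300, 100]

global_w = fedavg(clients, sizes)
print(global_w)  # data-weighted average of the three local updates
```

The local training that produces each client's weights is the on-device workload the text refers to; only the aggregated update, not the raw data, leaves the device.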

However, the capabilities of individual edge devices have not scaled at the same pace. While the consumer edge becomes increasingly populated by smart devices, these continue to operate as standalone entities, isolated from their compute environment. This leaves many missed opportunities for shaping a common context in which to learn and perform higher-level or higher-fidelity tasks collaboratively. As such, a gap exists between the compute requirements and the resource availability for deploying intelligence at the consumer edge, and it is unlikely to be bridged through traditional hardware improvements alone.
