Edge AI refers to the deployment of machine learning (ML) models on edge devices, such as smartphones, IoT sensors, and embedded systems, as well as at the edge of the network infrastructure, rather than relying on centralized cloud servers. This approach enables real-time data processing, reduced latency, enhanced privacy, and lower bandwidth usage, making it ideal for applications like autonomous vehicles, smart surveillance, industrial automation, and personalized healthcare. By leveraging computing power at the edge, Edge AI minimizes the need for cloud connectivity, ensuring efficient AI-driven decision-making even in remote or bandwidth-constrained environments. The use of ML models at the edge is not limited to applications and services; ML is also expected to become an essential component in the management of edge infrastructure resources as well as in security risk management.
In the EdgeAI project we are interested in developing a fundamental understanding of the opportunities and limitations brought by ML models deployed at the network edge, as well as in developing architectures, models, and algorithms for trustworthy ML-enabled edge infrastructures and applications. Examples of research directions include how to structure inference models and where to place them in distributed edge infrastructures, considering accuracy, latency, and security, among other aspects. Another research direction of interest is how to use machine learning for improved operation and management of edge infrastructures and relevant cyber-physical systems, including the potential use of foundation models. Tight collaboration with the Trustworthiness assurance track as well as with the Industrial Edge Applications track is expected.
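To make the inference-placement question above concrete, the following is a minimal sketch (with entirely hypothetical accuracy and latency numbers) of the kind of trade-off involved: given candidate placements for an inference model on the device, at an edge server, or in the cloud, pick the most accurate one that still meets an end-to-end latency budget. It is an illustration of the problem, not a method from the project.

```python
# Toy sketch of inference-model placement in a distributed edge
# infrastructure: trade accuracy against an end-to-end latency budget.
# All names and numbers below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Placement:
    name: str          # where the model runs
    accuracy: float    # expected accuracy (larger remote models score higher)
    latency_ms: float  # compute plus network round-trip latency

def best_placement(options, latency_budget_ms):
    """Return the most accurate placement that meets the latency budget,
    or None if no placement satisfies the real-time constraint."""
    feasible = [p for p in options if p.latency_ms <= latency_budget_ms]
    if not feasible:
        return None
    return max(feasible, key=lambda p: p.accuracy)

# Made-up example: a small on-device model, a larger model at an edge
# server, and the largest model in the cloud with growing network delay.
options = [
    Placement("device", accuracy=0.85, latency_ms=15),
    Placement("edge",   accuracy=0.92, latency_ms=40),
    Placement("cloud",  accuracy=0.95, latency_ms=120),
]

print(best_placement(options, latency_budget_ms=50).name)  # -> edge
```

In practice the project looks at far richer versions of this question, e.g. splitting a single model across tiers and accounting for security, but the accuracy-latency tension sketched here is at its core.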
Contact: György Dan, KTH