Tutorials


  Tutorial 1

Towards On-Device AI: Collaboration of Connected Devices


Dr. JinYi Yoon


Virginia Tech, USA


Abstract


As artificial intelligence (AI) permeates nearly every field, data has become a pivotal source driving remarkable advancements. Most of this data originates from end users or systems, necessitating the deployment of AI on edge devices such as smartphones, laptops, IoT devices, or sensors near users. However, most AI systems still rely on cloud computing, where the growing demand for AI services leads to communication bottlenecks, bandwidth contention, and thereby increased latency. Cloud-based services also raise privacy concerns, as users must upload raw data, and clients experience unavailability during network disconnections or server outages. Edge intelligence has emerged as a promising approach to alleviate this reliance on the cloud by deploying models directly on edge devices. While this approach can provide stable services through local processing, individual edge devices often lack sufficient data and computational resources to generate advanced knowledge. Shifting towards edge-side deployment thus poses these challenges, but at the same time it places more available devices near users, opening the potential for collaboration among edge devices. In this talk, we explore collaborative learning paradigms of connected edge devices for on-device AI, including federated learning, split/distributed learning, and knowledge transfer. We introduce how each learning paradigm works and discuss the associated research challenges.

Biography


JinYi Yoon received her B.S., M.S., and Ph.D. degrees from the Department of Computer Science and Engineering, Ewha Womans University, South Korea, in 2017, 2019, and 2022, respectively. She is currently working as a postdoctoral associate in the Department of Computer Science at Virginia Tech. She has broad interests in the areas of edge for AI and AI for edge, including on-device AI, distributed/split learning, federated learning, AI-driven/native/powered networked systems, indoor localization, network security, and wireless ad-hoc networks.

  Tutorial 2

Optimizing Multimodal Large Language Model Deployment in Edge Computing: From Edge-Cloud Collaboration to Adaptation in Heterogeneous Environments


Prof. He Li


Muroran Institute of Technology, Japan


Abstract


With the emergence of GPT-4, the capabilities of Multi-modal Large Language Models (MLLMs) have transformed our lives, enabling complex natural language understanding and generation. MLLMs integrate multi-modal encoders with Large Language Models (LLMs), overcoming the limitations of text-only models and enabling the handling of diverse data types. However, deploying MLLMs on edge devices faces significant challenges due to limited computational resources and storage space. This talk explores two innovative frameworks designed to address these challenges and optimize the deployment of MLLMs in edge computing environments. The first part of the talk will focus on ECCoMLLM, an Edge-Cloud Collaboration (ECC) system framework specifically designed for MLLM inference tasks. In this framework, we decouple the MLLM components to facilitate their deployment within the ECC system. We will discuss a queue scheduling algorithm that optimizes the service provider's interests while minimizing user latency. The second part of the talk will introduce DistMLLM, a framework that further enhances MLLM deployment by focusing on the heterogeneous capabilities of edge environments. DistMLLM decouples multimodal encoders from LLM inference, allowing these components to be processed on the most suitable edge devices. This strategy takes advantage of high-performance edge devices capable of handling LLM inference tasks, reducing reliance on cloud resources and minimizing latency. In conclusion, both ECCoMLLM and DistMLLM provide robust solutions for deploying MLLMs in edge computing settings. By decoupling tasks and optimizing resource allocation, these frameworks reduce latency and improve efficiency, paving the way for advanced generative AI applications in heterogeneous and resource-constrained environments.

Biography


He Li received his B.S. and M.S. degrees in Computer Science and Engineering from Huazhong University of Science and Technology in 2007 and 2009, respectively, and his Ph.D. degree in Computer Science and Engineering from The University of Aizu in 2015. He is currently an Associate Professor with the Department of Sciences and Informatics, Muroran Institute of Technology, Japan. In 2018, he was selected as a Ministry of Education, Culture, Sports, Science and Technology (MEXT) Excellent Young Researcher. His research interests include IoT, edge computing, cloud computing, and software-defined networking. He has received best journal paper awards from IEEE ComSoc APB and IEEE CSIM, and best paper awards from ICPADS 2019 and IEEE VTC2016-Fall. Dr. Li serves as an Associate Editor for Human-centric Computing and Information Sciences (HCIS), as well as a Guest Associate Editor for Security and Communication Networks, Environments, and IEICE Transactions on Information and Systems. He is the recipient of the 2019 IEEE TCSC Award for Excellence (Early Career Researcher) and the 2016 IEEE TCSC Award for Excellence (Outstanding Ph.D. Thesis).


Youn-Hee Han (Ph.D.), Professor