Le Chen | 陈乐

Ph.D. Student at IPADS lab, SJTU

Email: cen-le AT sjtu DOT edu DOT cn | GitHub | Chinese Homepage

About

I am a Ph.D. student in Computer Science and Technology at Shanghai Jiao Tong University, advised by Prof. Yubin Xia. I received my bachelor's degree in Software Engineering from Shanghai Jiao Tong University. My research interests lie in operating systems and on-device LLM inference.

Publications

Characterizing Mobile SoC for Accelerating Heterogeneous LLM Inference
Le Chen*, Dahu Feng*, Erhu Feng, Yingrui Wang, Rong Zhao, Yubin Xia, Pinjie Xu, Haibo Chen
Proceedings of the 31st ACM Symposium on Operating Systems Principles (SOSP’25)

With the rapid advancement of artificial intelligence technologies such as ChatGPT, AI agents, and video generation, contemporary mobile systems have begun integrating these AI capabilities on local devices to enhance privacy and reduce response latency. To meet the computational demands of AI tasks, current mobile SoCs are equipped with diverse AI accelerators, including GPUs and Neural Processing Units (NPUs). However, there has not been a comprehensive characterization of these heterogeneous processors, and existing designs typically only leverage a single AI accelerator for LLM inference, leading to suboptimal use of computational resources and memory bandwidth.

In this paper, we first summarize key performance characteristics of heterogeneous processors, SoC memory bandwidth, and related components. Drawing on these observations, we propose different heterogeneous parallel mechanisms to fully exploit both GPU and NPU computational power and memory bandwidth. We further design a fast synchronization mechanism between heterogeneous processors that leverages the unified memory architecture. By employing these techniques, we present HeteroInfer, the fastest LLM inference engine on mobile devices that supports GPU-NPU heterogeneous execution. Evaluation shows that HeteroInfer delivers a 1.34x to 6.02x end-to-end speedup over state-of-the-art GPU-only and NPU-only LLM engines, while maintaining negligible interference with other applications.
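The core partitioning idea can be illustrated with a minimal sketch: split the rows of each weight-matrix multiply across two workers in proportion to their throughput, with both writing disjoint slices of a shared output buffer (standing in for zero-copy synchronization on a unified-memory SoC). All names and throughput numbers here are hypothetical, not HeteroInfer's actual API.

```python
import threading
import numpy as np

def split_rows(n_rows, gpu_tput, npu_tput):
    """Partition matmul rows proportionally to each accelerator's throughput."""
    return round(n_rows * gpu_tput / (gpu_tput + npu_tput))

def hetero_matmul(W, x, gpu_tput=4.0, npu_tput=6.0):
    """Run one weight-matrix multiply with rows split across two workers.

    The two threads model a GPU and an NPU sharing a unified memory:
    each writes its own slice of `out`, so no copy or lock is needed.
    """
    n = W.shape[0]
    cut = split_rows(n, gpu_tput, npu_tput)
    out = np.empty(n)

    def worker(lo, hi):
        out[lo:hi] = W[lo:hi] @ x  # disjoint slice: no synchronization conflict

    t1 = threading.Thread(target=worker, args=(0, cut))
    t2 = threading.Thread(target=worker, args=(cut, n))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return out
```

Proportional splitting balances the finish time of both workers; the actual system additionally considers memory-bandwidth contention between the processors.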

μEFI: A Microkernel-Style UEFI with Isolation and Transparency
Le Chen, Yiyang Wu, Jinyu Gu, Yubin Xia, Haibo Chen
The 2025 USENIX Annual Technical Conference (ATC’25)

The Unified Extensible Firmware Interface (UEFI) has established itself as the leading firmware standard in modern devices, offering enhanced extensibility, a user-friendly graphical interface, and improved security capabilities. At the core of UEFI security is UEFI Secure Boot, designed to ensure that only trusted drivers and applications are loaded during system startup. However, the growing number of UEFI-related CVEs and the emergence of attacks that bypass UEFI Secure Boot have highlighted its limitations, exposing vulnerabilities that could be exploited by attackers.

We propose μEFI, the first isolation framework for UEFI firmware that can transparently run UEFI modules in sandboxes. Drawing inspiration from microkernel design, we deprivilege UEFI modules to user mode and isolate them in different address spaces (sandboxes). To enable the transparent execution of UEFI modules, we propose trampoline injection and protocol analysis. To further strengthen UEFI security, we incorporate a seccomp-like mechanism to restrict module capabilities and perform automated input validation to detect and prevent invalid inputs. Evaluation results demonstrate that our system can run complex UEFI modules without modifications, incurring a small overhead of 1.91% during the UEFI boot phase.
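The seccomp-like restriction can be sketched as a per-module allowlist interposed on the services table a sandboxed module sees; calls outside the allowlist are rejected. This is a toy illustration with hypothetical service names, not μEFI's actual enforcement mechanism, which operates at the sandbox boundary in firmware.

```python
class CapabilityViolation(Exception):
    """Raised when a sandboxed module calls a service outside its allowlist."""

class SandboxedServices:
    """Toy stand-in for a deprivileged module's view of UEFI boot services.

    Each module is given only the subset of services it needs, in the
    spirit of a seccomp filter restricting a process's system calls.
    """
    def __init__(self, services, allowed):
        self._services = services
        self._allowed = frozenset(allowed)

    def call(self, name, *args):
        if name not in self._allowed:
            raise CapabilityViolation(f"module may not call {name}")
        return self._services[name](*args)

# Hypothetical "boot services" table for the example.
services = {
    "AllocatePool": lambda size: bytearray(size),
    "ExitBootServices": lambda: "exited",
}

# A driver that only ever needs to allocate memory gets exactly that.
driver_view = SandboxedServices(services, allowed={"AllocatePool"})
```

Restricting each module to a minimal service set shrinks the attack surface even when the module itself is compromised.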

Encrypted Databases Made Secure Yet Maintainable
Mingyu Li, Xuyang Zhao, Le Chen, Cheng Tan, Huorong Li, Sheng Wang, Zeyu Mi, Yubin Xia, Feifei Li, Haibo Chen
The 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI’23)

Skills

C, C++, Python

Service and Teaching

TA, AI Computing Systems, Fall 2025

TA, Operating System (SE3357), Spring 2024

TA, Computer System Engineering (SJTU SE3331), Fall 2023