
Institute of Information Science, Academia Sinica



Seminar


Toward Robust and Reliable Large Language Models

  • Lecturer: Prof. Kuan-Hao Huang (Texas A&M University)
    Host: Lun-Wei Ku
  • Time: 2025-06-16 (Mon.) 14:40 ~ 16:40
  • Location: Auditorium 106 at IIS New Building
Abstract
Large language models (LLMs) have shown remarkable potential in real-world applications. Despite their impressive capabilities, they can still produce errors in simple situations and behave in ways that are misaligned with human expectations, raising concerns about their reliability. As a result, ensuring their robustness has become a critical challenge. In this talk, I will explore key robustness issues across three aspects of LLMs: pure text-based LLMs, multimodal LLMs, and multilingual LLMs. Specifically, I will first introduce how position bias can hurt the understanding capabilities of LLMs and present a training-free solution to address this issue. Next, I will discuss position bias in the multimodal setting and introduce a Primal Visual Description (PVD) module that enhances robustness in multimodal understanding. Finally, I will examine the impact of language alignment on the robustness of multilingual LLMs.
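The abstract does not describe the evaluation setup, but as a rough illustration of what position bias means here, the sketch below builds prompts in which a single relevant fact is placed at different depths inside otherwise irrelevant context, so a model's answers can be compared across positions. The query_model stub and the toy fact/distractor text are assumptions for illustration only, not the speaker's method.

```python
# Minimal sketch: probe position bias by moving one relevant fact
# to different depths inside otherwise irrelevant filler context.
# query_model is a hypothetical stub; replace it with a real LLM call.

def query_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (local model or API)."""
    return "(model answer here)"

FACT = "The package was shipped from Kaohsiung on March 3."
QUESTION = "From which city was the package shipped?"
DISTRACTORS = [f"Note {i}: this sentence is unrelated filler text." for i in range(10)]

def build_prompt(fact_position: int) -> str:
    """Insert the relevant fact at a given index among the distractor sentences."""
    context = DISTRACTORS.copy()
    context.insert(fact_position, FACT)
    return "\n".join(context) + f"\n\nQuestion: {QUESTION}\nAnswer:"

if __name__ == "__main__":
    # Compare answers when the fact appears early, in the middle, or late;
    # a position-robust model should answer identically in all three cases.
    for position in (0, len(DISTRACTORS) // 2, len(DISTRACTORS)):
        answer = query_model(build_prompt(position))
        print(f"fact at index {position:2d} -> {answer}")
```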
BIO
Kuan-Hao Huang is an Assistant Professor in the Department of Computer Science and Engineering at Texas A&M University. Before joining Texas A&M in 2024, he was a Postdoctoral Research Associate at the University of Illinois Urbana-Champaign. His research focuses on natural language processing and machine learning, with a particular emphasis on building trustworthy and generalizable language AI systems that can adapt across domains, languages, and modalities. His research has been published in top-tier conferences such as ACL, EMNLP, and ICLR. His work on paraphrase understanding was recognized with the ACL Area Chair Award in 2023.