
Institute of Information Science, Academia Sinica


Seminar


Guiding Instruction-based Image Editing via Multimodal Large Language Models

  • Lecturer: Mr. Tsu-Jui Fu (University of California, Santa Barbara)
    Host: Wei-Yun Ma
  • Time: 2024-03-26 (Tue.) 10:30 ~ 12:30
  • Location: Auditorium 106 at IIS New Building
Abstract
Instruction-based image editing improves the controllability and flexibility of image manipulation via natural commands without elaborate descriptions or regional masks.
However, human instructions are sometimes too brief for current methods to capture and follow. Multimodal large language models (MLLMs) show promising capabilities in cross-modal understanding and visual-aware response generation via LMs. In this talk, we investigate how MLLMs facilitate edit instructions and present MLLM-Guided Image Editing (MGIE).
The talk will include a background review of MLLMs and diffusion models for visual generation, so everyone is welcome to join!
Bio
Tsu-Jui (https://tsujuifu.github.io) is a Ph.D. candidate at UCSB and an incoming research scientist at Apple. His research lies in vision+language and text-guided visual editing.
He is also interested in language grounding and information extraction. He has done research internships at Apple AI/ML, Meta AI, Microsoft Azure AI, and Microsoft Research.