Deep learning has made a tremendous breakthrough in artificial intelligence, and the revolution has reached numerous application fields. Convolutional neural networks (CNNs) and Transformer-based large language models (such as BERT and GPT) shine mainly in computer vision and natural language processing, but are not limited to these fields. However, we still have only a limited understanding of how these black-box models work in principle and in practice. Inductive biases are introduced by people to empower deep learning models to learn better representations of the data they process. Every inductive bias corresponds to particular data properties and has its own mathematical background. In this talk, I will start from the foundations of deep learning and move on to the inductive biases behind attention and convolutional layers, which are the core of popular deep learning models, offering insight into the magic behind these models. Extending these inductive biases, graph convolutions generalize traditional convolution to form graph neural networks (GNNs). GNNs, as a new branch of deep learning, bring applications to more fields, including chemistry, biology, physics, social science, etc. Finally, extending convolutions with geometric priors pushes deep learning towards geometric deep learning.
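As a taste of how convolution generalizes to graphs, here is a minimal sketch of a single graph-convolution layer in the widely used GCN form (normalized neighborhood aggregation followed by a linear map and a nonlinearity). All names and the toy graph are illustrative, not taken from the talk:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix, H: (n, f_in) node features,
    W: (f_in, f_out) learnable weights.
    """
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)                    # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    # Each node averages its neighbors' features, then applies W and ReLU
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy example: 3 nodes on a path, 2 input features, 2 output features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.randn(3, 2)
W = np.random.randn(2, 2)
print(gcn_layer(A, H, W).shape)  # (3, 2)
```

On a regular grid graph this aggregation reduces to something very close to an ordinary convolution with shared weights, which is exactly the sense in which GNNs extend CNNs.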
Yueh-Hua Tu is a machine learning engineer at Taiwan AI Labs. His work focuses on researching and developing computational biology models using machine learning and deep learning techniques. His research interests are in modeling gene regulatory networks, trajectory inference, and cell fate discovery from integrative single-cell transcriptome and epigenome analysis. He received his PhD in Bioinformatics from the Taiwan International Graduate Program (TIGP), Academia Sinica, in cooperation with National Taiwan University (NTU) in 2022. He received a dual bachelor's degree in Medical Laboratory Science and Computer Science from National Cheng Kung University (NCKU) in 2014 and a master's degree in Biomedical Informatics from National Yang Ming University (NYMU) in 2016. He has been a lecturer for machine learning courses at ITRI. He is the founder and host of the Julia Taiwan community and maintains the geometric deep learning package GeometricFlux.jl in Julia. He has written two books on Julia programming and data science. He has published seven research articles in journals, including Briefings in Bioinformatics and BMC Genomics, and is listed as first author on four of them.