In this talk, we highlight two topics that improve the performance of deep neural networks (DNNs) in general. The first is neural architecture search (NAS): we present our recent work, NeuralScale, which searches for the optimal number of filters in each layer of a convolutional neural network. Because the formulation of NeuralScale is far less computationally intensive than existing NAS methods, it can be adopted without multiple GPUs or long search times. The second topic is our recent work on transductive inference based on meta-learning, applied to the task of remote heart rate estimation. Once deployed, state-of-the-art DNN models usually perform point estimation with a static model (a fixed set of weights). We argue that such an approach is impractical, since the real world is always changing and the model must adapt to a time-varying input distribution. Remote heart rate estimation from a video camera is a good vehicle for measuring an estimation model's performance in a varying environment. We show that, through a transductive meta-learner, we can perform well even in environments distant from our training set.
Eugene is currently a PhD student advised by Dr. Chen-Yi Lee. His research focuses on machine learning approaches suited to self-supervised or autonomous learning. He works mainly on causal learning, meta-learning, self-supervised learning, and neural architecture search. He is also interested in applying machine learning techniques to computer vision and multi-dimensional signal processing tasks.