To some extent, mind reading is no longer science fiction. This talk will address cutting-edge technology that helps us pursue this dream. We constructed a bidirectional decoding and encoding model of the visual system based on visual stimulus patterns and the induced magnetoencephalography (MEG) signals. We exploited the temporal information in the MEG data to decode brain activity for stimulus image reconstruction and to encode the stimulus image for the prediction of spatiotemporal neural activity. This was the first attempt to demonstrate that high-dimensional brain activity can be well represented by a lower-dimensional nonlinear manifold. We have also developed another neural manifold model that can reveal the manifold subspace for the perception and representation of human face angles. This manifold model is embedded in the original brain activity space and can be determined using the proposed supervised learning method. In addition to its capability to predict face angles from brain activity, this model also validated that face angle perception is performed primarily in the inferior occipital gyrus. These techniques are highly valuable for investigating the processes and mechanisms of the human visual system.
Yong-Sheng Chen received the B.S. degree in computer and information science from National Chiao Tung University, Hsinchu, Taiwan, in 1993 and the M.S. degree and the Ph.D. degree in computer science and information engineering from National Taiwan University, Taipei, Taiwan, in 1995 and 2001, respectively. He is currently a Professor in the Department of Computer Science and the Chairman of the Electrical Engineering and Computer Science Undergraduate Honors Program, National Chiao Tung University, Hsinchu, Taiwan. His research interests include biomedical signal processing, functional brain mapping, and computer vision.