Institute of Information Science, Academia Sinica


2013 Technical Reports


TR-IIS-13-001

Null Space Component Analysis for Noisy Blind Source Separation
Wen-Liang Hwang and Jinn Ho

We propose an approach called the Null Space Component Analysis (NCA) algorithm to solve the noisy blind source separation (BSS) problem. In a set of m linearly independent source signals, each signal is associated with a separating operator that includes the signal in its null space and repels the other signals from that space. The signal model induced by the m operators represents the space in which each operator separates a single signal from the others. We show that the model can act as a constraint on the source signals in the noisy BSS problem. In contrast to ICA-based and sparsity-based approaches, NCA is a deterministic and data-adaptive algorithm that can solve both the under-determined and the over-determined BSS problems. To demonstrate the algorithm's efficiency, we process several signals, including real-life signals obtained from biomedical systems, and compare the results with those derived by other methods.

Fulltext
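
As a quick, self-contained illustration of the null-space idea in the abstract above, the Python sketch below builds, for each toy source s, the orthogonal projector A = I - s s^T / ||s||^2, which keeps s in its null space while mapping any signal not parallel to s to a nonzero vector. This is our own minimal sketch of the separating-operator concept, not the authors' NCA algorithm.

import numpy as np

# Two toy source signals of length n (linearly independent).
n = 100
t = np.linspace(0.0, 1.0, n)
s1 = np.sin(2 * np.pi * 5 * t)            # sinusoid
s2 = np.sign(np.sin(2 * np.pi * 3 * t))   # square wave

def separating_operator(s):
    # Orthogonal projector A = I - s s^T / ||s||^2, so that A @ s = 0,
    # while signals not parallel to s are mapped to nonzero vectors.
    s = np.asarray(s, dtype=float)
    return np.eye(s.size) - np.outer(s, s) / np.dot(s, s)

A1 = separating_operator(s1)
A2 = separating_operator(s2)

print(np.linalg.norm(A1 @ s1))  # ~0: s1 lies in the null space of A1
print(np.linalg.norm(A1 @ s2))  # clearly nonzero: s2 is "repelled"
print(np.linalg.norm(A2 @ s2))  # ~0
print(np.linalg.norm(A2 @ s1))  # clearly nonzero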

TR-IIS-13-002

A Framework for Fusion of Human Sensor and Physical Sensor Data
P. H. Tsai, Y.-C. Lin, Y. Z. Ou, E. T.-H. Chu, and J. W. S. Liu

Many disaster warning and response systems can improve their surveillance coverage of a threatened area by supplementing in-situ and remote sensor data with crowdsourced human sensor data captured and sent by people in the area. This report, a revised version of a 2012 technical report, presents fusion methods that enable a crowdsourcing-enhanced system to use human sensor data and physical sensor data synergistically to improve its sensor coverage and the quality of its decisions. The methods are built on results from classical statistical detection and estimation theory and use value fusion and decision fusion of human sensor data and physical sensor data in a coherent way. They are building blocks of a central fusion unit in a crowdsourcing support system for disaster surveillance. In addition, this version contains a brief description of CROSS, a crowdsourcing support platform that can be used to enhance existing disaster surveillance systems, as well as performance data on the relative merits of the detection method proposed here.

Keywords: Crowdsourcing, Multiple sensor fusion, Statistical detection and estimation

Fulltext
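
To make the distinction between value fusion and decision fusion concrete, here is a toy Python sketch with made-up readings and a hypothetical threshold; it illustrates the two classical fusion styles the abstract refers to, not the report's actual method. Value fusion combines the raw readings before deciding once; decision fusion lets each sensor decide locally and then takes a majority vote.

import numpy as np

rng = np.random.default_rng(1)

# Toy setup: an event of true strength 1.0 observed by k noisy sensors
# (stand-ins for a mix of physical sensors and human reports).
k = 7
readings = 1.0 + rng.normal(0.0, 0.8, size=k)

threshold = 0.5  # hypothetical detection threshold

# Value fusion: average the raw readings, then decide once.
value_fusion = readings.mean() > threshold

# Decision fusion: each sensor makes a one-bit local decision;
# the fusion center takes a majority vote over the k bits.
local_bits = readings > threshold
decision_fusion = local_bits.sum() > k / 2

print("value fusion   :", value_fusion)
print("decision fusion:", decision_fusion)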

IM-IIS-13-001

Robust Action Recognition via Borrowing Information across Video Modalities
Nick C. Tang (譚家棟), Yen-Yu Lin (林彥宇), Ju-Hsuan Hua (花如萱), Ming-Fang Weng (翁明昉), and Hong-Yuan Mark Liao (廖弘源)

Recent advances in imaging devices have opened up opportunities for better solving the tasks of video content analysis and understanding. Next-generation cameras, such as depth or binocular cameras, capture diverse information that complements conventional 2D RGB cameras. Investigating the resulting multi-modal videos thus generally facilitates related applications. However, the limitations of these emerging cameras, such as short effective distances, high costs, or long response times, degrade their applicability and currently make them impractical for online use. In this work, we provide an alternative scenario to address this problem and illustrate it with the task of recognizing human actions. Specifically, we aim to improve the accuracy of action recognition in RGB videos with the aid of one additional RGB-D camera. Since RGB-D cameras, such as Kinect, are typically not applicable in a surveillance system due to their short effective distance, we instead collect, offline, a database in which not only the RGB videos but also the depth maps and skeleton data of the actions are jointly available. The proposed approach can adapt to inter-database variations and enables the borrowing of visual knowledge across different video modalities. Each action to be recognized in the RGB representation is then augmented with the borrowed depth and skeleton features. Our approach is comprehensively evaluated on three benchmark action recognition datasets. The promising results demonstrate that the borrowed information leads to a remarkable boost in recognition accuracy.

Not for public release; please contact the Library for the full text.
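
One minimal way to picture the "borrowing" step described above: retrieve, for an RGB-only query, the closest entry of the offline RGB-D database in RGB feature space, and append that entry's depth/skeleton features to the query before classification. The Python sketch below uses random stand-in features and plain nearest-neighbor retrieval; it is an assumed simplification for illustration, not the authors' adaptation method.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical offline database: each clip has an RGB feature vector
# plus the depth/skeleton features that were captured jointly with it.
n_db, d_rgb, d_depth = 200, 64, 32
db_rgb = rng.normal(size=(n_db, d_rgb))
db_depth = rng.normal(size=(n_db, d_depth))

def borrow_depth_features(query_rgb):
    # Nearest-neighbor retrieval in RGB feature space; return the
    # depth/skeleton features stored with the retrieved clip.
    dists = np.linalg.norm(db_rgb - query_rgb, axis=1)
    return db_depth[np.argmin(dists)]

# Augment an RGB-only clip with borrowed features, then feed the
# concatenated vector to any off-the-shelf action classifier.
query_rgb = rng.normal(size=d_rgb)
augmented = np.concatenate([query_rgb, borrow_depth_features(query_rgb)])
print(augmented.shape)  # (96,): 64 RGB dims + 32 borrowed dims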