Institute of Information Science
Recent Research Results
Authors: Ming-Chang Yang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
The market trend of flash memory chips has been toward high density but low reliability. The rapidly increasing bit error rates and emerging reliability issues of the coming triple-level-cell (TLC) and even three-dimensional (3D) flash chips force users to take an extremely high risk when storing data in such low-reliability storage media. With these observations in mind, this paper rethinks the layer design of flash devices and proposes a complete paradigm shift that re-configures physical flash chips of potentially massive parallelism into better "virtual chips" in order to improve data recoverability in a modular and low-cost way. The concept of virtual chips is realized at the hardware abstraction layer (HAL) without continually complicating the conventional flash management software of the flash translation layer (FTL). The capability and compatibility of the proposed design are both verified by a series of experiments with encouraging results.
"Achieving SLC Performance with MLC Flash Memory," ACM/IEEE Design Automation Conference (DAC), June 2015.
Authors: Yu-Ming Chang, Yuan-Hao Chang, Tei-Wei Kuo, Yung-Chun Li, and Hsiang-Pang Li

Abstract:
Although the Multi-Level-Cell technique is widely adopted by flash-memory vendors to boost chip density and lower cost, it introduces serious performance and reliability problems. In contrast to past work, a new cell programming method is proposed to not only significantly improve chip performance but also reduce the potential bit error rate. In particular, a Single-Level-Cell-like programming style is proposed to better exploit the threshold-voltage relationship that denotes different Multi-Level-Cell bit information, which in turn provides a much larger threshold-voltage window similar to that found in Single-Level-Cell chips. This results in fewer programming iterations and, at the same time, far fewer reliability problems when programming flash-memory cells. In the experiments, the new programming style accelerated programming speed by up to 742% and reduced the bit error rate by up to 471% for Multi-Level-Cell pages.
Current Research Results
Authors: Ke-Shiuan Lynn, Chen-Chun Chen, T. Mamie Lih, Cheng-Wei Cheng, Wan-Chih Su, Chun-Hao Chang, Chia-Ying Cheng, Wen-Lian Hsu, Yu-Ju Chen, and Ting-Yi Sung

Abstract:
Glycosylation is a highly complex modification influencing the functions and activities of proteins. Interpretation of intact glycopeptide spectra is crucial but challenging. In this paper, we present a mass spectrometry-based automated glycopeptide identification platform (MAGIC) to identify peptide sequences and glycan compositions directly from intact N-linked glycopeptide collision-induced-dissociation spectra. The identification of the Y1 (peptide + GlcNAc) ion is critical for the correct analysis of unknown glycoproteins, especially without prior knowledge of the proteins and glycans present in the sample. To ensure accurate Y1-ion assignment, we propose a novel algorithm called Trident that detects a triplet pattern corresponding to [Y0, Y1, Y2] or [Y0-NH3, Y0, Y1] from the fragmentation of the common trimannosyl core of N-linked glycopeptides. To facilitate the subsequent peptide sequence identification by common database search engines, MAGIC generates in silico spectra by overwriting the original precursor with the naked peptide m/z and removing all of the glycan-related ions. Finally, MAGIC computes the glycan compositions and ranks them. For the model glycoprotein horseradish peroxidase (HRP) and a 5-glycoprotein mixture, a 2- to 31-fold increase in the relative intensities of the peptide fragments was achieved, which led to the identification of 7 tryptic glycopeptides from HRP and 16 glycopeptides from the mixture via Mascot. In the HeLa cell proteome data set, MAGIC processed over a thousand MS(2) spectra in 3 min on a PC and reported 36 glycopeptides from 26 glycoproteins. Finally, a remarkable false discovery rate of 0 was achieved on the N-glycosylation-free Escherichia coli data set. MAGIC is available at http://ms.iis.sinica.edu.tw/COmics/Software_MAGIC.html.
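The triplet test at the heart of Trident is easy to sketch. The following toy scan is an illustration only (the 0.02 Da tolerance, the singly-charged assumption, and the peak list are ours, not MAGIC's parameters): it looks for a [Y0, Y1, Y2] ladder spaced by the GlcNAc residue mass.

```python
# Illustrative Trident-style triplet scan (not MAGIC's implementation).
GLCNAC = 203.0794  # monoisotopic residue mass of N-acetylglucosamine (Da)

def find_y1_candidates(peaks, tol=0.02):
    """Return (y0, y1, y2) m/z triplets in a singly-charged peak list,
    where y1 = y0 + GlcNAc and y2 = y0 + 2*GlcNAc."""
    triplets = []
    for y0 in peaks:
        y1, y2 = y0 + GLCNAC, y0 + 2 * GLCNAC
        if any(abs(p - y1) <= tol for p in peaks) and \
           any(abs(p - y2) <= tol for p in peaks):
            triplets.append((y0, y1, y2))
    return triplets

# Toy spectrum: a peptide Y0 at 1000.5 plus its GlcNAc ladder and noise peaks.
peaks = [512.3, 1000.5, 1203.579, 1406.659, 1550.1]
triplets = find_y1_candidates(peaks)
```

Only the ladder starting at 1000.5 survives the test; the noise peaks have no matching Y1 and Y2 partners.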
"From Weak to Strong Zero-Knowledge and Applications," The 12th Theory of Cryptography Conference (TCC 2015), 2015.
Authors: Kai-Min Chung and Edward Lui and Rafael Pass

Abstract:
The notion of zero-knowledge is formalized by requiring that for every malicious efficient verifier V*, there exists an efficient simulator S that can reconstruct the view of V* in a true interaction with the prover, in a way that is indistinguishable to every polynomial-time distinguisher. Weak zero-knowledge weakens this notion by switching the order of the quantifiers: it only requires that for every distinguisher D, there exists a (potentially different) simulator SD whose output is indistinguishable to D.
 
In this paper we consider various notions of zero-knowledge, and investigate whether their weak variants are equivalent to their strong variants. Although we show (under complexity assumptions) that for the standard notion of zero-knowledge, its weak and strong counterparts are not equivalent, for meaningful variants of the standard notion, the weak and strong counterparts are indeed equivalent. Towards showing these equivalences, we introduce new non-black-box simulation techniques permitting us, for instance, to demonstrate that the classical 2-round graph non-isomorphism protocol of Goldreich-Micali-Wigderson satisfies a “distributional” variant of zero-knowledge.
 
Our equivalence theorem has other applications beyond the notion of zero-knowledge. For instance, it directly implies the dense model theorem of Reingold et al. (STOC ’08) and the leakage lemma of Gentry-Wichs (STOC ’11), and provides a modular and arguably simpler proof of these results (while at the same time recasting them in the language of zero-knowledge).
"Robust Action Recognition via Borrowing Information across Video Modalities," IEEE Trans. on Image Processing, To Appear.
Authors: Nick C Tang, Y. Y. Lin, Ju-Hsuan Hua, Shih-En Wei, M. F. Weng, and H. Y. Mark Liao

Abstract:
Recent advances in imaging devices have opened up opportunities for better solving the tasks of video content analysis and understanding. Next-generation cameras, such as depth or binocular cameras, capture diverse information and complement conventional 2D RGB cameras. Thus, investigating the resulting multimodal videos generally facilitates the accomplishment of related applications. However, the limitations of these emerging cameras, such as short effective distances, high costs, or long response times, degrade their applicability and currently make them unavailable online in practical use. In this paper, we provide an alternative scenario to address this problem, and illustrate it with the task of recognizing human actions. In particular, we aim at improving the accuracy of action recognition in RGB videos with the aid of one additional RGB-D camera. Since RGB-D cameras, such as Kinect, are typically not applicable in a surveillance system due to their short effective distance, we instead offline collect a database in which not only the RGB videos but also the depth maps and the skeleton data of actions are jointly available. The proposed approach can adapt to inter-database variations, and activates the borrowing of visual knowledge across different video modalities. Each action to be recognized in RGB representation is then augmented with the borrowed depth and skeleton features. Our approach is comprehensively evaluated on five benchmark data sets of action recognition. The promising results show that the borrowed information leads to a remarkable boost in recognition accuracy.
"Assistive Image Comment Robot - A Novel Mid-Level Concept-based Representation," IEEE Transactions on Affective Computing, To Appear.
Authors: Y. Y. Chen, Tao Chen, Taikun Liu, H. Y. Mark Liao, and Shih-Fu Chang

Abstract:
We present a general framework and working system for predicting likely affective responses of viewers in the social media environment after an image is posted online. Our approach emphasizes a mid-level concept representation, in which the intended affects of the image publisher are characterized by a large pool of visual concepts (termed PACs) detected directly from image content instead of textual metadata, evoked viewer affects are represented by concepts (termed VACs) mined from online comments, and statistical methods are used to model the correlations between these two types of concepts. We demonstrate the utility of such approaches by developing an end-to-end Assistive Comment Robot application, which further includes components for multi-sentence comment generation, interactive interfaces, and relevance feedback functions. Through user studies, we show that machine-suggested comments were accepted by users for online posting in 90% of completed user sessions, while very favorable results were also observed in various dimensions (plausibility, preference, and realism) when assessing the quality of the generated image comments.
"Improving SIMD Code Generation in QEMU," Design, Automation and Test in Europe Conference (DATE), March 2015.
Authors: Sheng-Yu Fu, Jan-Jan Wu, Wei-Chung Hsu

Abstract:
Modern processors are often enhanced with SIMD instructions, such as the MMX, SSE, and AVX instruction sets in the x86 architecture, or the NEON instruction set in the ARM architecture. Using these SIMD instructions can significantly increase application performance; hence, a significant proportion of the instructions in application binaries are likely to be SIMD instructions. However, Dynamic Binary Translation (DBT) has largely overlooked SIMD instruction translation. For example, in the popular QEMU system emulator, guest SIMD instructions are often emulated with a sequence of scalar instructions even when the host machine has SIMD instructions to support such parallel computation, leaving significant potential for performance enhancement. In this paper, we propose two approaches, one leveraging the existing helper function implementation in QEMU, and the other using a newly introduced vector IR (Intermediate Representation), to enhance the performance of SIMD instruction translation in the DBT of QEMU. Both approaches were implemented in QEMU to support ARM and IA32 frontends and an x86-64 backend. Preliminary experiments show that adding the vector IR can significantly enhance the performance of guest applications containing SIMD instructions for both ARM and IA32 architectures when running with QEMU on the x86-64 platform.
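The translation gap can be pictured with a toy model (illustrative Python, not QEMU internals; the op names are invented): the semantics of a guest packed add stay fixed, while the number of host ops emitted depends on whether a vector IR is available.

```python
# Toy model of DBT SIMD translation (illustrative only; not QEMU code).
MASK32 = 0xFFFFFFFF

def paddd_semantics(a, b):
    """Lane-wise 32-bit add with wrap-around, as a guest packed add computes it."""
    return [(x + y) & MASK32 for x, y in zip(a, b)]

def emitted_host_ops(vector_ir):
    """Host ops a toy translator might emit for one guest 4x32-bit packed add."""
    if vector_ir:
        return ["host_vadd_i32x4"]   # one host SIMD instruction
    return ["host_add_i32"] * 4      # scalar emulation: one op per lane

assert paddd_semantics([1, 2, 3, MASK32], [10, 20, 30, 1]) == [11, 22, 33, 0]
assert len(emitted_host_ops(False)) == 4 * len(emitted_host_ops(True))
```

Either translation computes the same result; the vector-IR path simply reaches it in a quarter of the emitted host operations in this toy accounting.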
"A Virtual Repository for Linked-Data Based Disaster Management Applications," International Journal on Safety and Security Engineering, WIT Press, March 2015.
Authors: F.-P. Yang, Y. Z. Ou, C. W. Yu, J. Su, S.-W. Bai, J.-M. Ho and J. W. S. Liu

Abstract:
Typical state-of-the-art disaster management information systems (DMIS) cannot support responsive discovery and access of data and information needed to handle unforeseen emergencies. Adding semantics and relations to legacy data and transforming them to linked data can remove this limitation. The virtual repository presented in this paper is a development environment for this purpose: It provides application developers with tools for incremental transformation of legacy data and information in the DMIS into linked data as needed by the applications. The virtual repository also provides the applications with support for runtime access of linked data created and maintained using its tools.
"Design and Implementation of Participant Selection for Crowdsourcing Disaster Information," International Journal on Safety and Security Engineering, WIT Press, March 2015.
Authors: E. T.-H Chu, C.-Y. Lin, P. H. Tsai and J. W. S. Liu

Abstract:
Experiences with past major disasters tell us that people with wireless devices and social network services can serve effectively as mobile human sensors. A disaster warning and response system can solicit eyewitness reports from selected participants and use the information they provide to supplement surveillance sensor coverage. This paper first presents an overview of the participant selection problem: how to select participants from available volunteers given the benefits and costs of deploying them. The greedy algorithm, named PSP-G, is known to deliver near-optimal solutions in a small fraction of the execution time of well-known optimization methods. The paper then describes an implementation of the PSP-G algorithm and the integration of the PSP-G module into the Ushahidi platform. Performance data from two case studies, the 2010 Haiti Earthquake and 2009 Typhoon Morakot, also described here, clearly show that PSP-G is a general, practical solution.
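A benefit-per-cost greedy sketch conveys the flavor of such participant selection (the model below is our illustration; PSP-G's actual benefit and cost definitions are in the paper).

```python
# Illustrative greedy participant selection under a deployment budget.
def greedy_select(volunteers, budget):
    """volunteers: {name: (benefit, cost)}; pick highest benefit-per-cost
    volunteers first, skipping any whose cost would exceed the budget."""
    chosen, spent = [], 0.0
    for name, (b, c) in sorted(volunteers.items(),
                               key=lambda kv: kv[1][0] / kv[1][1],
                               reverse=True):
        if spent + c <= budget:
            chosen.append(name)
            spent += c
    return chosen, spent

# Toy instance: four volunteers, budget of 6 cost units.
vols = {"v1": (9.0, 3.0), "v2": (4.0, 1.0), "v3": (5.0, 4.0), "v4": (2.0, 2.0)}
picked, cost = greedy_select(vols, budget=6.0)
```

Here v3 has the largest raw benefit among the skipped candidates, but its poor benefit-to-cost ratio keeps it out of the greedy solution.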
"Tight Parallel Repetition Theorems for Public-Coin Arguments using KL-divergence," The 12th Theory of Cryptography Conference (TCC 2015), 2015.
Authors: Kai-Min Chung and Rafael Pass

Abstract:
We present a new and conceptually simpler proof of a tight parallel-repetition theorem for public-coin arguments [Pass-Venkitasubramaniam, STOC’07], [Håstad et al, TCC’10], [Chung-Liu, TCC’10]. We follow the same proof framework as the previous non-tight parallel-repetition theorem of Håstad et al—which relied on statistical distance to measure the distance between experiments—and show that it can be made tight (and further simplified) if instead relying on KL-divergence as the distance between the experiments.
We then use this new proof to present the first tight “Chernoff-type” parallel repetition theorem for arbitrary public-coin arguments, demonstrating that parallel-repetition can be used to simultaneously decrease both the soundness and completeness error of any public-coin argument at a rate matching the standard Chernoff bound.
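The appeal of KL-divergence in this setting is that it is exactly additive over independent repetitions, whereas statistical distance is only subadditive. A small numerical check for Bernoulli distributions:

```python
# KL-divergence is additive over product distributions: D(P^k || Q^k) = k*D(P||Q).
from math import log
from itertools import product

def kl_bern(p, q):
    """D(Ber(p) || Ber(q)) in nats, for p, q strictly between 0 and 1."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

def kl_product(p, q, k):
    """KL divergence between the k-fold products of Ber(p) and Ber(q),
    computed by brute force over all 2^k outcomes."""
    total = 0.0
    for bits in product([0, 1], repeat=k):
        pa = qa = 1.0
        for b in bits:
            pa *= p if b else 1 - p
            qa *= q if b else 1 - q
        total += pa * log(pa / qa)
    return total

# Exact additivity, the property the tight analysis leans on.
assert abs(kl_product(0.3, 0.5, 4) - 4 * kl_bern(0.3, 0.5)) < 1e-9
```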
 
Authors: Ke-Shiuan Lynn, Mei-Ling Cheng, Yet-Ran Chen, Chin Hsu, Ann Chen, T. Mamie Lih, Hui-Yin Chang, Ching-jang Huang, Ming-Shi Shiao, Wen-Harn Pan, Ting-Yi Sung, and Wen-Lian Hsu

Abstract:
Metabolite identification remains a bottleneck in mass spectrometry (MS)-based metabolomics. Currently, this process relies heavily on tandem mass spectrometry (MS/MS) spectra generated separately for peaks of interest identified from previous MS runs. Such a delayed and labor-intensive procedure creates a barrier to automation. Further, information embedded in MS data has not been used to its full extent for metabolite identification. Multimers, adducts, multiply charged ions, and fragments of given metabolites occupy a substantial proportion (40–80%) of the peaks in a quantitation result. However, extensive information on these derivatives, especially fragments, may facilitate metabolite identification. We propose a procedure with automation capability to group and annotate peaks associated with the same metabolite in the quantitation results of opposite modes and to integrate this information for metabolite identification. In addition to the conventional mass and isotope ratio matches, we also match annotated fragments with low-energy MS/MS spectra in public databases. For identification of metabolites without accessible MS/MS spectra, we have developed characteristic fragment and common substructure matches. The accuracy and effectiveness of the procedure were evaluated using one public and two in-house liquid chromatography–mass spectrometry (LC–MS) data sets. The procedure accurately identified 89% of 28 standard metabolites with derivative ions in the data sets. With respect to effectiveness, the procedure confidently identified the correct chemical formula of at least 42% of metabolites with derivative ions via MS/MS spectrum, characteristic fragment, and common substructure matches. The confidence level was determined according to the fulfilled identification criteria of various matches and relative retention time.
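Grouping derivative peaks can be sketched by matching mass differences against known adduct relationships. The deltas below are standard monoisotopic values, but the tolerance and toy peak list are ours, and the real procedure also covers multimers, charge states, and fragments.

```python
# Illustrative adduct-pair annotation over a single quantitation result.
ADDUCT_DELTAS = {
    "M+Na vs M+H": 21.9819,   # Na replaces H
    "M+K vs M+H": 37.9559,    # K replaces H
    "M+NH4 vs M+H": 17.0265,  # ammonium adduct vs protonated ion
}

def annotate_pairs(peaks, tol=0.005):
    """Return (lower_mz, higher_mz, relation) for peak pairs whose m/z
    difference matches a known adduct delta within tolerance (Da)."""
    hits = []
    for i, a in enumerate(peaks):
        for b in peaks[i + 1:]:
            lo, hi = sorted((a, b))
            for name, d in ADDUCT_DELTAS.items():
                if abs((hi - lo) - d) <= tol:
                    hits.append((lo, hi, name))
    return hits

# Toy peaks: one metabolite seen as M+H, M+Na, and M+K, plus an unrelated peak.
peaks = [181.071, 203.053, 219.026, 363.129]
hits = annotate_pairs(peaks)
```

The three ions of the same metabolite are linked into one group by their pairwise deltas, while the unrelated peak stays unannotated.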
Authors: Jiann-Horng Leu, Kuan-Fu Liu, Kuan-Yu Chen, Shu-Hwa Chen, Yu-Bin Wang, Chung-Yen Lin, Chu-Fang Lo.

Abstract:
By microarray screening, we identified a novel gene strongly induced by white spot syndrome virus (WSSV) in the gills of Penaeus monodon. The gene, PmERP15, encodes a putative transmembrane protein of 15 kDa, which showed only some degree of similarity (54–59%) to several unknown insect proteins and had no hits to shrimp proteins. RT-PCR showed that PmERP15 was highly expressed in the hemocytes, heart, and lymphoid organs, and that WSSV-induced strong expression of PmERP15 was evident in all tissues examined. Western blot analysis likewise showed that WSSV strongly up-regulated PmERP15 protein levels. In WSSV-infected hemocytes, immunofluorescence staining showed that PmERP15 protein was colocalized with an ER enzyme, protein disulfide isomerase, and in Sf9 insect cells, PmERP15-EGFP fusion protein colocalized with ER-Tracker™ Red dye as well. GRP78, an ER stress marker, was found to be up-regulated in WSSV-infected P. monodon, and both PmERP15 and GRP78 were up-regulated in shrimp injected with the ER stress inducers tunicamycin and dithiothreitol. Silencing experiments showed that although PmERP15 dsRNA-injected shrimp succumbed to WSSV infection more rapidly, the WSSV copy number showed no significant changes. These results suggest that PmERP15 is an ER stress-induced, ER-resident protein, and its induction in WSSV-infected shrimp is caused by the ER stress triggered by WSSV infection. Furthermore, although PmERP15 has no role in WSSV multiplication, its presence is essential for the survival of WSSV-infected shrimp.
"Placing Virtual Machines to Optimize Cloud Gaming Experience," IEEE Transactions on Cloud Computing, 2015.
Authors: Hua-Jun Hong, De-Yu Chen, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu

Abstract:
Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer Quality of Experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider’s total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. Because the optimal solution takes exponential running time, we also develop an efficient heuristic algorithm. We further present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software to demonstrate the practicality and efficiency of our algorithms. Our experience in realizing the testbed sheds some light on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20000 servers and 40000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
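A stripped-down version of the "just-good-enough QoE" idea: among servers that meet the latency bound, assign each gamer to the cheapest one with free capacity. The latency bound, costs, and capacities below are invented for illustration; the paper's models are measurement-driven and its algorithms are more involved.

```python
# Toy greedy placement: satisfy the QoE bound, then minimize server cost.
def place(gamers, servers, costs, max_latency=100):
    """gamers: {g: {server: latency_ms}}; servers: {s: free_slots};
    costs: {s: cost per hosted gamer}. Returns {gamer: server}."""
    assignment = {}
    for g, lat in gamers.items():
        feasible = [s for s, l in lat.items()
                    if l <= max_latency and servers[s] > 0]
        if feasible:
            best = min(feasible, key=lambda s: costs[s])  # cheapest feasible
            assignment[g] = best
            servers[best] -= 1
    return assignment

servers = {"s1": 1, "s2": 2}
costs = {"s1": 5.0, "s2": 2.0}   # s2 is cheaper to run
gamers = {"g1": {"s1": 40, "s2": 80},
          "g2": {"s1": 30, "s2": 150},
          "g3": {"s1": 60, "s2": 90}}
assignment = place(gamers, servers, costs)
```

g1 lands on the cheap server even though the expensive one would give it lower latency: once the bound is met, extra QoE buys no extra profit.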
"Re-Weighted and Adaptive Morphology Separation," SIAM Image Processing, October 2014.
Authors: Guan-Ju Peng and Wen-Liang Hwang

Abstract:
Morphological component analysis (MCA) for signal separation decomposes a signal into a superposition of morphological subcomponents, each of which is approximately sparse in a certain dictionary. Some of the dictionaries can also be modified to make them adaptive to local structure in images. We show that signal separation performance can be improved over previous MCA approaches by replacing L1-norm optimization with “weighted” L1-norm optimization and replacing their dictionary adaptation with regularized dictionary adaptation. The weight on an atom for sparse coding is commonly derived from the corresponding coefficient's value. In contrast, the weight of an atom in a dictionary for signal separation is derived from the mutual coherence between the atom and the atoms in the other dictionaries. The proposed solution for regularized dictionary adaptation is an extension of the K-SVD method, where the dictionary and “weighted” sparse coefficients are estimated simultaneously. We present a series of experiments demonstrating the significant performance improvement of the proposed algorithm over previous approaches to signal separation.
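The move from plain to weighted L1 shows up concretely as a per-atom threshold in the shrinkage (proximal) step. Below is a minimal sketch; the ISTA-style framing and the particular weights are our assumptions for illustration, not the paper's exact solver.

```python
# Weighted soft-thresholding: the proximal step behind weighted-L1 sparse coding.
def soft(x, t):
    """Soft-thresholding, the proximal operator of t * |x|."""
    if abs(x) <= t:
        return 0.0
    return x - t if x > 0 else x + t

def weighted_shrink(coeffs, weights, lam):
    """Atoms with larger weights (e.g., higher mutual coherence with atoms
    in the other dictionaries) are penalized by a larger threshold."""
    return [soft(c, lam * w) for c, w in zip(coeffs, weights)]

# The middle atom carries weight 2.0, so it is shrunk twice as hard.
result = weighted_shrink([2.0, -1.5, 0.3], [1.0, 2.0, 1.0], 0.5)
```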
"Reliability-Aware Striping with Minimized Performance Overheads for Flash-based Storage Devices," ACM Symposium on Applied Computing (SAC), April 2015.
Authors: Ming-Chang Yang, Yu-Ming Chang, Po-Chun Huang, Yuan-Hao Chang, Lue-Jane Lee, and Tei-Wei Kuo

Abstract:
As the data retention and disturb problems of coming flash chips keep deteriorating, improving the reliability/endurance of flash devices has become a critical issue in many flash-based storage applications. Meanwhile, the increasing access parallelism of today's flash devices makes it possible to adopt larger striping units in RAID-like technology to further enhance data reliability, even though larger striping units might also incur more performance overhead. This work is thus motivated by the respective merits in reliability (resp. performance) provided by larger (resp. smaller) striping units. Very different from many existing RAID-integrated flash management designs, this work proposes a reliability-aware striping design that adaptively determines a "proper" stripe size for data with different (update) patterns to achieve reliability enhancement with minimized performance degradation. In particular, our design considers the special "erase-before-write" feature of flash chips to further reduce the performance overhead. The experiments were conducted on representative realistic workloads to evaluate the efficacy of the proposed scheme, and the results are very encouraging.
"The Deployment of Shared Data Objects Among Handheld and Wearable Devices," ACM Symposium on Applied Computing (SAC), April 2015.
Authors: Sheng-Wei Cheng, Che-Wei Chang, Yuan-Hao Chang, Pi-Cheng Hsiu, and Chia-Heng Tu

Abstract:
With the great success of making phones smarter, vendors now plan to replicate the same idea in wearable accessories. Accordingly, applications on these devices are full of new possibilities to interact with users. However, providing a consistent user experience poses a major challenge: how to efficiently deploy shared application states among the devices. In this paper, we consider minimizing the data transmission latencies between the processes and the shared data objects on a set of mobile devices with distributed shared memory. The problem is proved to be NP-hard. Nevertheless, efficient solutions can still be obtained for special cases. On the one hand, we propose a polynomial-time optimal algorithm for the case where the memory of each mobile device is segmented into blocks and each shared data object occupies a single block. On the other hand, to provide a practical way to address the general problem, we propose a (1, ε) asymptotic approximation algorithm, where ε > 0 can be arbitrarily small, with a 2-augmentation bound on memory size. In the end, a series of simulations was conducted, and the results were very encouraging.
"A Progressive Wear Leveling to Minimize Data Migration Overheads for NAND Flash Devices," ACM/IEEE Design, Automation and Test in Europe (DATE), March 2015.
Authors: Fu-Hsin Chen, Ming-Chang Yang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
As the endurance of flash memory keeps deteriorating, exploiting wear leveling techniques to improve the lifetime/endurance of flash memory has become a critical issue in the design of flash storage devices. Nevertheless, the deteriorating access performance of high-density flash memory makes the performance overhead introduced by wear leveling non-negligible. In contrast to existing wear leveling techniques that aggressively distribute erases evenly over all flash blocks according to a fixed threshold, we propose a progressive wear leveling design that performs wear leveling in a "progressive" way to prevent any block from being worn out prematurely, and ultimately to minimize the performance overhead caused by the unnecessary data migration imposed by aggressive wear leveling in the early stage of the device lifespan, or by the improper selection of victim blocks for erases. The experiments were conducted on representative realistic workloads to evaluate the efficacy of the proposed scheme. The results reveal that, instead of sacrificing device lifetime, performing wear leveling in such a progressive way not only minimizes the performance overhead but even has the potential to extend device lifetime.
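The "progressive" idea can be sketched with a trigger threshold that relaxes early in the device's life and tightens as wear accumulates. The formula and constants below are hypothetical illustrations, not the paper's policy.

```python
# Hypothetical progressive wear-leveling trigger: tolerate a large
# erase-count gap while the device is young, tighten as average wear grows.
def needs_leveling(erase_counts, base_gap=16, life_scale=100.0):
    """Return True when the erase-count gap exceeds an age-dependent bound."""
    avg = sum(erase_counts) / len(erase_counts)
    threshold = base_gap + max(0.0, life_scale - avg)  # shrinks as avg grows
    return max(erase_counts) - min(erase_counts) > threshold

# Young device: a fixed threshold of 16 would already migrate data here.
assert not needs_leveling([0, 5, 18])
# Aged device: the gap now matters and leveling kicks in.
assert needs_leveling([300, 350, 420])
```

The contrast with a fixed threshold is the point: the young device above avoids an unnecessary migration that an aggressive fixed-threshold policy would have paid for.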
"4-Round Resettably-Sound Zero Knowledge," The 11th IACR Theory of Cryptography Conference (TCC), February 2014.
Authors: Kai-Min Chung and Rafail Ostrovsky and Rafael Pass and Muthuramakrishnan Venkitasubramaniam and Ivan Visconti

Abstract:

While 4-round constructions of zero-knowledge arguments are known based on the existence of one-way functions, constructions of *resettably-sound* zero-knowledge arguments require either stronger assumptions (the existence of a fully-homomorphic encryption scheme) or more communication rounds. We close this gap by demonstrating a 4-round resettably-sound zero-knowledge argument for NP based on the existence of one-way functions.
 
"Cross-camera Knowledge Transfer for Multiview People Counting," IEEE Trans. on Image Processing, January 2015.
Authors: Nick C Tang, Y. Y. Lin, M. F. Weng, and H. Y. Mark Liao

Abstract:
We present a novel two-pass framework for counting the number of people in an environment where multiple cameras provide different views of the subjects. By exploiting the complementary information captured by the cameras, we can transfer knowledge between the cameras to address the difficulties of people counting and improve the performance. The contribution of this work is threefold. First, normalizing the perspective of visual features and estimating the size of a crowd are highly correlated tasks. Hence we treat them as a joint learning problem. The derived counting model is scalable and it provides more accurate results than existing approaches. Second, we introduce an algorithm that matches groups of pedestrians in images captured by different cameras. The results provide a common domain for knowledge transfer, so we can work with multiple cameras without worrying about their differences. Third, the proposed counting system is comprised of a pair of collaborative regressors. The first one determines the people count based on features extracted from intra-camera visual information, while the second calculates the residual by considering the conflicts between inter-camera predictions. The two regressors are elegantly coupled and provide an accurate people counting system. The results of experiments in various settings show that, overall, our approach outperforms comparable baseline methods. The significant performance improvement demonstrates the effectiveness of our two-pass regression framework.
"Seamless Handover for High-Speed Trains Using Femtocell-based Multiple Egress Network Interfaces," IEEE Transactions on Wireless Communications, December 2014.
Authors: Cheng-Wei Lee, Ming-Chin Chuang, Meng Chang Chen and Yeali S. Sun

Abstract:
High-speed rail systems are becoming increasingly popular among long-distance travelers. With the explosive growth in the number of mobile devices, the provision of high quality telecommunication and Internet access services on high-speed trains is now a pressing problem. Network mobility (NEMO) has been proposed to enable a large number of mobile devices on a vehicle to access the Internet; however, several issues must be solved before it can be put into practice, e.g., frequent handovers, long handover latency, and poor quality of service. To resolve the above problems, we propose an LTE femtocell-based network mobility scheme that uses Multiple Egress Network interfaces to support seamless handover for high-speed rail systems, called MEN-NEMO. The results of simulations show that the proposed MEN-NEMO scheme substantially reduces the handover latency and the transmission overhead of handover signaling.
"Per-cluster Ensemble Kernel Learning for Multi-modal Image Clustering with Group-dependent Feature Selection," IEEE Transactions on Multimedia, December 2014.
Authors: J. T. Tsai, Y. Y. Lin, and H. Y. Mark Liao

Abstract:
We present a clustering approach, MK-SOM, that carries out cluster-dependent feature selection and partitions images with multiple feature representations into clusters. This work is motivated by the observation that human visual systems (HVS) can receive various kinds of visual cues for interpreting the world. Images identified by HVS as the same category are typically coherent with each other in certain crucial visual cues, but the crucial cues vary from category to category. To account for this observation and bridge the semantic gap, the proposed MK-SOM integrates multiple kernel learning (MKL) into the training process of the self-organizing map (SOM), and associates each cluster with a learnable ensemble kernel. Hence, it can leverage information captured by various image descriptors, and discovers the cluster-specific characteristics via learning the per-cluster ensemble kernels. Through the optimization iterations, cluster structures are gradually revealed via the features specified by the learned ensemble kernels, while the quality of these ensemble kernels is progressively improved owing to the coherent clusters enforced by SOM. Besides, MK-SOM allows the introduction of side information to improve performance, and hence provides a new perspective on applying MKL to address both unsupervised and semisupervised clustering tasks. Our approach is comprehensively evaluated in two applications. The superior and promising results demonstrate its effectiveness.
Authors: Mayfield, Anderson; Wang, Yu-Bin; Chen, Chii-Shiarng; Lin, Chung-Yen; Chen, Shu-Hwa

Abstract:
Although rising ocean temperatures threaten scleractinian corals and the reefs they construct, certain reef corals can acclimate to elevated temperatures to which they are rarely exposed in situ. Specimens of the Indo-Pacific reef coral Pocillopora damicornis collected from upwelling reefs of Southern Taiwan were previously found to have survived a 36-week exposure to 30°C, a temperature they encounter infrequently and one that elicits the breakdown of the coral-dinoflagellate (genus Symbiodinium) endosymbiosis in many corals of the Pacific Ocean. In order to gain insight into the sub-cellular pathways utilized by both the coral hosts and their mutualistic Symbiodinium populations to acclimate to this temperature, mRNAs from both control (27°C) and high (30°C) temperature samples were sequenced on an Illumina platform and assembled into a 236,435-contig transcriptome. These P. damicornis specimens were found to be ~60% anthozoan and 40% microbe (Symbiodinium, other eukaryotic microbes, and bacteria) from an mRNA perspective. Furthermore, a significantly higher proportion of genes from the Symbiodinium compartment were differentially expressed after two weeks of exposure. Specifically, at elevated temperatures Symbiodinium populations residing within the coral gastrodermal tissues were more likely to up-regulate the expression of genes encoding proteins involved in metabolism than their coral hosts. Collectively, these transcriptome-scale data suggest that the two members of this endosymbiosis have distinct strategies for acclimating to elevated temperatures that are expected to characterize many of Earth's coral reefs in the coming decades.
http://ips.iis.sinica.edu.tw/coral/
Current Research Results
"A Two-Stage Link Scheduling Scheme for Variable-Bit-Rate Traffic Flows in Wireless Mesh Networks," IEEE Transactions on Wireless Communications, November 2014.
Authors: Yung-Cheng Tu, Meng Chang Chen and Yeali S. Sun

Abstract:
Providing end-to-end QoS for delay-sensitive flows with variable-bit-rate (VBR) traffic in wireless mesh networks is a major challenge. The difficulty arises from time-varied bandwidth requirements, competition for transmission opportunities among flows on the same link, and interference from other wireless links. In this paper, we propose a flexible bandwidth allocation and uncoordinated scheduling scheme, called two-stage link scheduling (2SLS), to support flow delay control in TDMA-based wireless mesh networks. The scheme operates in two stages: slot allocation and on-the-go scheduling. The slot allocation mechanism allocates contiguous time slots to each link in each frame based on pre-defined maximum and minimum bandwidth requirements. Each link's on-the-go scheduling mechanism then dynamically schedules transmissions within the allocated time slots, with the objective of maximally satisfying the demand of all flows on the link according to their bandwidth requirements and the channel condition. Compared to traditional slot allocation approaches, 2SLS achieves higher channel utilization and provides better end-to-end QoS for delay-sensitive flows with VBR traffic.
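The two stages can be sketched in a toy form. The link names, slot bounds, and the earliest-deadline-first rule in stage two are our own illustrative assumptions, not the paper's exact mechanisms:

```python
FRAME_SLOTS = 20

links = {            # link -> (min_slots, max_slots) per frame
    "A-B": (4, 8),
    "B-C": (3, 6),
    "C-D": (2, 6),
}

def allocate_slots(links, frame_slots):
    """Stage 1: give every link its minimum, then share leftovers up to max."""
    alloc = {l: lo for l, (lo, hi) in links.items()}
    spare = frame_slots - sum(alloc.values())
    for l, (lo, hi) in links.items():
        extra = min(hi - lo, spare)
        alloc[l] += extra
        spare -= extra
    # turn counts into contiguous [start, end) slot ranges
    ranges, start = {}, 0
    for l, count in alloc.items():
        ranges[l] = (start, start + count)
        start += count
    return ranges

def schedule_link(slot_range, queued_packets):
    """Stage 2: within its own slots, a link serves its most urgent packets."""
    start, end = slot_range
    capacity = end - start
    # earliest-deadline-first among this link's queued packets
    return sorted(queued_packets, key=lambda p: p["deadline"])[:capacity]

ranges = allocate_slots(links, FRAME_SLOTS)
print(ranges)
pkts = [{"id": i, "deadline": d} for i, d in enumerate([5, 2, 9, 1, 7, 3, 8])]
print([p["id"] for p in schedule_link(ranges["A-B"], pkts)])
```

Because stage two is local to each link, no cross-link coordination is needed once the frame's ranges are fixed, which is the sense in which the scheduling is uncoordinated.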
"On the (Im)Possibility of Tamper-Resilient Cryptography: Using Fourier Analysis in Computer Viruses," The 34th International Cryptology Conference (CRYPTO), August 2014.
Authors: Per Austrin and Kai-Min Chung and Mohammad Mahmoody and Rafael Pass and Karn Seth

Abstract:
We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider $p$-tampering attackers that may efficiently tamper with each bit of the honest parties' random tape with probability $p$, but have to do so in an "online" fashion. Our main result is a strong negative one: we show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be "broken" with probability $p$ by a $p$-tampering attacker. The core of this result is a new Fourier analytic technique for biasing the output of bounded-value functions, which may be of independent interest. We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to $(\frac{1}{\mbox{poly}(n)})$-tampering attacks, where $n$ is the security parameter.
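The biasing phenomenon at the heart of the negative result can be simulated in a toy setting. This is our own illustration, not the paper's Fourier-analytic construction: an online attacker who tampers with each tape bit with probability $p$, greedily fixing it toward the higher conditional expectation, noticeably biases a bounded $\pm 1$ function (here, a majority vote):

```python
import itertools
import random

random.seed(1)
N, P, TRIALS = 8, 0.3, 4000

def f(bits):                      # majority vote, as a bounded +/-1 function
    return 1 if sum(bits) * 2 >= len(bits) else -1

def cond_exp(prefix):
    """Exact E[f | prefix] over uniformly random remaining bits."""
    rest = N - len(prefix)
    vals = [f(prefix + list(t)) for t in itertools.product([0, 1], repeat=rest)]
    return sum(vals) / len(vals)

def sample(tamper):
    bits = []
    for _ in range(N):
        if tamper and random.random() < P:
            # greedy online rule: pick the bit value pushing E[f] higher
            b = max((0, 1), key=lambda v: cond_exp(bits + [v]))
        else:
            b = random.randint(0, 1)
        bits.append(b)
    return f(bits)

honest = sum(sample(False) for _ in range(TRIALS)) / TRIALS
tampered = sum(sample(True) for _ in range(TRIALS)) / TRIALS
print(f"E[f] honest ~ {honest:.2f}, tampered ~ {tampered:.2f}")
```

The attacker never sees future bits, matching the online restriction, yet the empirical mean of $f$ moves well above its honest value.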
Current Research Results
Authors: Hong-Sheng Liao, Po-Yu Chen, and W. T. Chen

Abstract:
In long term evolution-advanced (LTE-A) networks, the carrier aggregation technique allows user equipments (UEs) to simultaneously aggregate multiple component carriers (CCs) to achieve a higher transmission rate. Much research on LTE-A systems with carrier aggregation has concentrated on the radio resource management problem for downlink transmission, mainly CC assignment and packet scheduling. Most previous studies have not considered that the CCs assigned to each UE can be changed, nor have they considered the modulation and coding scheme constraint specified in the LTE-A standards. Their proposed schemes may therefore limit radio resource usage and are not compatible with LTE-A systems. In this paper, we assume that the scheduler can reassign CCs to each UE at each transmission time interval, and we formulate the downlink radio resource scheduling problem under the modulation and coding scheme constraint, which is proved to be NP-hard. We then propose a novel greedy-based scheme to maximize the system throughput while maintaining proportional fairness of radio resource allocation among all UEs, and show that this scheme guarantees at least half of the performance of the optimal solution. Simulation results show that our proposed scheme outperforms the schemes of previous studies.
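A greedy proportional-fair allocation under a single-MCS-per-UE-per-CC rule can be sketched as follows. This is our own simplified toy model, not the authors' exact algorithm: we approximate the MCS constraint by letting a UE's rate on a CC be limited by the worst resource block it is assigned there:

```python
import random

random.seed(7)
N_UE, N_CC, RB_PER_CC = 4, 2, 5
# per-RB achievable rate for each (ue, cc, rb), standing in for channel quality
rate = [[[random.uniform(1.0, 10.0) for _ in range(RB_PER_CC)]
         for _ in range(N_CC)] for _ in range(N_UE)]
avg_thr = [1.0] * N_UE           # past average throughput (PF denominator)

assign = {(cc, rb): None for cc in range(N_CC) for rb in range(RB_PER_CC)}
rbs_of = [[[] for _ in range(N_CC)] for _ in range(N_UE)]

def ue_rate(u):
    """Total rate of UE u: per CC, (#RBs) x worst assigned RB's rate."""
    total = 0.0
    for cc in range(N_CC):
        rbs = rbs_of[u][cc]
        if rbs:
            total += len(rbs) * min(rate[u][cc][rb] for rb in rbs)
    return total

# Greedy: repeatedly give the free RB to the UE whose assignment most
# increases the proportional-fair objective (rate gain / average throughput).
free = [k for k, v in assign.items() if v is None]
while free:
    best = None
    for (cc, rb) in free:
        for u in range(N_UE):
            before = ue_rate(u)
            rbs_of[u][cc].append(rb)
            gain = (ue_rate(u) - before) / avg_thr[u]
            rbs_of[u][cc].pop()
            if best is None or gain > best[0]:
                best = (gain, cc, rb, u)
    _, cc, rb, u = best
    assign[(cc, rb)] = u
    rbs_of[u][cc].append(rb)
    free.remove((cc, rb))

print({u: round(ue_rate(u), 1) for u in range(N_UE)})
```

Dividing each gain by the UE's average throughput is what trades raw throughput against fairness; reassigning from scratch each frame mirrors the paper's assumption that CCs can be reassigned at every transmission time interval.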
"Leveraging Effective Query Modeling Techniques for Speech Recognition and Summarization," Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), October 2014.
Authors: Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Ea-Ee Jan, Hsin-Min Wang, Wen-Lian Hsu, and Hsin-Hsi Chen

Abstract:
Statistical language modeling (LM), which purports to quantify the acceptability of a given piece of text, has long been an interesting yet challenging research area. In particular, language modeling for information retrieval (IR) has enjoyed remarkable empirical success; one emerging stream of the LM approach to IR employs the pseudo-relevance feedback process to enhance the representation of an input query so as to improve retrieval effectiveness. This paper presents a continuation of this general line of research, and its main contribution is three-fold. First, we propose a principled framework that unifies the relationships among several widely used query modeling formulations. Second, on top of this framework, we propose an extended query modeling formulation that incorporates critical query-specific information cues to guide the model estimation. Third, we further adapt and formalize the framework for the speech recognition and summarization tasks. A series of empirical experiments demonstrate the feasibility of such an LM framework and the performance merits of the deduced models on these two tasks.
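One widely used query modeling formulation of the kind the paper discusses is the relevance model (RM1-style): re-estimate the query model from top-ranked feedback documents, weighting each document by its query likelihood. The corpus and parameters below are our own toy assumptions, and this sketch is a standard formulation rather than the paper's extended model:

```python
import math
from collections import Counter

docs = [
    "speech recognition with language models".split(),
    "query modeling improves retrieval".split(),
    "language models for speech summarization".split(),
]
query = "speech language".split()

def doc_lm(doc, mu=1.0, vocab=None):
    """Dirichlet-smoothed unigram document model."""
    counts = Counter(doc)
    V = vocab or set(counts)
    return {w: (counts[w] + mu / len(V)) / (len(doc) + mu) for w in V}

vocab = {w for d in docs for w in d}
models = [doc_lm(d, vocab=vocab) for d in docs]

def q_lik(model):
    """Query likelihood p(Q|D) under a document model."""
    return math.exp(sum(math.log(model[w]) for w in query))

liks = [q_lik(m) for m in models]
z = sum(liks)

# RM1-style feedback model: p(w) = sum_D p(w|D) * p(Q|D) / normalizer
feedback = {w: sum(m[w] * l for m, l in zip(models, liks)) / z for w in vocab}
top = sorted(feedback, key=feedback.get, reverse=True)[:5]
print(top)
```

The resulting `feedback` distribution enriches the original two-word query with related terms drawn from the pseudo-relevant documents.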
Current Research Results
Authors: Roland Gilbert Remenyi, Hangfei Qi, Sheng-Yao Su, Zugen Chen, Nicholas C. Wu, Vaithilingaraja Arumugaswami, Shawna Truong, Virginia Chu, Tamar Stokelman, Hung-Hao Lo, C. Anders Olson, Ting-Ting Wu, Shu-Hwa Chen, Chung-Yen Lin, and Ren Sun

Abstract:
Pairing high-throughput sequencing technologies with high-throughput mutagenesis enables genome-wide investigations of pathogenic organisms. Knowledge of the specific functions of protein domains encoded by the genome of the hepatitis C virus (HCV), a major human pathogen that contributes to liver disease worldwide, remains limited to insight from small-scale studies. To enhance the capabilities of HCV researchers, we have obtained a high-resolution functional map of the entire viral genome by combining transposon-based insertional mutagenesis with next-generation sequencing. We generated a library of 8,398 mutagenized HCV clones, each containing one 15-nucleotide sequence inserted at a unique genomic position. We passaged this library in hepatic cells, recovered virus pools, and simultaneously assayed the abundance of mutant viruses in each pool by next-generation sequencing. To illustrate the validity of the functional profile, we compared the genetic footprints of viral proteins with previously solved protein structures. Moreover, we show the utility of these genetic footprints in the identification of candidate regions for epitope tag insertion. In a second application, we screened the genetic footprints for phenotypes that reflected defects in later steps of the viral life cycle. We confirmed that viruses with insertions in a region of the nonstructural protein NS4B had a defect in infectivity while maintaining genome replication. Overall, our genome-wide HCV mutant library and the genetic footprints obtained by high-resolution profiling represent valuable new resources for the research community that can direct the attention of investigators toward unidentified roles of individual protein domains.

IMPORTANCE Our insertional mutagenesis library provides a resource that illustrates the effects of relatively small insertions on local protein structure and HCV viability. We have also generated complementary resources, including a website (http://hangfei.bol.ucla.edu) and a panel of epitope-tagged mutant viruses that should enhance the research capabilities of investigators studying HCV. Researchers can now detect epitope-tagged viral proteins by established antibodies, which will allow biochemical studies of HCV proteins for which antibodies are not readily available. Furthermore, researchers can now quickly look up genotype-phenotype relationships and base further mechanistic studies on the residue-by-residue information from the functional profile. More broadly, this approach offers a general strategy for the systematic functional characterization of viruses on the genome scale.
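The core computation in profiling studies of this kind, comparing each insertion mutant's read frequency after passaging with its frequency in the input library, can be sketched with made-up numbers. The mutant names, counts, and pseudocount here are our own illustrative assumptions, not data from the study:

```python
input_counts = {"ins_0101": 500, "ins_2345": 480, "ins_7777": 510}
selected_counts = {"ins_0101": 900, "ins_2345": 30, "ins_7777": 450}

def relative_fitness(inp, sel, pseudo=1.0):
    """freq(after passage) / freq(input), with a pseudocount for zero counts."""
    n_in, n_sel = sum(inp.values()), sum(sel.values())
    return {m: ((sel.get(m, 0) + pseudo) / n_sel) /
               ((inp.get(m, 0) + pseudo) / n_in) for m in inp}

fit = relative_fitness(input_counts, selected_counts)
for m, f in sorted(fit.items(), key=lambda kv: kv[1]):
    print(m, round(f, 2))   # a low ratio suggests the insertion hurt viability
```

Aggregating such ratios position by position is what yields a genetic footprint: runs of depleted insertions mark regions where the protein cannot tolerate disruption.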

"Enhanced Language Modeling for Extractive Speech Summarization with Sentence Relatedness Information," Interspeech 2014, September 2014.
Authors: Shih-Hung Liu, Kuan-Yu Chen, Yu-Lun Hsieh, Berlin Chen, Hsin-Min Wang, Hsu-Chun Yen, and Wen-Lian Hsu

Abstract:
Extractive summarization is intended to automatically select a set of representative sentences from a text or spoken document that can concisely express the most important topics of the document. Language modeling (LM) has been proven to be a promising framework for performing extractive summarization in an unsupervised manner. However, there remain two fundamental challenges facing existing LM-based methods. One is how to construct the sentence models involved in the LM framework more accurately without resorting to external information sources. The other is how to additionally take into account the sentence-level structural relationships embedded in a document for important sentence selection. To address these two challenges, in this paper we explore a novel approach that generates overlapped clusters to extract sentence relatedness information from the document to be summarized, which can be used not only to enhance the estimation of various sentence models but also to exploit the sentence-level structural relationships for better summarization performance. Furthermore, the utilities of our proposed methods and several state-of-the-art unsupervised methods are analyzed and compared extensively. A series of experiments conducted on a Mandarin broadcast news summarization task demonstrate the effectiveness and viability of our method.
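The basic unsupervised LM ranking that this line of work builds on can be sketched as follows. This is the generic formulation only (the paper enriches the sentence models with relatedness cues from overlapped clusters), and the toy document and smoothing weight are our own assumptions:

```python
import math
from collections import Counter

doc_sents = [
    "language modeling is a framework for extractive summarization".split(),
    "the weather was pleasant yesterday".split(),
    "sentence models estimate how well a sentence generates the document".split(),
]
doc_words = [w for s in doc_sents for w in s]
bg = Counter(doc_words)                      # background (whole-document) counts

def sent_score(sent, lam=0.5):
    """log P(document | sentence model), Jelinek-Mercer smoothed."""
    counts = Counter(sent)
    score = 0.0
    for w in doc_words:
        p_s = counts[w] / len(sent)          # sentence model probability
        p_bg = bg[w] / len(doc_words)        # background model probability
        score += math.log(lam * p_s + (1 - lam) * p_bg)
    return score

# Rank sentences by how well each one "generates" the whole document.
ranked = sorted(range(len(doc_sents)),
                key=lambda i: sent_score(doc_sents[i]), reverse=True)
print("summary sentence order:", ranked)
```

The off-topic second sentence scores lowest because its model explains little of the rest of the document; the paper's contribution is to sharpen the sentence models `sent_score` relies on, using only information internal to the document.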

Related Information

Institute of Information Science, Academia Sinica