Institute of Information Science
Current Research Results
"Combining Relevance Language Modeling and Clarity Measure for Extractive Speech Summarization," IEEE Trans. on Audio, Speech, and Language Processing, To Appear.
Authors: Shih-Hung Liu, Kuan-Yu Chen, Berlin Chen, Hsin-Min Wang, Hsu-Chun Yen, and Wen-Lian Hsu

Abstract:
Extractive speech summarization, which purports to select an indicative set of sentences from a spoken document so as to succinctly represent the most important aspects of the document, has garnered much research over the years. In this paper, we cast extractive speech summarization as an ad-hoc information retrieval (IR) problem and investigate various language modeling (LM) methods for important sentence selection. The main contributions of this paper are four-fold. First, we explore a novel sentence modeling paradigm built on top of the notion of relevance, where the relationship between a candidate summary sentence and a spoken document to be summarized is discovered through different granularities of context for relevance modeling. Second, not only lexical but also topical cues inherent in the spoken document are exploited for sentence modeling. Third, we propose a novel clarity measure for use in important sentence selection, which can help quantify the thematic specificity of each individual sentence that is deemed to be a crucial indicator orthogonal to the relevance measure provided by the LM-based methods. Fourth, in an attempt to lessen summarization performance degradation caused by imperfect speech recognition, we investigate making use of different levels of index features for LM-based sentence modeling, including words, subword-level units, and their combination. Experiments on broadcast news summarization seem to demonstrate the performance merits of our methods when compared to several existing well-developed and/or state-of-the-art methods.
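The paper's exact relevance models and clarity measure are not reproduced in this abstract, so the sketch below is only a minimal illustration of the general LM-based ranking idea it builds on: each candidate sentence gets a Dirichlet-smoothed unigram language model, a relevance score (likelihood of the document under that model), and a clarity score (KL divergence from the collection model). All function names, the smoothing parameter, and the linear score combination are illustrative assumptions, not the authors' method.

```python
import math
from collections import Counter

def collection_lm(collection_tokens):
    """Maximum-likelihood unigram model of the whole collection."""
    counts = Counter(collection_tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sentence_lm(sent_tokens, background, mu=100.0):
    """Dirichlet-smoothed unigram model of one sentence."""
    counts = Counter(sent_tokens)
    total = len(sent_tokens)
    return {w: (counts.get(w, 0) + mu * background[w]) / (total + mu)
            for w in background}

def clarity(sent_model, background):
    """KL divergence KL(sentence || collection); larger = more topic-specific."""
    return sum(p * math.log(p / background[w]) for w, p in sent_model.items() if p > 0)

def rank_sentences(doc_tokens, sentences, collection_tokens, alpha=0.7):
    """Score each candidate sentence by relevance to the document plus clarity.
    Note: the two terms live on different scales; the paper's combination differs."""
    background = collection_lm(collection_tokens)
    ranked = []
    for sent in sentences:
        model = sentence_lm(sent, background)
        # Relevance: log-likelihood of generating the document from the sentence LM.
        relevance = sum(math.log(model[w]) for w in doc_tokens if w in model)
        ranked.append((alpha * relevance + (1 - alpha) * clarity(model, background), sent))
    return sorted(ranked, key=lambda x: x[0], reverse=True)

# Toy usage with token lists:
doc = "storm causes flooding in northern city areas overnight".split()
sents = [s.split() for s in [
    "storm causes flooding in northern city",
    "officials comment on weather",
    "flooding damages areas overnight"]]
print(rank_sentences(doc, sents, doc + [w for s in sents for w in s])[0])
```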
Current Research Results
"Block-based Multi-version B+-Tree for Flash-based Embedded Database Systems," IEEE Transactions on Computers, April 2015.
Authors: Jian-Tao Wang, Kam-Yiu Lam, Yuan-Hao Chang, Jen-Wei Hsieh, and Po-Chun Huang

Abstract:
In this paper, we propose a novel multi-version B+-tree index structure, called block-based multi-version B+-tree (BbMVBT), for indexing multiple versions of data items in an embedded multi-version database (EMVDB) on flash memory. An EMVDB needs to support streams of update transactions and version-range queries to access different versions of data items maintained in the database. In BbMVBT, the index is divided into two levels. At the higher level, a multi-version index is maintained for keeping successive versions of each data item. These versions are allocated consecutively in a version block. At the lower level, a version array is used to search for a specific data version within a version block. With the reduced index structure of BbMVBT, the overhead for managing the index in processing update operations can be greatly reduced. At the same time, BbMVBT can also greatly reduce the number of accesses to the index in processing version-range queries. To ensure sufficient free blocks for creating version blocks for efficient execution of BbMVBT, in this paper, we also discuss how to perform garbage collection using purging-range queries for reclaiming “old” versions of data items and their associated entries in the index nodes. Analysis of the performance of BbMVBT is presented and verified with performance studies using both synthetic and real workloads. The performance results illustrate that BbMVBT can significantly improve the read and write performance to the multi-version index as compared with MVBT even though the sizes of the version blocks are not large.
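The on-flash layout, node formats, and garbage collection of BbMVBT are beyond this abstract; the in-memory toy below (invented class and field names) only illustrates the two-level idea: an upper-level key index pointing to per-item version blocks, and a version array inside each block that is binary-searched for version-range queries.

```python
import bisect

class VersionBlock:
    """Lower level: consecutive versions of one data item.
    The sorted version array allows binary search for a version range."""
    def __init__(self):
        self.versions = []   # version numbers, appended in increasing order
        self.values = []     # value stored for each version

    def append(self, version, value):
        self.versions.append(version)
        self.values.append(value)

    def range_query(self, v_lo, v_hi):
        lo = bisect.bisect_left(self.versions, v_lo)
        hi = bisect.bisect_right(self.versions, v_hi)
        return list(zip(self.versions[lo:hi], self.values[lo:hi]))

class TwoLevelMultiVersionIndex:
    """Upper level: maps each key to its version block, so an update only
    appends to one block instead of restructuring a full multi-version tree."""
    def __init__(self):
        self.index = {}      # stand-in for the upper-level B+-tree

    def update(self, key, version, value):
        self.index.setdefault(key, VersionBlock()).append(version, value)

    def version_range(self, key, v_lo, v_hi):
        block = self.index.get(key)
        return block.range_query(v_lo, v_hi) if block else []

# Example: three updates of key "a", then a version-range query.
idx = TwoLevelMultiVersionIndex()
for v, val in [(1, "x"), (4, "y"), (9, "z")]:
    idx.update("a", v, val)
print(idx.version_range("a", 2, 9))   # [(4, 'y'), (9, 'z')]
```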
Current Research Results
"Modeling the Affective Content of Music with a Gaussian Mixture Model," IEEE Transactions on Affective Computing, March 2015.
Authors: Ju-Chiang Wang, Yi-Hsuan Yang, Hsin-Min Wang, and Shyh-Kang Jeng

Abstract:
Modeling the association between music and emotion has been considered important for music information retrieval and affective human computer interaction. This paper presents a novel generative model called acoustic emotion Gaussians (AEG) for computational modeling of emotion. Instead of assigning a music excerpt with a deterministic (hard) emotion label, AEG treats the affective content of music as a (soft) probability distribution in the valence-arousal space and parameterizes it with a Gaussian mixture model (GMM). In this way, the subjective nature of emotion perception is explicitly modeled. Specifically, AEG employs two GMMs to characterize the audio and emotion data. The fitting algorithm of the GMM parameters makes the model learning process transparent and interpretable. Based on AEG, a probabilistic graphical structure for predicting the emotion distribution from music audio data is also developed. A comprehensive performance study over two emotion-labeled datasets demonstrates that AEG offers new insights into the relationship between music and emotion (e.g., to assess the “affective diversity” of a corpus) and represents an effective means of emotion modeling. Readers can easily implement AEG via the publicly available codes. As the AEG model is generic, it holds the promise of analyzing any signal that carries affective or other highly subjective information.
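AEG is a generative model with its own transparent fitting algorithm, and the authors' code is publicly available; the toy sketch below is not that implementation. It only illustrates the core representational idea of treating an excerpt's affective content as a probability distribution over the valence-arousal plane, here by fitting scikit-learn's GaussianMixture to hypothetical per-listener ratings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-listener (valence, arousal) ratings for one music excerpt,
# each in [-1, 1]; in practice these come from an emotion-annotated corpus.
rng = np.random.default_rng(0)
annotations = np.vstack([
    rng.normal(loc=[0.6, 0.4], scale=0.15, size=(30, 2)),   # "happy" cluster
    rng.normal(loc=[0.2, -0.3], scale=0.10, size=(20, 2)),  # "relaxed" cluster
])

# A soft emotion representation: a 2-component GMM over the VA plane,
# instead of a single hard (valence, arousal) label.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(annotations)

print("component weights:", np.round(gmm.weights_, 2))
print("component means (valence, arousal):", np.round(gmm.means_, 2))
# Density of the learned distribution at a query point in VA space:
print("log-density at (0.5, 0.3):", gmm.score_samples([[0.5, 0.3]])[0])
```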
Current Research Results
Authors: Jen-Chieh Lee, Yung-Ming Jeng, Sheng-Yao Su, Chen-Tu Wu, Keh-Sung Tsai, Cheng-Han Lee, Chung-Yen Lin, Jodi M. Carter, Jeng-Wen Huang, Shu-Hwa Chen, Shyang-Rong Shih, Adrián Mariño-Enríquez, Chih-Chih Chen, Andrew L. Folpe, Yih-Leong Chang and Cher-Wei Liang

Abstract:
Phosphaturic mesenchymal tumours (PMTs) are uncommon soft tissue and bone tumours that typically cause hypophosphataemia and tumour-induced osteomalacia (TIO) through secretion of phosphatonins including fibroblast growth factor 23 (FGF23). PMT has recently been accepted by the World Health Organization as a formal tumour entity. The genetic basis and oncogenic pathways underlying its tumourigenesis remain obscure. In this study, we identified a novel FN1–FGFR1 fusion gene in three out of four PMTs by next-generation RNA sequencing. The fusion transcripts and proteins were subsequently confirmed with RT-PCR and western blotting. Fluorescence in situ hybridization analysis showed six cases with FN1–FGFR1 fusion out of an additional 11 PMTs. Overall, nine out of 15 PMTs (60%) harboured this fusion. The FN1 gene possibly provides its constitutively active promoter and the encoded protein's oligomerization domains to overexpress and facilitate the activation of the FGFR1 kinase domain. Interestingly, unlike the prototypical leukaemia-inducing FGFR1 fusion genes, which are ligand-independent, the FN1–FGFR1 chimeric protein was predicted to preserve its ligand-binding domains, suggesting an advantage of the presence of its ligands (such as FGF23 secreted at high levels by the tumour) in the activation of the chimeric receptor tyrosine kinase, thus effecting an autocrine or a paracrine mechanism of tumourigenesis.
Current Research Results
"Marching-based Wear Leveling for PCM-based Storage Systems," ACM Transactions on Design Automation of Electronic Systems, February 2015.
Authors: Hung-Sheng Chang, Yuan-Hao Chang, Pi-Cheng Hsiu, Tei-Wei Kuo, and Hsiang-Pang Li

Abstract:
Improving the performance of storage systems without losing the reliability and sanity/integrity of file systems is a major issue in storage system designs. In contrast to existing storage architectures, we consider a PCM-based storage architecture to enhance the reliability of storage systems. In PCM-based storage systems, the major challenge falls on how to prevent the frequently updated (meta)data from wearing out their residing PCM cells without excessively searching and moving metadata around the PCM space and without extensively updating the index structures of file systems. In this work, we propose an adaptive wear-leveling mechanism to prevent any PCM cell from being worn out prematurely by selecting appropriate data for swapping with constant search/sort cost. Meanwhile, the concept of indirect pointers is designed in the proposed mechanism to swap data without any modification to the file system’s indexes. Experiments were conducted based on well-known benchmarks and realistic workloads to evaluate the effectiveness of the proposed design, for which the results are encouraging.
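The actual marching policy and on-PCM layout are described in the paper; this in-memory toy (all names invented) only illustrates the two ingredients the abstract highlights: a constant-cost candidate selection that examines one "marching" candidate per write instead of searching or sorting the whole space, and an indirection table so swaps never touch the file system's own index structures.

```python
class PcmWearLeveler:
    """Toy model of swapping hot and cold PCM regions behind an
    indirection table (logical address -> physical address)."""

    def __init__(self, n_regions, swap_threshold=100):
        self.map = list(range(n_regions))   # indirect pointers: logical -> physical
        self.writes = [0] * n_regions       # per-physical-region write counts
        self.cursor = 0                     # "marching" candidate pointer
        self.swap_threshold = swap_threshold

    def write(self, logical):
        phys = self.map[logical]
        self.writes[phys] += 1
        # Constant-cost candidate selection: advance one marching cursor per
        # write instead of searching or sorting all regions.
        self.cursor = (self.cursor + 1) % len(self.map)
        cold_phys = self.cursor
        if self.writes[phys] - self.writes[cold_phys] > self.swap_threshold:
            self._swap(logical, self.map.index(cold_phys))

    def _swap(self, logical_a, logical_b):
        # Only the indirection table changes; the file system keeps using the
        # same logical addresses, so its index structures are untouched.
        self.map[logical_a], self.map[logical_b] = self.map[logical_b], self.map[logical_a]

# Hammering one logical address eventually migrates it to a colder region.
wl = PcmWearLeveler(n_regions=8)
for _ in range(1000):
    wl.write(0)
print("logical 0 now maps to physical region", wl.map[0])
```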
"Virtual Flash Chips: Rethinking the Layer Design of Flash Devices to Improve the Data Recoverability by Trading Potentially Massive Parallelism," ACM/IEEE Design Automation Conference (DAC), June 2015.
Authors: Ming-Chang Yang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
The market trend of flash memory chips has been toward high density but low reliability. The rapidly increasing bit error rates and emerging reliability issues of the coming triple-level cell (TLC) and even three-dimensional (3D) flash chips force users to take an extremely high risk when storing data in such low-reliability storage media. With these observations in mind, this paper rethinks the layer design of flash devices and proposes a complete paradigm shift that re-configures physical flash chips of potentially massive parallelism into better “virtual chips” in order to improve data recoverability in a modular and low-cost way. The concept of virtual chips is realized at the hardware abstraction layer (HAL) without continually complicating the conventional flash management software of the flash translation layer (FTL). The capability and compatibility of the proposed design are verified by a series of experiments with encouraging results.
"Achieving SLC Performance with MLC Flash Memory," ACM/IEEE Design Automation Conference (DAC), June 2015.
Authors: Yu-Ming Chang, Yuan-Hao Chang, Tei-Wei Kuo, Yung-Chun Li, and Hsiang-Pang Li

Abstract:
Although the Multi-Level-Cell technique is widely adopted by flash-memory vendors to boost the chip density and to lower the cost, it results in serious performance and reliability problems. Different from past work, a new cell programming method is proposed to not only significantly improve the chip performance but also reduce the potential bit error rate. In particular, a Single-Level-Cell-like programming style is proposed to better explore the threshold-voltage relationship used to denote different Multi-Level-Cell bit information, which in turn provides a drastically larger threshold-voltage window, similar to that found in Single-Level-Cell chips. This could result in fewer programming iterations and, at the same time, far fewer reliability problems when programming flash-memory cells. In the experiments, the new programming style could accelerate the programming speed by up to 742% and even reduce the bit error rate by up to 471% for Multi-Level-Cell pages.
Current Research Results
Authors: Ke-Shiuan Lynn, Chen-Chun Chen, T. Mamie Lih, Cheng-Wei Cheng, Wan-Chih Su, Chun-Hao Chang, Chia-Ying Cheng, Wen-Lian Hsu, Yu-Ju Chen, and Ting-Yi Sung

Abstract:
Glycosylation is a highly complex modification influencing the functions and activities of proteins. Interpretation of intact glycopeptide spectra is crucial but challenging. In this paper, we present a mass spectrometry-based automated glycopeptide identification platform (MAGIC) to identify peptide sequences and glycan compositions directly from intact N-linked glycopeptide collision-induced-dissociation spectra. The identification of the Y1 ion (peptide [Y0] + GlcNAc) is critical for the correct analysis of unknown glycoproteins, especially without prior knowledge of the proteins and glycans present in the sample. To ensure accurate Y1-ion assignment, we propose a novel algorithm called Trident that detects a triplet pattern corresponding to [Y0, Y1, Y2] or [Y0-NH3, Y0, Y1] from the fragmentation of the common trimannosyl core of N-linked glycopeptides. To facilitate the subsequent peptide sequence identification by common database search engines, MAGIC generates in silico spectra by overwriting the original precursor with the naked peptide m/z and removing all of the glycan-related ions. Finally, MAGIC computes the glycan compositions and ranks them. For the model glycoprotein horseradish peroxidase (HRP) and a 5-glycoprotein mixture, a 2- to 31-fold increase in the relative intensities of the peptide fragments was achieved, which led to the identification of 7 tryptic glycopeptides from HRP and 16 glycopeptides from the mixture via Mascot. In the HeLa cell proteome data set, MAGIC processed over a thousand MS2 spectra in 3 min on a PC and reported 36 glycopeptides from 26 glycoproteins. Finally, a remarkable false discovery rate of 0 was achieved on the N-glycosylation-free Escherichia coli data set. MAGIC is available at http://ms.iis.sinica.edu.tw/COmics/Software_MAGIC.html.
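A minimal sketch of the triplet test the abstract describes, assuming singly charged Y ions and using the standard monoisotopic masses of a HexNAc (GlcNAc) residue and ammonia; the tolerance and peak handling are illustrative, and Trident's actual scoring is more involved.

```python
HEXNAC = 203.0794   # monoisotopic mass of one GlcNAc (HexNAc) residue
NH3 = 17.0265       # monoisotopic mass of ammonia

def find_triplets(peaks, tol=0.02):
    """Scan a list of (m/z, intensity) peaks (assumed singly charged) for
    the two triplet patterns named in the abstract:
      [Y0, Y1, Y2]       spaced by one HexNAc each, or
      [Y0-NH3, Y0, Y1]   spaced by NH3 then one HexNAc.
    Returns candidate Y1 m/z values."""
    mzs = sorted(mz for mz, _ in peaks)

    def has(target):
        return any(abs(mz - target) <= tol for mz in mzs)

    candidates = []
    for mz in mzs:
        if has(mz + HEXNAC) and has(mz + 2 * HEXNAC):      # mz looks like Y0
            candidates.append(mz + HEXNAC)
        if has(mz + NH3) and has(mz + NH3 + HEXNAC):       # mz looks like Y0-NH3
            candidates.append(mz + NH3 + HEXNAC)
    return candidates

# Toy spectrum containing a Y0/Y1/Y2 ladder at 1000 / 1203.08 / 1406.16.
spectrum = [(1000.00, 50.0), (1203.08, 80.0), (1406.16, 30.0), (1150.00, 10.0)]
print(find_triplets(spectrum))   # candidate Y1 around 1203.08
```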
"From Weak to Strong Zero-Knowledge and Applications," The 12th Theory of Cryptography Conference (TCC 2015), 2015.
Authors: Kai-Min Chung and Edward Lui and Rafael Pass

Abstract:
The notion of zero-knowledge is formalized by requiring that for every malicious efficient verifier V*, there exists an efficient simulator S that can reconstruct the view of V* in a true interaction with the prover, in a way that is indistinguishable to every polynomial-time distinguisher. Weak zero-knowledge weakens this notion by switching the order of the quantifiers and only requires that for every distinguisher D, there exists a (potentially different) simulator S_D.
 
In this paper we consider various notions of zero-knowledge, and investigate whether their weak variants are equivalent to their strong variants. Although we show (under complexity assumptions) that for the standard notion of zero-knowledge, its weak and strong counterparts are not equivalent, for meaningful variants of the standard notion, the weak and strong counterparts are indeed equivalent. Towards showing these equivalences, we introduce new non-black-box simulation techniques permitting us, for instance, to demonstrate that the classical 2-round graph non-isomorphism protocol of Goldreich-Micali-Wigderson satisfies a “distributional” variant of zero-knowledge.
 
Our equivalence theorem has other applications beyond the notion of zero-knowledge. For instance, it directly implies the dense model theorem of Reingold et al. (STOC ’08) and the leakage lemma of Gentry-Wichs (STOC ’11), and provides a modular and arguably simpler proof of these results (while at the same time recasting them in the language of zero-knowledge).
Current Research Results
"Assistive Image Comment Robot - A Novel Mid-Level Concept-based Representation," IEEE Transactions on Affective Computing, To Appear.
Authors: Y. Y. Chen, Tao Chen, Taikun Liu, H. Y. Mark Liao, and Shih-Fu Chang

Abstract:
We present a general framework and working system for predicting likely affective responses of the viewers in the social media environment after an image is posted online. Our approach emphasizes a mid-level concept representation, in which the intended affects of the image publisher are characterized by a large pool of visual concepts (termed PACs) detected from image content directly instead of textual metadata, evoked viewer affects are represented by concepts (termed VACs) mined from online comments, and statistical methods are used to model the correlations between these two types of concepts. We demonstrate the utility of such approaches by developing an end-to-end Assistive Comment Robot application, which further includes components for multi-sentence comment generation, interactive interfaces, and relevance feedback functions. Through user studies, we showed that machine-suggested comments were accepted by users for online posting in 90% of completed user sessions, while very favorable results were also observed in various dimensions (plausibility, preference, and realism) when assessing the quality of the generated image comments.
"Improving SIMD Code Generation in QEMU," Design, Automation and Test in Europe Conference (DATE), March 2015.
Authors: Sheng-Yu Fu, Jan-Jan Wu, Wei-Chung Hsu

Abstract:
Modern processors are often enhanced with SIMD instructions, such as the MMX, SSE, and AVX instruction sets in the x86 architecture, or the NEON instruction set in the ARM architecture. Using these SIMD instructions could significantly increase application performance, hence a significant proportion of the instructions in application binaries are likely to be SIMD instructions. However, Dynamic Binary Translation (DBT) has largely overlooked SIMD instruction translation. For example, in the popular QEMU system emulator, guest SIMD instructions are often emulated with a sequence of scalar instructions even when the host machine has SIMD instructions to support such parallel computation, leaving significant potential for performance enhancement. In this paper, we propose two approaches, one leveraging the existing helper function implementation in QEMU, and the other using a newly introduced vector IR (Intermediate Representation), to enhance the performance of SIMD instruction translation in the DBT of QEMU. Both approaches were implemented in QEMU to support ARM and IA32 frontends and an x86-64 backend. Preliminary experiments show that adding a vector IR can significantly enhance the performance of guest applications containing SIMD instructions for both the ARM and IA32 architectures when running with QEMU on the x86-64 platform.
Current Research Results
"A Virtual Repository for Linked-Data Based Disaster Management Applications," International Journal on Safety and Security Engineering, WIT Press, March 2015.
Authors: F.-P. Yang, Y. Z. Ou, C. W. Yu, J. Su, S.-W. Bai, J.-M. Ho and J. W. S. Liu

Abstract:
Typical state-of-the-art disaster management information systems (DMIS) cannot support responsive discovery and access of data and information needed to handle unforeseen emergencies. Adding semantics and relations to legacy data and transforming them to linked data can remove this limitation. The virtual repository presented in this paper is a development environment for this purpose: It provides application developers with tools for incremental transformation of legacy data and information in the DMIS into linked data as needed by the applications. The virtual repository also provides the applications with support for runtime access of linked data created and maintained using its tools.
Current Research Results
"Design and Implementation of Participant Selection for Crowdsourcing Disaster Information," International Journal on Safety and Security Engineering, WIT Press, March 2015.
Authors: E. T.-H Chu, C.-Y. Lin, P. H. Tsai and J. W. S. Liu

Abstract:
Experiences with past major disasters tell us that people with wireless devices and social network services can serve effectively as mobile human sensors. A disaster warning and response system can solicit eye-witness reports from selected participants and use the information they provide to supplement surveillance sensor coverage. This paper first presents an overview of the participant selection problem: how to select participants from available volunteers given the benefits and costs of deploying them. The greedy algorithm, named PSP-G, is known to be a near-optimal solution that requires only a small fraction of the execution time of well-known optimization methods. The paper then describes an implementation of the PSP-G algorithm and the integration of the PSP-G module into the Ushahidi platform. Performance data from two case studies, the 2010 Haiti earthquake and 2009 Typhoon Morakot, also described here, clearly show that PSP-G is a general, practical solution.
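PSP-G itself is specified in the paper; the sketch below (hypothetical volunteer records and cost/benefit model) only illustrates the general shape of a greedy benefit-per-cost selection of participants under a deployment budget.

```python
def greedy_select(volunteers, budget):
    """volunteers: list of dicts with 'id', 'cost', and 'covers' (a set of
    points of interest the volunteer can report on).  Repeatedly pick the
    affordable volunteer with the best marginal coverage per unit cost."""
    covered, chosen, spent = set(), [], 0.0
    remaining = list(volunteers)

    def ratio(v):
        return len(v["covers"] - covered) / v["cost"]

    while remaining:
        affordable = [v for v in remaining if spent + v["cost"] <= budget]
        if not affordable:
            break
        best = max(affordable, key=ratio)
        if ratio(best) == 0:
            break                      # nobody left adds new coverage
        chosen.append(best["id"])
        covered |= best["covers"]
        spent += best["cost"]
        remaining.remove(best)
    return chosen, covered, spent

volunteers = [
    {"id": "v1", "cost": 2.0, "covers": {"bridge", "school"}},
    {"id": "v2", "cost": 1.0, "covers": {"school"}},
    {"id": "v3", "cost": 3.0, "covers": {"hospital", "bridge", "river"}},
]
print(greedy_select(volunteers, budget=5.0))
```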
"Tight Parallel Repetition Theorems for Public-Coin Arguments using KL-divergence," The 12th Theory of Cryptography Conference (TCC 2015), 2015.
Authors: Kai-Min Chung and Rafael Pass

Abstract:
We present a new and conceptually simpler proof of a tight parallel-repetition theorem for public-coin arguments [Pass-Venkitasubramaniam, STOC’07], [Håstad et al, TCC’10], [Chung-Liu, TCC’10]. We follow the same proof framework as the previous non-tight parallel-repetition theorem of Håstad et al—which relied on statistical distance to measure the distance between experiments—and show that it can be made tight (and further simplified) if instead relying on KL-divergence as the distance between the experiments.
We then use this new proof to present the first tight “Chernoff-type” parallel repetition theorem for arbitrary public-coin arguments, demonstrating that parallel-repetition can be used to simultaneously decrease both the soundness and completeness error of any public-coin argument at a rate matching the standard Chernoff bound.
 
Current Research Results
Authors: Ke-Shiuan Lynn, Mei-Ling Cheng, Yet-Ran Chen, Chin Hsu, Ann Chen, T. Mamie Lih, Hui-Yin Chang, Ching-jang Huang, Ming-Shi Shiao, Wen-Harn Pan, Ting-Yi Sung, and Wen-Lian Hsu

Abstract:
Metabolite identification remains a bottleneck in mass spectrometry (MS)-based metabolomics. Currently, this process relies heavily on tandem mass spectrometry (MS/MS) spectra generated separately for peaks of interest identified from previous MS runs. Such a delayed and labor-intensive procedure creates a barrier to automation. Further, information embedded in MS data has not been used to its full extent for metabolite identification. Multimers, adducts, multiply charged ions, and fragments of given metabolites occupy a substantial proportion (40–80%) of the peaks of a quantitation result. However, extensive information on these derivatives, especially fragments, may facilitate metabolite identification. We propose a procedure with automation capability to group and annotate peaks associated with the same metabolite in the quantitation results of opposite modes and to integrate this information for metabolite identification. In addition to the conventional mass and isotope ratio matches, we also match annotated fragments with low-energy MS/MS spectra in public databases. For identification of metabolites without accessible MS/MS spectra, we have developed characteristic fragment and common substructure matches. The accuracy and effectiveness of the procedure were evaluated using one public and two in-house liquid chromatography–mass spectrometry (LC–MS) data sets. The procedure accurately identified 89% of 28 standard metabolites with derivative ions in the data sets. With respect to effectiveness, the procedure confidently identified the correct chemical formula of at least 42% of metabolites with derivative ions via MS/MS spectrum, characteristic fragment, and common substructure matches. The confidence level was determined according to the fulfilled identification criteria of various matches and relative retention time.
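The full procedure also uses isotope-ratio matches, MS/MS and substructure matches, and relative retention time; the sketch below (standard adduct mass constants, illustrative tolerance and adduct list) only shows the first grouping step of recognizing peaks that are plausibly derivatives of the same neutral metabolite from their m/z relations.

```python
PROTON = 1.007276

def annotate_groups(peaks, tol_ppm=10.0):
    """peaks: list of m/z values from a positive-mode quantitation result.
    For each peak interpreted as [M+H]+, look for peaks at m/z values that
    would correspond to common derivatives of the same neutral mass M."""
    relations = {
        "[M+Na]+":  lambda M: M + 22.989218,
        "[M+K]+":   lambda M: M + 38.963158,
        "[2M+H]+":  lambda M: 2 * M + PROTON,
        "[M+2H]2+": lambda M: (M + 2 * PROTON) / 2,
    }
    groups = []
    for mz in peaks:
        M = mz - PROTON                    # neutral mass if this peak is [M+H]+
        hits = {"[M+H]+": mz}
        for name, expected in ((n, f(M)) for n, f in relations.items()):
            for other in peaks:
                if abs(other - expected) <= expected * tol_ppm * 1e-6:
                    hits[name] = other
        if len(hits) > 1:
            groups.append(hits)
    return groups

# Toy example: 181.0707 ([M+H]+ of a 180.0634 Da metabolite), its sodium
# adduct, its dimer, and one unrelated peak.
peaks = [181.0707, 203.0526, 361.1341, 250.1200]
for g in annotate_groups(peaks):
    print(g)
```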
Current Research Results
Authors: Jiann-Horng Leu, Kuan-Fu Liu, Kuan-Yu Chen, Shu-Hwa Chen, Yu-Bin Wang, Chung-Yen Lin, Chu-Fang Lo.

Abstract:
By microarray screening, we identified a white spot syndrome virus (WSSV)-strongly induced novel gene in gills of Penaeus monodon. The gene, PmERP15, encodes a putative transmembrane protein of 15 kDa, which only showed some degree of similarity (54–59%) to several unknown insect proteins, but had no hits to shrimp proteins. RT-PCR showed that PmERP15 was highly expressed in the hemocytes, heart and lymphoid organs, and that WSSV-induced strong expression of PmERP15 was evident in all tissues examined. Western blot analysis likewise showed that WSSV strongly up-regulated PmERP15 protein levels. In WSSV-infected hemocytes, immunofluorescence staining showed that PmERP15 protein was colocalized with an ER enzyme, protein disulfide isomerase, and in Sf9 insect cells, PmERP15-EGFP fusion protein colocalized with ER-Tracker™ Red dye as well. GRP78, an ER stress marker, was found to be up-regulated in WSSV-infected P. monodon, and both PmERP15 and GRP78 were up-regulated in shrimp injected with ER stress inducers tunicamycin and dithiothreitol. Silencing experiments showed that although PmERP15 dsRNA-injected shrimp succumbed to WSSV infection more rapidly, the WSSV copy number had no significant changes. These results suggest that PmERP15 is an ER stress-induced, ER resident protein, and its induction in WSSV-infected shrimp is caused by the ER stress triggered by WSSV infection. Furthermore, although PmERP15 has no role in WSSV multiplication, its presence is essential for the survival of WSSV-infected shrimp.
Current Research Results
"Placing Virtual Machines to Optimize Cloud Gaming Experience," IEEE Transactions on Cloud Computing, January 2015.
Authors: Hua-Jun Hong, De-Yu Chen, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu

Abstract:
Optimizing cloud gaming experience is no easy task due to the complex tradeoff between gamer Quality of Experience (QoE) and provider net profit. We tackle the challenge and study an optimization problem to maximize the cloud gaming provider’s total profit while achieving just-good-enough QoE. We conduct measurement studies to derive the QoE and performance models. We formulate and optimally solve the problem. The optimization problem has exponential running time, and we develop an efficient heuristic algorithm. We also present an alternative formulation and algorithms for closed cloud gaming services with dedicated infrastructures, where the profit is not a concern and overall gaming QoE needs to be maximized. We present a prototype system and testbed using off-the-shelf virtualization software, to demonstrate the practicality and efficiency of our algorithms. Our experience in realizing the testbed sheds some light on how cloud gaming providers may build up their own profitable services. Last, we conduct extensive trace-driven simulations to evaluate our proposed algorithms. The simulation results show that the proposed heuristic algorithms: (i) produce close-to-optimal solutions, (ii) scale to large cloud gaming services with 20,000 servers and 40,000 gamers, and (iii) outperform the state-of-the-art placement heuristic, e.g., by up to 3.5 times in terms of net profits.
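The paper's formulation jointly optimizes provider profit and per-gamer QoE from measured models; the toy sketch below (hypothetical QoE model, costs, and field names) only illustrates the flavor of a placement heuristic: greedily assign each gamer to the cheapest server that still has capacity and meets a just-good-enough QoE threshold.

```python
def place_gamers(gamers, servers, qoe_threshold):
    """gamers: list of (gamer_id, region); servers: list of dicts with
    'id', 'region', 'capacity', 'cost'.  Greedily place each gamer on the
    cheapest server that still has capacity and meets the QoE threshold,
    where QoE is approximated by a crude same-region latency proxy."""
    def qoe(gamer_region, server):
        return 1.0 if gamer_region == server["region"] else 0.6  # toy model

    load = {s["id"]: 0 for s in servers}
    placement, profit = {}, 0.0
    for gamer_id, region in gamers:
        feasible = [s for s in servers
                    if load[s["id"]] < s["capacity"] and qoe(region, s) >= qoe_threshold]
        if not feasible:
            continue                       # gamer rejected: no just-good-enough server
        best = min(feasible, key=lambda s: s["cost"])
        load[best["id"]] += 1
        placement[gamer_id] = best["id"]
        profit += 1.0 - best["cost"]       # toy revenue of 1.0 per placed gamer
    return placement, profit

servers = [{"id": "s1", "region": "asia", "capacity": 2, "cost": 0.3},
           {"id": "s2", "region": "europe", "capacity": 2, "cost": 0.2}]
gamers = [("g1", "asia"), ("g2", "asia"), ("g3", "europe"), ("g4", "asia")]
print(place_gamers(gamers, servers, qoe_threshold=0.9))
```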
Current Research Results
"Re-Weighted and Adaptive Morphology Separation," SIAM Image Processing, October 2014.
Authors: Guan-Ju Peng and Wen-Liang Hwang

Abstract:
Morphological component analysis (MCA) for signal separation decomposes a signal into a superposition of morphological subcomponents, each of which is approximately sparse in a certain dictionary. Some of the dictionaries can also be modified to make them adaptive to local structure in images. We show that signal separation performance can be improved over previous MCA approaches by replacing L1-norm optimization with “weighted” L1-norm optimization and replacing their dictionary adaptation with regularized dictionary adaptation. The weight on an atom for sparse coding is commonly derived from the corresponding coefficient's value. In contrast, the weight of an atom in a dictionary for signal separation is derived from the mutual coherence between the atom and the atoms in the other dictionaries. The proposed solution for regularized dictionary adaptation is an extension of the K-SVD method, where the dictionary and “weighted” sparse coefficients are estimated simultaneously. We present a series of experiments demonstrating the significant performance improvement of the proposed algorithm over previous approaches for signal separation.
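Following the abstract, the weight of an atom is derived from its mutual coherence with the atoms of the other dictionaries; the numpy sketch below shows one simple way to compute such weights and apply them in a weighted-L1 shrinkage step. The exact weighting function and the regularized K-SVD update are in the paper; the names and the eps offset here are illustrative.

```python
import numpy as np

def coherence_weights(D_this, D_other, eps=1e-3):
    """Columns of D_this and D_other are unit-norm dictionary atoms.
    An atom that is highly coherent with the other morphology's dictionary
    gets a large weight, so its use is penalized in weighted-L1 coding."""
    gram = np.abs(D_this.T @ D_other)          # |<a_i, b_j>| for all atom pairs
    mu = gram.max(axis=1)                      # mutual coherence per atom of D_this
    return mu + eps                            # illustrative weight; the paper's form may differ

def weighted_soft_threshold(x, weights, lam):
    """One weighted-L1 proximal step: shrink coefficient i by lam * w_i."""
    return np.sign(x) * np.maximum(np.abs(x) - lam * weights, 0.0)

rng = np.random.default_rng(1)
D_texture = rng.normal(size=(64, 32)); D_texture /= np.linalg.norm(D_texture, axis=0)
D_cartoon = rng.normal(size=(64, 32)); D_cartoon /= np.linalg.norm(D_cartoon, axis=0)

w = coherence_weights(D_texture, D_cartoon)
coeffs = rng.normal(size=32)
print(weighted_soft_threshold(coeffs, w, lam=0.5)[:5])
```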
"Reliability-Aware Striping with Minimized Performance Overheads for Flash-based Storage Devices," ACM Symposium on Applied Computing (SAC), April 2015.
Authors: Ming-Chang Yang, Yu-Ming Chang, Po-Chun Huang, Yuan-Hao Chang, Lue-Jane Lee, and Tei-Wei Kuo

Abstract:
As data retention and disturb problems of coming flash chips keep deteriorating, improving the reliability/endurance of flash devices has become a critical issue in many flash-based storage applications. Meanwhile, the increasing access parallelism of today's flash devices makes it possible to adopt larger striping units in RAID-like technology to further enhance data reliability, even though larger striping units might also bring more performance overheads. This work is thus motivated by the individual merits on reliability (resp. performance) provided by larger (resp. smaller) striping units. Very different from many existing RAID-integrated flash management designs, this work proposes a reliability-aware striping design to adaptively determine a “proper” stripe size for data with different update patterns, so as to achieve reliability enhancement with minimized performance degradation. In particular, our design considers the special “erase-before-write” feature of flash chips to further reduce the performance overheads. The experiments were conducted based on representative realistic workloads to evaluate the efficacy of the proposed scheme, for which the results are very encouraging.
"The Deployment of Shared Data Objects Among Handheld and Wearable Devices," ACM Symposium on Applied Computing (SAC), April 2015.
Authors: Sheng-Wei Cheng, Che-Wei Chang, Yuan-Hao Chang, Pi-Cheng Hsiu, and Chia-Heng Tu

Abstract:
With the great success in making phones smarter, vendors now plan to replicate the same idea on wearable accessories. Accordingly, applications on these devices are full of new possibilities for interacting with users. However, providing a consistent user experience poses a major challenge: how to efficiently deploy shared application states among the devices. In this paper, we consider minimizing the data transmission latencies between the processes and the shared data objects on a set of mobile devices with distributed shared memory. The problem is proved to be NP-hard. Nevertheless, efficient solutions can still be obtained when special cases are considered. On one hand, we propose a polynomial-time optimal algorithm for the case where the memory of each mobile device is segmented into blocks and each shared data object occupies a single block. On the other hand, to provide a practical way to address the problem, we propose a (1, ε) asymptotic approximation algorithm, where ε > 0 can be arbitrarily small, with a 2-augmentation bound of memory size. Finally, a series of simulations was conducted, and the results were very encouraging.
"A Progressive Wear Leveling to Minimize Data Migration Overheads for NAND Flash Devices," ACM/IEEE Design, Automation and Test in Europe (DATE), March 2015.
Authors: Fu-Hsin Chen, Ming-Chang Yang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
As the endurance of flash memory keeps deteriorating, exploiting wear leveling techniques to improve the lifetime/endurance of flash memory has become a critical issue in the design of flash storage devices. Nevertheless, the deteriorated access performance of high-density flash memory makes the performance overheads introduced by wear leveling non-negligible. In contrast to existing wear-leveling techniques that aggressively distribute erases evenly over all flash blocks according to a fixed threshold, we propose a progressive wear leveling design that performs wear leveling in a “progressive” way to prevent any block from being worn out prematurely, and to ultimately minimize the performance overheads caused by the unnecessary data migration imposed by aggressive wear leveling in the early stage of the device lifespan or imposed by the improper selection of victim blocks for erases. The experiments were conducted based on representative realistic workloads to evaluate the efficacy of the proposed scheme. The results reveal that, instead of sacrificing the device lifetime, performing wear leveling in such a progressive way not only minimizes the performance overheads but even has the potential to extend the device lifetime.
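The paper's progressive policy is not spelled out in the abstract, so the following toy model is only one possible reading of the idea (all thresholds and names are invented): rather than equalizing erase counts from day one with a fixed threshold, an erase-count ceiling is raised in steps over the device lifespan, and data migration is triggered only for blocks that run ahead of the current ceiling.

```python
class ProgressiveWearLeveler:
    """Toy model: erase counts may only exceed a ceiling that is raised in
    steps, so data migration is triggered only for blocks that run ahead of
    the current ceiling, not to equalize all blocks from day one."""

    def __init__(self, n_blocks, step=50):
        self.erases = [0] * n_blocks
        self.step = step
        self.ceiling = step
        self.migrations = 0

    def erase(self, block):
        if self.erases[block] >= self.ceiling:
            # Block ran ahead of its peers: migrate its (hot) data to the
            # least-erased block instead of erasing it again.
            victim = min(range(len(self.erases)), key=self.erases.__getitem__)
            self.migrations += 1
            block = victim
        self.erases[block] += 1
        # Raise the ceiling once every block has caught up with it.
        if min(self.erases) >= self.ceiling:
            self.ceiling += self.step

# Skewed workload: block 0 is erased far more often than the rest.
wl = ProgressiveWearLeveler(n_blocks=4)
for i in range(1000):
    wl.erase(0 if i % 2 else i % 4)
print(wl.erases, "migrations:", wl.migrations)
```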
"4-Round Resettably-Sound Zero Knowledge," The 11th IACR Theory of Cryptography Conference (TCC), February 2014.
Authors: Kai-Min Chung and Rafail Ostrovsky and Rafael Pass and Muthuramakrishnan Venkitasubramaniam and Ivan Visconti

Abstract:

While 4-round constructions of zero-knowledge arguments are known based on the existence of one-way functions, constructions of *resettably-sound* zero-knowledge arguments require either stronger assumptions (the existence of a fully-homomorphic encryption scheme), or more communication rounds. We close this gap by demonstrating a 4-round resettably-sound zero-knowledge argument for NP based on the existence of one-way functions.
 
Current Research Results
Authors: Cheng-Wei Lee, Ming-Chin Chuang, Meng Chang Chen and Yeali S. Sun

Abstract:
High-speed rail systems are becoming increasingly popular among long-distance travelers. With the explosive growth in the number of mobile devices, the provision of high-quality telecommunication and Internet access services on high-speed trains is now a pressing problem. Network mobility (NEMO) has been proposed to enable a large number of mobile devices on a vehicle to access the Internet; however, several issues must be solved before it can be put into practice, e.g., frequent handovers, long handover latency, and poor quality of service. To resolve the above problems, we propose an LTE femtocell-based network mobility scheme that uses Multiple Egress Network interfaces to support seamless handover for high-speed rail systems, called MEN-NEMO. The results of simulations show that the proposed MEN-NEMO scheme reduces the handover latency and transmission overhead of handover signaling substantially.
Current Research Results
Authors: Anderson Mayfield, Yu-Bin Wang, Chii-Shiarng Chen, Chung-Yen Lin, and Shu-Hwa Chen

Abstract:
Although rising ocean temperatures threaten scleractinian corals and the reefs they construct, certain reef corals can acclimate to elevated temperatures to which they are rarely exposed in situ. Specimens of the Indo-Pacific reef coral Pocillopora damicornis collected from upwelling reefs of Southern Taiwan were previously found to have survived a 36-week exposure to 30°C, a temperature they encounter infrequently and one that elicits the breakdown of the coral-dinoflagellate (genus Symbiodinium) endosymbiosis in many corals of the Pacific Ocean. In order to gain insight into the sub-cellular pathways utilized by both the coral hosts and their mutualistic Symbiodinium populations to acclimate to this temperature, mRNAs from both control (27°C) and high (30°C) temperature samples were sequenced on an Illumina platform and assembled into a 236,435-contig transcriptome. These P. damicornis specimens were found to be ~60% anthozoan and 40% microbe (Symbiodinium, other eukaryotic microbes, and bacteria) from an mRNA perspective. Furthermore, a significantly higher proportion of genes from the Symbiodinium compartment were differentially expressed after two weeks of exposure. Specifically, at elevated temperatures Symbiodinium populations residing within the coral gastrodermal tissues were more likely to up-regulate the expression of genes encoding proteins involved in metabolism than their coral hosts. Collectively, these transcriptome-scale data suggest that the two members of this endosymbiosis have distinct strategies for acclimating to elevated temperatures that are expected to characterize many of Earth's coral reefs in the coming decades.
Current Research Results
"A Two-Stage Link Scheduling Scheme for Variable-Bit-Rate Traffic Flows in Wireless Mesh Networks," IEEE Transactions on Wireless Communications, November 2014.
Authors: Yung-Cheng Tu, Meng Chang Chen and Yeali S. Sun

Abstract:
Providing end-to-end QoS for delay-sensitive flows with variable-bit-rate (VBR) traffic in wireless mesh networks is a major challenge, for several reasons: time-varying bandwidth requirements, competition for transmission opportunities among flows on the same link, and interference from other wireless links. In this paper, we propose a flexible bandwidth allocation and uncoordinated scheduling scheme, called two-stage link scheduling (2SLS), to support flow delay control in TDMA-based wireless mesh networks. The scheme operates in two stages: slot allocation and on-the-go scheduling. The slot allocation mechanism allocates contiguous time slots to each link in each frame based on pre-defined maximum and minimum bandwidth requirements. Then, each link’s on-the-go scheduling mechanism dynamically schedules the transmissions within the allocated time slots. The objective is to maximally satisfy the demand of all flows on the link according to the bandwidth requirements and channel condition. Compared to traditional slot allocation approaches, 2SLS achieves higher channel utilization and provides better end-to-end QoS for delay-sensitive flows with VBR traffic.
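The actual 2SLS allocation and scheduling policies are defined in the paper; the sketch below (made-up demand model and field names) only illustrates the two-stage structure the abstract describes: stage one hands each link a contiguous slot window sized between its minimum and maximum requirements, and stage two lets each link schedule its own queued traffic inside that window without coordinating with other links.

```python
def allocate_slots(links, frame_size):
    """Stage 1: give each link a contiguous slot window in the frame,
    at least its minimum requirement, then split the leftover slots
    proportionally up to each link's maximum requirement."""
    total_min = sum(l["min"] for l in links)
    spare = frame_size - total_min
    windows, start = {}, 0
    for l in links:
        extra = min(l["max"] - l["min"], spare * l["min"] // max(total_min, 1))
        width = l["min"] + extra
        windows[l["id"]] = (start, start + width)
        start += width
    return windows

def on_the_go_schedule(window, queued_packets):
    """Stage 2: inside its own window, a link transmits its queued packets
    (one per slot here) without coordinating with other links."""
    start, end = window
    return list(zip(range(start, end), queued_packets))[: end - start]

links = [{"id": "A", "min": 3, "max": 6}, {"id": "B", "min": 2, "max": 4}]
windows = allocate_slots(links, frame_size=10)
print(windows)                                                 # {'A': (0, 6), 'B': (6, 10)}
print(on_the_go_schedule(windows["A"], ["p1", "p2", "p3", "p4", "p5"]))
```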
"On the (Im)Possibility of Tamper-Resilient Cryptography: Using Fourier Analysis in Computer Viruses," The 34th International Cryptology Conference (CRYPTO), August 2014.
Authors: Per Austrin and Kai-Min Chung and Mohammad Mahmoody and Rafael Pass and Karn Seth

Abstract:
We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider p-tampering attackers that may efficiently tamper with each bit of the honest parties' random tape with probability p, but have to do so in an "online" fashion. Our main result is a strong negative result: we show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be "broken" with probability p by a p-tampering attacker. The core of this result is a new Fourier analytic technique for biasing the output of bounded-value functions, which may be of independent interest. We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to 1/poly(n)-tampering attacks, where n is the security parameter.
