Institute of Information Science
Recent Research Results
"Reliability-Aware Striping with Minimized Performance Overheads for Flash-based Storage Devices," ACM Symposium on Applied Computing (SAC), April 2015.
Authors: Ming-Chang Yang, Yu-Ming Chang, Po-Chun Huang, Yuan-Hao Chang, Lue-Jane Lee, and Tei-Wei Kuo

Abstract:
As the data retention and disturb problems of upcoming flash chips keep deteriorating, improving the reliability/endurance of flash devices has become a critical issue in many flash-based storage applications. Meanwhile, the increasing access parallelism of today's flash devices makes it possible to adopt larger striping units in RAID-like technology to further enhance data reliability, even though larger striping units might also introduce more performance overhead. This work is thus motivated by the respective merits of larger striping units for reliability and smaller striping units for performance. In contrast to many existing RAID-integrated flash management designs, this work proposes a reliability-aware striping design that adaptively determines a "proper" stripe size for data with different update patterns, so as to achieve reliability enhancement with minimized performance degradation. In particular, our design considers the special "erase-before-write" feature of flash chips to further reduce the performance overhead. The experiments were conducted on representative realistic workloads to evaluate the efficacy of the proposed scheme, and the results are very encouraging.
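To make the adaptive idea concrete, here is a minimal sketch of stripe-size selection driven by observed update frequency; the thresholds, stripe sizes, and counting scheme are illustrative assumptions, not the paper's actual policy.

```python
# Illustrative sketch: pick a stripe size per logical region based on its
# observed update frequency. Hot (frequently updated) data gets a small
# stripe to limit parity-update overhead; cold data gets a large stripe
# for stronger RAID-like protection. Thresholds are made-up examples.
from collections import defaultdict

STRIPE_SIZES = [2, 4, 8]          # pages per stripe (hypothetical options)
HOT_THRESHOLD, COLD_THRESHOLD = 100, 10

update_count = defaultdict(int)   # logical region -> number of updates seen

def record_update(region):
    update_count[region] += 1

def choose_stripe_size(region):
    n = update_count[region]
    if n >= HOT_THRESHOLD:        # hot: favor performance
        return STRIPE_SIZES[0]
    if n <= COLD_THRESHOLD:       # cold: favor reliability
        return STRIPE_SIZES[-1]
    return STRIPE_SIZES[1]        # warm: middle ground
```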
"The Deployment of Shared Data Objects Among Handheld and Wearable Devices," ACM Symposium on Applied Computing (SAC), April 2015.
Authors: Sheng-Wei Cheng, Che-Wei Chang, Yuan-Hao Chang, Pi-Cheng Hsiu, and Chia-Heng Tu

Abstract:
With the great success in making phones smarter, vendors now plan to replicate the same idea on wearable accessories. Accordingly, applications on these devices are full of new possibilities for interacting with users. However, providing a consistent user experience poses a major challenge: how to efficiently deploy shared application states among the devices. In this paper, we consider minimizing the data transmission latencies between the processes and the shared data objects on a set of mobile devices with distributed shared memory. The problem is proved to be ${\cal NP}$-hard. Nevertheless, efficient solutions can still be obtained for special cases. On the one hand, we propose a polynomial-time optimal algorithm for the case where the memory of each mobile device is segmented into blocks and each shared data object occupies a single block. On the other hand, to provide a practical way to address the problem, we propose a $(1,\epsilon)$ asymptotic approximation algorithm, where $\epsilon > 0$ can be arbitrarily small, with a 2-augmentation bound on memory size. Finally, a series of simulations was conducted, and the results were very encouraging.
"A Progressive Wear Leveling to Minimize Data Migration Overheads for NAND Flash Devices," ACM/IEEE Design, Automation and Test in Europe (DATE), March 2015.
Authors: Fu-Hsin Chen, Ming-Chang Yang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
As the endurance of flash memory keeps deteriorating, exploiting wear leveling techniques to improve the lifetime/endurance of flash memory has become a critical issue in the design of flash storage devices. Nevertheless, the deteriorating access performance of high-density flash memory makes the performance overhead introduced by wear leveling non-negligible. In contrast to existing wear-leveling techniques that aggressively distribute erases evenly over all flash blocks according to a fixed threshold, we propose a progressive wear leveling design that performs wear leveling in a "progressive" way to prevent any block from being worn out prematurely, and ultimately to minimize the performance overhead caused by unnecessary data migration, whether imposed by aggressive wear leveling in the early stage of the device lifespan or by improper selection of victim blocks for erases. The experiments were conducted on representative realistic workloads to evaluate the efficacy of the proposed scheme. The results reveal that, instead of sacrificing device lifetime, performing wear leveling in such a progressive way can not only minimize the performance overhead but even has the potential to extend device lifetime.
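For contrast with fixed-threshold schemes, the sketch below shows a wear-leveling trigger whose tolerance tightens as the device ages; the growth schedule and trigger condition are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch: a wear-leveling trigger whose threshold shrinks with
# device age, so little data migration happens early in the lifespan and
# leveling becomes stricter only as blocks approach their erase limit.
MAX_ERASE = 3000          # rated P/E cycles per block (example value)

def should_level(erase_counts):
    """Trigger wear leveling only when the gap between the most- and
    least-worn blocks exceeds a threshold that shrinks as the device ages."""
    hottest, coldest = max(erase_counts), min(erase_counts)
    progress = hottest / MAX_ERASE            # 0.0 (new) .. 1.0 (worn out)
    threshold = int(MAX_ERASE * 0.1 * (1.0 - progress)) + 1
    return hottest - coldest > threshold
```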
"4-Round Resettably-Sound Zero Knowledge," The 11th IACR Theory of Cryptography Conference (TCC), February 2014.
Authors: Kai-Min Chung and Rafail Ostrovsky and Rafael Pass and Muthuramakrishnan Venkitasubramaniam and Ivan Visconti

Abstract:

While 4-round constructions of zero-knowledge arguments are known based on the existence of one-way functions, constructions of *resettably-sound* zero-knowledge arguments require either stronger assumptions (the existence of a fully homomorphic encryption scheme) or more communication rounds. We close this gap by demonstrating a 4-round resettably-sound zero-knowledge argument for NP based on the existence of one-way functions.
 
Current Research Results
Authors: Yung-Cheng Tu, Meng Chang Chen and Yeali S. Sun

Abstract:
Providing end-to-end QoS for delay-sensitive flows with variable-bit-rate (VBR) traffic in wireless mesh networks is a major challenge. There are several reasons for this, including time-varying bandwidth requirements, competition for transmission opportunities among flows on the same link, and interference from other wireless links. In this paper, we propose a flexible bandwidth allocation and uncoordinated scheduling scheme, called two-stage link scheduling (2SLS), to support flow delay control in TDMA-based wireless mesh networks. The scheme is implemented in two stages: slot allocation and on-the-go scheduling. The slot allocation mechanism allocates contiguous time slots to each link in each frame based on pre-defined maximum and minimum bandwidth requirements. Then, each link's on-the-go scheduling mechanism dynamically schedules the transmissions within the allocated time slots. The objective is to maximally satisfy the demand of all flows on the link according to the bandwidth requirements and channel conditions. Compared to traditional slot allocation approaches, 2SLS achieves higher channel utilization and provides better end-to-end QoS for delay-sensitive flows with VBR traffic.
Current Research Results
Authors: Cheng-Wei Lee, Ming-Chin Chuang, Meng Chang Chen and Yeali S. Sun

Abstract:
High-speed rail systems are becoming increasingly popular among long-distance travelers. With the explosive growth in the number of mobile devices, the provision of high-quality telecommunication and Internet access services on high-speed trains is now a pressing problem. Network mobility (NEMO) has been proposed to enable a large number of mobile devices on a vehicle to access the Internet; however, several issues must be solved before it can be put into practice, e.g., frequent handovers, long handover latency, and poor quality of service. To resolve the above problems, we propose an LTE femtocell-based network mobility scheme that uses Multiple Egress Network interfaces to support seamless handover for high-speed rail systems, called MEN-NEMO. The results of simulations show that the proposed MEN-NEMO scheme substantially reduces the handover latency and the transmission overhead of handover signaling.
Current Research Results
"Per-cluster Ensemble Kernel Learning for Multi-modal Image Clustering with Group-dependent Feature Selection," IEEE Transactions on Multimedia, To Appear.
Authors: J. T. Tsai, Y. Y. Lin, and H. Y. Mark Liao

Abstract:
We present a clustering approach, MK-SOM, that carries out cluster-dependent feature selection and partitions images with multiple feature representations into clusters. This work is motivated by the observation that human visual systems (HVS) can receive various kinds of visual cues for interpreting the world. Images identified by HVS as the same category are typically coherent with each other in certain crucial visual cues, but the crucial cues vary from category to category. To account for this observation and bridge the semantic gap, the proposed MK-SOM integrates multiple kernel learning (MKL) into the training process of the self-organizing map (SOM), and associates each cluster with a learnable ensemble kernel. Hence, it can leverage information captured by various image descriptors, and discovers the cluster-specific characteristics via learning the per-cluster ensemble kernels. Through the optimization iterations, cluster structures are gradually revealed via the features specified by the learned ensemble kernels, while the quality of these ensemble kernels is progressively improved owing to the coherent clusters enforced by SOM. Besides, MK-SOM allows the introduction of side information to improve performance, and hence provides a new perspective on applying MKL to address both unsupervised and semi-supervised clustering tasks. Our approach is comprehensively evaluated in two applications. The superior and promising results manifest its effectiveness.
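In generic MKL form, a per-cluster ensemble kernel of the kind described can be written as a convex combination of base kernels, one per image descriptor; the exact parameterization in MK-SOM may differ.

```latex
% Generic per-cluster ensemble kernel in MKL form: each cluster c combines
% the base kernels K_m (one per image descriptor) with learnable weights.
K_c(x, x') = \sum_{m=1}^{M} \beta_{c,m}\, K_m(x, x'),
\qquad \beta_{c,m} \ge 0,\quad \sum_{m=1}^{M} \beta_{c,m} = 1.
```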
Current Research Results
"Cross-camera Knowledge Transfer for Multiview People Counting," IEEE Trans. on Image Processing, To Appear.
Authors: Nick C Tang, Y. Y. Lin, M. F. Weng, and H. Y. Mark Liao

Abstract:
We present a novel two-pass framework for counting the number of people in an environment where multiple cameras provide different views of the subjects. By exploiting the complementary information captured by the cameras, we can transfer knowledge between the cameras to address the difficulties of people counting and improve the performance. The contribution of this work is threefold. First, normalizing the perspective of visual features and estimating the size of a crowd are highly correlated tasks; hence we treat them as a joint learning problem. The derived counting model is scalable and provides more accurate results than existing approaches. Second, we introduce an algorithm that matches groups of pedestrians in images captured by different cameras. The results provide a common domain for knowledge transfer, so we can work with multiple cameras without worrying about their differences. Third, the proposed counting system comprises a pair of collaborative regressors. The first one determines the people count based on features extracted from intra-camera visual information, while the second calculates the residual by considering the conflicts between inter-camera predictions. The two regressors are elegantly coupled and provide an accurate people counting system. The results of experiments in various settings show that, overall, our approach outperforms comparable baseline methods. The significant performance improvement demonstrates the effectiveness of our two-pass regression framework.
Current Research Results
Authors: Anderson Mayfield, Yu-Bin Wang, Chii-Shiarng Chen, Chung-Yen Lin, and Shu-Hwa Chen

Abstract:
Although rising ocean temperatures threaten scleractinian corals and the reefs they construct, certain reef corals can acclimate to elevated temperatures to which they are rarely exposed in situ. Specimens of the Indo-Pacific reef coral Pocillopora damicornis collected from upwelling reefs of Southern Taiwan were previously found to have survived a 36-week exposure to 30°C, a temperature they encounter infrequently and one that elicits the breakdown of the coral-dinoflagellate (genus Symbiodinium) endosymbiosis in many corals of the Pacific Ocean. In order to gain insight into the sub-cellular pathways utilized by both the coral hosts and their mutualistic Symbiodinium populations to acclimate to this temperature, mRNAs from both control (27°C) and high (30°C) temperature samples were sequenced on an Illumina platform and assembled into a 236,435-contig transcriptome. These P. damicornis specimens were found to be ~60% anthozoan and 40% microbe (Symbiodinium, other eukaryotic microbes, and bacteria) from an mRNA perspective. Furthermore, a significantly higher proportion of genes from the Symbiodinium compartment were differentially expressed after two weeks of exposure. Specifically, at elevated temperatures Symbiodinium populations residing within the coral gastrodermal tissues were more likely to up-regulate the expression of genes encoding proteins involved in metabolism than their coral hosts. Collectively, these transcriptome-scale data suggest that the two members of this endosymbiosis have distinct strategies for acclimating to elevated temperatures that are expected to characterize many of Earth's coral reefs in the coming decades.
"On the (Im)Possibility of Tamper-Resilient Cryptography: Using Fourier Analysis in Computer Viruses," The 34th International Cryptology Conference (CRYPTO), August 2014.
Authors: Per Austrin and Kai-Min Chung and Mohammad Mahmoody and Rafael Pass and Karn Seth

Abstract:
We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider $p$-tampering attackers that may efficiently tamper with each bit of the honest parties' random tape with probability $p$, but have to do so in an "online" fashion. Our main result is a strong negative one: we show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be "broken" with probability $p$ by a $p$-tampering attacker. The core of this result is a new Fourier-analytic technique for biasing the output of bounded-value functions, which may be of independent interest. We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to $(\frac{1}{\mbox{poly}(n)})$-tampering attacks, where $n$ is the security parameter.
Current Research Results
Authors: Hong-Sheng Liao, Po-Yu Chen, and W. T. Chen

Abstract:
In long term evolution-advanced (LTE-A) networks, the carrier aggregation technique is incorporated for user equipments (UEs) to simultaneously aggregate multiple component carriers (CCs) to achieve higher transmission rates. Much research on LTE-A systems with carrier aggregation configuration has concentrated on the radio resource management problem for downlink transmission, mainly including CC assignment and packet scheduling. Most previous studies have not considered that the CCs assigned to each UE can be changed. Furthermore, they also have not considered the modulation and coding scheme constraint specified in the LTE-A standards. Therefore, their proposed schemes may limit radio resource usage and are not compatible with LTE-A systems. In this paper, we assume that the scheduler can reassign CCs to each UE at each transmission time interval, and we formulate the downlink radio resource scheduling problem under the modulation and coding scheme constraint, which is proved to be NP-hard. Then, a novel greedy-based scheme is proposed to maximize the system throughput while maintaining proportional fairness of radio resource allocation among all UEs. We show that this scheme guarantees at least half of the performance of the optimal solution. Simulation results show that our proposed scheme outperforms those of previous studies.
"Leveraging Effective Query Modeling Techniques for Speech Recognition and Summarization," Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), October 2014.
Authors: Kuan-Yu Chen, Shih-Hung Liu, Berlin Chen, Ea-Ee Jan, Hsin-Min Wang, Wen-Lian Hsu, and Hsin-Hsi Chen

Abstract:
Statistical language modeling (LM), which purports to quantify the acceptability of a given piece of text, has long been an interesting yet challenging research area. In particular, language modeling for information retrieval (IR) has enjoyed remarkable empirical success; one emerging stream of the LM approach for IR is to employ the pseudo-relevance feedback process to enhance the representation of an input query so as to improve retrieval effectiveness. This paper presents a continuation of this general line of research, and its main contribution is three-fold. First, we propose a principled framework which unifies the relationships among several widely used query modeling formulations. Second, on top of the successfully developed framework, we propose an extended query modeling formulation by incorporating critical query-specific information cues to guide the model estimation. Third, we further adopt and formalize such a framework for the speech recognition and summarization tasks. A series of empirical experiments reveal the feasibility of such an LM framework and the performance merits of the deduced models on these two tasks.
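For background, a widely used pseudo-relevance feedback formulation that such query modeling builds on is the relevance model (RM1); this is standard background, not necessarily the paper's exact estimator.

```latex
% Relevance model (RM1): re-estimate the query model from the top-ranked
% feedback documents F, weighting each document D by its query likelihood.
P(w \mid \hat{\theta}_Q) \;\propto\; \sum_{D \in F} P(w \mid \theta_D)\, P(Q \mid \theta_D),
\qquad P(Q \mid \theta_D) = \prod_{q \in Q} P(q \mid \theta_D).
```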
Current Research Results
Authors: Roland Gilbert Remenyi, Hangfei Qi, Sheng-Yao Su, Zugen Chen, Nicholas C. Wu, Vaithilingaraja Arumugaswami, Shawna Truong, Virginia Chu, Tamar Stokelman, Hung-Hao Lo, C. Anders Olson, Ting-Ting Wu, Shu-Hwa Chen, Chung-Yen Lin, and Ren Sun

Abstract:

Pairing high-throughput sequencing technologies with high-throughput mutagenesis enables genome-wide investigations of pathogenic organisms. Knowledge of the specific functions of protein domains encoded by the genome of the hepatitis C virus (HCV), a major human pathogen that contributes to liver disease worldwide, remains limited to insight from small-scale studies. To enhance the capabilities of HCV researchers, we have obtained a high-resolution functional map of the entire viral genome by combining transposon-based insertional mutagenesis with next-generation sequencing. We generated a library of 8,398 mutagenized HCV clones, each containing one 15-nucleotide sequence inserted at a unique genomic position. We passaged this library in hepatic cells, recovered virus pools, and simultaneously assayed the abundance of mutant viruses in each pool by next-generation sequencing. To illustrate the validity of the functional profile, we compared the genetic footprints of viral proteins with previously solved protein structures. Moreover, we show the utility of these genetic footprints in the identification of candidate regions for epitope tag insertion. In a second application, we screened the genetic footprints for phenotypes that reflected defects in later steps of the viral life cycle. We confirmed that viruses with insertions in a region of the nonstructural protein NS4B had a defect in infectivity while maintaining genome replication. Overall, our genome-wide HCV mutant library and the genetic footprints obtained by high-resolution profiling represent valuable new resources for the research community that can direct the attention of investigators toward unidentified roles of individual protein domains.

IMPORTANCE Our insertional mutagenesis library provides a resource that illustrates the effects of relatively small insertions on local protein structure and HCV viability. We have also generated complementary resources, including a website (http://hangfei.bol.ucla.edu) and a panel of epitope-tagged mutant viruses that should enhance the research capabilities of investigators studying HCV. Researchers can now detect epitope-tagged viral proteins by established antibodies, which will allow biochemical studies of HCV proteins for which antibodies are not readily available. Furthermore, researchers can now quickly look up genotype-phenotype relationships and base further mechanistic studies on the residue-by-residue information from the functional profile. More broadly, this approach offers a general strategy for the systematic functional characterization of viruses on the genome scale.

"Enhanced Language Modeling for Extractive Speech Summarization with Sentence Relatedness Information," Interspeech2014, September 2014.
Authors: Shih-Hung Liu, Kuan-Yu Chen, Yu-Lun Hsieh, Berlin Chen, Hsin-Min Wang, Hsu-Chun Yen, and Wen-Lian Hsu

Abstract:
Extractive summarization is intended to automatically select a set of representative sentences from a text or spoken document that can concisely express the most important topics of the document. Language modeling (LM) has been proven to be a promising framework for performing extractive summarization in an unsupervised manner. However, there remain two fundamental challenges facing existing LM-based methods. One is how to construct sentence models involved in the LM framework more accurately without resorting to external information sources. The other is how to additionally take into account the sentence-level structural relationships embedded in a document for important sentence selection. To address these two challenges, in this paper we explore a novel approach that generates overlapped clusters to extract sentence relatedness information from the document to be summarized, which can be used not only to enhance the estimation of various sentence models but also to allow for the sentence-level structural relationships for better summarization performance. Further, the utilities of our proposed methods and several state-of-the-art unsupervised methods are analyzed and compared extensively. A series of experiments conducted on a Mandarin broadcast news summarization task demonstrate the effectiveness and viability of our method.
Current Research Results
"A Mobile Proxy Architecture for Video Services over High-Speed Rail Environments in LTE-A Networks," IEEE Systems Journal, To Appear.
Authors: Ming-Chin Chuang and Meng Chang Chen

Abstract:
With the rapid developments in wireless communication technology, people can access Internet applications anytime and anywhere. Recently, academic and industry researchers have been paying increasing attention to video streaming services in high-speed rail environments. To provide quality of service (QoS) for video streaming, we present a novel mobile proxy architecture that improves system performance on high-speed trains in LTE-A networks. The mobile proxy not only saves wireless bandwidth between the base station and the high-speed train but also reduces the start-up delay of the video. Moreover, we propose a score-based replacement (SBR) scheme for the memory management of the proxy server based on the features of the video. Our simulation results demonstrate that the proposed scheme provides better performance than other existing cache management schemes.
Current Research Results
"Slow-Paced Persistent Network Attacks Analysis and Detection Using Spectrum Analysis," IEEE Systems Journal, To Appear.
Authors: Li-Ming Chen, Meng Chang Chen, and Wanjiun Liao

Abstract:
A slow-paced persistent attack, such as a slow worm or bot, can bewilder the detection system by slowing down its activity. Detecting such attacks based on traditional anomaly detection techniques may yield high false alarm rates. In this paper, we frame our problem as detecting slow-paced persistent attacks from a time series obtained from a network trace. We focus on time series spectrum analysis to identify peculiar spectral patterns that may represent the occurrence of a persistent activity in the time domain. We propose a method to adaptively detect slow-paced persistent attacks in a time series, and we evaluate the proposed method by conducting experiments using both synthesized traffic and real-world traffic. The results show that the proposed method is capable of detecting slow-paced persistent attacks even in a noisy environment mixed with legitimate traffic.
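The core idea can be sketched as looking for a dominant narrow peak in the spectrum of a traffic time series; the sampling granularity and peak-dominance threshold below are illustrative assumptions, not the paper's adaptive detector.

```python
# Illustrative sketch: look for a strong narrow peak in the power spectrum
# of a packet-count time series, which may indicate a periodic, slow-paced
# activity hidden in otherwise bursty legitimate traffic.
import numpy as np

def has_periodic_component(counts, peak_ratio=5.0):
    """counts: packets per second over an observation window."""
    x = np.asarray(counts, dtype=float)
    x = x - x.mean()                      # drop the DC component
    power = np.abs(np.fft.rfft(x)) ** 2
    power = power[1:]                     # skip the zero-frequency bin
    # Flag a spectrum whose largest peak dominates the median power level.
    return power.max() > peak_ratio * np.median(power)
```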
"Warranty-Aware Page Management for PCM-Based Embedded Systems," ACM/IEEE International Conference on Computer-Aided Design (ICCAD), November 2014.
Authors: Sheng-Wei Cheng, Yu-Fen Chang, Yuan-Hao Chang, Shin-Wen Wei, and Wei-Kuan Shih

Abstract:
The thriving growth of mobile consumer electronics keeps energy efficiency a recurring theme in embedded system design. PCM main memory has shown its potential to replace DRAM due to the large energy reduction it offers, e.g., 65%. When PCM is used as main memory, its write endurance becomes a critical issue, and wear leveling is a common approach to resolving it. Even though a wear leveling design should emphasize operation efficiency and overhead reduction, existing wear leveling strategies for PCM main memory usually prolong the lifetime of PCM in a best-effort manner. In this paper, we propose that, instead of putting PCM lifetime exploitation first, we should aim to satisfy the minimum guaranteed product lifetime, i.e., the warranty period; to this end, further overhead reduction and operation efficiency enhancement become possible. We propose a warranty-aware page management design to mitigate the operation overhead required for managing the endurance issue in PCM. For overloaded pages, a cooling-based wear leveling is proposed that lets such pages rest for a period of time. In addition, an auxiliary wear leveling is proposed to proactively free pages that are possessed by some processes but rarely written. To show the effectiveness of the proposed design, we collected real traces on QEMU by running SPEC2006 benchmarks with workloads of different write intensity. The experimental results show that the cooling-based design reduces the overhead to one third of that of state-of-the-art designs while maintaining the same level of performance.
"Verifying Curve25519 Software," ACMCCS 2014, November 2014.
Authors: Yu-Fang Chen, Chang-Hong Hsu, Hsin-Hung Lin, Peter Schwabe, Ming-Hsien Tsai, Bow-Yaw Wang, Bo-Yin Yang, and Shang-Yi Yang

Abstract:
This paper presents results on the formal verification of high-speed cryptographic software. We consider the speed-record-setting hand-optimized assembly software for Curve25519 elliptic-curve key exchange presented by Bernstein, Duif, Lange, Schwabe, and Yang in 2011; two versions for different microarchitectures are available. We successfully verify the core part of the computation and reproduce a bug in a previously published edition. An SMT solver for the bit-vector theory is used to establish almost all properties; the remaining properties are verified in a proof assistant. We also exploit the compositionality of Hoare logic to address the scalability issue. Essential differences between the two versions of the software are discussed from a formal-verification perspective.
Authors: Wei-Chun Chung, Yu-Jung Chang, D. T. Lee, and Jan-Ming Ho

Abstract:
Next-generation sequencing (NGS) data is a typical type of big data in science. The data grows rapidly, since the throughput of NGS doubles about every five months. NGS reads are getting longer, with unavoidable sequencing errors. Error correction is one of the critical steps in the success of NGS applications such as de novo genome assembly and DNA resequencing. Improvements are demanded in both efficiency and effectiveness, especially for correcting long reads. In this paper, we study the design of efficient and effective computational strategies to improve the performance of CloudRS, an open-source MapReduce application designed to correct sequencing errors in NGS data. We introduce the RM diagram to represent the key-value pairs generated for each NGS read. We also present several schemes to trim the RM diagram and thus reduce the size of each message. The experimental results show that our proposed schemes reduce the execution time and improve the quality of the de novo genome assembly.
"Distributed Algorithms for the Lovász Local Lemma and Graph Coloring," ACM Symposium on Principles of Distributed Computing (PODC), July 2014.
Authors: Kai-Min Chung and Seth Pettie and Hsin-Hao Su

Abstract:

The Lovász Local Lemma (LLL), introduced by Erdős and Lovász in 1975, is a powerful tool of the probabilistic method that allows one to prove that, with non-zero probability, none of a set of $n$ "bad" events happens, provided that the events have limited dependence. However, the LLL itself does not suggest how to find a point avoiding all bad events. Since the work of Beck (1991) there has been a sustained effort to find a constructive proof (i.e., an algorithm) for the LLL or weaker versions of it. In a major breakthrough, Moser and Tardos (2010) showed that a point avoiding all bad events can be found efficiently. They also proposed a distributed/parallel version of their algorithm that requires $O(\log^2 n)$ rounds of communication in a distributed network.
 
In this paper we provide two new distributed algorithms for the LLL that improve on both the efficiency and simplicity of the Moser-Tardos algorithm. For clarity we express our results in terms of the symmetric LLL, though both algorithms handle the asymmetric version as well. Let $p$ bound the probability of any bad event and $d$ be the maximum degree in the dependency graph of the bad events. When $epd^2 < 1$, we give a truly simple LLL algorithm running in $O(\log_{1/epd^2} n)$ rounds. Under the tighter condition $ep(d+1) < 1$, we give a slightly slower algorithm running in $O(\log^2 d \cdot \log_{1/ep(d+1)} n)$ rounds. Furthermore, we give an algorithm that runs in sublogarithmically many rounds under the condition $p \cdot f(d) < 1$, where $f(d)$ is an exponential function of $d$. Although the conditions of the LLL are locally verifiable, we prove that any distributed LLL algorithm requires $\Omega(\log^* n)$ rounds.
 
In many graph coloring problems the existence of a valid coloring is established by one or more applications of the LLL. Using our LLL algorithms, we give logarithmic-time distributed algorithms for frugal coloring, defective coloring, coloring girth-4 (triangle-free) and girth-5 graphs, edge coloring, and list coloring.
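For background, the sequential Moser-Tardos resampling algorithm that these distributed variants build on can be sketched as follows; the event/variable representation used here is an illustrative assumption.

```python
# Illustrative sketch of Moser-Tardos resampling: repeatedly pick a bad
# event that currently holds and resample only the variables it depends on.
# Under the LLL condition, this terminates quickly in expectation.
import random

def moser_tardos(num_vars, events, sample):
    """
    num_vars: number of random variables.
    events:   list of (vars, holds) pairs, where vars is the set of variable
              indices the event depends on and holds(assignment) tests it.
    sample:   function i -> fresh random value for variable i.
    """
    assignment = [sample(i) for i in range(num_vars)]
    while True:
        bad = [ev for ev in events if ev[1](assignment)]
        if not bad:
            return assignment           # no bad event holds: done
        vars_, _ = random.choice(bad)   # any violated event works
        for i in vars_:
            assignment[i] = sample(i)   # resample just its variables
```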
"Screencast in the Wild: Performance and Limitations," Proceedings of ACM Multimedia 2014, November 2014.
Authors: Chih-Fan Hsu, De-Yu Chen, Chun-Ying Huang, Cheng-Hsin Hsu, and Kuan-Ta Chen

Abstract:
Displays without associated computing devices are increasingly popular, and the binding between computing devices and displays is no longer one-to-one but more dynamic and adaptive.
 
Screencast technologies enable such dynamic binding over ad hoc one-hop networks or Wi-Fi access points. In this paper, we design and conduct the first detailed measurement study on the performance of state-of-the-art screencast technologies. By varying the user demands and network conditions, we find that Splashtop and Miracast outperform other screencast technologies under typical setups. Our experiments also show that the screencast technologies either (i) do not dynamically adjust bitrate or (ii) employ a suboptimal adaptation strategy. Developers of future screencast technologies are advised to pay more attention to the bitrate adaptation strategy, e.g., by leveraging the cross-layer optimization paradigm.
"An Engergy-efficient Task Scheduler for Multicore Platforms with per-core DVFS Based on Task Characteristics," International Conference on Parallel Processing (ICPP), September 2014.
Authors: Ching-Chi Lin, Chao-Jui Chang, You-Chen Syu, Jan-Jan Wu, Po-Wen Cheng, Wei-Te Hsu

Abstract:
Energy-efficient task scheduling is a fundamental issue in many application domains, such as energy conservation for mobile devices and the operation of green computing data centers. Modern processors support dynamic voltage and frequency scaling (DVFS) on a per-core basis, i.e., the CPU can adjust the voltage or frequency of each core. As a result, the cores in a processor may differ in computing power and energy consumption. To conserve energy in multi-core platforms, we propose task scheduling algorithms that leverage per-core DVFS and achieve a balance between performance and energy consumption. We consider two task execution modes: the batch mode, which runs jobs in batches, and the online mode, in which jobs with different time constraints, arrival times, and computation workloads co-exist in the system.
For tasks executed in the batch mode, we propose an algorithm that finds the optimal scheduling policy; for the online mode, we present a heuristic algorithm that determines the execution order and processing speed of tasks in an online fashion. The heuristic ensures that the total cost is minimal for every time interval during a task's execution.
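For intuition about the performance/energy balance that per-core DVFS trades on, recall the standard CMOS scaling relation; this is textbook background, not the paper's cost model.

```latex
% Dynamic power grows roughly cubically with frequency (P ~ C V^2 f with
% V ~ f), while the execution time of W cycles shrinks as W/f, so the
% dynamic energy of a task scales roughly quadratically with frequency:
P(f) \approx c\, f^{3}, \qquad
E(f) = P(f)\cdot \frac{W}{f} \approx c\, W f^{2}.
```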
"Comparing Profitability of Day Trading Using ORB Strategies on Index Futures Markets in Taiwan, Hong-Kong and USA," The 10th Annual Conference of the Asia-Pacific Association of Derivatives, August 2014.
Authors: Yi-Cheng Tsai, Mu-En Wu, Chin-Laung Lei, Chung-Shu Wu, and Jan-Ming Ho

Abstract:
In the literature, it has been shown that market behavior is the aggregation of investors' responses to information. Although markets do not respond in real time to events that occur after market hours, it seems that information is picked up by the market in the early stage of market opening and has an impact on market dynamics over the same day. In this paper, we present our studies on the design of the timing parameters of ORB strategies in index futures markets. We first show that, based on cross-sectional analysis of 10 years of historical data, the trading volume and the fluctuation of return in each one-minute interval of the trading hours of the futures markets reach their peaks at the opening and closing of the underlying stock markets. Based on these observations, we define the active period of an index futures market to align with the underlying stock market. We test ORB strategies on the index futures of DJIA, S&P, NASDAQ, HSI, and TAIEX from 2003 to 2013. We find that by optimizing the trading timing, ORB achieves over 10% annual return with p-value no greater than 2% in each futures market. The best performance, 22.6% annual return at a p-value of $1\times10^{-8}$, occurs in TAIEX.
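A minimal sketch of an opening-range-breakout (ORB) rule of the kind studied; the range length and exit-at-close rule are illustrative assumptions, not the paper's optimized timing parameters.

```python
# Illustrative sketch: classic opening-range breakout on 1-minute bars.
# Go long if price breaks above the opening range's high, short if it
# breaks below its low; exit any position at the session close.
def orb_day_return(bars, range_minutes=15):
    """bars: list of (high, low, close) 1-minute bars for one session."""
    opening = bars[:range_minutes]
    hi = max(b[0] for b in opening)
    lo = min(b[1] for b in opening)
    close = bars[-1][2]
    for high, low, _ in bars[range_minutes:]:
        if high > hi:                 # upside breakout: buy at hi
            return (close - hi) / hi
        if low < lo:                  # downside breakout: sell at lo
            return (lo - close) / lo
    return 0.0                        # no breakout, no trade
```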
Current Research Results
"Generalized k-Labelsets Ensemble for Multi-Label and Cost-Sensitive Classification," IEEE Transactions on Knowledge and Data Engineering, July 2014.
Authors: Hung-Yi Lo, Shou-De Lin, and Hsin-Min Wang

Abstract:
The label powerset (LP) method is one category of multi-label learning algorithms. This paper presents a basis expansion model for multi-label classification, where a basis function is an LP classifier trained on a random k-labelset. The expansion coefficients are learned to minimize the global error between the prediction and the ground truth. We derive an analytic solution to learn the coefficients efficiently. We further extend this model to handle the cost-sensitive multi-label classification problem, and apply it to social tagging to handle the issue of noisy training sets by treating the tag counts as misclassification costs. We have conducted experiments on several benchmark datasets and compared our method with other state-of-the-art multi-label learning methods. Experimental results on both multi-label classification and cost-sensitive social tagging demonstrate that our method outperforms the other methods.
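The coefficient-learning step has the flavor of a regularized least-squares fit over base-classifier outputs; the sketch below makes that assumption explicit, and the paper's exact objective may differ.

```python
# Illustrative sketch: combine the soft outputs of k-labelset base
# classifiers with coefficients fitted by regularized least squares.
import numpy as np

def fit_coefficients(base_outputs, y_true, ridge=1e-3):
    """
    base_outputs: array (n_samples * n_labels, n_base), stacked predictions
                  of each base LP classifier.
    y_true:       array (n_samples * n_labels,), stacked 0/1 ground truth.
    Returns the coefficient vector minimizing ||B a - y||^2 + ridge ||a||^2,
    which has the closed form a = (B^T B + ridge I)^{-1} B^T y.
    """
    B = np.asarray(base_outputs, dtype=float)
    G = B.T @ B + ridge * np.eye(B.shape[1])
    return np.linalg.solve(G, B.T @ y_true)
```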
Current Research Results
"Garbage Collection for Multiversion Index in Flash-based Embedded Databases," ACM Transactions on Design Automation of Electronic Systems, June 2014.
Authors: Po-Chun Huang, Yuan-Hao Chang, Kam-Yiu Lam, Jian-Tao Wang, and Chien-Chin Huang

Abstract:
Recently, flash-based embedded databases have gained momentum in various control and monitoring systems, such as cyber-physical systems (CPSes). To support access to historical data, a multiversion index is adopted to simultaneously maintain multiple versions of data items, as well as their index information. However, maintaining a multiversion index on flash memory incurs considerable garbage collection overhead, where garbage collection reclaims the space occupied by outdated/invalid data items and their index information on flash memory. In this work, we propose an efficient garbage collection strategy to solve the garbage collection issues of flash-based multiversion databases. In particular, a version-tracking method is proposed to accelerate the process of identifying and reclaiming the space of invalid data and their indexes, and a pre-summary method is designed to solve the cascading update problem, which is caused by the write-once nature of flash memory and is worsened when more versions refer to the same data item. The capability of the proposed strategy is then verified through a series of analyses.
Current Research Results
"Exploring Sequential Probability Tree for Movement-based Community Discovery," IEEE Transactions on Knowledge and Data Engineering, To Appear.
Authors: Wen-Yuan Zhu, Wen-Chih Peng, Po-Ruey Lei, and Ling-Jyh Chen

Abstract:
In this paper, we tackle the problem of discovering movement-based communities of users, where users in the same community have similar movement behaviors. Note that the identification of movement-based communities is beneficial to location-based services and trajectory recommendation services. Specifically, we propose a framework to mine movement-based communities which consists of three phases: 1) constructing trajectory profiles of users, 2) deriving similarity between trajectory profiles, and 3) discovering movement-based communities. In the first phase, we design a data structure, called the Sequential Probability tree (SP-tree), as a user trajectory profile. SP-trees not only derive sequential patterns but also indicate transition probabilities of movements. Moreover, we propose two algorithms, BF (Breadth-First) and DF (Depth-First), to construct SP-tree structures as user profiles. To measure the similarity among users' trajectory profiles, we further develop a similarity function that takes SP-tree information into account. In light of the similarity values derived, we formulate an objective function to evaluate the quality of communities. Based on this objective function, we propose a greedy algorithm, Geo-Cluster, to effectively derive communities. To evaluate our proposed algorithms, we have conducted comprehensive experiments on two real datasets. The experimental results show that our proposed framework can effectively discover movement-based user communities.
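A minimal sketch of an SP-tree-style profile, a trie over location sequences whose nodes carry visit counts from which transition probabilities are read off; the node layout and normalization are illustrative assumptions, not the paper's exact structure.

```python
# Illustrative sketch: a trie over location sequences whose edges carry
# transition probabilities estimated from visit counts.
class SPNode:
    def __init__(self):
        self.count = 0
        self.children = {}        # location -> SPNode

    def add_sequence(self, seq):
        """Insert one observed movement sequence (list of location ids)."""
        node = self
        node.count += 1
        for loc in seq:
            node = node.children.setdefault(loc, SPNode())
            node.count += 1

    def transition_prob(self, loc):
        """P(next = loc | current node), from relative child counts."""
        total = sum(c.count for c in self.children.values())
        child = self.children.get(loc)
        return child.count / total if child and total else 0.0
```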
Current Research Results
Authors: Chia-Hao Chin, Shu-Hwa Chen, Hsin-Hung Wu, Chin-Wen Ho, Ming-Tat Ko, and Chung-Yen Lin

Abstract:
Background
Networks are a useful way to present many types of biological data, including protein-protein interactions, gene regulation, cellular pathways, and signal transduction. We can measure nodes by their network features to infer their importance in the network, which helps us identify the central elements of biological networks.
Results
We introduce a novel Cytoscape plugin, cytoHubba, for ranking nodes in a network by their network features. CytoHubba provides 11 topological analysis methods, including Degree, Edge Percolated Component, Maximum Neighborhood Component, Density of Maximum Neighborhood Component, Maximal Clique Centrality (MCC), and six centralities (Bottleneck, EcCentricity, Closeness, Radiality, Betweenness, and Stress) based on shortest paths. Among the eleven methods, the newly proposed method, MCC, performs better in the precision of predicting essential proteins from the yeast PPI network.
Conclusions
CytoHubba provides a user-friendly interface to explore important nodes in biological networks, and it computes all eleven methods in one stop. Besides, researchers are able to combine cytoHubba with other plugins into novel analysis schemes. The networks and sub-networks identified by this topological analysis strategy will lead to new insights on essential regulatory networks and protein drug targets for experimental biologists. CytoHubba is available as a Cytoscape plug-in and can be accessed freely at http://hub.iis.sinica.edu.tw/cytohubba/. According to the Cytoscape plugin download statistics, cytoHubba has been downloaded around 6,500 times since 2010.
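The MCC score can be sketched directly from its published definition, summing (|C|-1)! over the maximal cliques C containing a node; the `networkx` library is assumed here for clique enumeration.

```python
# Illustrative sketch: Maximal Clique Centrality (MCC) of a node v is the
# sum of (|C| - 1)! over all maximal cliques C that contain v.
from math import factorial
import networkx as nx

def mcc_scores(graph):
    scores = {v: 0 for v in graph}
    for clique in nx.find_cliques(graph):      # enumerates maximal cliques
        weight = factorial(len(clique) - 1)
        for v in clique:
            scores[v] += weight
    return scores
```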
"A PCM Translation Layer for Integrated Memory and Storage Management," ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2014.
Authors: Bing-Jing Chang, Yuan-Hao Chang, Hung-Sheng Chang, Tei-Wei Kuo, and Hsiang-Pang Li

Abstract:
Phase change memory (PCM) is known for its potential as both main memory and storage. In contrast to past work, this work presents a PCM translation layer that considers how the main memory is used by the operating system together with the usage patterns of storage, trading off their performance and reliability. In particular, a joint management scheme is proposed to improve the capability of PCM as both main memory and storage so as to enhance the performance of the entire system. The endurance issue of PCM as main memory is resolved by utilizing the potentially large capacity of the PCM storage space with limited modifications to existing operating system implementations. Moreover, three commands are proposed to help operating system engineers take advantage of PCM as main memory and storage by reducing I/O overheads and speeding up both the initialization and termination of program executions. The experimental results show that the performance of PCM as both main memory and storage can be significantly improved with reasonable lifetime, while the system overhead is very limited under coarse-grained wear leveling.
"Current-Aware Scheduling for Flash Storage Devices," IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA), August 2014.
Authors: Tzu-Jung Huang, Chien-Chung Ho, Po-Chun Huang, Yuan-Hao Chang, Che-Wei Chang, and Tei-Wei Kuo

Abstract:
As NAND flash memory has become a major choice of storage media in diversified computing environments, the performance issue of flash memory has been extensively addressed, and many excellent designs have been proposed. Among such designs, it is an effective strategy to adopt multiple channels and flash-memory chips to facilitate parallelized data accesses, so as to improve the throughput of the flash-memory storage device. However, the degree of data access parallelism cannot be increased by simply increasing the number of channels and chips in the storage device, because it is seriously limited by the maximum current constraint of the bus and affected by the access patterns of user data. As a consequence, to maximize the degree of access parallelism, it is of paramount significance to have a proper scheduling strategy to determine the order that read/write requests are served. In this paper, a current-aware scheduling strategy for read/write requests is proposed to maximize the performance of read requests, provided that the maximum current constraint of the bus is not violated and the deadline violations of write requests are minimized. The proposed strategy is then evaluated through a series of experiments, where the results are quite encouraging.
"Content-based Feature Matching for Fragment Reassembly of Ceramic Reconstruction," Proceedings International Agent Technology, IAT 2014 - The 2014 Web Intelligence Congress., August 2014.
Authors: Shu-Yu Lin, Han-Jen Ku, Chao-Lung Ting, Ray-I Chang, Yu-Chun Wang, and Jan-Ming Ho

Abstract:
A large number of fragments of different ceramic objects are usually found during archaeological excavation. Finding the relations between these fragments is time-consuming and demands expert knowledge. In this paper, we study a related problem known as ceramic reconstruction. We present a novel content-based feature matching method and develop the Ceramic Fragment Reassembly System (CFRS) for ceramic reconstruction. CFRS can automatically reassemble fragments, making it easier for human experts to further improve the quality of the reconstruction if necessary. CFRS consists of three modules: fragment preprocessing, candidate matching, and matching-candidate selection. The fragment preprocessing module extracts features from images of fragments. Then, the candidate-matching module picks matching candidates based on curve matching. Finally, in the matching-candidate selection module, Binary Particle Swarm Optimization (BPSO) is applied to select the final matching results. Experimental results indicate that the proposed method attains higher precision and requires less computation time than previous approaches.
"An In-depth Study of Forecasting Household Electricity Demand using Realistic Datasets," ACM International Conference on Future Energy Systems (e-Energy'14), June 2014.
Authors: Chien-Yu Kuo, Ming-Feng Lee, Chia-Lin Fu, Yao-Hua Ho, and Ling-Jyh Chen

Abstract:
Data analysis and accurate forecasting of electricity demand are crucial to help both suppliers and consumers understand their detailed electricity footprints and improve their awareness of their impact on the ecosystem. Several studies of the subject have been conducted in recent years, but they are either comprehension-oriented, without practical merits, or forecast-oriented and do not consider per-consumer cases. To address this gap, in this paper we conduct data analysis and evaluate the forecasting of household electricity demand using three realistic datasets of geospatial and lifestyle diversity. We investigate the correlations between household electricity demand and different external factors, and perform cluster analysis on the datasets using an exhaustive set of parameter settings. To evaluate the accuracy of electricity demand forecasts on the different datasets, we use the support vector regression method. The results demonstrate that the median mean absolute percentage error (MAPE) can be reduced to 15.6% for household electricity demand forecasts when proper configurations are used.
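A minimal sketch of the forecasting setup described, support vector regression scored by MAPE; the feature layout and hyperparameters are illustrative assumptions, and `scikit-learn` is assumed.

```python
# Illustrative sketch: forecast next-day household demand with SVR and
# score it with the mean absolute percentage error (MAPE).
import numpy as np
from sklearn.svm import SVR

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def forecast(train_X, train_y, test_X):
    """train_X rows might hold lagged demand, temperature, weekday, etc."""
    model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
    model.fit(train_X, train_y)
    return model.predict(test_X)
```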
Current Research Results
Authors: Nicholas C Wu, Arthur P Young, Laith Q Al-Mawsawi, C Anders Olson, Jun Feng, Hangfei Qi, Shu-Hwa Chen, I-Hsuan Lu, Chung-Yen Lin, Robert G Chin, Harding H Luan, Nguyen Nguyen, Stanley F Nelson, Xinmin Li, Ting-Ting Wu, Ren Sun

Abstract:
Genetic research on influenza virus biology has been informed in large part by nucleotide variants present in seasonal or pandemic samples, or by individual mutants generated in the laboratory, leaving a substantial part of the genome uncharacterized. Here, we have developed a single-nucleotide resolution genetic approach to interrogate the fitness effect of point mutations in 98% of the amino acid positions in the influenza A virus hemagglutinin (HA) gene. Our HA fitness map provides a reference for identifying indispensable regions to aid in drug and vaccine design, as targeting these regions will increase the genetic barrier to the emergence of escape mutations. This study offers a new platform for studying genome dynamics, structure-function relationships, and virus-host interactions, and can further rational drug and vaccine design. Our approach can also be applied to any virus that can be genetically manipulated.
"DBILL: An Efficient and Retargetable Dynamic Binary Instrumentation Framework using LLVM Backend," ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments (VEE 2014), March 2014.
Authors: Yi-Hong Lyu, Ding-Yong Hong, Tai-Yi Wu, Jan-Jan Wu, Pangfeng Liu, Wei-Chung Hsu, Pen-Chung Yew

Abstract:
Dynamic Binary Instrumentation (DBI) is a core technology for building debugging and profiling tools for application executables. Most state-of-the-art DBI systems focus on the same-ISA case, where the guest binary and the host binary have the same instruction set architecture (ISA). It is uncommon to have a cross-ISA DBI system, such as one that instruments ARM executables to run on x86 machines. We believe cross-ISA DBI systems are increasingly important, since ARM executables could be more productively analyzed on x86-based machines such as commonly available PCs and servers.
In this paper, we present DBILL, a cross-ISA and retargetable dynamic binary instrumentation framework that builds on both QEMU and LLVM. The DBILL framework enables LLVM-based static instrumentation tools to become DBI ready, and deployable to different target architectures. Using address sanitizer and memory sanitizer as implementation examples, we show DBILL is an efficient, versatile and easy to use cross-ISA retargetable DBI framework.
Current Research Results
"Outstanding Principal as Prepayment Value: A Closed-Form Formula for Mortgage Pricing," Journal of Information Science and Engineering, 2014.
Authors: Yi-Cheng Tsai, Chin-Laung Lei, Jan-Ming Ho, Ming-Yang Kao, and Szu-Lang Liao

Abstract:
A mortgage is one of the most popular instruments in the financial markets. In this study, we consider three actions, i.e., to default, to prepay, and to maintain the mortgage, that a borrower may take over the mortgage horizon. We provide an effective pricing formula that not only considers the effect of default on the mortgage value, but also accurately computes the impact of prepayment risk. Our model defines the prepayment value of the mortgage as the amount of outstanding principal, in contrast to defining the prepayment value as a constant proportion of the maintaining value of the mortgage. We present a new closed-form pricing formula for the risky mortgage and also derive its yield to maturity, duration, and convexity to provide a framework for risk management.
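For reference, the outstanding principal used as the prepayment value has a standard closed form for a level-payment mortgage; this is textbook background rather than the paper's full pricing formula.

```latex
% Outstanding principal after t of N payments on a level-payment mortgage
% with initial principal P_0 and per-period rate r:
B_t \;=\; P_0\,\frac{(1+r)^{N} - (1+r)^{t}}{(1+r)^{N} - 1}.
```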
