Institute of Information Science, Academia Sinica

Current Research Results
Authors: Po-Ting Lai, Ming-Siang Huang, Ting-Hao Yang, Richard Tzong-Han Tsai, Wen-Lian Hsu

Abstract:
The large number of chemical and pharmaceutical patents has attracted researchers in biomedical text mining to extract valuable information such as chemicals, genes and gene products. To facilitate gene and gene product annotations in patents, BioCreative V.5 organized a gene- and protein-related object (GPRO) recognition task, in which participants were assigned to identify GPRO mentions and determine whether they could be linked to their unique biological database records. In this paper, we describe the system constructed for this task. Our system is based on two different NER approaches: the statistical-principle-based approach (SPBA) and conditional random fields (CRF); we therefore call our system SPBA-CRF. SPBA is an interpretable machine-learning framework for gene mention recognition. The predictions of SPBA are used as features for our CRF-based GPRO recognizer. The recognizer was originally developed for identifying chemical mentions in patents, and we adapted it for GPRO recognition. In the BioCreative V.5 GPRO recognition task, SPBA-CRF obtained an F-score of 73.73% on the evaluation metric of GPRO type 1 and an F-score of 78.66% on the evaluation metric combining GPRO types 1 and 2. Our results show that SPBA trained on an external NER dataset can perform reasonably well on the partial-match evaluation metric. Furthermore, SPBA can significantly improve the performance of the CRF-based recognizer trained on the GPRO dataset.
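As a concrete illustration of stacking one recognizer's predictions as features for another, the toy sketch below builds CRF-style feature dictionaries that include an upstream tagger's output. The feature names and the `spba_tags` input are hypothetical illustrations, not the paper's actual feature set.

```python
def word_features(tokens, spba_tags, i):
    """Build a CRF feature dict for token i, stacking an upstream
    tagger's prediction ("spba_tag") as an extra feature. The
    feature names here are illustrative, not the paper's set."""
    word = tokens[i]
    feats = {
        "word.lower": word.lower(),
        "word.isupper": word.isupper(),
        "word.has_digit": any(c.isdigit() for c in word),
        "suffix3": word[-3:],
        "spba_tag": spba_tags[i],          # stacked prediction
    }
    if i > 0:
        feats["prev.spba_tag"] = spba_tags[i - 1]
    return feats

tokens = ["EGFR", "inhibits", "apoptosis"]
spba = ["B-GENE", "O", "O"]                # hypothetical upstream tags
feats = [word_features(tokens, spba, i) for i in range(len(tokens))]
```

A downstream CRF trained on such dictionaries can then learn how much to trust the upstream tagger in each context.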
"Unsupervised Meta-learning of Figure-Ground Segmentation via Imitating Visual Effects," Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), January 2019.
Authors: Ding-Jie Chen, Jui-Ting Chien, Hwann-Tzong Chen, and Tyng-Luh Liu

Abstract:
This paper presents a 'learning to learn' approach to figure-ground image segmentation. By exploring webly-abundant images of specific visual effects, our method can effectively learn the visual-effect internal representations in an unsupervised manner and use this knowledge to differentiate the figure from the ground in an image. Specifically, we formulate the meta-learning process as a compositional image editing task that learns to imitate a certain visual effect and derive the corresponding internal representation. Such a generative process can help instantiate the underlying figure-ground notion and enables the system to accomplish the intended image segmentation. Whereas existing generative methods are mostly tailored to image synthesis or style transfer, our approach offers a flexible learning mechanism to model a general concept of figure-ground segmentation from unorganized images that have no explicit pixel-level annotations. We validate our approach via extensive experiments on six datasets to demonstrate that the proposed model can be trained end-to-end without ground-truth pixel labeling, yet outperforms existing methods on unsupervised segmentation tasks.
"Versatile Communication Optimization for Deep Learning by Modularized Parameter Server," 2018 IEEE International Conference on Big Data, December 2018.
Authors: Po-Yen Wu, Pangfeng Liu, and Jan-Jan Wu

Abstract:
Deep learning has become one of the most promising approaches to solving artificial intelligence problems. Training large-scale deep learning models efficiently is challenging. A widely used approach to accelerate the training process is to distribute the computation across multiple nodes with a centralized parameter server. To overcome the communication overhead caused by exchanging information between workers and the parameter server, three types of optimization methods are adopted: data placement, consistency control, and compression. In this paper, we propose the modularized parameter server, an architecture composed of key components that can be overridden without much effort. This allows developers to easily incorporate optimization techniques into the training process instead of using ad-hoc approaches as in existing systems. With this platform, users can analyze different combinations of techniques and develop new optimization algorithms. The experimental results show that, compared with Google's distributed TensorFlow, our distributed training system based on the proposed modularized parameter server can achieve near-linear speedup for computation and halve the training time by combining multiple optimization techniques while maintaining convergence accuracy.
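The plug-in structure described above can be sketched with a toy parameter server whose compression and consistency policies are swappable components. The class and hook names (`compressor`, `staleness`) are illustrative assumptions, not the paper's actual API.

```python
class ParameterServer:
    """Toy modularized parameter server: compression and consistency
    control are plug-in components that can be overridden
    independently. Names are illustrative, not the paper's API."""
    def __init__(self, params, compressor=None, staleness=0):
        self.params = dict(params)
        self.version = 0
        self.compressor = compressor or (lambda g: g)  # identity
        self.staleness = staleness  # max allowed version lag

    def push(self, grads, lr=0.1):
        grads = self.compressor(grads)       # compression hook
        for k, g in grads.items():
            self.params[k] -= lr * g
        self.version += 1

    def pull(self, worker_version):
        # consistency-control hook: reject overly stale workers
        if self.version - worker_version > self.staleness:
            raise RuntimeError("worker too stale, must re-sync")
        return dict(self.params)

# a simple sparsifying "compressor" that drops small gradients
def sparsify(grads, thresh=0.05):
    return {k: g for k, g in grads.items() if abs(g) >= thresh}

ps = ParameterServer({"w": 1.0}, compressor=sparsify, staleness=1)
ps.push({"w": 0.5})    # kept: w becomes 1.0 - 0.1 * 0.5 = 0.95
ps.push({"w": 0.01})   # dropped by the compressor; only version bumps
```

Swapping `sparsify` for a quantizer, or raising `staleness`, changes one component without touching the rest of the training loop, which is the point of the modular design.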
Current Research Results
"A Progressive Performance Boosting Strategy for 3D Charge-trap NAND Flash," IEEE Transactions on Very Large Scale Integration Systems (TVLSI), November 2018.
Authors: Shuo-Han Chen, Yen-Ting Chen, Yuan-Hao Chang, Hsin-Wen Wei, and Wei-Kuan Shih

Abstract:
The growing demand for large-capacity flash-based storage has facilitated the downscaling of NAND flash memory. However, the downscaling of traditional planar floating-gate flash memory faces several challenges. Therefore, new NAND flash technologies have been explored to provide larger capacity at low cost. Among these new technologies, 3-D charge-trap flash is regarded as one of the most promising candidates. The 3-D charge-trap flash is composed of several gate-stack layers and vertical cylindrical channels to provide high density and low cell-to-cell interference. Owing to the cylindrical geometry of the vertical channels, the access performance of each page in a block is distinct, and this situation is exacerbated as the number of gate-stack layers in 3-D charge-trap flash grows. In this paper, a progressive performance boosting strategy is proposed to boost the performance of 3-D charge-trap flash by utilizing its asymmetric page access speed. A series of experiments was conducted to demonstrate the capability of the proposed strategy to improve the access performance of 3-D charge-trap flash.
Current Research Results
"Coherent Deep-Net Fusion To Classify Shots In Concert Videos," IEEE Transactions on Multimedia, November 2018.
Authors: Jen-Chun Lin, Wen-Li Wei, Tyng-Luh Liu, Yi-Hsuan Yang, Hsin-Min Wang, Hsiao-Rong Tyan, and Hong-Yuan Mark Liao

Abstract:
Varying the types of shots is a fundamental element in the language of film, commonly used by a visual storytelling director. The technique is often used in creating professional recordings of a live concert, but may not be appropriately applied in audience recordings of the same event. Such variations can make the task of classifying shots in concert videos, professional or amateur, very challenging. To achieve more reliable shot classification, we propose a novel probabilistic approach, named coherent classification net (CC-Net), that addresses three crucial issues. First, we focus on learning more effective features by fusing the layer-wise outputs extracted from a deep convolutional neural network (CNN) pretrained on a large-scale dataset for object recognition. Second, we introduce a frame-wise classification scheme, the error-weighted deep cross-correlation model (EW-Deep-CCM), to boost classification accuracy. Specifically, the deep neural network-based cross-correlation model (Deep-CCM) is constructed not only to model the extracted feature hierarchies of the CNN independently but also to relate the statistical dependencies of paired features from different layers. Then, a Bayesian error-weighting scheme for classifier combination is adopted to explore the contributions of individual Deep-CCM classifiers and enhance the accuracy of shot classification in each image frame. Third, we feed the frame-wise classification results to a linear-chain conditional random field module to refine the shot predictions by taking into account global and temporal regularities. We provide extensive experimental results on a dataset of live concert videos to demonstrate the advantage of the proposed CC-Net over existing popular fusion approaches for shot classification.
"Play As You Like: Timbre-Enhanced Multi-modal Music Style Transfer," 33rd AAAI Conference on Artificial Intelligence (AAAI 2019), 2019.
Authors: Chien-Yu Lu, Min-Xin Xue, Chia-Che Chang, Che-Rung Lee, and Li Su

Abstract:
Style transfer of polyphonic music recordings is a challenging task when considering the generation of diverse, imaginative, and reasonable music pieces in a style different from the original one. To achieve this, learning stable multi-modal representations of the domain-variant (i.e., style) and domain-invariant (i.e., content) information of music in an unsupervised manner is critical. In this paper, we propose an unsupervised music style transfer method that requires no parallel data. In addition, to characterize the multi-modal distribution of music pieces, we employ the Multi-modal Unsupervised Image-to-Image Translation (MUNIT) framework in the proposed system. This allows one to generate diverse outputs from learned latent distributions representing contents and styles. Moreover, to better capture the granularity of sound, such as the perceptual dimensions of timbre and the nuance of instrument-specific performance, cognitively plausible features including mel-frequency cepstral coefficients (MFCC), spectral difference, and spectral envelope are combined with the widely used mel-scale spectrogram into a timbre-enhanced multi-channel input representation. The Relativistic average Generative Adversarial Network (RaGAN) is also utilized to achieve fast convergence and high stability. We conduct experiments on bilateral style transfer tasks among three different genres, namely piano solo, guitar solo, and string quartet. Results demonstrate the advantages of the proposed method in music style transfer, with improved sound quality and the ability for users to manipulate the output.
Current Research Results
"Scrubbing-aware Secure Deletion for 3D NAND Flash," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), November 2018.
Authors: Wei-Chen Wang, Chien-Chung Ho, Yuan-Hao Chang, Tei-Wei Kuo, and Ping-Hsien Lin

Abstract:
Due to increasing security concerns, the conventional deletion operations in NAND flash memory can no longer satisfy the requirement of secure deletion. Although existing works exploit secure deletion and scrubbing operations to achieve this security requirement, they also cause performance and disturbance problems. The predicament becomes more severe as the number of pages grows with the aggressive use of 3-D NAND flash-memory chips, which stack flash cells into multiple layers in a chip. Different from existing works, this paper aims at exploring a scrubbing-aware secure deletion design that improves the efficiency of secure deletion by exploiting the properties of disturbance. The proposed design minimizes secure deletion/scrubbing overheads by organizing sensitive data to create scrubbing-friendly patterns, and further chooses a proper operation for each secure deletion command via the proposed evaluation equations. The capability of the proposed design is evaluated by a series of experiments, with very encouraging results. For a 128-Gbit 3-D NAND flash-memory device, the simulation results show that the proposed design achieves an 82% average reduction in the response time of each secure deletion command.
Current Research Results
"Hot-Spot Suppression for Resource-Constrained Image Recognition Devices with Non-Volatile Memory," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), November 2018.
Authors: Chun-Feng Wu, Ming-Chang Yang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
Resource-constrained devices with convolutional neural networks (CNNs) for image recognition are becoming popular in various Internet of Things and surveillance applications. They usually have a low-power CPU and limited CPU cache space. In such circumstances, non-volatile memory (NVM) has great potential to replace DRAM as main memory to improve overall energy efficiency and provide larger main-memory space. However, due to the iterative access pattern, performing CNN-based image recognition may introduce write hot-spots on the NVM main memory. These write hot-spots may lead to reliability issues due to the limited write endurance of NVM. In order to improve the endurance of NVM main memory, this paper leverages the CPU cache pinning technique and exploits the iterative access pattern of CNNs to resolve the write hot-spot effect. In particular, we present a CNN-aware self-bouncing pinning strategy to minimize the maximal write cycles in NVM cells by proactively fastening CPU cache lines, so as to effectively suppress the write hot-spots to NVM main memory with limited performance degradation. The proposed strategy was evaluated by a series of intensive experiments and the results are encouraging.
Current Research Results
Authors: Tsung-Chieh Yao, Ren-Hua Chung, Chung-Yen Lin, et al.

Abstract:
Background: Total immunoglobulin E (IgE) is an intermediate phenotype and a potential therapeutic target for allergic diseases.
Objective: We sought to identify single nucleotide polymorphisms (SNPs) associated with total IgE in an Asian pediatric population.
Methods: We performed a genome-wide association study of total IgE in 397 schoolchildren from the Prediction of Allergies in Taiwanese CHildren (PATCH) schoolchildren cohort. Replication was conducted in three independent cohorts: 838 schoolchildren, 431 birth cohort samples, and 1,120 Caucasian adults. Multimarker modeling was employed to determine a minimum set of SNPs capturing total IgE. In silico functional annotation, gene ontology, and network and pathway analyses were performed to mine potentially functionally relevant genes.
Results: We identified an association of rs660895 in the 6p21.32 region with total IgE in schoolchildren (p-value = 1.14×10⁻⁶), replicated the association in three independent samples, and provided supportive functional evidence for rs660895. Total IgE levels increased in subjects carrying larger numbers of risk alleles among the 40 SNPs determined from multimarker modeling. Fourteen IgE-related genes identified from gene-based analysis were suggested to be functionally relevant to immunological diseases.
Conclusion: This study identifies rs660895 in the human lymphocyte antigen, class II, DR beta 1 (HLA-DRB1) gene as associated with total IgE in newborns, schoolchildren and adults. Our results from multimarker modeling implicate a set of 40 SNPs jointly capturing total IgE, along with 14 identified genes with potential relevance to immunological diseases. This study demonstrates that integrative approaches may strengthen the search for susceptibility genes for total IgE and related allergic diseases.
"Speed Reading: Learning to Read ForBackward via Shuttle," Conference on Empirical Methods in Natural Language Processing (EMNLP), October 2018.
Authors: Tsu-Jui Fu, Wei-Yun Ma

Abstract:
We present LSTM-Shuttle, which applies human speed-reading techniques to natural language processing tasks for accurate and efficient comprehension. In contrast to previous work, LSTM-Shuttle not only shuttles forward while reading but also goes back. Shuttling forward enables high efficiency, and going backward gives the model a chance to recover lost information, ensuring better prediction. We evaluate LSTM-Shuttle on sentiment analysis, news classification, and cloze tasks on the IMDB, Rotten Tomatoes, AG, and Children's Book Test datasets. We show that LSTM-Shuttle predicts both more accurately and more quickly. To demonstrate how LSTM-Shuttle actually behaves, we also analyze the shuttling operation and present a case study.
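A minimal sketch of the forward/backward shuttling idea, with a hand-written jump policy standing in for the learned LSTM policy; the function names and the termination budget are illustrative assumptions, not the paper's model.

```python
def shuttle_read(tokens, policy):
    """Toy forward/backward shuttle over a token sequence.
    `policy(i)` returns a signed step: positive skips ahead,
    negative jumps back to recover skipped context. A real model
    (e.g. LSTM-Shuttle) would predict the step from its hidden
    state; a step budget guarantees termination in this sketch."""
    i, visited, budget = 0, [], 2 * len(tokens)
    while 0 <= i < len(tokens) and budget > 0:
        visited.append(i)
        step = policy(i)
        i += step if step != 0 else 1
        budget -= 1
    return visited

# skim two tokens at a time, but step back once after position 4
def demo_policy(i):
    return -1 if i == 4 else 2

order = shuttle_read(list(range(8)), demo_policy)
```

The visiting order shows the pattern: the reader skips ahead cheaply, then revisits one skipped position before continuing, which is the efficiency/recovery trade-off the abstract describes.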
 
Current Research Results
Authors: Neha Warikoo, Yung-Chun Chang, and Wen-Lian Hsu

Abstract:
Identifying the interactions between chemical compounds and genes from the biomedical literature is one of the frequently discussed topics of text mining in the life science field. In this paper, we propose LPTK, a linguistic interaction pattern learning method used in the CHEMPROT task of BioCreative VI, to capture chemical-protein interaction (CPI) patterns from the biomedical literature. We also present a framework to integrate these linguistic patterns with the smooth partial tree kernel (SPTK) to extract CPIs. To evaluate our system, two associated identification datasets were used. The corresponding experimental results demonstrate that our method is effective and outperforms several compared systems.
"Scrubbing-aware Secure Deletion for 3D NAND Flash," ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), September 2018.
Authors: Wei-Chen Wang, Chien-Chung Ho, Yuan-Hao Chang, Tei-Wei Kuo, and Ping-Hsien Lin

Abstract:
Due to increasing security concerns, the conventional deletion operations in NAND flash memory can no longer satisfy the requirement of secure deletion. Although existing works exploit secure deletion and scrubbing operations to achieve this security requirement, they also cause performance and disturbance problems. The predicament becomes more severe as the number of pages grows with the aggressive use of 3D NAND flash-memory chips, which stack flash cells into multiple layers in a chip. Different from existing works, this work aims at exploring a scrubbing-aware secure deletion design that improves the efficiency of secure deletion by exploiting the properties of disturbance. The proposed design minimizes secure deletion/scrubbing overheads by organizing sensitive data to create scrubbing-friendly patterns, and further chooses a proper operation for each secure deletion command via the proposed evaluation equations. The capability of the proposed design is evaluated by a series of experiments, with very encouraging results. For a 128-Gbit 3D NAND flash-memory device, the simulation results show that the proposed design achieves an 82% average reduction in the response time of each secure deletion command.
"Hot-Spot Suppression for Resource-Constrained Image Recognition Devices with Non-Volatile Memory," ACM/IEEE International Conference on Embedded Software (EMSOFT), September 2018.
Authors: Chun-Feng Wu, Ming-Chang Yang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
Resource-constrained devices with Convolutional Neural Networks (CNNs) for image recognition are becoming popular in various IoT and surveillance applications. They usually have a low-power CPU and limited CPU cache space. In such circumstances, Non-Volatile Memory (NVM) has great potential to replace DRAM as main memory to improve overall energy efficiency and provide larger main-memory space. However, due to the iterative access pattern, performing CNN-based image recognition may introduce some write hot-spots on the NVM main memory. These write hot-spots may lead to reliability issues due to limited write endurance of NVM. In order to improve the endurance of NVM main memory, this work leverages the CPU cache pinning technique and exploits the iterative access pattern of CNN to resolve the write hot-spot effect. In particular, we present a CNN-aware self-bouncing pinning strategy to minimize the maximal write cycles in NVM cells by proactively fastening CPU cache lines, so as to effectively suppress the write hot-spots to NVM main memory with limited performance degradation. The proposed strategy was evaluated by a series of intensive experiments and the results are encouraging.
Current Research Results
Authors: Shiau, C.K., Huang, J.H. and Tsai, H.K.*

Abstract:
Summary: In higher eukaryotes, the generation of transcript isoforms from a single gene through alternative splicing (AS) and alternative transcription (AT) mechanisms increases functional and regulatory diversity. Annotating these alternative transcript events is essential for genomic studies. However, no existing tools generate comprehensive annotations of all these alternative transcript events, including both AS and AT events. In the present study, we develop CATANA, which encodes exon usage patterns based on the flattened gene model, to identify ten types of AS and AT events. We demonstrate the power and versatility of CATANA by showing greater depth of annotation of alternative transcript events based on either genome annotation or RNA-seq data.
Availability and Implementation: CATANA is available at https://github.com/shiauck/CATANA
Current Research Results
"Enhancing Flash Memory Reliability by Jointly Considering Write-back Pattern and Block Endurance," ACM Transactions on Design Automation of Electronic Systems (TODAES), August 2018.
Authors: Tseng-Yi Chen, Yuan-Hao Chang, Yuan-Hung Kuan, Ming-Chang Yang, Yu-Ming Chang, and Pi-Cheng Hsiu

Abstract:
Owing to the high cell density enabled by the advanced manufacturing process, ensuring the reliability of flash drives has become rather challenging in flash system designs. To enhance the reliability of flash drives, error-correcting codes (ECC) have been widely utilized to correct error bits while programming/reading data to/from flash drives. Although ECC can effectively enhance the reliability of flash drives by correcting error bits, its capability degrades as the program/erase (P/E) cycle count of flash blocks increases. Eventually, ECC fails to correct a flash page once the page contains too many error bits. As a result, reducing error bits is an effective way to further improve the reliability of flash drives when a specific ECC is adopted. This work focuses on how to reduce the probability of producing error bits in a flash page. Thus, we propose a pattern-aware write strategy for flash reliability enhancement. The proposed write strategy considers both the P/E cycles of blocks and the pattern of the written data when a flash block is allocated to store the written data. Since the proposed write strategy allocates young blocks (respectively, old blocks) for hot data (respectively, cold data) and flips the bit pattern of the written data to an appropriate bit pattern, it can effectively improve the reliability of flash drives. The experimental results show that the proposed strategy can reduce the number of error pages by up to 50% compared with the well-known DFTL solution. Moreover, the proposed strategy is orthogonal to all ECC mechanisms, so the reliability of flash drives with ECC mechanisms can be further improved by the proposed strategy.
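The bit-pattern flipping idea can be illustrated with a toy encoder that stores a page's complement when too many cells would be programmed. The programmed-bit convention (0 = programmed) and the threshold are assumptions for illustration, not the paper's exact scheme.

```python
def encode_for_flash(data_bits, flip_threshold=0.5):
    """Toy pattern-aware encoding: if a page would program more than
    half of its cells (0 = 'programmed' in this sketch's
    convention), store the complement plus a flip flag instead, so
    fewer cells are stressed. Convention and threshold are
    illustrative, not the paper's scheme."""
    programmed = data_bits.count(0)
    if programmed > len(data_bits) * flip_threshold:
        return [1 - b for b in data_bits], 1   # flipped, flag set
    return list(data_bits), 0                  # stored as-is

page = [0, 0, 0, 1, 0, 0, 1, 0]   # would program 6 of 8 cells
stored, flag = encode_for_flash(page)
decoded = [1 - b for b in stored] if flag else stored
```

The flip flag costs one extra bit per page but bounds the number of programmed cells at half the page, which is the kind of pattern shaping the write strategy exploits.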
Current Research Results
Authors: Shih-Wei Hu, Gang-Xuan Lin, Sung-Hsien Hsieh, and Chun-Shien Lu

Abstract:
In sparse signal recovery for compressive sensing, the phase transition determines the edge separating successful recovery from failed recovery, and thus serves as an indicator and an intuitive way to judge which recovery performance is better. Traditionally, the Multiple Measurement Vectors (MMVs) problem is solved via $\ell_{2,1}$-norm minimization, which is our first investigation via conic geometry in this paper. We then consider the same problem with two common constraints (or forms of prior information): prior information relevant to the ground truth, and the inherent low rank of the original signal. To determine which constraint is most helpful, the MMVs problems are solved via $\ell_{2,1}$-$\ell_{2,1}$ minimization and $\ell_{2,1}$-low-rank minimization, respectively. By theoretically presenting the necessary and sufficient condition for successful recovery from MMVs, we can precisely predict the phase transition and judge which constraint or prior information is better. All our findings are verified via simulations, which show that, under certain conditions, $\ell_{2,1}$-$\ell_{2,1}$ minimization outperforms $\ell_{2,1}$-low-rank minimization. Surprisingly, $\ell_{2,1}$-low-rank minimization performs even worse than $\ell_{2,1}$-norm minimization. To our knowledge, we are the first to study the MMVs problem under different prior information in the context of compressive sensing.
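For reference, the $\ell_{2,1}$ norm used throughout is the sum of the row-wise $\ell_2$ norms of a matrix; minimizing it promotes row sparsity, the joint-support structure exploited in MMV recovery. A minimal computation:

```python
import math

def l21_norm(X):
    """l2,1 norm of a matrix: sum over rows of each row's l2 norm.
    Rows that are entirely zero contribute nothing, so minimizing
    this norm favors solutions whose nonzeros share few rows."""
    return sum(math.sqrt(sum(v * v for v in row)) for row in X)

X = [[3.0, 4.0],   # row l2 norm 5
     [0.0, 0.0],   # row l2 norm 0
     [0.0, 2.0]]   # row l2 norm 2
value = l21_norm(X)   # 5 + 0 + 2 = 7
```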

"Dynamic Tuning of Applications using Restricted Transactional Memory," ACM Research in Adaptive and Convergent Systems, October 2018.
Authors: Shih-Kai Lin, Ding-Yong Hong, Sheng-Yu Fu, Jan-Jan Wu, Wei-Chung Hsu

Abstract:
Transactional Synchronization Extensions (TSX) provide hardware transactional memory (TM) support on Intel 4th-generation Core processors. Two programming interfaces, Hardware Lock Elision (HLE) and Restricted Transactional Memory (RTM), are provided to support software development using TSX. HLE is easy to use and maintains backward compatibility with processors without TSX support, while RTM is more flexible and scalable. Previous research has shown that critical sections protected by RTM with a well-designed retry mechanism as the fallback code path can often achieve better performance than HLE. More parallel programs may be written with HLE; however, using RTM may yield greater performance. To embrace both the productivity and the high performance of parallel programs with TSX, we present a framework built on QEMU that can dynamically transform HLE instructions in an application binary into fragments of RTM code with adaptive tuning on the fly. Compared to HLE execution, our prototype achieves 1.15x speedup with 4 threads and 1.56x speedup with 8 threads on average. Owing to the scalability of RTM, the speedup becomes more significant as the number of threads increases.
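The retry-then-fallback pattern behind RTM usage can be sketched generically: attempt the optimistic transactional path a few times, then fall back to a conventional lock. This pure-Python stand-in models transaction aborts with a predicate; real RTM uses the XBEGIN/XEND/XABORT instructions, and this is not the paper's implementation.

```python
import threading

class RetryTransaction:
    """Generic retry-then-fallback pattern used with RTM: try the
    optimistic path up to max_retries times, then serialize on a
    fallback lock. The abort model (a predicate) is a stand-in for
    hardware transaction aborts."""
    def __init__(self, max_retries=3):
        self.fallback_lock = threading.Lock()
        self.max_retries = max_retries

    def run(self, body, attempt_ok):
        for attempt in range(self.max_retries):
            if attempt_ok(attempt):      # models a non-aborting txn
                return body(), "rtm", attempt
        with self.fallback_lock:         # serialized fallback path
            return body(), "lock", self.max_retries

tx = RetryTransaction(max_retries=2)
# simulated conflict on the first attempt only
result, path, tries = tx.run(lambda: 42, lambda a: a == 1)
```

Tuning `max_retries` per critical section is exactly the kind of adaptive knob the framework can adjust on the fly.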
"Newsfeed Filtering and Dissemination for Behavioral Therapy on Social Network Addictions," ACM International Conference on Information and Knowledge Management (ACM CIKM), October 2018.
Authors: H.-H. Shuai, Y.-C. Lien, D.-N. Yang, Y.-F. Lan, W.-C. Lee, and P. S. Yu

Abstract:
While the popularity of online social network (OSN) apps continues to grow, little attention has been drawn to the increasing number of cases of Social Network Addiction (SNA). In this paper, we argue that by mining OSN data in support of online intervention treatment, data scientists may assist mental healthcare professionals in alleviating the symptoms of users with SNA in the early stages. Our idea, based on behavioral therapy, is to incrementally substitute highly addictive newsfeeds with safer, less addictive, and more supportive newsfeeds. To realize this idea, we propose a novel framework, called the Newsfeed Substituting and Supporting System (N3S), for newsfeed filtering and dissemination in support of SNA interventions. New research challenges arise in 1) measuring the addictive degree of a newsfeed for an SNA patient, and 2) properly substituting addictive newsfeeds with safe ones based on psychological theories. To address these issues, we first propose the Addictive Degree Model (ADM) to measure the addictive degrees of newsfeeds for different users. We then formulate a new optimization problem aiming to maximize the efficacy of behavioral therapy without sacrificing user preferences. Accordingly, we design a randomized algorithm with a theoretical bound. A user study with 716 Facebook users and 11 mental healthcare professionals around the world shows that addictive scores can be reduced by more than 30%. Moreover, experiments show that the correlation between SNA scores and the addictive degrees quantified by the proposed model is much greater than that of state-of-the-art preference-based models.
"SeeTheVoice: Learning from Music to Visual Storytelling of Shots," IEEE International Conference on Multimedia and Expo (ICME 2018), July 2018.
Authors: Wen-Li Wei, Jen-Chun Lin, Tyng-Luh Liu, Yi-Hsuan Yang, Hsin-Min Wang, Hsiao-Rong Tyan, and Hong-Yuan Mark Liao

Abstract:
Types of shots in the language of film are considered the key elements used by a director for visual storytelling. In filming a musical performance, manipulating shots can stimulate desired effects such as manifesting emotion or deepening the atmosphere. However, while this visual storytelling technique is often employed in creating professional recordings of a live concert, audience recordings of the same event often lack such sophisticated manipulations. Thus it would be useful to have a versatile system that can perform video mashup to create a refined video from such amateur clips. To this end, we propose to translate the music into a near-professional shot (type) sequence by learning the relation between music and the visual storytelling of shots. The resulting shot sequence can then be used to better portray the visual storytelling of a song and guide the concert video mashup process. Our method introduces a novel probabilistic fusion approach, named multi-resolution fused recurrent neural networks (MF-RNNs) with film-language, which integrates multi-resolution fused RNNs and a film-language model to boost translation performance. The results from objective and subjective experiments demonstrate that MF-RNNs with film-language can generate an appealing shot sequence with a better viewing experience.
Current Research Results
"A Collaborative CPU-GPU Approach for Principal Component Analysis on Mobile Heterogeneous Platform," Journal of Parallel and Distributed Computing (JPDC), October 2018.
Authors: Olivier Valery, Pangfeng Liu, Jan-Jan Wu

Abstract:
The advent of the modern GPU architecture has enabled computers to use general-purpose GPU (GPGPU) capabilities to tackle large-scale problems at low computational cost. This technological innovation is also available on mobile devices, addressing one of the primary concerns with recent devices: the power envelope. Unfortunately, recent mobile GPUs suffer from a lack of precision that can prevent them from running large-scale data analysis tasks, such as principal component analysis (PCA). The goal of our work is to address this limitation by combining the high precision available on a CPU with the power efficiency of a mobile GPU. In this paper, we exploit the shared memory architecture of mobile devices in order to enhance CPU-GPU collaboration and speed up PCA computation without sacrificing precision. Experimental results suggest that such an approach drastically reduces the power consumption of the mobile device while accelerating the overall workload. More generally, we claim that this approach can be extended to accelerate other vectorized computations on mobile devices while still maintaining numerical accuracy.
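As background on the PCA computation itself, the sketch below extracts the first principal component via power iteration on the covariance matrix, in plain double precision. It illustrates only the numerical core that such systems accelerate, not the paper's CPU-GPU partitioning; the function name and iteration count are illustrative.

```python
import math

def top_component(X, iters=100):
    """First principal component of row-vector data X via power
    iteration on the (biased) covariance matrix. Pure double
    precision; a minimal sketch of the PCA core only."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    # covariance matrix C = (1/n) * (X - mean)^T (X - mean)
    C = [[sum((X[i][a] - means[a]) * (X[i][b] - means[b])
              for i in range(n)) / n for b in range(d)]
         for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# points spread mainly along the x-axis, so the first
# component should point (almost) along x
X = [[0.0, 0.0], [1.0, 0.1], [2.0, -0.1], [3.0, 0.0]]
v = top_component(X)
```

On a device, the repeated matrix-vector products are the vectorizable part one would offload to the GPU, while the normalization and convergence bookkeeping can stay in high precision on the CPU.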
Current Research Results
"An Erase Efficiency Boosting Strategy for 3D Charge Trap NAND Flash," IEEE Transactions on Computers (TC), September 2018.
Authors: Shuo-Han Chen, Yuan-Hao Chang, Yu-Pei Liang, Hsin-Wen Wei, and Wei-Kuan Shih

Abstract:
Owing to the fast-growing demand for larger and faster NAND flash devices, new manufacturing techniques have accelerated the down-scaling of NAND flash memory. Among these new techniques, 3D charge trap flash is considered one of the most promising candidates for next-generation NAND flash devices. However, the long erase latency of 3D charge trap flash becomes a critical issue. This issue is exacerbated because the distinct transient voltage shift phenomenon worsens as the number of program/erase cycles increases. In contrast to existing works that aim to tackle the erase latency issue by reducing the number of block erases, we tackle this issue by utilizing the “multi-block erase” feature. In this work, an erase efficiency boosting strategy is proposed to boost the garbage collection efficiency of 3D charge trap flash by enabling multi-block erase inside flash chips. A series of experiments was conducted to demonstrate the capability of the proposed strategy to improve the erase efficiency and access performance of 3D charge trap flash. The results show that the erase latency of 3D charge trap flash memory is improved by 75.76 percent on average even when the P/E cycle count reaches 10^4.
Current Research Results
Authors: Tung-Shing Mamie Lih, Wai-Kok Choong, Yu-Ju Chen, Ting-Yi Sung

Abstract:
In proteogenomic studies, many genome-annotated events, for example, single amino acid variation (SAAV) and short INDELs, often go unobserved in shotgun proteomics. Therefore, we propose an analysis pipeline called LeTE-fusion (Le, peptide length; T, theoretical values; E, experimental data) to first investigate whether peptides of certain lengths are observed more often in mass spectrometry (MS)-based proteomics, which may hinder peptide identification and thus cause difficulty in detecting genome-annotated events. By applying LeTE-fusion to different MS-based proteome data sets, we found that peptides of 7–20 amino acids are more frequently identified, possibly attributable to MS-related factors rather than proteases. We then further extended the usage of LeTE-fusion to four variant-containing-sequence data sets (SAAV-only) with various sample complexities up to the whole-human-proteome scale, which yields theoretically ∼70% of variants observable in an ideal shotgun proteomics experiment. However, only ∼40% of variants might be detectable in real shotgun proteomic experiments when LeTE-fusion uses the experimentally observed variant-site-containing wild-type peptides in PeptideAtlas to estimate the expected observable coverage of variants. Finally, we conducted a case study on the HEK293 cell line, in which variants reported at the genomic level were also identified by shotgun proteomics, to demonstrate the efficacy of LeTE-fusion in estimating the expected observable coverage of variants. To the best of our knowledge, this is the first study to systematically investigate the detection limits of genome-annotated events via shotgun proteomics using such an analysis pipeline.
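The 7–20 amino-acid window can be illustrated with a toy in-silico digestion: trypsin cleaves after K or R except when the next residue is P, and only cleavage products inside the length window fall in the range the paper finds most observable. This is a minimal sketch with a made-up sequence, not the LeTE-fusion pipeline itself.

```python
import re

def tryptic_peptides(protein, min_len=7, max_len=20):
    # Cleave after K or R unless the next residue is P (the standard
    # trypsin rule), then keep peptides inside the MS-observable window.
    pieces = re.split(r'(?<=[KR])(?!P)', protein)
    return [p for p in pieces if min_len <= len(p) <= max_len]

protein = "MKWVTFISLLFLFSSAYSRGVFRR"   # toy sequence, not a real annotation
peptides = tryptic_peptides(protein)
```

Short fragments such as "MK" and "GVFR" are discarded, mimicking why variant sites that land only on very short (or very long) peptides are hard to observe in practice.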
"Unifying and Merging Well-trained Deep Neural Networks for Inference Stage," International Joint Conference on Artificial Intelligence, IJCAI 2018, July 2018.
Authors: Yi-Min Chou, Yi-Ming Chan, Jia-Hong Lee, Chih-Yi Chiu, Chu-Song Chen

Abstract:
We propose a novel method to merge convolutional neural nets for the inference stage. Given two well-trained networks that may have different architectures and handle different tasks, our method aligns the layers of the original networks and merges them into a unified model by sharing the representative codes of weights. The shared weights are further retrained to fine-tune the performance of the merged model. The proposed method effectively produces a compact model that can run the original tasks simultaneously on resource-limited devices. As it preserves the general architectures and leverages the co-used weights of well-trained networks, substantial training overhead can be avoided, shortening system development time. Experimental results demonstrate satisfactory performance and validate the effectiveness of the method.
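The idea of "sharing representative codes of weights" can be approximated by quantizing two aligned layers against one small shared codebook. The plain 1-D k-means sketch below is a rough stand-in for the paper's method; all names, sizes, and the codebook size of 16 are illustrative assumptions.

```python
import numpy as np

def learn_codebook(weights, n_codes=16, n_iter=20):
    # 1-D k-means over the pooled weights of the layers to be merged.
    flat = np.concatenate([w.ravel() for w in weights])
    codes = np.linspace(flat.min(), flat.max(), n_codes)
    for _ in range(n_iter):
        assign = np.abs(flat[:, None] - codes[None, :]).argmin(axis=1)
        for c in range(n_codes):
            members = flat[assign == c]
            if members.size:
                codes[c] = members.mean()
    return codes

def quantize_weights(w, codes):
    # Snap every weight to its nearest shared codebook entry.
    assign = np.abs(w.ravel()[:, None] - codes[None, :]).argmin(axis=1)
    return codes[assign].reshape(w.shape)

rng = np.random.default_rng(0)
layer_a = rng.normal(size=(64, 32))   # toy weights from network A
layer_b = rng.normal(size=(64, 32))   # toy aligned layer from network B
codes = learn_codebook([layer_a, layer_b])
shared_a = quantize_weights(layer_a, codes)
shared_b = quantize_weights(layer_b, codes)
```

Both layers now draw from the same 16 values, so only the codebook plus small index maps need to be stored; the paper additionally retrains the shared weights to recover accuracy.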
Current Research Results
"SLC-Like Programming Scheme for MLC Flash Memory," ACM Transactions on Storage (TOS), March 2018.
Authors: Chien-Chung Ho, Yu-Ming Chang, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
Although the multilevel cell (MLC) technique is widely adopted by flash-memory vendors to boost chip density and lower cost, it results in serious performance and reliability problems. Different from past work, a new cell programming method is proposed to not only significantly improve chip performance but also reduce the potential bit error rate. In particular, a single-level cell (SLC)-like programming scheme is proposed to better exploit the threshold-voltage relationship used to denote different MLC bit information, which in turn provides a much larger threshold-voltage window similar to that found in SLC chips. This results in fewer programming iterations and, at the same time, far fewer reliability problems when programming flash-memory cells. In the experiments, the new programming scheme accelerated programming speed by up to 742% and reduced the bit error rate by up to 471% for MLC pages.
Current Research Results
"Boosting NVDIMM Performance with a Light-Weight Caching Algorithm," IEEE Transactions on Very Large Scale Integration Systems (TVLSI), August 2018.
Authors: Che-Wei Tsao, Yuan-Hao Chang, and Tei-Wei Kuo

Abstract:
In the big data era, data-intensive applications have a growing demand for DRAM main-memory capacity, but frequent DRAM refresh, high leakage power, and high unit cost pose serious design issues for scaling up DRAM capacity. To address this issue, the non-volatile dual in-line memory module (NVDIMM), a hybrid memory module, becomes a possible alternative to DRAM as main memory in some data-intensive applications. An NVDIMM, which consists of a small high-speed DRAM and a large low-cost non-volatile memory (i.e., flash memory), suffers a serious performance issue when accessing data stored in the flash memory because of the huge performance gap between the DRAM and the flash memory. However, there is limited room to adopt a complex caching algorithm for using the DRAM as the cache of the flash memory in NVDIMM main memory, because a complex caching algorithm itself would already cause too much performance degradation in handling each request to the NVDIMM main memory. In this paper, we present a light-weight caching algorithm that boosts NVDIMM performance by minimizing the cache-management overhead and reducing the frequency of flash-memory accesses. A series of experiments was conducted on popular benchmarks, and the results demonstrate that the proposed algorithm can effectively improve the performance of NVDIMM main memory.
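To see why a cheap policy matters on the critical path, consider the extreme of lightweight caching: a direct-mapped cache, where every lookup costs one modulo and one tag compare. The sketch below is a generic illustration of such a policy, not the algorithm proposed in the paper.

```python
class DirectMappedCache:
    # Each flash page maps to exactly one DRAM slot, so lookup and
    # eviction cost one modulo and one compare -- deliberately cheap.
    def __init__(self, n_slots):
        self.n_slots = n_slots
        self.slots = [None] * n_slots     # each entry: (page_no, data) or None

    def access(self, page_no, read_flash):
        slot = page_no % self.n_slots
        entry = self.slots[slot]
        if entry is not None and entry[0] == page_no:
            return entry[1], True         # hit: served from DRAM
        data = read_flash(page_no)        # miss: exactly one flash access
        self.slots[slot] = (page_no, data)
        return data, False

cache = DirectMappedCache(4)
reads = []
def read_flash(page_no):
    reads.append(page_no)                 # count flash accesses
    return page_no * 10                   # pretend page content

_, hit1 = cache.access(1, read_flash)     # cold miss
_, hit2 = cache.access(1, read_flash)     # hit, no flash access
_, hit3 = cache.access(5, read_flash)     # conflict miss (5 % 4 == 1)
```

The trade-off the paper navigates is visible even here: the per-access overhead is constant and tiny, but conflict misses (page 5 evicting page 1) cost extra flash reads, so the policy must balance management cost against miss rate.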
"Achieving Fast Sanitization with Zero Live Data Copy for MLC Flash Memory," ACM/IEEE International Conference on Computer-Aided Design (ICCAD), November 2018.
Authors: Ping-Hsien Lin, Yu-Ming Chang, Yung-Chun Li, Wei-Chen Wang, Chien-Chung Ho, and Yuan-Hao Chang

Abstract:
As data security has become a major concern in modern storage systems built on low-cost multi-level-cell (MLC) flash memories, it is not trivial to realize data sanitization in such systems. Even though some existing works employ encryption or the built-in erase operation to achieve this requirement, they still suffer from the risk of being deciphered or from performance degradation. In contrast to existing work, a fast sanitization scheme is proposed to provide the highest degree of security for data sanitization; that is, every old version of data can be immediately sanitized with zero live-data-copy overhead once the new version of data is created/written. In particular, this scheme further considers the reliability issue of MLC flash memories and includes a one-shot sanitization design to minimize the disturbance caused during data sanitization. The feasibility and capability of the proposed scheme were evaluated through extensive experiments on real flash chips. The results demonstrate that this scheme can achieve data sanitization with zero live-data copy at a performance overhead of less than 1%.
"Learning Domain-adaptive Latent Representations of Music Signals Using Variational Autoencoders," International Society of Music Information Retrieval Conference (ISMIR), September 2018.
Authors: Yin-Jyun Luo and Li Su

Abstract:
In this paper, we tackle the problem of domain-adaptive representation learning for music processing. Domain adaptation aims to eliminate the distributional discrepancy between the modeling data of two domains, so as to transfer learnable knowledge from one domain to the other. Given its great success in computer vision and natural language processing, domain adaptation also shows great potential in music processing, for music is essentially a highly structured semantic system with domain-dependent information. Our proposed model contains a Variational Autoencoder (VAE) that encodes the training data into a latent space, and the resulting latent representations, along with the model parameters, are then reused to regularize the representation learning of the downstream task, whose data lie in the other domain. Experiments on cross-domain music alignment, namely audio-to-MIDI alignment and monophonic-to-polyphonic alignment of singing voice, show that the learned representations lead to higher alignment accuracy than conventional features. Furthermore, a preliminary experiment on singing voice source separation, which regards the mixture and the voice as two distinct domains, also demonstrates the capability to solve music processing problems from the perspective of domain-adaptive representation learning.
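As a minimal reminder of the objective a VAE optimizes: with a Gaussian encoder and a standard-normal prior, the KL term has a closed form. The generic negative-ELBO sketch below assumes squared-error reconstruction and is not the paper's exact architecture or loss weighting.

```python
import numpy as np

def vae_loss(x, x_recon, mu, logvar):
    # Negative ELBO for a Gaussian posterior N(mu, sigma^2) against a
    # standard-normal prior: squared-error reconstruction plus the
    # closed-form KL term 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2).
    recon = np.sum((x - x_recon) ** 2)
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return float(recon + kl)
```

The KL term is what shapes the latent space; it is this regularized latent geometry that the paper reuses to constrain representation learning in the other domain.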
"Functional Harmony Recognition with Multi-task Recurrent Neural Networks," International Society of Music Information Retrieval Conference (ISMIR), September 2018.
Authors: Tsung-Ping Chen and Li Su

Abstract:
Previous works on chord recognition mainly focus on chord symbols but overlook other essential features that matter in musical harmony. To tackle the functional harmony recognition problem, we compile a new professionally annotated dataset of symbolic music encompassing not only chord symbols but also various interrelated chord functions such as key modulation, chord inversion, secondary chords, and chord quality. We further present a novel holistic system for functional harmony recognition: a multi-task learning (MTL) architecture implemented with recurrent neural networks (RNNs) to jointly model chord functions in an end-to-end scenario. Experimental results highlight the capability of the proposed recognition system and a promising improvement from employing multi-task learning instead of single-task learning. This is one attempt to approach the end-to-end chord recognition task from the perspective of functional harmony, so as to uncover the grand structure ruling the flow of musical sound. The dataset and the source code of the proposed system are available at https://github.com/Tsung-Ping/functional-harmony .
Current Research Results
"Monaural source separation using Ramanujan subspace dictionaries," IEEE Sig. Proc. Lett. (SPL), August 2018.
Authors: Hsueh-Wei Liao and Li Su

Abstract:
Most source separation algorithms are implemented as spectrogram decomposition. In contrast, time-domain source separation is less investigated, since there is a lack of an efficient signal representation that facilitates decomposing the oscillatory components of a signal directly in the time domain. In this paper, we utilize the Ramanujan subspace (RS) and the nested periodic subspace (NPS) to address this issue, by constructing a parametric dictionary that emphasizes period information with little redundancy. Methods including iterative subspace projection and convolutional sparse coding (CSC) can then decompose a mixture into signals with distinct oscillation periods according to the dictionary. Experiments on score-informed source separation show that the proposed method is competitive with state-of-the-art frequency-domain approaches when the provided pitch information and the signal parameters are the same.
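The atoms of such dictionaries are built from Ramanujan sums c_q(n), which are integer-valued despite being sums of complex exponentials; a direct-from-definition sketch:

```python
from math import gcd
import cmath

def ramanujan_sum(q, n):
    # c_q(n) = sum of exp(2*pi*i*k*n/q) over k in [1, q] with gcd(k, q) = 1.
    # The result is always an integer; c_q is periodic in n with period q.
    s = sum(cmath.exp(2j * cmath.pi * k * n / q)
            for k in range(1, q + 1) if gcd(k, q) == 1)
    return round(s.real)
```

For example, c_q(0) equals Euler's totient of q, and c_3(n) cycles through 2, -1, -1. Stacking shifted copies of these integer sequences gives period-q subspaces with low redundancy, which is what makes time-domain period decomposition tractable.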
Current Research Results
"wrJFS: A Write-Reduction Journaling File System for Byte-addressable NVRAM," IEEE Transactions on Computers (TC), July 2018.
Authors: Tseng-Yi Chen, Yuan-Hao Chang, Shuo-Han Chen, Chih-Ching Kuo, Ming-Chang Yang, Hsin-Wen Wei, and Wei-Kuan Shih

Abstract:
Non-volatile random-access memory (NVRAM) is becoming a mainstream storage device in embedded systems due to its favorable features, such as small size, low power consumption, and short read/write latency. Unlike dynamic random access memory (DRAM), most NVRAM has asymmetric performance and energy consumption for read/write operations: generally, a write operation consumes more energy and time than a read operation. Unfortunately, current mobile/embedded file systems, such as EXT2/3 and EXT4, are very unfriendly to NVRAM devices, because they employ a journaling mechanism to increase reliability. Although a journaling mechanism raises the safety of data in a file system, it also writes the same data twice, during data commitment and checkpointing. Though several related works have been proposed to reduce the size of write operations, they still cannot effectively minimize the write amplification of a journaling mechanism. These observations motivate this paper to design a two-phase write-reduction journaling file system called wrJFS. In the first phase, wrJFS classifies data into two categories: metadata and user data. Because metadata are usually very small (a few bytes), they are handled by a partial byte-enabled journaling strategy. In contrast, user data are large relative to metadata and are therefore processed in the second phase, where they are compressed by a hardware encoder to reduce the write size and managed by a compression-enabled journaling strategy to avoid write amplification. Moreover, we analyze the overhead of wrJFS and show that it is negligible. According to the experimental results, the proposed wrJFS can reduce the size of write requests by 89.7% on average, compared with the original EXT3.
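The second-phase idea (compress user data before journaling it, so both the commit and the checkpoint write fewer bytes) can be sketched with a software codec standing in for wrJFS's hardware encoder; `zlib` here is an illustrative substitute, not the encoder the paper uses.

```python
import zlib

def journal_write_size(user_data):
    # Compare the raw commit size with the size a compression-enabled
    # journal would actually write for the same user data.
    compressed = zlib.compress(user_data, level=6)
    return len(user_data), len(compressed)

# Highly redundant toy payload; real file-system pages vary in compressibility.
raw_len, comp_len = journal_write_size(b"block " * 512)
```

Because journaling writes data twice, every byte saved by compression is saved twice over, which is why pairing compression with the journal attacks write amplification directly.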
Current Research Results
Authors: Sung-Hsien Hsieh, Tsung-Hsuan Hung, Chun-Shien Lu, Yu-Chi Chen, and Soo-Chang Pei

Abstract:
Wireless sensors have been helpful and popular for gathering information, in particular in harsh environments. Due to their limited computation power and energy, compressive sensing (CS) has attracted considerable attention for achieving simultaneous sensing and compression of data on the sensor/encoder side at low cost. Nevertheless, as the data size increases, the computation overhead of decoding becomes unaffordable on the user/decoder side. To overcome this problem, it is helpful to offload this overhead to a resourceful cloud. In this paper, we propose a cloud-assisted compressive-sensing-based data gathering system with security assurance. Our system, involving three parties (sensor, cloud, and user), possesses several advantages. First, in terms of security, for any two data items that are sparse in a certain transformed domain, the corresponding ciphertexts are indistinguishable on the cloud side. Second, to avoid a communication bottleneck between the user and the cloud, the sensor can encrypt data individually such that, once the cloud receives encrypted data from the sensor, it can immediately carry out its task without requesting any information from the user. Third, we show that even though the cloud knows the permuted support information of the data, security is never sacrificed, while the compression rate can be reduced further. Theoretical and empirical results demonstrate that our system is cost-effective, guarantees privacy, and achieves acceptable reconstruction quality.
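The decoder-side computation being offloaded is sparse recovery from compressed measurements. A textbook decoder such as orthogonal matching pursuit (OMP) illustrates the interface (sensing matrix A, measurements y) and why decoding dwarfs the encoder's single matrix multiply; this sketch is not the paper's cloud-assisted, security-preserving protocol, and all sizes are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    # Orthogonal matching pursuit: greedily add the column of A most
    # correlated with the residual, then re-fit by least squares.
    # In the noiseless case this typically recovers the support exactly.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 40, 3                        # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [4.0, -3.5, 3.0]     # sparse signal
y = A @ x_true                             # compressed measurements (m < n)
x_hat = omp(A, y, k)
```

The encoder's work is the single product `A @ x_true`, while the decoder repeatedly solves least-squares problems over growing supports, which is exactly the asymmetry that makes cloud-side decoding attractive.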
Current Research Results
"UnistorFS: A Union Storage File System Design for Resource Sharing between Memory and Storage on Persistent RAM based Systems," ACM Transactions on Storage (TOS), February 2018.
Authors: Shuo-Han Chen, Tseng-Yi Chen, Yuan-Hao Chang, Hsin-Wen Wei, and Wei-Kuan Shih

Abstract:
With the advanced technology of persistent random access memory (PRAM), PRAM such as 3D XPoint memory and phase change memory (PCM) is emerging as a promising candidate for the next-generation medium for both (main) memory and storage. Previous works mainly focus on how to overcome the possible endurance issues of PRAM when both main memory and storage own a partition on the same PRAM device. However, a holistic software-level system design is needed to fully exploit the benefits of PRAM. This article proposes a union storage file system (UnistorFS), which aims to jointly manage the PRAM resource for main memory and storage. The proposed UnistorFS realizes the concept of using the PRAM resource as memory and storage interchangeably to achieve resource sharing, while main memory and storage coexist on the same PRAM device with no partition or logical boundary. This approach not only enables PRAM resource sharing but also eliminates unnecessary data movement between main memory and storage, since they are already in the same address space and can be accessed directly. At the same time, the proposed UnistorFS ensures the persistence of file data and the sanity of the file system after power recycling. A series of experiments was conducted on a modified Linux kernel. The results show that the proposed UnistorFS can eliminate unnecessary memory accesses and outperforms other PRAM-based file systems by 0.2–8.7 times in terms of read/write performance.