Institute of Information Science, Academia Sinica

Current Research Results
"Hardware-Assisted MMU Redirection for In-guest Monitoring and API Profiling," IEEE Transactions on Information Forensics & Security, To Appear.
Authors: Shun-Wen Hsiao, Yeali Sun, Meng Chang Chen

Abstract:
With the advance of hardware, network, and virtualization technologies, cloud computing has prevailed and become the target of security threats such as the cross-virtual-machine (VM) side-channel attack, with which malicious users exploit vulnerabilities to gain information or access to other guest virtual machines. Among the many virtualization technologies, the hypervisor manages the shared resource pool to ensure that the guest VMs can be properly served and isolated from each other. However, because of the virtualization layer and the separation of CPU modes (root and non-root mode), once a CPU is switched to non-root mode and occupied by a guest machine, the hypervisor cannot intervene in the guest at runtime. Thus, the execution status of a guest is a black box to the hypervisor, which cannot mediate possible malicious behavior at runtime. To rectify this, we propose a hardware-assisted, VMI (virtual machine introspection)-based in-guest process monitoring mechanism which supports monitoring and management applications such as process profiling. The mechanism allows hooks to be placed within a target process (which the security expert selects to monitor and profile) of a guest virtual machine and handles hook invocations via the hypervisor. To facilitate the needed monitoring and/or management operations in the guest machine, the mechanism redirects access to in-guest memory space to controlled, self-defined memory within the hypervisor by modifying the extended page table (EPT), minimizing guest and host machine switches. The advantages of the proposed mechanism include transparency, high performance, and comprehensive semantics. To demonstrate the capability of the proposed mechanism, we develop an API profiling system (APIf) to record the API invocations of the target process. The experimental results show an average performance degradation of about 2.32%, far better than existing similar systems.
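To make the redirection idea concrete, here is a toy Python model (all names hypothetical) of an EPT-style mapping in which the hypervisor hooks a guest page by remapping it to hypervisor-controlled memory, so in-guest accesses are served from the self-defined buffer. This is a sketch of the mechanism only; real EPT manipulation happens inside the hypervisor, not in Python.

    class ToyEPT:
        def __init__(self):
            self.table = {}                    # guest page no. -> host page

        def map(self, gpn, host_page):
            self.table[gpn] = host_page

        def read(self, gpn, offset):
            return self.table[gpn][offset]     # guest-transparent lookup

    guest_page = bytearray(b"guest data.....")
    hypervisor_page = bytearray(b"hook handler...")

    ept = ToyEPT()
    ept.map(0x42, guest_page)
    assert ept.read(0x42, 0) == ord("g")

    # Hooking: remap guest page 0x42 to hypervisor-owned memory, so the
    # in-guest access hits the self-defined buffer without extra
    # guest/host switches on every access.
    ept.map(0x42, hypervisor_page)
    assert ept.read(0x42, 0) == ord("h")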
"Declarative pearl: deriving monadic Quicksort," Functional and Logic Programming (FLOPS 2020), 2020.
Authors: Shin-Cheng Mu and Tsung-Ju Chiang

Abstract:
To demonstrate derivation of monadic programs, we present a specification of sorting using the non-determinism monad, and derive pure quicksort on lists and state-monadic quicksort on arrays. In the derivation one may switch between point-free and pointwise styles, and deploy techniques familiar to functional programmers such as pattern matching and induction on structures or on sizes. Derivation of stateful programs resembles reasoning backwards from the postcondition.
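The derivation itself is carried out in Haskell; the following Python rendering (itertools-based, not the authors' code) only illustrates the shape of the result: the nondeterministic specification "some ordered permutation", with the nondeterminism monad modeled as a list, and the pure quicksort that refines it.

    from itertools import permutations

    # Specification: sorting nondeterministically returns some permutation
    # of the input that is ordered (the nondeterminism monad as a list).
    def sort_spec(xs):
        return [list(p) for p in permutations(xs)
                if all(a <= b for a, b in zip(p, p[1:]))]

    # The derived deterministic refinement: pure quicksort on lists.
    def quicksort(xs):
        if not xs:
            return []
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x <= pivot]) + [pivot]
                + quicksort([x for x in rest if x > pivot]))

    xs = [3, 1, 2]
    assert quicksort(xs) in sort_spec(xs)   # the refinement meets the spec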
Current Research Results
"Un-rectifying Non-linear Networks for Signal Representation," IEEE Transactions on Signal Processing, To Appear.
Authors: Wen-Liang Hwang, Andreas Heinecke

Abstract:
We consider deep neural networks with rectifier activations and max-pooling from a signal representation perspective. In this view, such representations mark the transition from using a single linear representation for all signals to utilizing a large collection of affine linear representations that are tailored to particular regions of the signal space. We propose a novel technique to “un-rectify” the nonlinear activations into data-dependent linear equations and constraints, from which we derive explicit expressions for the affine linear operators, their domains and ranges in terms of the network parameters. We show how increasing the depth of the network refines the domain partitioning and derive atomic decompositions for the corresponding affine mappings that process data belonging to the same partitioning region. In each atomic decomposition the connections over all hidden network layers are summarized and interpreted in a single matrix. We apply the decompositions to study the Lipschitz regularity of the networks and give sufficient conditions for network-depth-independent stability of the representation, drawing a connection to compressible weight distributions. Such analyses may facilitate and promote further theoretical insight and exchange from both the signal processing and machine learning communities.
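A minimal NumPy sketch of the un-rectifying step, assuming a two-layer ReLU network with made-up weights: the activation pattern of an input defines a 0/1 diagonal matrix that turns the network into an explicit affine map on that input's region.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
    W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)

    def relu_net(x):
        return W2 @ np.maximum(W1 @ x + b1, 0) + b2

    x = rng.normal(size=3)

    # Un-rectify: ReLU(z) equals D @ z with D = diag(z > 0), so on x's
    # activation region the whole network is the affine map A @ x + c.
    D = np.diag((W1 @ x + b1 > 0).astype(float))
    A = W2 @ D @ W1
    c = W2 @ D @ b1 + b2

    assert np.allclose(relu_net(x), A @ x + c)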
"Attractive or Faithful? Popularity-Reinforced Learning for Inspired Headline Generation," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: YunZhu Song, Hong-Han Shuai, Sung-Lin Yeh, Yi-Lun Wu, Lun-Wei Ku, Wen-Chih Peng

Abstract:
With the rapid proliferation of online media sources and published news, headlines have become increasingly important for attracting readers to news articles, since users may be overwhelmed by the massive amount of information. In this paper, we generate inspired headlines that preserve the nature of news articles and catch the eye of the reader simultaneously. The task of inspired headline generation can be viewed as a specific form of the Headline Generation (HG) task, with the emphasis on creating an attractive headline from a given news article. To generate inspired headlines, we propose a novel framework called POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG). PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic Attention (PTA) for guiding the extractor to select the attractive sentence from the article and 2) a popularity predictor for guiding the abstractor to rewrite the attractive sentence. Moreover, since the sentence selection of the extractor is not differentiable, techniques of reinforcement learning (RL) are utilized to bridge the gap with rewards obtained from a popularity score predictor. Through quantitative and qualitative experiments, we show that the proposed PORL-HG significantly outperforms the state-of-the-art headline generation models in terms of attractiveness evaluated by both humans (71.03%) and the predictor (at least 27.60%), while the faithfulness of PORL-HG is also comparable to the state-of-the-art generation model.
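Since the non-differentiable extractor is the part bridged by RL, a minimal REINFORCE sketch may help; the features, the stub popularity_reward, and all hyperparameters below are made up, standing in for the paper's popularity score predictor.

    import numpy as np

    rng = np.random.default_rng(1)
    features = rng.normal(size=(4, 8))   # 4 sentences, 8-dim features each
    theta = np.zeros(8)                  # extractor (policy) parameters

    def popularity_reward(idx):          # stub for the popularity predictor
        return 1.0 if idx == 2 else 0.0  # pretend sentence 2 is attractive

    for _ in range(2000):                # REINFORCE over sentence choices
        logits = features @ theta
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        idx = rng.choice(4, p=probs)     # sample a (non-differentiable) pick
        grad = features.T @ (np.eye(4)[idx] - probs)  # grad of log pi(idx)
        theta += 0.1 * popularity_reward(idx) * grad  # scale by reward

    print(np.argmax(features @ theta))   # 2: the attractive sentence wins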
"Knowledge-Enriched Visual Storytelling," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: Chao-Chun Hsu, Zi-Yuan Chen, Chi-Yang Hsu, Chih-Chia Li, Tzu-Yuan Lin, Ting-Hao Huang, Lun-Wei Ku

Abstract:
Stories are diverse and highly personalized, resulting in a large possible output space for story generation. Existing end-to-end approaches produce monotonous stories because they are limited to the vocabulary and knowledge in a single training dataset. This paper introduces KG-Story, a three-stage framework that allows the story generation model to take advantage of external Knowledge Graphs to produce interesting stories. KG-Story distills a set of representative words from the input prompts, enriches the word set by using external knowledge graphs, and finally generates stories based on the enriched word set. This distill-enrich-generate framework allows the use of external resources not only for the enrichment phase, but also for the distillation and generation phases. In this paper, we show the superiority of KG-Story for visual storytelling, where the input prompt is a sequence of five photos and the output is a short story. Per the human ranking evaluation, stories generated by KG-Story are on average ranked better than those of the state-of-the-art systems. Our code and output stories are available at https://github.com/zychen423/KE-VIST.
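A toy rendering of the distill-enrich-generate pipeline; the knowledge-graph dictionary and the template-based generate below are stand-ins for KG-Story's learned components.

    KNOWLEDGE_GRAPH = {                   # toy external knowledge graph
        ("dog", "park"): "played fetch",
    }

    def distill(photos):                  # stage 1: representative words
        return [p["main_object"] for p in photos]

    def enrich(words):                    # stage 2: add relations from the KG
        extra = [KNOWLEDGE_GRAPH[pair] for pair in zip(words, words[1:])
                 if pair in KNOWLEDGE_GRAPH]
        return words + extra

    def generate(terms):                  # stage 3: surface realization
        return "A story about " + ", ".join(terms) + "."

    photos = [{"main_object": "dog"}, {"main_object": "park"}]
    print(generate(enrich(distill(photos))))
    # A story about dog, park, played fetch.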
Current Research Results
"A Partial Page Cache Strategy for NVRAM-Based Storage Devices," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), February 2020.
Authors: Shuo-Han Chen, Tseng-Yi Chen, Yuan-Hao Chang, Hsin-Wen Wei, and Wei-Kuan Shih

Abstract:
Non-volatile random access memory (NVRAM) is becoming a popular alternative as the memory and storage medium in battery-powered embedded systems because of its fast read/write performance, byte-addressability, and non-volatility. A well-known example is phase-change memory (PCM), which has a much longer life expectancy and faster access performance than NAND flash. When NVRAM serves as both main memory and storage in battery-powered embedded systems, existing page cache mechanisms incur too many unnecessary data movements between main memory and storage. To tackle this issue, we propose the concept of a "union page cache" to jointly manage data of the page cache in both main memory and storage. To realize this concept, we design a partial page cache strategy that considers both main memory and storage as its management space. This strategy can eliminate unnecessary data movements between main memory and storage without sacrificing the data integrity of file systems. A series of experiments was conducted on an embedded platform. The results show that the proposed strategy can improve file access performance by up to 85.62% when PCM is used as a case study.
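A toy sketch of the partial page cache idea under the stated assumption that both sides are byte-addressable NVRAM: reads can be served from the storage copy directly, and only dirty bytes are materialized on the memory side (data structures hypothetical).

    storage = {0: bytearray(b"ABCDEFGH")}      # page 0 resides in storage
    partial_cache = {}                         # page -> {offset: dirty byte}

    def read(page, off):                       # no full-page copy-in needed
        return partial_cache.get(page, {}).get(off, storage[page][off])

    def write(page, off, val):                 # cache only the dirty bytes
        partial_cache.setdefault(page, {})[off] = val

    write(0, 2, ord("x"))
    assert bytes(read(0, i) for i in range(8)) == b"ABxDEFGH"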
"Why Attention? Analyze BiLSTM Deficiency and Its Remedies in the Case of NER," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: Peng-Hsuan Li, Tsu-Jui Fu, Wei-Yun Ma

Abstract:
BiLSTM has been prevalently used as a core module for NER in a sequence-labeling setup. State-of-the-art approaches use BiLSTM with additional resources such as gazetteers, language-modeling, or multi-task supervision to further improve NER. This paper instead takes a step back and focuses on analyzing problems of BiLSTM itself and how exactly self-attention can bring improvements. We formally show the limitation of (CRF-)BiLSTM in modeling cross-context patterns for each word – the XOR limitation. Then, we show that two types of simple cross-structures – self-attention and Cross-BiLSTM – can effectively remedy the problem. We test the practical impacts of the deficiency on real-world NER datasets, OntoNotes 5.0 and WNUT 2017, with clear and consistent improvements over the baseline, up to 8.7% on some of the multi-token entity mentions. We give in-depth analyses of the improvements across several aspects of NER, especially the identification of multi-token mentions. This study should lay a sound foundation for future improvements on sequence-labeling NER.
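The XOR limitation can be checked mechanically. The brute-force search below, a simplified rendering of the paper's argument, shows that no additive scorer f(left) + g(right) with any threshold realizes the XOR labeling of the four context combinations.

    import itertools
    import numpy as np

    # The XOR pattern: a word's tag depends on the XOR of its left and
    # right contexts. If the per-word score decomposes additively as
    # f(left) + g(right), no threshold t can satisfy all four constraints:
    # summing cases (0,0) and (1,1) gives the same total as summing (0,1)
    # and (1,0), so the strict inequalities contradict. Brute force agrees:
    vals = np.linspace(-2, 2, 11)
    feasible = any(
        f0 + g0 < t and f0 + g1 > t and f1 + g0 > t and f1 + g1 < t
        for f0, f1, g0, g1, t in itertools.product(vals, repeat=5)
    )
    print(feasible)   # False: the additive form cannot encode XOR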
"Relation Extraction Exploiting Full Dependency Forests," the 34th AAAI Conference on Artificial Intelligence (AAAI 2020), February 2020.
Authors: Lifeng Jin, Linfeng Song, Yue Zhang, Kun Xu, Wei-Yun Ma, Dong Yu

Abstract:
Dependency syntax has long been recognized as a crucial source of features for relation extraction. Previous work considers 1-best trees produced by a parser during preprocessing. However, error propagation from the out-of-domain parser may impact the relation extraction performance. We propose to leverage full dependency forests for this task, where a full dependency forest encodes all possible trees. Such representations of full dependency forests provide a differentiable connection between a parser and a relation extraction model, and thus we are also able to study adjusting the parser parameters based on end-task loss. Experiments on three datasets show that full dependency forests and parser adjustment give significant improvements over carefully designed baselines, showing state-of-the-art or competitive performances on biomedical or newswire benchmarks.
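A minimal NumPy sketch of what a full dependency forest is, with a stub random scorer in place of a trained parser: every word keeps a probability distribution over candidate heads rather than a single 1-best arc.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                                        # sentence length
    scores = rng.normal(size=(n, n + 1))         # dependent x {ROOT + heads}

    forest = np.exp(scores)
    forest /= forest.sum(axis=1, keepdims=True)  # softmax over heads

    # Unlike a 1-best tree, every candidate arc keeps a weight, giving the
    # relation extractor a differentiable view of parser uncertainty.
    assert np.allclose(forest.sum(axis=1), 1.0)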
"Query-Driven Multi-Instance Learning," Thirty-Fourth AAAI Conference on Artificial Intelligence, February 2020, February 2020.
Authors: Yen-Chi Hsu, Cheng-Yao Hong, Ming-Sui Lee, and Tyng-Luh Liu

Abstract:
We introduce a query-driven approach (qMIL) to multi-instance learning where the queries aim to uncover the class labels embodied in a given bag of instances. Specifically, it solves a multi-instance multi-label learning (MIML) problem with a more challenging setting than the conventional one. Each MIML bag in our formulation is annotated only with a binary label indicating whether the bag contains the instance of a certain class and the query is specified by the word2vec of a class label/name. To learn a deep-net model for qMIL, we construct a network component that achieves a generalized compatibility measure for query-visual co-embedding and yields proper instance attentions to the given query. The bag representation is then formed as the attention-weighted sum of the instances' weights, and passed to the classification layer at the end of the network. In addition, the qMIL formulation is flexible for extending the network to classify unseen class labels, leading to a new technique to solve the zero-shot MIML task through an iterative querying process. Experimental results on action classification over video clips and three MIML datasets from MNIST, CIFAR10 and Scene are provided to demonstrate the effectiveness of our method.
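A NumPy sketch of the query-driven attention described above, with random stand-ins for the learned co-embedding: compatibility scores between the query and instances yield attention weights, whose weighted sum forms the bag representation.

    import numpy as np

    rng = np.random.default_rng(0)
    instances = rng.normal(size=(6, 16))   # a bag of 6 instance features
    query = rng.normal(size=16)            # word2vec-style label embedding

    compat = instances @ query             # query-instance compatibility
    attn = np.exp(compat - compat.max())
    attn /= attn.sum()                     # attention over the instances
    bag_repr = attn @ instances            # attention-weighted bag vector

    print(bag_repr.shape)                  # (16,): input to the classifier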
Current Research Results
Authors: Thejkiran Pitti, Ching-Tai Chen, Hsin-Nan Lin, Wai-Kok Choong, Wen-Lian Hsu, and Ting-Yi Sung

Abstract:
N-linked glycosylation is one of the predominant post-translational modifications involved in a number of biological functions. Since experimental characterization of glycosites is challenging, glycosite prediction is crucial. Several predictors have been made available and report high performance. Most of them evaluate their performance at every asparagine in protein sequences, not confined to asparagines in the N-X-S/T sequon. In this paper, we present N-GlyDE, a two-stage prediction tool trained on rigorously constructed non-redundant datasets to predict N-linked glycosites in the human proteome. The first stage uses a protein similarity voting algorithm trained on both glycoproteins and non-glycoproteins to predict a score for a protein, to improve glycosite prediction. The second stage uses a support vector machine to predict N-linked glycosites by utilizing features of gapped dipeptides, pattern-based predicted surface accessibility, and predicted secondary structure. N-GlyDE's final predictions are derived from a weight adjustment of the second-stage prediction results based on the first-stage prediction score. Evaluated on N-X-S/T sequons of an independent dataset comprising 53 glycoproteins and 33 non-glycoproteins, N-GlyDE achieves an accuracy and MCC of 0.740 and 0.499, respectively, outperforming the compared tools. The N-GlyDE web server is available at http://bioapp.iis.sinica.edu.tw/N-GlyDE/.
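As a side illustration, locating the N-X-S/T sequons that N-GlyDE is evaluated on is a small exercise in overlapping pattern matching; the sequence below is made up, and the proline exclusion shown is the common convention rather than something the abstract states.

    import re

    seq = "MNASGNPTQNVSA"                  # made-up protein sequence

    # Overlapping N-X-S/T sequons via zero-width lookahead (1-based).
    sequons = [m.start() + 1 for m in re.finditer(r"(?=N.[ST])", seq)]
    # The common convention additionally excludes proline at X:
    strict = [m.start() + 1 for m in re.finditer(r"(?=N[^P][ST])", seq)]

    print(sequons)   # [2, 6, 10]
    print(strict)    # [2, 10]: the N-P-T at position 6 is dropped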
"Achieving Lossless Accuracy with Lossy Programming for Efficient Neural-Network Training on NVM-Based Systems," ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2019.
Authors: Wei-Chen Wang, Yuan-Hao Chang, Tei-Wei Kuo, Chien-Chung Ho, Yu-Ming Chang, and Hung-Sheng Chang

Abstract:
Neural networks on conventional computing platforms are heavily restricted by data volume and performance concerns. While non-volatile memory offers potential solutions to data volume issues, challenges must be faced over performance issues, especially the asymmetric read and write performance. Besides that, critical concerns over endurance must also be resolved before non-volatile memory can be used in practice for neural networks. This work addresses the performance and endurance concerns altogether by proposing a data-aware programming scheme. We propose to consider neural network training jointly from the data-flow and data-content points of view. In particular, methodologies with approximate results over Dual-SET operations are presented. Encouraging results were observed through a series of experiments, where great efficiency and lifetime enhancements are seen without sacrificing result accuracy.
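A loose sketch of the data-aware idea, not the paper's Dual-SET scheme: comparing incoming data against resident cell contents and programming only the differing bits is one way writes become cheaper and wear is reduced.

    def data_aware_write(resident, incoming, bits=8):
        diff = resident ^ incoming                      # bits that change
        programmed = sum((diff >> i) & 1 for i in range(bits))
        return incoming, programmed                     # content, cells hit

    new, touched = data_aware_write(0b10110010, 0b10110110)
    print(new == 0b10110110, touched)                   # True 1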
"Enabling Sequential-write-constrained B+-tree Index Scheme to Upgrade Shingled Magnetic Recording Storage Performance," ACM/IEEE International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), October 2019.
Authors: Yu-Pei Liang, Tseng-Yi Chen, Yuan-Hao Chang, Shuo-Han Chen, Kam-Yiu Lam, Wei-Hsin Li, and Wei-Kuan Shih

Abstract:
As shingled magnetic recording (SMR) drives are widely applied in modern computer systems (e.g., archive file systems, big data computing systems, and large-scale database systems), storage system developers should thoroughly review whether current designs (e.g., index schemes and data placements) are appropriate for an SMR drive because of its sequential-write constraint. Although many prior works manage data in an SMR drive well by integrating their proposed solutions into the driver layer, the index scheme over an SMR drive has not been optimized by previous works, because managing an index over an SMR drive needs to jointly consider the properties of the B+-tree and the nature of SMR (e.g., the sequential-write constraint and zone partitions) in a host storage system. Moreover, poor index management results in terrible storage performance because an index manager is used extensively in file systems and database applications. To optimize the B+-tree index structure over SMR storage, this work identifies the performance overheads caused by the B+-tree index structure on an SMR drive. Based on these observations, this study proposes a sequential-write-constrained B+-tree index scheme, namely SW-B+tree, which consists of an address redirection data structure, an SMR-aware node allocation mechanism, and a frequency-aware garbage collection strategy. According to our experiments, the SW-B+tree can improve SMR storage performance by 55% on average.
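A toy sketch of the address-redirection component, with hypothetical data structures: node updates are appended sequentially within a zone, and a redirection table maps each logical node to its latest copy, avoiding the in-place writes SMR forbids.

    zone, redirection = [], {}              # append-only zone + node map

    def write_node(node_id, payload):
        redirection[node_id] = len(zone)    # remap id to the newest copy
        zone.append(payload)                # strictly sequential append

    def read_node(node_id):
        return zone[redirection[node_id]]

    write_node("root", {"keys": [10]})
    write_node("root", {"keys": [10, 20]})  # update = append + remap
    print(read_node("root"))                # {'keys': [10, 20]}
    print(len(zone))                        # 2: the stale copy awaits GC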
Current Research Results
"Achieving Lossless Accuracy with Lossy Programming for Efficient Neural-Network Training on NVM-Based Systems," ACM Transactions on Embedded Computing Systems (TECS), October 2019.
Authors: Wei-Chen Wang, Yuan-Hao Chang, Tei-Wei Kuo, Chien-Chung Ho, Yu-Ming Chang, and Hung-Sheng Chang

Abstract:
Neural networks on conventional computing platforms are heavily restricted by data volume and performance concerns. While non-volatile memory offers potential solutions to data volume issues, challenges must be faced over performance issues, especially the asymmetric read and write performance. Besides that, critical concerns over endurance must also be resolved before non-volatile memory can be used in practice for neural networks. This work addresses the performance and endurance concerns altogether by proposing a data-aware programming scheme. We propose to consider neural network training jointly from the data-flow and data-content points of view. In particular, methodologies with approximate results over Dual-SET operations are presented. Encouraging results were observed through a series of experiments, where great efficiency and lifetime enhancements are seen without sacrificing result accuracy.
Current Research Results
"Enabling Sequential-write-constrained B+-tree Index Scheme to Upgrade Shingled Magnetic Recording Storage Performance," ACM Transactions on Embedded Computing Systems (TECS), October 2019.
Authors: Yu-Pei Liang, Tseng-Yi Chen, Yuan-Hao Chang, Shuo-Han Chen, Kam-Yiu Lam, Wei-Hsin Li, and Wei-Kuan Shih

Abstract:
As shingled magnetic recording (SMR) drives are widely applied in modern computer systems (e.g., archive file systems, big data computing systems, and large-scale database systems), storage system developers should thoroughly review whether current designs (e.g., index schemes and data placements) are appropriate for an SMR drive because of its sequential-write constraint. Although many prior works manage data in an SMR drive well by integrating their proposed solutions into the driver layer, the index scheme over an SMR drive has not been optimized by previous works, because managing an index over an SMR drive needs to jointly consider the properties of the B+-tree and the nature of SMR (e.g., the sequential-write constraint and zone partitions) in a host storage system. Moreover, poor index management results in terrible storage performance because an index manager is used extensively in file systems and database applications. To optimize the B+-tree index structure over SMR storage, this work identifies the performance overheads caused by the B+-tree index structure on an SMR drive. Based on these observations, this study proposes a sequential-write-constrained B+-tree index scheme, namely SW-B+tree, which consists of an address redirection data structure, an SMR-aware node allocation mechanism, and a frequency-aware garbage collection strategy. According to our experiments, the SW-B+tree can improve SMR storage performance by 55% on average.
"GraphRel: Modeling Text as Relational Graphs for Joint Entity and Relation Extraction," the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), July 2019.
Authors: Tsu-Jui Fu, Peng-Hsuan Li, Wei-Yun Ma

Abstract:
In this paper, we present GraphRel, an end-to-end relation extraction model which uses graph convolutional networks (GCNs) to jointly learn named entities and relations. In contrast to previous baselines, we consider the interaction between named entities and relations via a relation-weighted GCN to better extract relations. Linear and dependency structures are both used to extract both sequential and regional features of the text, and a complete word graph is further utilized to extract implicit features among all word pairs of the text. With the graph-based approach, the prediction for overlapping relations is substantially improved over previous sequential approaches. We evaluate GraphRel on two public datasets: NYT and WebNLG. Results show that GraphRel maintains high precision while increasing recall substantially. Also, GraphRel outperforms previous work by 3.2% and 5.8% (F1 score), achieving a new state-of-the-art for relation extraction.
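For readers unfamiliar with GCNs, one propagation step looks like the NumPy sketch below; the toy word graph and random weights are stand-ins for GraphRel's learned, relation-weighted adjacency.

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[1., 1., 0.],              # toy 3-word graph, self-loops
                  [1., 1., 1.],
                  [0., 1., 1.]])
    A_hat = A / A.sum(axis=1, keepdims=True) # row-normalized propagation
    H = rng.normal(size=(3, 4))              # node (word) features
    W = rng.normal(size=(4, 4))              # layer weights

    H_next = np.maximum(A_hat @ H @ W, 0)    # H' = ReLU(A_hat H W)
    print(H_next.shape)                      # (3, 4)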
"On the Robustness of Self-Attentive Models," the 57th Annual Meeting of Association for Computational Linguistics, July 2019.
Authors: Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, Cho-Jui Hsieh

Abstract:
This work examines the robustness of self-attentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that could mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims.
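As background, adversarial attacks on text models are often greedy word-substitution searches. The toy below (made-up vocabulary, embeddings, and linear model; not the paper's attack algorithm) picks the single swap that most lowers the model's score.

    import numpy as np

    emb = {"good": np.array([1.0, 0.2]), "fine": np.array([0.8, 0.1]),
           "movie": np.array([0.0, 1.0]), "film": np.array([0.1, 0.9])}
    w = np.array([2.0, -0.5])                # toy linear sentiment model

    def score(words):                        # mean-of-embeddings scorer
        return w @ np.mean([emb[t] for t in words], axis=0)

    sentence = ["good", "movie"]
    candidates = {"good": ["fine"], "movie": ["film"]}

    swaps = ((i, c) for i, t in enumerate(sentence) for c in candidates[t])
    i, c = min(swaps, key=lambda s: score(
        sentence[:s[0]] + [s[1]] + sentence[s[0] + 1:]))
    adversarial = sentence[:i] + [c] + sentence[i + 1:]
    print(adversarial, score(sentence), score(adversarial))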
"UHop: An Unrestricted-Hop Relation Extraction Framework for Knowledge-Based Question Answering," Proceedings of 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019), June 2019.
Authors: Zi-Yuan Chen, Chih-Hung Chang, Yi-Pei Chen, Jijnasa Nayak and Lun-Wei Ku

Abstract:
In relation extraction for knowledge-based question answering, searching from one entity to another entity via a single relation is called "one hop". In related work, an exhaustive search from all one-hop relations, two-hop relations, and so on to the max-hop relations in the knowledge graph is necessary but expensive. Therefore, the number of hops is generally restricted to two or three. In this paper, we propose UHop, an unrestricted-hop framework which relaxes this restriction by use of a transition-based search framework to replace the relation-chain-based search one. We conduct experiments on conventional 1- and 2-hop questions as well as lengthy questions, including datasets such as WebQSP, PathQuestion, and Grid World. Results show that the proposed framework enables the ability to halt, works well with state-of-the-art models, achieves competitive performance without exhaustive searches, and opens the performance gap for long relation paths.
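A toy sketch of the transition-based loop, with a made-up graph and a stub scorer in place of UHop's learned comparison module: each step either extends the relation path or selects a terminate action, which is what removes the fixed hop bound.

    GRAPH = {"Obama": {"born_in": "Honolulu"},      # made-up toy KG
             "Honolulu": {"located_in": "Hawaii"},
             "Hawaii": {}}

    def score(relation):                            # stub learned scorer
        return {"born_in": 0.9, "located_in": 0.8, None: 0.5}[relation]

    entity, path = "Obama", []
    while True:
        options = [(score(r), r) for r in GRAPH[entity]]
        options.append((score(None), None))         # the terminate action
        _, best = max(options)
        if best is None:                            # the ability to halt
            break
        path.append(best)
        entity = GRAPH[entity][best]

    print(path, "->", entity)  # ['born_in', 'located_in'] -> Hawaii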
"A Bicameralism Voting Framework for Combining Knowledge from Clients into Better Prediction," IEEE International Conference on Big Data, December 2019.
Authors: Yu-Tung Hsieh, Chuan-Yu Lee, Ching-Chi Lin, Pangfeng Liu, and Jan-Jan Wu

Abstract:
In this paper, we propose bicameralism voting to improve the accuracy of a deep learning network. After we train a deep learning network with existing data, we may want to improve it with some newly collected data. However, it would be time consuming to retrain the model with all the available data. Instead, we propose a collective framework that trains models on mobile devices with new data (also collected from the mobile devices) via transfer learning. Then we collect the predictions from these new models on the mobile devices, and achieve more accurate predictions by combining their predictions via voting. The proposed bicameralism voting is different from federated learning, since we do not average the weights of models from mobile devices, but let them vote by bicameralism. The proposed bicameralism voting mechanism has three advantages. First, this collective mechanism improves the accuracy of the deep learning model. The accuracy of bicameralism voting (VGG-19 on the Food-101 dataset) is 77.838%, higher than that of a single model (75.517%) with the same amount of training data. Second, bicameralism voting saves computation resources, because it only updates an existing model, and can be done in parallel by multiple devices. For example, in our experiments, updating an existing model via transfer learning takes about 10 minutes on a server, but training a model from scratch with both the original and the new data would take more than a week. Finally, bicameralism voting is flexible. Unlike federated learning, bicameralism voting can use any architecture of model, any preprocessing of input data, and any format of model when the models are trained on different mobile devices.
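A loose sketch of voting-based combination, hedged heavily: the specific two-chamber rule below (a one-model-one-vote chamber plus a confidence-weighted chamber that must agree) is an illustrative stand-in, not necessarily the paper's bicameralism rule.

    from collections import Counter

    client_preds = ["cat", "cat", "dog"]          # top-1 label per client
    client_confs = [("cat", 0.9), ("cat", 0.6), ("dog", 0.99)]

    lower = Counter(client_preds).most_common(1)[0][0]  # one model, one vote

    weighted = Counter()
    for label, conf in client_confs:              # confidence-weighted chamber
        weighted[label] += conf
    upper = weighted.most_common(1)[0][0]

    final = lower if lower == upper else None     # both chambers must agree
    print(lower, upper, final)                    # cat cat cat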
"Handling local state with global state," Mathematics of Program Construction (MPC 2019), October 2019.
Authors: Koen Pauwels, Tom Schrijvers and Shin-Cheng Mu

Abstract:
Equational reasoning is one of the most important tools of functional programming. To facilitate its application to monadic programs, Gibbons and Hinze have proposed a simple axiomatic approach using laws that characterise the computational effects without exposing their implementation details.  At the same time Plotkin and Pretnar have proposed algebraic effects and handlers, a mechanism of layered abstractions by which effects can be implemented in terms of other effects.
 
This paper performs a case study that connects these two strands of research. We consider two ways in which the nondeterminism and state effects can interact: the high-level semantics where every nondeterministic branch has a local copy of the state, and the low-level semantics where a single sequentially threaded  state is global to all branches.
 
We give a monadic account of the folklore technique of handling local state in terms of global state, provide a novel axiomatic characterisation of global state and prove that the handler satisfies Gibbons and Hinze's local state axioms by means of a novel combination of free monads and contextual equivalence. We also provide a model for global state that is necessarily non-monadic.
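The contrast between the two semantics, and the folklore save-and-restore simulation, can be rendered in Python as follows (the paper's development is monadic and in Haskell; this is only the operational intuition).

    import copy

    def explore_local(state, choices):       # each branch: a private copy
        results = []
        for c in choices:
            branch = copy.deepcopy(state)
            branch.append(c)
            results.append(branch)
        return results

    def explore_global(state, choices):      # one global state, backtracked
        results = []
        for c in choices:
            state.append(c)                  # mutate the shared state...
            results.append(list(state))
            state.pop()                      # ...then undo on the way out
        return results

    assert (explore_local([0], "ab") == explore_global([0], "ab")
            == [[0, "a"], [0, "b"]])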
"Compacting, Picking and Growing for Unforgetting Continual Learning," Thirty-third Conference on Neural Information Processing Systems, NeurIPS 2019, December 2019.
Authors: Steven C. Y. Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen

Abstract:
Continual lifelong learning is essential to many applications. In this paper, we propose a simple but effective approach to continual deep learning. Our approach leverages the principles of deep model compression with weight pruning, critical weight selection, and progressive network expansion. By enforcing their integration in an iterative manner, we introduce an incremental learning method that is scalable to the number of sequential tasks in a continual learning process. Our approach is easy to implement and has several favorable characteristics. First, it can avoid forgetting (i.e., learn new tasks while remembering all previous tasks). Second, it allows model expansion but can maintain model compactness when handling sequential tasks. Besides, through our compaction and selection/expansion mechanism, we show that the knowledge accumulated through learning previous tasks helps to build a better model for a new task than training the models independently on the tasks. Experimental results show that our approach can incrementally learn a deep model to tackle multiple tasks without forgetting, while model compactness is maintained and the performance is more satisfactory than individual task training.
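A minimal NumPy sketch of the compacting/picking interplay, with made-up weights: a mask freezes the weights that survive pruning for old tasks, and gradients for a new task are applied only to the released entries.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 4))
    keep = np.abs(W) > 0.8             # task 1's surviving, frozen weights
    W = W * keep                       # compact: prune the small weights

    grad = rng.normal(size=(4, 4))     # pretend gradient from task 2
    W2 = W - 0.1 * grad * ~keep        # grow/train only released entries

    assert np.allclose(W2[keep], W[keep])   # task 1's weights untouched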
Current Research Results
Authors: Sachit Mahajan, Yu-Siou Tang, Dong-Yi Wu, Tzu-Chieh Tsai, and Ling-Jyh Chen

Abstract:
Transport-related pollution is becoming a major issue as it adversely affects human health, and one way to lower personal exposure to air pollutants is to choose a health-optimal route to the destination. Current navigation systems include options for the quickest paths (distance, traffic) and least expensive paths (fuel costs, tolls). In this paper, we come up with the CAR (Cleanest Air Routing) algorithm and use it to build a health-optimal route recommendation system between the origin and the destination. We combine the open-source PM2.5 (fine particulate matter with diameter less than 2.5 micrometers) concentration data for Taiwan with the road network graph obtained through OpenStreetMaps. In addition, spatio-temporal interpolation of PM2.5 is performed to get the PM2.5 concentration at the road network intersections. Our algorithm introduces a weight function that assesses how much PM2.5 the user is exposed to at each intersection of the road network and uses it to navigate through intersections with the lowest PM2.5 exposures. The algorithm can help people reduce their overall PM2.5 exposure by offering a healthier alternative route, which may be slightly longer than the shortest path in some cases. We evaluate our algorithm for different travel modes, including driving, cycling, walking and riding motorbikes. An analysis is done for 1500 real-world travel scenarios, which shows that routes recommended by our approach tend to have a lower PM2.5 concentration than those recommended by Google Maps.
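A small self-contained sketch of exposure-aware routing in the spirit of CAR (the graph, PM2.5 values, and the edge-weight formula are made up): Dijkstra over PM2.5-weighted edges returns a slightly longer but cleaner path.

    import heapq

    pm25 = {"A": 10, "B": 60, "C": 15, "D": 12}   # ug/m3 per intersection
    edges = {"A": [("B", 1.0), ("C", 1.2)], "B": [("D", 1.0)],
             "C": [("D", 1.1)], "D": []}

    def cleanest_path(src, dst):
        heap, done = [(0.0, src, [src])], set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in done:
                continue
            done.add(node)
            for nxt, dist in edges[node]:
                w = dist * (pm25[node] + pm25[nxt]) / 2   # exposure weight
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))

    print(cleanest_path("A", "D"))   # the detour A-C-D beats shorter A-B-D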
Current Research Results
"mwJFS: A Multi-Write-Mode Journaling File System for MLC NVRAM Storages," IEEE Transactions on Very Large Scale Integration Systems (TVLSI), September 2019.
Authors: Shuo-Han Chen, Yuan-Hao Chang, Yu-Ming Chang, and Wei-Kuan Shih

Abstract:
At present, nonvolatile random access memory (NVRAM) is widely considered a promising candidate for the next-generation storage medium due to its appealing characteristics, including short read/write latency, byte addressability, and low idle energy consumption. In addition, to provide a higher bit density, multilevel-cell (MLC) NVRAM has also been proposed. Nevertheless, when compared with conventional single-level-cell (SLC) NVRAM, MLC NVRAM has longer write latency and higher energy consumption. Hence, the performance of MLC NVRAM-based storage systems can be degraded by the lengthened write latency. The degradation is further magnified by existing journaling file systems (JFS) on MLC NVRAM-based storage devices due to the JFS fail-safe policy of writing the same data twice. These observations motivate us to propose a multi-write-mode journaling file system (mwJFS) to alleviate the drawbacks of MLC NVRAM and boost the performance of MLC NVRAM-based JFS. The proposed mwJFS differentiates the data retention requirements of journaled data and applies different write modes to enhance access performance with lower energy consumption. A series of experiments was conducted to demonstrate the capability of mwJFS on MLC NVRAM-based storage systems.
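A toy sketch of the multi-write-mode idea, with made-up latencies: the short-lived journal copy can use a fast, low-retention write mode, while the durable home-location write keeps the slow, long-retention mode.

    WRITE_MODES = {"fast_low_retention": 1, "slow_long_retention": 4}

    def journal_commit(block, log, home):
        cost = WRITE_MODES["fast_low_retention"]    # transient journal copy
        log.append(block)
        cost += WRITE_MODES["slow_long_retention"]  # durable home write
        home[block["addr"]] = block["data"]
        log.pop()                                   # checkpointed: trim log
        return cost

    log, home = [], {}
    print(journal_commit({"addr": 7, "data": b"x"}, log, home))
    # 5, versus 8 if the write-twice fail-safe used the slow mode for both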
Current Research Results
Authors: Sheng-Yu Fu, Ding-Yong Hong, Yu-Ping Liu, Jan-Jan Wu, Wei-Chung Hsu

Abstract:
More and more modern processors support non-contiguous SIMD data accesses. However, translating such instructions has been overlooked in the dynamic binary translation (DBT) area. For example, in the popular QEMU dynamic binary translator, guest memory instructions with strides are emulated by a sequence of scalar instructions, leaving significant room for performance improvement when SIMD instructions are available on the host machine. Structured loads/stores, such as VLDn/VSTn in ARM NEON, are one type of strided SIMD data access instruction. They are widely used in signal processing, multimedia, mathematical, and 2D matrix transposition applications. Efficient translation of such structured loads/stores is a critical issue when migrating ARM executables to other ISAs. However, it is quite challenging: not only is the translation of structured loads/stores non-trivial, but the mapping of SIMD registers between guest and host is also complicated. This paper presents the design of translating structured loads/stores in DBT, including target code generation, efficient SIMD register mapping, and optimizations for reducing data permutations. Our proposed register mapping mechanisms and optimizations are not limited to handling structured loads/stores; they can be extended to deal with normal SIMD instructions. This paper evaluates how different factors affect the translation performance and code size, including guest SIMD register length, strides, and use cases for structured loads. On a set of OpenCV benchmarks, our QEMU-based system has achieved a maximum speedup of 5.03x, with an average improvement of 2.87x. On a set of BLAS benchmarks, our system has obtained a maximum speedup of 2.22x and an average improvement of 1.78x.
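To see what a structured load does, the NumPy sketch below expresses ARM NEON's VLD3-style de-interleaving with strided gathers; a host without structured loads must synthesize exactly this effect with wide loads plus the permutations the paper works to minimize.

    import numpy as np

    memory = np.arange(24, dtype=np.uint8)   # R0 G0 B0 R1 G1 B1 ... in RAM

    # VLD3.8-style structured load: one instruction's worth of
    # de-interleaving, expressed as three strided gathers into "registers".
    r, g, b = memory[0::3], memory[1::3], memory[2::3]

    print(r)   # [ 0  3  6  9 12 15 18 21]
    print(g)   # [ 1  4  7 10 13 16 19 22]
    print(b)   # [ 2  5  8 11 14 17 20 23]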