Institute of Information Science
Computer System Laboratory
Principal Investigators:
Jan-Jan Wu (Chair), Yuan-Hao Chang, Sheng-Wei Chen, Pei-Zong Lee, Chien-Min Wang

[ Group Profile ]
The Computer Systems Lab was established in 2009. Its primary research areas include multicore systems, virtualization, system software for cloud computing and related applications, and storage designs for embedded systems.
1. System Support for Virtualization
Fig.1: Dynamic Binary Translation for Virtualization on Multicores
Virtualization is very important for multicore computing and cloud computing. It allows applications running on such systems to be agnostic about the underlying hardware platform. For example, applications compiled for a particular instruction-set architecture (ISA), such as Intel’s x86 under Windows, can be run on an ARM-based system under Linux. Dynamic binary translation (DBT) is the core technology for virtualization. To date, QEMU is the most popular retargetable DBT system that supports both process-level and system-level virtualization across multiple guest and host ISAs and OSes. Because these binary manipulation techniques are performed at runtime and most of them expand the original binary code to implement their intended functions, they are notoriously time consuming; in practice, QEMU incurs a runtime overhead of 10X or more. Sophisticated binary optimization schemes are required to cut down this overhead. In our current work, we developed a multi-threaded DBT prototype, called HQEMU, using QEMU and LLVM (Low Level Virtual Machine) as our building blocks. HQEMU improves QEMU’s process-level virtualization performance by a factor of 4X when emulating single-threaded applications and by a factor of 25X when emulating multi-threaded applications. We plan to extend this work to develop binary manipulation technologies that are crucial for supporting system-level virtualization.
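At its core, a DBT system such as QEMU runs a translate-and-cache loop: a guest code block is translated on its first execution, stored in a code cache, and re-executed directly from the cache thereafter. The C sketch below illustrates only this dispatch structure; the types and helper functions (lookup_code_cache, translate_block, next_guest_pc) are hypothetical placeholders for illustration, not QEMU's or HQEMU's actual interfaces.

```c
/* Minimal sketch of a dynamic binary translator's dispatch loop.
 * All types and helpers here are illustrative placeholders, not QEMU code. */
#include <stdint.h>
#include <stddef.h>

typedef uint64_t guest_addr_t;
typedef void *(*host_block_fn)(void *cpu_state);   /* translated block entry point */

/* Hypothetical code-cache lookup: returns NULL on a miss. */
host_block_fn lookup_code_cache(guest_addr_t pc);

/* Hypothetical translator: decodes guest instructions starting at pc,
 * emits host code into the cache, and returns its entry point. */
host_block_fn translate_block(guest_addr_t pc);

/* Hypothetical helper that reads the next guest PC from the CPU state. */
guest_addr_t next_guest_pc(void *cpu_state);

void dbt_main_loop(void *cpu_state, guest_addr_t entry_pc)
{
    guest_addr_t pc = entry_pc;
    for (;;) {
        host_block_fn block = lookup_code_cache(pc);
        if (block == NULL) {
            /* Translation miss: translate the guest block once, then reuse it
             * from the cache. A multi-threaded DBT such as HQEMU can offload
             * heavier LLVM-based optimization of hot blocks to other threads
             * so that this fast path stays cheap. */
            block = translate_block(pc);
        }
        cpu_state = block(cpu_state);   /* run the translated host code */
        pc = next_guest_pc(cpu_state);  /* chain to the next guest block */
    }
}
```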
2. Design and Implementation of Cloud Gaming System
Cloud gaming systems render game scenes on cloud servers and stream the encoded scenes to thin clients over the Internet. The thin clients send user inputs, from joysticks, keyboards, and mice, back to the cloud servers. With cloud gaming systems, users can: (i) avoid upgrading their computers for the latest games, (ii) play the same games using thin clients on different platforms, such as PCs, laptops, tablets, and smartphones, and (iii) play more games due to reduced hardware/software cost, while game developers may: (i) support more platforms, (ii) avoid hardware/software incompatibility issues, and (iii) increase net revenues. Therefore, cloud gaming systems have attracted attention from users, game developers, and service providers. We have developed an open cloud gaming system, GamingAnywhere, which can be used by cloud gaming developers, cloud service providers, and system researchers to set up a complete cloud gaming testbed. GamingAnywhere is the first open cloud gaming testbed in the literature. We have also conducted extensive experiments with GamingAnywhere to quantify its performance and overhead, and derived optimal settings of the system parameters, which in turn allow users to install and try out GamingAnywhere on their own servers. We expect that cloud game developers, cloud service providers, system researchers, and individual users will use GamingAnywhere to set up complete cloud gaming testbeds for different purposes, and we firmly believe that its release will stimulate further research innovation in cloud gaming systems and multimedia streaming applications in general.
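At a high level, a cloud gaming server runs a capture-encode-stream pipeline for the video path while a separate path replays client inputs into the game. The C sketch below shows only this two-loop structure; every function name (capture_frame, encode_frame, send_rtp_packet, recv_input_event, inject_input) is a hypothetical placeholder for illustration, not GamingAnywhere's real API.

```c
/* Structural sketch of a cloud gaming server's two main loops.
 * All functions below are hypothetical placeholders, not GamingAnywhere's API. */
#include <stdint.h>
#include <stddef.h>

typedef struct { uint8_t *pixels; int width, height; }  raw_frame_t;
typedef struct { uint8_t *data;   size_t len; }         encoded_frame_t;
typedef struct { int device; int code; int value; }     input_event_t;

raw_frame_t     capture_frame(void);                 /* grab the rendered game scene   */
encoded_frame_t encode_frame(raw_frame_t frame);     /* compress it (e.g., H.264)      */
void            send_rtp_packet(encoded_frame_t f);  /* stream it to the thin client   */
int             recv_input_event(input_event_t *ev); /* input sent back by the client  */
void            inject_input(input_event_t ev);      /* replay the input into the game */

/* Video path: capture the server-rendered scene, encode it, and stream it out. */
void streaming_loop(void)
{
    for (;;) {
        raw_frame_t frame = capture_frame();
        encoded_frame_t packet = encode_frame(frame);
        send_rtp_packet(packet);
    }
}

/* Input path: forward joystick/keyboard/mouse events from the thin client. */
void input_loop(void)
{
    input_event_t ev;
    while (recv_input_event(&ev) == 0)
        inject_input(ev);
}
```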
3. Storage and Processing of Streaming Data in Clouds
Fig.2: A Cloud Platform for Streaming Data Storage and Processing
As more and more streaming data applications move to clouds, efficient parallel frameworks and distributed file systems are the key to meeting the scalability and performance requirements of such applications. Our research focuses on the storage and processing of streaming data in clouds. A critical requirement for the storage of streaming data is the provision of quality of service (QoS), i.e., the ability to guarantee a certain level of performance to an application. We aim to provide QoS in distributed file systems for clouds, with the goals of meeting the bandwidth/latency requirement of each access to QoS files and improving the overall utilization of storage resources. In the field of text processing, MapReduce has been widely adopted because of its capability to exploit distributed resources and process large-scale data. Nevertheless, this promise is accompanied by the difficulty of fitting streaming data applications into MapReduce, because MapReduce is limited to applications in which every input key-value pair is independent of the others. We plan to extend the general applicability of MapReduce by allowing dependences within a set of input key-value pairs. Based on this new model, we will develop the corresponding methodology and software tools for processing such streaming applications.
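To make the contrast concrete, classic MapReduce requires each map invocation to look at one key-value pair in isolation, whereas streaming applications often need a window of neighboring records, for example a moving average over consecutive readings. The C sketch below illustrates such a stateful map function under our own simplified assumptions; emit() stands in for a hypothetical framework callback, and the sliding-window state is exactly the cross-record dependence that the extended model must capture.

```c
/* Illustrative contrast: a map function that depends on a window of
 * neighboring input records, which classic MapReduce does not allow.
 * emit() is a placeholder for a hypothetical framework output callback. */
#include <stdio.h>

#define WINDOW 4

void emit(long key, double value)
{
    printf("key=%ld value=%f\n", key, value);
}

/* Stateful map: emits the moving average of the last WINDOW readings.
 * The static ring buffer carries state across input pairs, i.e., the
 * dependence between records that plain MapReduce forbids. */
void map_moving_average(long seq, double reading)
{
    static double window[WINDOW];
    static int count = 0;

    window[count % WINDOW] = reading;
    count++;

    int n = count < WINDOW ? count : WINDOW;
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += window[i];
    emit(seq, sum / n);
}

int main(void)
{
    double readings[] = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 };
    for (long i = 0; i < 6; i++)
        map_moving_average(i, readings[i]);
    return 0;
}
```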
4. Storage System Designs for Embedded Systems
Fig.3: System Architecture of 3D Flash Memory with Virtual Block Remapping
Embedded systems usually adopt flash memory as their storage medium. Driven by cost reduction and advances in manufacturing technology, high-density multi-level-cell and 3D flash memory chips are emerging as popular alternatives in embedded applications, but they also introduce new challenges with respect to reliability, performance, and endurance. To this end, we focus on how to adopt wear-leveling techniques to improve the endurance of 3D flash, and explore the possibility of using software solutions to reduce write disturbance in real 3D flash memory. We also investigate the possibility of adopting emerging byte-addressable non-volatile memories, e.g., PCM, ReRAM, and STT-RAM, in embedded systems. Owing to their non-volatility and byte-addressability, these emerging non-volatile memories can serve as both working memory and persistent storage. We therefore propose the concept of a “one-memory system” that adopts non-volatile memory as its main memory and storage at the same time. However, existing operating systems treat storage as block devices and manage main memory and storage separately. To take advantage of the one-memory architecture, we are redesigning the memory management and storage subsystems of existing operating systems. At the same time, we are investigating new designs that optimize storage capacity utilization and minimize redundant storage accesses for energy saving and performance improvement.
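As a concrete illustration of the wear-leveling idea behind virtual block remapping (Fig. 3), the toy C sketch below keeps a per-block erase count and steers each rewrite of a virtual block to the least-worn free physical block. The data layout and policy are simplified assumptions for illustration only, not our actual flash translation layer design.

```c
/* Toy wear-leveling allocator with a virtual-to-physical block map.
 * Layout and policy are simplified assumptions, not the actual FTL design. */
#include <stdint.h>
#include <stdio.h>

#define NUM_BLOCKS 8

static uint32_t erase_count[NUM_BLOCKS];   /* wear per physical block   */
static int      v2p[NUM_BLOCKS];           /* virtual -> physical map   */
static int      in_use[NUM_BLOCKS];        /* physical block allocated? */

/* Pick the free physical block with the lowest erase count so that wear
 * is spread evenly across the flash array. Returns -1 if none is free. */
static int pick_least_worn_block(void)
{
    int best = -1;
    for (int p = 0; p < NUM_BLOCKS; p++)
        if (!in_use[p] && (best < 0 || erase_count[p] < erase_count[best]))
            best = p;
    return best;
}

/* Remap a virtual block onto a fresh physical block before rewriting it. */
int remap_for_write(int vblock)
{
    int old = v2p[vblock];
    int fresh = pick_least_worn_block();
    if (fresh < 0)
        return -1;                 /* no free block: would trigger garbage collection */
    if (old >= 0) {
        in_use[old] = 0;           /* the old copy becomes reclaimable   */
        erase_count[old]++;        /* erasing it costs one P/E cycle     */
    }
    in_use[fresh] = 1;
    v2p[vblock] = fresh;
    return fresh;
}

int main(void)
{
    for (int v = 0; v < NUM_BLOCKS; v++) v2p[v] = -1;
    printf("virtual 0 -> physical %d\n", remap_for_write(0));
    printf("virtual 0 -> physical %d\n", remap_for_write(0));  /* remapped to a less-worn block */
    return 0;
}
```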

