Institute of Information Science
Network System and Service Laboratory
Principal Investigators:
Jan-Ming Ho, Ling-Jyh Chen, Meng-Chang Chen, Sheng-Wei Chen, Wen-Tsuen Chen, Tyng-Ruey Chuang, Jane W. S. Liu

[ Group Profile ]

In this Laboratory, our research addresses several aspects of network systems and services, including improving wireless and delay-tolerant network protocols, leveraging human computation capabilities to address key challenges, developing critically needed information and communication technologies for disaster management, and seeking solutions to network computation problems in providing large-scale financial risk management services.

Our work on wireless communication and networking focuses on multi-hop wireless communication, mobility management, and network forensics. We are developing techniques to provide end-to-end quality-of-service (QoS) guarantees for communication flows in wireless mesh networks. We are investigating several mobility and handover management schemes that support services anytime and anywhere, and have proposed architectures and protocols that support seamless handover for both terminal and network mobility. We also study the problem of detecting slow attacks by modeling the outbound connection attempts of a host as a time series and using spectral analysis to discover recurrent events generated by a potential attack amid legitimate traffic. The regularity of these attack attempts is preserved in the time series and can be observed in the frequency domain.
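The frequency-domain idea can be illustrated with a discrete Fourier transform over per-second counts of outbound connection attempts. The sketch below is illustrative only, not our deployed detector; the function name, threshold, and sampling parameters are hypothetical.

```python
import numpy as np

def detect_periodic_beacon(attempts, sample_rate=1.0, threshold=5.0):
    """Flag recurrent outbound-connection events via spectral analysis.

    attempts: per-second counts of outbound connection attempts (1-D array).
    Returns (is_suspicious, dominant_period_seconds).
    """
    x = np.asarray(attempts, dtype=float)
    x = x - x.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))       # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate)
    peak = spectrum[1:].argmax() + 1        # skip the zero-frequency bin
    # A strong, isolated peak relative to the spectral floor suggests
    # a regular (automated) pattern hidden in otherwise random traffic.
    score = spectrum[peak] / (spectrum[1:].mean() + 1e-12)
    period = 1.0 / freqs[peak]
    return score > threshold, period
```

Legitimate traffic with no periodic structure produces a roughly flat spectrum, so its peak-to-mean ratio stays small, while an attacker beaconing on a fixed schedule concentrates energy at one fundamental frequency.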
In addition, we study networked sensing systems, focusing on energy efficiency and large-scale sensor data management. We have developed an adaptive GPS scheduling algorithm to prolong the lifespan of GPS-enabled mobile sensors, and designed a context-aware duty cycle algorithm that adjusts the duty cycles of GPS receivers and radio interfaces of mobile sensor devices according to surrounding contexts inferred by low-cost sensors, e.g., accelerometers and thermometers. We have also proposed a lightweight, lossless data compression algorithm for spatio-temporal data, and designed a set of data query algorithms that operate directly on the compressed data. The results of our research have been implemented in two real-world networked sensing systems: one is a mission-critical sensor network, called YushanNet, for hiker tracking, search, and rescue in Yushan National Park; the other is a mobile phone sensing system, called TPE-CMS, for comfort measurement of public transportation systems in the Taipei metropolitan area.
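Delta encoding with variable-length integers is a representative lightweight, lossless approach for spatio-temporal samples: consecutive GPS fixes differ little, so their differences compress to a few bytes each. The sketch below illustrates the general technique, not our published algorithm; the fixed-point format and function names are assumptions.

```python
def zigzag(n):
    # Map signed ints to unsigned so small magnitudes get short varints.
    return (n << 1) ^ (n >> 63)

def unzigzag(u):
    return (u >> 1) ^ -(u & 1)

def varint(u):
    # Encode a non-negative int as base-128 bytes, MSB = continuation bit.
    out = bytearray()
    while True:
        byte = u & 0x7F
        u >>= 7
        if u:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def read_varint(buf, i):
    shift = u = 0
    while True:
        b = buf[i]; i += 1
        u |= (b & 0x7F) << shift
        if not b & 0x80:
            return u, i
        shift += 7

def compress_track(points):
    """Losslessly delta-encode a track of (timestamp_s, lat_e6, lon_e6)
    integer triples (latitude/longitude in fixed-point microdegrees)."""
    out, prev = bytearray(), (0, 0, 0)
    for p in points:
        for d in (p[0] - prev[0], p[1] - prev[1], p[2] - prev[2]):
            out += varint(zigzag(d))
        prev = p
    return bytes(out)

def decompress_track(buf):
    points, prev, i = [], (0, 0, 0), 0
    while i < len(buf):
        deltas = []
        for _ in range(3):
            u, i = read_varint(buf, i)
            deltas.append(unzigzag(u))
        prev = tuple(a + b for a, b in zip(prev, deltas))
        points.append(prev)
    return points
```

Because the deltas remain exact integers, queries such as bounding-box or time-window filters can in principle scan the delta stream and reconstruct values on the fly, without first decompressing the whole track.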

Applications of human computation range from the exploitation of unsolicited user contributions, such as using tags to aid understanding of the visual content of yet-unseen images, to utilizing crowdsourcing platforms and marketplaces (e.g., Amazon’s Mechanical Turk) to micro-outsource tasks such as semantic video annotation to a large population of workers. Further, crowdsourcing offers a time- and resource-efficient method for collecting large amounts of input for system design or evaluation purposes. We are applying crowdsourcing to optimize computer systems more rapidly and to address human factors more effectively. In the past few years, we have performed extensive studies on the performance of GWAP (Games with A Purpose) systems and designed a human computation game to collect diverse user annotations efficiently. In addition, we have proposed a cheat-proof framework that can effectively assess the quality of experience provided by multimedia content. We have found that crowdsourcing is indeed a powerful strategy for drawing on collective intelligence to tackle AI-hard problems. In the future, we will continue to study how to use crowdsourcing well to overcome challenges in a variety of areas.
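One simple cheat-detection signal for paired-comparison experiments, sketched here for illustration rather than as our full cheat-proof framework, is to count a worker's circular triads: a worker judging honestly tends to give transitive preferences, whereas a random clicker produces many cycles.

```python
from itertools import combinations

def circular_triads(better):
    """Count cyclic triples a>b>c>a among a worker's pairwise judgments.

    better: set of ordered pairs (a, b) meaning the worker preferred a over b.
    A perfectly consistent (transitive) worker yields 0.
    """
    items = sorted({x for pair in better for x in pair})
    count = 0
    for a, b, c in combinations(items, 3):
        # A triple is circular if its three judgments form a cycle
        # in either direction.
        if ((a, b) in better and (b, c) in better and (c, a) in better) or \
           ((b, a) in better and (c, b) in better and (a, c) in better):
            count += 1
    return count
```

Workers whose triad count greatly exceeds what a consistent preference ordering allows can then be filtered out or down-weighted before aggregating judgments.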

A disaster management information system (DMIS) facilitates the access, use, and presentation of data and information by application systems and services that support decisions and operations during all phases of disaster management. State-of-the-art systems have several common limitations: they cannot make good use of information sources owned by businesses, organizations, communities, and so on during emergencies; they do not synergistically exploit information from networks of things and crowds of people; and they are not sufficiently agile in responding to changes in disaster situations. We are collaborating with researchers in the Institute of Earth Sciences and the Center for Climate Changes, and with Computer Science and Engineering faculty members from several leading universities in Taiwan and the USA, to develop an open framework for DMIS free of these limitations. A system built within the framework can support the access and use of data and information from independent sources by independently developed applications and services, and can readily accommodate new information sources and applications as needed in response to unforeseen crisis situations. Our work is now supported by an Academia Sinica thematic project called OpenISDM (Open Information Systems for Disaster Management).
Current work by members of our laboratory includes the development of smart cyber-physical devices and applications as elements of a disaster-prepared smart environment; strategies for crowdsourced collection of sensor information to complement data from in-situ physical sensors; methods and tools for reducing the human effort needed to collect, validate, and refine disaster-related information in social reports; methods and tools supporting communication and computation infrastructures for gathering, caching, fusing, and distributing ubiquitous and heterogeneous real-time streaming sensor data and information to response centers and individual responders and volunteers during disasters; and the exploitation of the complementary merits of different network access technologies, approaches, and network types to make physical connectivity as robust as possible during and after disasters.
We are also actively engaged in studying network computation problems in providing large-scale risk management services. Despite the long history of economic and financial theories and practices in financial risk management, the worldwide credit crisis of 2008 manifested the vulnerability of the financial industry in tolerating risk. Several examples show that even the three major rating agencies were unable to report major default events in a timely manner. In Enron's bankruptcy case in 2001, its bonds maintained "investment grade" ratings until five days before the company declared bankruptcy. In Lehman Brothers' case in 2008, the firm still held "investment grade" ratings on the morning it declared bankruptcy. The rating companies claim that their reports provide a long-term perspective rather than an up-to-the-minute assessment. Rating the credit of a company involves hundreds of firm-specific and macroeconomic variables, so there is no doubt that assessing credit risk in real time is a task of high computational complexity. Nevertheless, it is an important foundation for maintaining the stability of the financial market. Complementary to research in economic and financial theories and practices of risk management, we aim to develop computing technologies based on a cloud computing framework toward the provisioning of large-scale, real-time financial risk management services, including (1) real-time rating of company credit; (2) real-time rating of personal credit; and (3) pricing and risk measures of financial products.
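To give a sense of the computational load behind item (3): even the simplest product requires many simulated paths per pricing, and portfolio-level risk measures multiply that cost across instruments and market scenarios, which is what motivates a cloud-scale service. The sketch below uses the standard geometric Brownian motion model to price a European call by Monte Carlo; the parameters and function name are illustrative, not our production service.

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n_paths, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion.

    s0: spot price, k: strike, r: risk-free rate, sigma: volatility,
    t: maturity in years.  Returns (price, standard_error).
    """
    rng = random.Random(seed)
    disc = math.exp(-r * t)               # discount factor
    drift = (r - 0.5 * sigma ** 2) * t    # risk-neutral drift
    vol = sigma * math.sqrt(t)
    total = total_sq = 0.0
    for _ in range(n_paths):
        # Terminal price from one simulated path.
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff = max(st - k, 0.0)
        total += payoff
        total_sq += payoff * payoff
    mean = total / n_paths
    var = total_sq / n_paths - mean * mean
    price = disc * mean
    stderr = disc * math.sqrt(var / n_paths)
    return price, stderr
```

The standard error shrinks only as the square root of the path count, so halving the error quadruples the work; repeated across thousands of instruments and stress scenarios, this is exactly the kind of embarrassingly parallel load a cloud framework absorbs well.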

