**Current Research Projects:**

**Design and Application of Software Agents: Logical Systems for Multi-Agents**

In this project, we investigate agent-related logic systems, which fall into two main categories: deductive logics and inductive logics.

On the deductive side, we will use doxastic logics to express agents' belief bases, dynamic logic to model the cause-and-effect relations of agents' actions, and preference theory to explain the motivation behind agents' behavior and the considerations underlying their decisions.

Moreover, we will incorporate the factors of time and agent capability to extend the expressive power of our logics. For the sake of clarity and simplicity, however, we take belief, preference, and action as the main concerns of agents in decision making. From these three components, we can further model the autonomy, intentions, and norms of agents. The result is a basic architecture for agent logic systems, which we call the BPA architecture.
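The decision cycle of the BPA components can be illustrated with a minimal sketch. All class and attribute names here are hypothetical, chosen only to show how a belief base, a preference ordering, and a set of actions could interact; the actual architecture is a logical system, not this code.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the BPA (belief-preference-action) idea:
# among the actions whose preconditions the agent's beliefs support,
# it chooses the one whose outcome it prefers most.

@dataclass
class Action:
    name: str
    precondition: frozenset   # literals the agent must believe to act
    effect: str               # proposition the action brings about

@dataclass
class BPAAgent:
    beliefs: set = field(default_factory=set)       # belief base (doxastic component)
    preference: dict = field(default_factory=dict)  # outcome -> rank (higher = preferred)

    def applicable(self, actions):
        # An action is applicable when its precondition is believed.
        return [a for a in actions if a.precondition <= self.beliefs]

    def decide(self, actions):
        candidates = self.applicable(actions)
        if not candidates:
            return None
        return max(candidates, key=lambda a: self.preference.get(a.effect, 0))

agent = BPAAgent(beliefs={"door_open"}, preference={"inside": 2, "outside": 1})
acts = [Action("enter", frozenset({"door_open"}), "inside"),
        Action("wait", frozenset(), "outside")]
print(agent.decide(acts).name)   # enter
```

Intention and norms would then be modelled on top of these three components, e.g. as constraints on which applicable actions may be selected.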

As for inductive logic, we will focus on the learning capability of agents. Since agents can induce the preferences behind user requirements during the course of execution, we need an inductive logic to model this phenomenon. In particular, agents should induce regularity and relevance from the data they collect to form the basis of data classification, so that the efficiency of information retrieval can be improved. These data mining techniques will also be incorporated into our logics.
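The kind of preference induction described above can be sketched very simply. This is only an illustration under an assumed representation (choices logged as category labels, preference induced by frequency); the function name is hypothetical.

```python
from collections import Counter

# Hypothetical sketch: an agent induces a user's preference ordering
# from logged choices, and can later use it to rank retrieval results.
def induce_preference(observed_choices):
    """Rank categories by how often the user chose them (most preferred first)."""
    counts = Counter(observed_choices)
    return [category for category, _ in counts.most_common()]

log = ["sports", "news", "sports", "tech", "sports", "news"]
print(induce_preference(log))   # ['sports', 'news', 'tech']
```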

Finally, we want to combine these logic systems to provide a semantic basis for agent description and communication languages, and to develop a symbolic reasoning language appropriate for programming agents.

**Logical Model for Privacy Protection**

In this project, we consider a logical model for the privacy protection problem in the database linking context. Assume a data center holds a large number of data records. Each record has some public attributes, whose values are known to the public, and some confidential attributes, whose values are to be protected. When a data table is released, the data manager must ensure that the receiver cannot learn the confidential data of any particular individual by linking the released data with the prior information held before receiving it. We will investigate both the qualitative and quantitative aspects of the privacy protection problem.

To address the qualitative aspect, we propose a simple epistemic logic to model the user's knowledge. In the model, the concept of safety is rigorously defined, and an effective approach is given to test the safety of the released data. Generalization operations can be applied to the original data to make them less precise, so that releasing the generalized data may prevent the violation of privacy. Two kinds of generalization operations are considered. The level-based one is more restrictive; however, a bottom-up search method can be used to find the most informative data satisfying the safety requirement. The set-based one is more flexible, but the computational complexity of searching through the whole space of such operations is much higher than for the level-based one, even though graph theory is used to simplify the analysis. As a result, heuristic methods may be needed to improve efficiency.
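The level-based generalization with bottom-up search can be sketched as follows. This is an illustrative toy, not the formal model: the safety test used here (every linked group must admit at least two distinct confidential values) and all names are assumptions for the example.

```python
from collections import defaultdict

def is_safe(records, public, confidential):
    """Toy safety test: no combination of public values may pin down
    a single confidential value for the individuals sharing it."""
    groups = defaultdict(set)
    for r in records:
        groups[tuple(r[a] for a in public)].add(r[confidential])
    return all(len(values) > 1 for values in groups.values())

def generalize(records, attr, hierarchy, level):
    """Replace attr values by their ancestor at the given level of a
    predefined generalization hierarchy (level 0 = original value)."""
    return [{**r, attr: hierarchy[r[attr]][level]} for r in records]

def least_generalized_safe(records, attr, hierarchy, max_level, public, confidential):
    """Bottom-up search: return the most informative safe release."""
    for level in range(max_level + 1):
        release = generalize(records, attr, hierarchy, level)
        if is_safe(release, public, confidential):
            return level, release
    return None

# Toy data: zip is public, disease is confidential.
hierarchy = {"53711": ["53711", "5371*", "537**"],
             "53712": ["53712", "5371*", "537**"]}
table = [{"zip": "53711", "disease": "flu"},
         {"zip": "53711", "disease": "flu"},
         {"zip": "53712", "disease": "cold"}]
level, release = least_generalized_safe(table, "zip", hierarchy, 2, ["zip"], "disease")
print(level)   # 1: at "5371*" the linked group contains two distinct diseases
```

Because the level-based operators form a chain, the first safe level found bottom-up is also the most informative one; the set-based operators lack this chain structure, which is why their search space is so much larger.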

While level-based and set-based generalizations both replace a precise value with a subset of values, we can also consider replacing it with a linguistic label or a fuzzy subset of values; we call this fuzzy generalization. The logical model of the safety criteria under fuzzy generalization will also be investigated.

From the quantitative aspect, on the other hand, we will consider the information the user obtains by receiving the data table. Even if the user cannot learn any individual's confidential data with certainty, he may learn the probability distribution of the confidential attribute values among a group of individuals, and this may pose a danger of privacy invasion. To address this problem, we will consider different information measures associated with data tables, such as Shannon entropy, Kolmogorov complexity, or the user's posterior probing cost. These measures may help decide how dangerous the release of a data table is with respect to the privacy protection policy.
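Of the measures listed above, Shannon entropy is the easiest to illustrate. The sketch below (function name assumed) computes the entropy of the confidential attribute's distribution inside one linked group: the lower the entropy, the more sharply the receiver can guess the confidential value, and the higher the quantitative risk.

```python
import math
from collections import Counter

def group_entropy(confidential_values):
    """Shannon entropy (in bits) of the confidential attribute's
    distribution within one group of linked individuals."""
    counts = Counter(confidential_values)
    n = len(confidential_values)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A homogeneous group leaks the confidential value outright (entropy 0),
# while a uniform group over 4 values is maximally uncertain (2 bits).
print(group_entropy(["flu", "flu", "flu", "flu"]))     # 0.0
print(group_entropy(["flu", "cold", "hiv", "ulcer"]))  # 2.0
```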

**Information Fusion in Multi-Agent Systems**

In this project, we would like to develop logics for merging the beliefs of agents with different degrees of reliability. The logics are obtained by combining multi-agent epistemic logic with multi-source reasoning systems. Every ordering of the agents' reliability is represented by a modal operator, so we can reason about the merged results under different situations. The approach is conservative in the sense that if an agent's belief conflicts with those of higher priority, his belief is completely discarded from the merged result. We consider two strategies for the conservative merging of beliefs. In the first, if an inconsistency occurs at some level, then all beliefs at the lower levels are discarded simultaneously; this is called the level cutting strategy. In the second, only the level at which the inconsistency occurs is skipped; this is called the level skipping strategy. The formal semantics and axiomatic systems for these two strategies will be investigated.
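The two strategies can be contrasted with a small sketch. This is an assumption-laden toy, not the logics themselves: belief bases are flattened to sets of literals (`n` for an atom, `-n` for its negation), listed from the most to the least reliable agent, and inconsistency is just the co-presence of a literal and its negation.

```python
def consistent(base):
    # A literal set conflicts iff it contains both n and -n.
    return all(-lit not in base for lit in base)

def merge_cutting(bases):
    """Level cutting: stop at the first conflicting level and
    discard it together with all lower (less reliable) levels."""
    merged = set()
    for base in bases:
        if not consistent(merged | base):
            break
        merged |= base
    return merged

def merge_skipping(bases):
    """Level skipping: skip only the conflicting levels and keep
    merging the remaining, less reliable ones."""
    merged = set()
    for base in bases:
        if consistent(merged | base):
            merged |= base
    return merged

bases = [{1}, {-1, 2}, {3}]     # the second agent contradicts the first
print(merge_cutting(bases))     # {1}     (levels 2 and 3 both discarded)
print(merge_skipping(bases))    # {1, 3}  (only level 2 skipped)
```

Both strategies are conservative: the conflicting agent's belief base is discarded wholesale (here even the innocuous `2`), rather than repaired.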

**Information Retrieval by Possibilistic Reasoning**

In this project, we would like to apply possibilistic reasoning to information retrieval for documents endowed with similarity relations. On the one hand, possibilistic reasoning is used together with some classical models to accommodate possibilistic uncertainty; the logical uncertainty principle is then interpreted in the possibilistic framework. On the other hand, possibilistic reasoning is integrated into terminological logic, and its applications to information retrieval problems such as query relaxation, query restriction, and exemplar-based retrieval will be investigated.
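One way similarity relations enter the picture can be sketched as follows. Everything here is a hypothetical toy (the similarity table, names, and scores are invented): the degree to which a document possibly matches a query term is taken as the best similarity between that term and any of the document's index terms, and documents are ranked by that degree, so near-matches like "automobile" for "car" are recovered (a simple form of query relaxation).

```python
# Assumed toy similarity relation between index terms (symmetric lookup below).
sim = {("car", "automobile"): 0.9,
       ("car", "train"): 0.3}

def similarity(a, b):
    return 1.0 if a == b else max(sim.get((a, b), 0.0), sim.get((b, a), 0.0))

def possibility(query_term, doc_terms):
    """Possibility degree that the document satisfies the query term:
    the best similarity over the document's index terms."""
    return max((similarity(query_term, t) for t in doc_terms), default=0.0)

def rank(query_term, docs):
    """Rank document ids by decreasing possibility degree."""
    return sorted(docs, key=lambda d: possibility(query_term, docs[d]), reverse=True)

docs = {"d1": ["automobile", "engine"],
        "d2": ["train", "schedule"],
        "d3": ["logic", "proof"]}
print(rank("car", docs))   # ['d1', 'd2', 'd3']
```

Raising a threshold on the possibility degree would play the role of query restriction; lowering it, of query relaxation.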