Randomness has become a valuable resource in computation. For many computational problems, randomized algorithms are simpler, more natural, or more efficient than the best deterministic algorithms known so far. However, randomized algorithms typically depend on the availability of a perfect random source, whose existence even in nature is debatable. One approach is therefore to study whether randomized algorithms can be derandomized into deterministic ones without sacrificing too much performance. The other approach is to design procedures, called extractors, which can extract almost perfect randomness
from slightly random sources. We have obtained results on both approaches, which are described in the following.

A standard approach to derandomization relies on constructing so-called pseudorandom generators (PRGs), which stretch a short random seed into a long pseudorandom string that looks random to circuits of polynomial size. So far, all known constructions of PRGs are based on the unproven assumption that certain functions are hard to compute. To obtain a stronger result, we would like to start from a function which is only slightly hard to compute and convert it into a much harder one before using it to construct a PRG. This is known as the task of hardness amplification, which plays a crucial role in the study of pseudo-randomness. All previous constructions of hardness amplification procedures suffer from two unpleasant issues. The first is that amplifying hardness substantially requires the amplification procedure itself to have a high computational complexity. The second is that hardness amplification typically involves non-uniformity, in the sense that hardness is measured against non-uniform circuits. Many researchers have tried to resolve these issues, but progress has been very limited. We observe that almost all previous hardness amplification procedures work in a certain black-box way, and we show that when hardness amplification is done in such a black-box way, both issues are in fact unavoidable.

Extractors are functions which extract almost uniform bits from sources of biased and correlated bits, usually with the help of an additional short random seed as a catalyst.
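The black-box amplification procedures discussed above are exemplified by Yao's XOR construction: from a function f that is only mildly hard, build f'(x_1, ..., x_k) = f(x_1) XOR ... XOR f(x_k), which is much harder to predict on a random input. A minimal sketch, in which the predicate `f` and the parameter `k` are purely illustrative choices, not constructions from this work:

```python
from functools import reduce
from operator import xor

def amplify(f, k):
    """Yao's XOR construction: combine k independent copies of f by XOR.

    If f is mildly hard to compute on a random input, predicting the XOR
    of k independent evaluations is much harder.
    """
    def f_amp(xs):
        assert len(xs) == k
        return reduce(xor, (f(x) for x in xs))
    return f_amp

# Illustrative "mildly hard" predicate: parity of the bits of x.
f = lambda x: bin(x).count("1") % 2

f3 = amplify(f, 3)
# e.g. f3([0b101, 0b110, 0b111]) == 0 ^ 0 ^ 1 == 1
```

Note that this construction is black-box in exactly the sense above: f_amp queries f only as an oracle, without inspecting how f is computed.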
In addition to extracting randomness for randomized algorithms to use, extractors turn out to have a wide range of applications in many other fields and have played a fundamental and unifying role in the theory of pseudo-randomness. We provide the first explicit construction of extractors which are simultaneously optimal (up to constant factors) in both seed length and the number of bits extracted, settling an open problem that had been studied extensively for more than a decade. We also study the possibility of constructing extractors which do not need the additional seed, and we identify a general class of sources from which seedless extraction is achievable. We further consider sources which are not random at all in the traditional, statistical sense, but look slightly random to computationally bounded observers, and we show how to extract randomness from such sources. Finally, in addition to constructing extractors, we find a surprising
application of extractors in cryptography. In particular, we use extractors to build an encryption scheme with everlasting security: one that remains secure forever, even against future advances of any kind.
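To illustrate the two notions, the sketch below pairs a toy seeded extractor, drawn from a 2-universal hash family (whose output is close to uniform by the leftover hash lemma), with a one-time-pad encryption in the spirit of a bounded-storage setting. Everything here is a hedged illustration under assumed parameters, not the constructions of this work: the prime `p`, the hash family, and the names `ext`, `encrypt`, `decrypt` are all hypothetical.

```python
def ext(x, seed, m, p=2**61 - 1):
    """Toy seeded extractor: a 2-universal hash h_{a,b}(x) = (a*x + b) mod p.

    By the leftover hash lemma, the m-bit output is close to uniform
    whenever the weak source x carries enough min-entropy.
    """
    a, b = seed
    return ((a * x + b) % p) % (2 ** m)

def encrypt(msg, broadcast, key_seed, m):
    # Bounded-storage-style sketch: the pad is extracted from a huge public
    # random string `broadcast`, using the short shared key as the seed.
    # Once the broadcast is gone, an adversary who stored only part of it
    # lacks the information to ever reconstruct the pad.
    return msg ^ ext(broadcast, key_seed, m)

def decrypt(cipher, broadcast, key_seed, m):
    # The receiver, holding the same short seed, derives the same pad.
    return cipher ^ ext(broadcast, key_seed, m)
```

Correctness is just that both parties derive the same pad from the public broadcast, so `decrypt(encrypt(msg, ...), ...)` returns `msg`; the everlasting-security intuition is that security rests on the adversary's storage at broadcast time, not on any computational assumption that might later fail.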