Wen-Kuang Chou and David Y. Y. Yun*
Department of Information Science
Shalu 43309, Taiwan R.O.C.
*Department of Electrical Engineering
University of Hawaii at Manoa
Honolulu, Hawaii 96822 U.S.A.
It has been pointed out that current neural networks cannot learn unless additional computing resources are used. In this paper, a REchargeable Parallel Learning Architecture (REPLA) is proposed to help hidden Markov models learn faster. It can be regarded as a learning heuristics controller (LHC) in the L3 model. By introducing REPLA and parallel learning algorithms, the time complexity of learning is reduced from O(max(M,N)NT) to O(max(M,N)T) per iteration. For an N-parallelism architecture, REPLA achieves 100 percent processor utilization. The significance of REPLA lies in its faster learning and reusability.
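As a rough illustration (not the paper's algorithm), the per-iteration cost reduction can be sketched by counting operations in the HMM forward recursion, assuming one processor per state; the function name and counting scheme here are hypothetical:

```python
def forward_costs(N, T):
    """Count multiply-add steps for an N-state, T-step HMM forward pass.

    Serially, each of the N states sums over N predecessors at every time
    step: O(N*N*T) total work.  With N processors (one per state, as in an
    N-parallelism architecture), the N states of a time step are computed
    simultaneously, so the critical path shrinks to O(N*T).
    """
    serial = 0    # total operations on a single processor
    parallel = 0  # critical-path length with N processors
    for _t in range(T):
        step = [N for _j in range(N)]  # state j sums over N predecessors
        serial += sum(step)            # all work done sequentially
        parallel += max(step)          # states run concurrently
    return serial, parallel

serial, parallel = forward_costs(N=10, T=100)
# serial = N*N*T = 10000, parallel = N*T = 1000
```

With N ≥ M this matches the claimed drop from O(max(M,N)NT) to O(max(M,N)T) per iteration.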
Keywords: hierarchical neural models (L3), massively parallel architecture for recalling (MPAR), learning heuristics controller (LHC), hidden Markov models (HMM), computer-aided learning, parallel algorithms and architectures
Received July 29, 1991; revised March 31, 1993.
Communicated by Jun S. Huang.