New Neural Network Design for Approximate Dynamic Programming and Optimal Multiuser Detection
Master's Thesis, Number: CSHCN MS 1998-1, Year: 1998, Advisor: John S. Baras
In this thesis we demonstrate that a new neural network design can be used to solve a class of difficult function approximation problems that are crucial to the field of approximate dynamic programming (ADP). Although conventional neural networks have been shown to approximate smooth functions very well, the use of ADP for problems of intelligent control or planning requires the approximation of functions that are not so smooth. As an example, this thesis studies the problem of approximating the $J$ function of dynamic programming applied to the task of navigating mazes in general, without the need to relearn each individual maze. Conventional neural networks, such as multi-layer perceptrons (MLPs), cannot learn this task, but a new type of neural network, the simultaneous recurrent network (SRN), can accomplish the required learning, as demonstrated by successful initial tests. In this thesis we also investigate the ability of recurrent neural networks to approximate MLPs and vice versa. Moreover, we present a comparison between using SRNs and MLPs to implement the optimal CDMA multiuser detector (OMD). This example is intended to demonstrate that SRNs can provide fast suboptimal solutions to hard combinatorial optimization problems, and achieve better bit-error-rate (BER) performance than MLPs.
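To make the architecture concrete: a simultaneous recurrent network applies a feedforward core repeatedly, feeding its own output back as an input, and reads out the (approximate) fixed point as the network's answer. The sketch below is a minimal illustration of that iteration scheme only, not the thesis's actual model; the single-layer `tanh` core, the weight names `W_in` and `W_rec`, and the fixed iteration count are all assumptions made for brevity.

```python
import numpy as np

def srn_forward(x, W_in, W_rec, n_iter=50):
    """Minimal SRN-style forward pass (illustrative sketch).

    A feedforward core y <- tanh(W_in x + W_rec y) is iterated with its
    output fed back in, and the settled state is returned. With small
    recurrent weights the map is a contraction, so the iteration
    approaches a fixed point.
    """
    y = np.zeros(W_rec.shape[0])  # start the recurrent state at zero
    for _ in range(n_iter):
        y = np.tanh(W_in @ x + W_rec @ y)
    return y
```

In practice the output would be read off once successive iterates stop changing; training such a network (e.g. by backpropagation through the relaxation) is what distinguishes the SRN from an ordinary MLP, whose single feedforward pass cannot express this kind of iterative, relaxation-based computation.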