EXPLORATIONS OF THE MEAN FIELD THEORY LEARNING ALGORITHM
Eric Hartman and Carsten Peterson
Abstract: The mean field theory (MFT) learning algorithm is
elaborated and explored with respect to a variety of tasks.
MFT is benchmarked against the back propagation learning
algorithm (BP) on two different feature recognition problems:
two-dimensional mirror symmetry and multi-dimensional
statistical pattern classification. We find that while the
two algorithms are very similar with respect to generalization
properties, MFT normally requires substantially fewer training
epochs than BP. Since the MFT model is bidirectional,
rather than feed-forward, its use can be extended naturally from
purely functional mappings to a content-addressable memory. A
network with N visible and N hidden units can store up to
approximately 2N patterns with good content-addressability. We
stress an implementational advantage of MFT: it maps naturally
onto VLSI circuitry.
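The relaxation at the heart of MFT can be illustrated with a minimal sketch. The standard mean field equations replace stochastic Boltzmann-machine units with deterministic values V_i = tanh((1/T) * sum_j w_ij V_j), iterated to a fixed point at temperature T. The function names, the sequential update scheme, and the toy two-unit weight matrix below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mft_settle(W, V0, T=1.0, tol=1e-6, max_sweeps=200):
    """Iterate the mean field equations V_i = tanh((W @ V)_i / T)
    to a fixed point, using sequential (unit-by-unit) updates.

    W  : symmetric weight matrix (zero diagonal)
    V0 : initial unit values in (-1, 1)
    T  : temperature of the mean field approximation
    """
    V = np.array(V0, dtype=float)
    for _ in range(max_sweeps):
        V_old = V.copy()
        for i in range(len(V)):
            # sequential updates avoid the oscillations that
            # synchronous sweeps can produce with symmetric weights
            V[i] = np.tanh(W[i] @ V / T)
        if np.max(np.abs(V - V_old)) < tol:
            break
    return V

# Toy example: two mutually excitatory units settle into a
# mutually consistent state (both near +0.957 for T = 0.5).
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
V = mft_settle(W, np.array([0.1, 0.05]), T=0.5)
```

Content-addressability follows the same mechanism: units corresponding to the known part of a pattern are clamped to their stored values, and the remaining units are relaxed with the equations above to complete the pattern.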