Even though the target values **t** in a classification problem are binary,
the output units of an MLP will not take values that are exactly
**1** or **0**. These outputs can nevertheless be interpreted in a very useful way:
they correspond to Bayesian *a posteriori* probabilities
[24], provided that:

- The network is trained accurately.
- The outputs are of **1**-of-**M** type, i.e. the task is coded so that only one output unit is "on" at a time.
- A mean-square, cross-entropy, or Kullback-Leibler error function is used.
- The training data points are sampled with the correct *a priori* probabilities.
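As a minimal sketch of this property (not from the original text), the following example trains a single logistic output unit with cross-entropy on 1-D data drawn from two Gaussian classes with equal priors. Under these assumptions the exact Bayes posterior is known in closed form, so the trained output can be compared against it directly; the means, variances, learning rate, and iteration count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D Gaussian classes, means -1 and +1, unit variance,
# sampled with the correct (equal) a priori probabilities.
n = 20000
y = rng.random(n) < 0.5                                  # binary class labels
x = np.where(y, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

# A one-unit "network" (logistic output) trained by full-batch
# gradient descent on the cross-entropy error function.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))               # network output
    grad_w = np.mean((p - y) * x)                        # dE/dw for cross-entropy
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# For this setup the analytic Bayes posterior P(class 1 | x)
# is sigmoid(2x), so the trained output should approximate it.
xs = np.linspace(-3.0, 3.0, 7)
net = 1.0 / (1.0 + np.exp(-(w * xs + b)))
bayes = 1.0 / (1.0 + np.exp(-2.0 * xs))
print("max |network - Bayes posterior|:", np.max(np.abs(net - bayes)))
```

With enough training data and iterations, the learned weight approaches 2 and the bias approaches 0, so the network output tracks the true posterior closely; the residual discrepancy comes from finite sampling, illustrating why the "accurate training" condition above matters.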
