Feed-forward artificial neural networks (ANNs) have become increasingly popular over the last couple of years for feature recognition and function mapping problems in a wide range of applications. High energy physics (HEP) is no exception, with its demanding on-line and off-line analysis tasks. To date, the most commonly used architectures and procedures are the Multilayer Perceptron (MLP) with backpropagation updating and self-organizing networks. Both these approaches were implemented in JETNET 2.0. For the self-organizing networks nothing is changed in JETNET 3.0, and we refer the reader to refs. [1,4] for information on this part. For the MLP the most important additions and changes concern additional learning algorithm variants, learning parameters, and various tools for gauging performance and estimating error surfaces.
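To make the MLP-with-backpropagation scheme concrete, the following minimal sketch trains a one-hidden-layer perceptron on the XOR problem with plain online gradient descent. It is written in Python for readability only and bears no relation to the JETNET code itself; all names (`MLP`, `n_hidden`, `eta`, etc.) are our own illustrative choices.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class MLP:
    """One hidden layer, single sigmoid output, squared-error loss."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        # weight rows include one extra entry for the bias unit
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]

    def forward(self, x):
        self.x = list(x) + [1.0]                       # append bias
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, self.x)))
                  for row in self.w1]
        hb = self.h + [1.0]
        self.y = sigmoid(sum(w * hi for w, hi in zip(self.w2, hb)))
        return self.y

    def backprop(self, target, eta=0.5):
        # delta at the output for squared error with a sigmoid unit
        dy = (self.y - target) * self.y * (1.0 - self.y)
        hb = self.h + [1.0]
        # deltas at the hidden layer (chain rule through w2)
        dh = [dy * self.w2[j] * self.h[j] * (1.0 - self.h[j])
              for j in range(len(self.h))]
        for j in range(len(self.w2)):
            self.w2[j] -= eta * dy * hb[j]
        for j, row in enumerate(self.w1):
            for i in range(len(row)):
                row[i] -= eta * dh[j] * self.x[i]

# XOR: the classic problem a single-layer perceptron cannot solve
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = MLP(n_in=2, n_hidden=4)
err0 = sum((net.forward(x) - t) ** 2 for x, t in data)
for _ in range(5000):
    for x, t in data:
        net.forward(x)
        net.backprop(t)
err1 = sum((net.forward(x) - t) ** 2 for x, t in data)
```

The learning-algorithm variants discussed in this write-up (momentum terms, adaptive step sizes, second-order methods) all replace or refine the simple fixed-step weight update in `backprop` above.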
A number of learning algorithms are included in JETNET 3.0; they are discussed in Sect. 2 and documented in Sect. 5.
Besides a full description of the functionality and use of the various JETNET 3.0 subroutines, this write-up also contains a set of ``rules-of-thumb'' and guidelines on how to use the package in different situations.
However, we emphasize that, in addition to feature recognition and function mapping, there are ANN applications in HEP that require feed-back networks, which are not included in this package. In particular, we have in mind optimization networks used for track finding [11,12,13,14,15,16].
This write-up is organized as follows. In Sect. 2 we very briefly discuss the basic steps and variants when using feed-forward networks for learning. Discussions and prescriptions on which methods to use in various situations are found in Sect. 3. Some implementation issues with respect to JETNET 3.0 are contained in Sect. 4. The program components, together with switch and parameter descriptions, are listed in Sect. 5. Finally, Sect. 6 contains a list of technical restrictions and Sect. 7 a sample program.