Journal of Applied Mathematics and Computer Science, 7 (1997) 639-658
The choice of transfer functions in neural networks is of crucial
importance to their performance. Although sigmoidal transfer functions are
the most common, there is no {\em a priori\/} reason why they should be
optimal in all cases. In this article the advantages of various neural
transfer functions are discussed and several new types of functions are
introduced. Universal transfer functions, parametrized to change from a
localized to a delocalized type, are of greatest interest. Biradial
functions are formed from products or linear combinations of two sigmoids.
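As a minimal sketch (the parameter names $t$, $b$, $s$ and the exact
parametrization are illustrative, not necessarily those used in the paper),
a one-dimensional biradial function built as a product of two sigmoids can
be written as
\[
\mathrm{Bi}(x; t, b, s) = \sigma\big(s\,(x - t + b)\big)\,
\Big(1 - \sigma\big(s\,(x - t - b)\big)\Big),
\qquad \sigma(u) = \frac{1}{1 + e^{-u}},
\]
where $t$ plays the role of a center, $b$ controls the width and $s$ the
slope of the resulting window.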
Products of $N$ biradial functions in $N$-dimensional input space give
densities of arbitrary shapes, offering great flexibility in modelling the
probability density of the input vectors. Extensions of biradial
functions, offering a good tradeoff between the complexity of the transfer
functions and the flexibility of the densities they are able to represent,
are proposed.
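Under the same illustrative parametrization, the $N$-dimensional product
takes the form
\[
\mathrm{Bi}(\mathbf{x}; \mathbf{t}, \mathbf{b}, \mathbf{s}) =
\prod_{i=1}^{N} \sigma\big(s_i\,(x_i - t_i + b_i)\big)\,
\Big(1 - \sigma\big(s_i\,(x_i - t_i - b_i)\big)\Big),
\]
with independent center, width and slope parameters in each dimension.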
Biradial functions can be used as transfer functions in many types of
neural networks, such as RBF, RAN, FSM and IncNet. Using such functions
and taking the hard limit (steep slopes) facilitates a logical
interpretation of the network's performance, i.e.\ the extraction of
logical rules from the training data.
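To see why the hard limit admits a logical reading, note that in the sketch
above $\sigma(s\,u) \to \Theta(u)$ as $s \to \infty$ (with $\Theta$ the unit
step function), so that
\[
\lim_{s \to \infty} \mathrm{Bi}(x; t, b, s) =
\Theta(x - t + b)\,\Theta(t + b - x),
\]
the indicator of the interval $[t - b,\, t + b]$; a product of such factors
is then a crisp logical condition on the input vector.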