Extraction of logical rules from training data using backpropagation
networks
Wlodzislaw Duch, Rafal Adamczak and Krzysztof Grabczewski
Department of Computer Methods, Nicholas Copernicus University,
Grudziadzka 5, 87-100 Torun, Poland.
E-mail: duch,raad,kgrabcze@phys.uni.torun.pl
A simple method for the extraction of logical rules from neural networks trained
with the backpropagation algorithm is presented. Logical interpretation is
assured by adding an extra term to the cost function, forcing the
weight values to be +/-1 or zero. An auxiliary constraint ensures that the
training process converges to a network with a maximal number of zero weights,
which, augmented by weight pruning, yields a minimal number of logical rules
extracted by analysis of the weights. Rules are generated consecutively,
from the most general, covering many training examples, to the most specific,
covering a few or even single cases. Any exceptions to these
rules are detected by additional neurons.
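The weight-constraint term described above can be sketched as a regularizer whose minima lie exactly at the values -1, 0 and +1. The penalty form below, lambda * w^2 * (w^2 - 1)^2, is one natural choice with that property; the exact functional form and the coefficient `lam` are illustrative assumptions, not necessarily those used in the paper.

```python
import numpy as np

def weight_constraint_penalty(w, lam=0.1):
    """Penalty that vanishes only when each weight is -1, 0 or +1.

    Illustrative form: lam * sum(w^2 * (w^2 - 1)^2). The exact term
    used in the paper may differ; this is a sketch of the idea.
    """
    return lam * np.sum(w**2 * (w**2 - 1.0)**2)

def weight_constraint_grad(w, lam=0.1):
    """Gradient of the penalty, added to the backpropagation update.

    d/dw [w^2 (w^2-1)^2] = 2w (w^2-1)(3w^2-1), so the gradient is
    zero at w in {-1, 0, +1} and pushes other weights toward them.
    """
    return lam * 2.0 * w * (w**2 - 1.0) * (3.0 * w**2 - 1.0)

# The penalty is zero at the target values and positive elsewhere,
# so minimizing (error + penalty) drives weights toward {-1, 0, +1}.
w_target = np.array([-1.0, 0.0, 1.0])
w_other = np.array([0.5, -0.3, 0.8])
print(weight_constraint_penalty(w_target))  # 0.0
print(weight_constraint_penalty(w_other) > 0.0)  # True
```

During training this gradient is simply added to the usual backpropagation gradient of the error, so the network is pulled toward a configuration whose weights can be read directly as logical conditions.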
Applied to the Iris classification problem, the algorithm generates 3 rules
that give 98.7% accuracy. The rules found for the three monks problems and
the mushroom problem classify all the examples correctly.