**NOTE**

This is a repost of the article that I wrote for FreeBSD.MU because I think it really belongs here.

*Introduction*

Operation Flashpoint, published by Codemasters, is one of my favourite games. It is very immersive thanks to an assortment of authentic weapons, vehicles and realistic sound effects. But, perhaps, its most important ingredient is the artificial intelligence engine. No wonder, then, that countless hours of gameplay have rekindled my dormant interest in artificial intelligence and artificial neural networks (ANNs).

In this article, I will document how I implemented an ANN using the PHP scripting language. Theories, formulas and respective proofs will not be covered; for details, please visit the links in the next section.

Download the source code here.

**Neural networks**

*What is an artificial neural network?*

An *artificial neural network* is a model of the organic brain. It attempts to reproduce the interactions between the neurons in the brain during the learning and thinking process. It works by applying mathematical formulae obtained from medical studies of how the actual brain works. For a more detailed definition, please see here.

*Types of ANNs*

There are several different types of ANNs. This implementation will model the **feed-forward, multi-layer neural network**. See here.

*Learning*

An ANN learns in the same way as the natural brain, that is, by reinforcing the connections between neurons. Several learning (or training) algorithms have been devised. The one my implementation uses is **backpropagation**, also known as **BACKPROP**. See here.

*Getting more information*

The reference is, without doubt, the comp.ai.neural-nets FAQ.

**PHP implementation**

*Choice of language*

Several tutorials for developing ANNs are already available on the Internet. However, most of these cover the usual languages for such a task, that is, C, C++ and Java. Also, a procedural approach is very often adopted instead of an object-oriented one, even for the tutorials using Java.

I chose to develop in PHP to take advantage of its assortment of vector (array) manipulation functions and, thanks to the interpreted nature of PHP scripts, of a shorter coding and debugging cycle while learning the algorithms.

This implementation makes extensive use of OOP techniques. I recommend that readers familiarise themselves with these concepts before proceeding.

*Basics*

If you have skipped the theory explained at the web sites listed above, hopefully, the following will get you up to speed.

A *multi-layer* ANN consists of *at least* three layers: an **input layer**, at least one **hidden layer** and an **output layer**. There can be any number of hidden layers, each with any number of neurons *(hidden neurons)*.

In a *feed-forward* ANN, each input is fed into every neuron of the first hidden layer; the outputs of that layer are fed into the neurons of the next layer, and so on, until the output neurons receive their inputs and produce the final outputs.

Additionally, a *bias* input may be fed into each layer for better results. Please see here for an explanation of the importance of the bias input.

Each input has a weight that is initially set to a random value — usually between -1.0 and 1.0; during training, the weights are adjusted using an error-correction algorithm until the ANN gives the desired output. In essence, the final weights make up the “knowledge” acquired by the ANN. It should be noted, however, that each set of weights will only work with an ANN having the same architecture (same number of inputs, layers, hidden neurons and outputs) as the one from which it was obtained.

The simplest definition of the output of an artificial neuron is the result obtained when the sum of its weighted inputs is passed through an activation function. In our case, the sigmoid function will be used. Its formula is

f(x) = 1 / (1 + exp(-x))

where *exp()* is the exponential function.

Therefore, given three inputs *x1*, *x2* and *x3*, with weights *w1*, *w2*, *w3*, respectively, to a neuron, the output can be obtained as follows:

**Step 1** – Calculating the sum of weighted inputs

sum = (x1 * w1) + (x2 * w2) + (x3 * w3)

**Step 2** – Calculating the output

output = 1 / (1 + exp(-sum))
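The two steps above can be sketched in a few lines of code. The article's implementation is in PHP; the following is a language-neutral Python illustration, and the input and weight values are made up for the example:

```python
import math

def sigmoid(x):
    # Sigmoid activation: maps any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights):
    # Step 1: sum of weighted inputs; Step 2: pass the sum through the sigmoid
    total = sum(x * w for x, w in zip(inputs, weights))
    return sigmoid(total)

# Three illustrative inputs x1..x3 with weights w1..w3
out = neuron_output([1.0, 0.5, -1.0], [0.2, 0.4, 0.1])
# sum = 0.2 + 0.2 - 0.1 = 0.3, so out = sigmoid(0.3) ≈ 0.574
```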

Given an ANN with *n* output neurons, *n* outputs are expected. Each output is calculated using the formulae above; together, they make up the ANN’s final output, a vector of values.

It should be noted that the above sigmoid function only outputs results between 0 and 1. Therefore, some kind of scaling should be applied to the results to obtain values outside this range.

An ANN, using the BACKPROP algorithm, is trained by repeatedly feeding a set of inputs into it and adjusting its weights according to the discrepancy between the actual outputs and the desired outputs. The iteration continues until an acceptable discrepancy is reached.

The adjustment (or weight change) for each input is proportional to its value. So,

weight change = learning rate * input * delta

The *learning rate* is an arbitrary value that dictates how fast the network should learn. The *delta* is the rate of change of the discrepancy with respect to the output for the neuron; it is determined by using the delta rule. For a general definition, see:

http://uhaweb.hartford.edu/compsci/neural-networks-delta-rule.html

http://diwww.epfl.ch/mantra/tutorial/english/apb/html/theory.html

Calculation of the delta for an output neuron is easily obtained by using the following formula.

delta = actual output * (1 - actual output) * (desired output - actual output)

Calculation of the delta for a hidden neuron is more complex because it depends on the delta values of the neurons in the following layer (the layer it feeds into), since the adjustment proceeds backwards from the output layer to the input layer.

To calculate the delta for a neuron which feeds its output to *n* neurons in the next layer, the following steps are required.

**Step 1** – Calculate, for each of the *n* neurons, the product of the connecting weight and that neuron’s delta, and sum the results

sum = (weight_1 * delta_1) + (weight_2 * delta_2) + ... + (weight_n * delta_n)

where *delta_i* is the delta of the *i*-th neuron and *weight_i* is the weight of the connection feeding it.

**Step 2** – Calculate the delta for the hidden neuron

delta = actual output * (1 - actual output) * sum
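The three update formulae (output delta, hidden delta and weight change) translate directly into code. A minimal Python sketch, not the article's PHP:

```python
def output_delta(actual, desired):
    # delta = actual output * (1 - actual output) * (desired output - actual output)
    return actual * (1.0 - actual) * (desired - actual)

def hidden_delta(actual, downstream_weights, downstream_deltas):
    # Step 1: sum the product of each outgoing weight and the downstream delta
    s = sum(w * d for w, d in zip(downstream_weights, downstream_deltas))
    # Step 2: delta = actual output * (1 - actual output) * sum
    return actual * (1.0 - actual) * s

def adjusted_weight(weight, learning_rate, input_value, delta):
    # weight change = learning rate * input * delta
    return weight + learning_rate * input_value * delta
```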

The concepts and formulae above are sufficient for a successful implementation.
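Putting the formulae together, the following self-contained Python sketch (not the article's PHP classes) trains a tiny network on XOR using exactly these update rules; the layer size, learning rate and iteration count are illustrative choices:

```python
import math
import random

random.seed(42)  # deterministic weights for reproducibility

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """Two inputs, one hidden layer, one output; each neuron has a bias weight."""
    def __init__(self, hidden=4):
        # hidden-layer weights: [w1, w2, bias] per hidden neuron, random in [-1, 1]
        self.wh = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(hidden)]
        # output-neuron weights: one per hidden neuron, plus a bias weight
        self.wo = [random.uniform(-1.0, 1.0) for _ in range(hidden + 1)]

    def forward(self, x):
        # feed-forward pass: weighted sums through the sigmoid, layer by layer
        self.h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.wh]
        total = sum(w * h for w, h in zip(self.wo, self.h)) + self.wo[-1]
        self.o = sigmoid(total)
        return self.o

    def train(self, x, target, lr=0.5):
        o = self.forward(x)
        # output delta: actual * (1 - actual) * (desired - actual)
        d_out = o * (1.0 - o) * (target - o)
        # hidden delta: actual * (1 - actual) * (downstream weight * downstream delta)
        d_hid = [h * (1.0 - h) * self.wo[i] * d_out for i, h in enumerate(self.h)]
        # weight change = learning rate * input * delta
        for i, h in enumerate(self.h):
            self.wo[i] += lr * h * d_out
        self.wo[-1] += lr * 1.0 * d_out          # bias input is 1
        for i, w in enumerate(self.wh):
            w[0] += lr * x[0] * d_hid[i]
            w[1] += lr * x[1] * d_hid[i]
            w[2] += lr * 1.0 * d_hid[i]          # bias input is 1

net = TinyNet()
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
for _ in range(10000):
    x, t = random.choice(data)
    net.train(x, t)
```

After training, the total squared error over the four XOR patterns should be much lower than for a freshly initialised network.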

*Components*

The entire implementation consists of four classes.

### Maths

This class provides two static methods, *random()* and *sigmoid()*.

*random()* – generates random numbers within the specified limits

*sigmoid()* – implements the sigmoid function described earlier

### Neuron

This class abstracts a neuron. It holds an array of inputs, an array of weights, the output and the calculated delta.

The output is calculated by calling the *activate()* method and read by calling the *getOutput()* accessor method. The method *setDelta()* sets the delta for the neuron, and *adjustWeights()* adjusts the weights according to the delta and the learning rate.
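As an illustration of how such a class might look (in Python rather than the article's PHP; the method names follow the description above, while the bodies are my own minimal guesses, not the archive's code):

```python
import math
import random

class Neuron:
    """Minimal sketch of the Neuron class described above."""
    def __init__(self, num_inputs):
        # weights start at random values, usually between -1.0 and 1.0
        self.weights = [random.uniform(-1.0, 1.0) for _ in range(num_inputs)]
        self.inputs = [0.0] * num_inputs
        self.output = 0.0
        self.delta = 0.0

    def set_inputs(self, inputs):
        self.inputs = list(inputs)

    def activate(self):
        # sum of weighted inputs passed through the sigmoid
        total = sum(x * w for x, w in zip(self.inputs, self.weights))
        self.output = 1.0 / (1.0 + math.exp(-total))

    def get_output(self):
        return self.output

    def set_delta(self, delta):
        self.delta = delta

    def adjust_weights(self, learning_rate):
        # weight change = learning rate * input * delta
        for i, x in enumerate(self.inputs):
            self.weights[i] += learning_rate * x * self.delta
```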

### Layer

This class abstracts a network layer. It holds a vector of neurons and a vector of their outputs, and provides functions to calculate the deltas of each neuron according to the type of the layer: for an output layer, *calculateOutputDeltas()* is used; for a hidden layer, *calculateHiddenDeltas()*. These two functions set the delta of each neuron of the layer.

The method *activate()* activates each neuron in turn. The accessor method *getOutputs()* returns the outputs of all the neurons as a vector; these are then either fed into the next layer’s neurons, or returned as the network output.

### Network

This class abstracts the artificial neural network. The constructor takes arguments for the number of hidden layers, the number of neurons per hidden layer and the number of outputs.

The most important methods of this class are *setInputs()*, *train()*, *activate()* and *getOutputs()*. The network takes a vector of values as input and outputs a vector of values as output.

The methods *save()* and *load()* save the network architecture and weights and load a stored network, respectively.

**Installation and usage**

*Contents of archive*

The archive contains the following files.

*nn.php* – the ANN implementation classes

*XOR_Training.php* – the sample script for training an XOR network

*XOR_Run.php* – the sample script for evaluating XOR operations

*xor.dat* – the saved XOR network architecture and weights

*Creating a neural network*

To create an ANN, you will need to include the file *nn.php*. Using the classes, you can structure your ANN as you wish. Once you have trained your network, you can save it to a file.

To use your network for evaluations, you need to restore the network from the saved file, feed it with inputs and get the output.

*Training*

Training is achieved by feeding a well-prepared set of inputs and desired outputs to the network.

For example,

```php
...

// training inputs
$inputs = array(
    array(0, 0),
    array(0, 1),
    array(1, 0),
    array(1, 1)
);

// desired outputs
$outputs = array(
    array(0),
    array(1),
    array(1),
    array(0)
);

// repeat the training until desired results are obtained
for ($i = 0; $i < 10000; $i++) {
    $j = Maths::random(0, 3);
    $network->setInputs($inputs[$j]);
    $network->train(0.5, $outputs[$j]);
}

// save the network to a file
$network->save("xor.dat");
```

It is recommended that the network be saved at regular intervals so that training does not have to restart from scratch each time.

*Evaluating*

To evaluate a set of inputs, the network needs to be loaded from the file and fed with the inputs, and the *activate()* method called. The output is obtained by calling the *getOutputs()* method.

For example,

```php
$network = Network::load("xor.dat");

if ($network == null) {
    echo "\nNetwork not found. Creating a new one...";
    $network =& new Network(1, 10, 1);
}

$inputs = array(
    array(0, 0),
    array(0, 1),
    array(1, 0),
    array(1, 1)
);

for ($i = 0; $i < 4; $i++) {
    $network->setInputs($inputs[$i]);
    $network->activate();
    echo "\n";
    print_r($network->getOutputs());
}
```

*UPDATE:*

Fixed bug caused by weights not being initialised in the hidden layers.


Hi,

PHP bindings have just been released for the Fast Artificial Neural Network Library (FANN):

http://fann.sourceforge.net/

Thought you might want to know.

Regards,

Steffen

If you use PHP as an Apache module, the following problem can appear:

When you load, train and save a network, and then want to load the network again in the same PHP script, the network is not loaded correctly, because the function filesize() returns the old size of the serialised network (if the new file size is larger than the old one, the network is broken and not loaded).

To fix this, add a clearstatcache() call before the filesize() call in the network-load function (in nn.php).

Hi sir, can you please send me the C code for the XOR problem using an ANN with backpropagation?

The whole point of this article is to show how to write an ANN in PHP. There are many other examples in C that you can get elsewhere on the Internet. Unfortunately, I do not have any.

I think you can design it yourself. Just read carefully about the XOR problem in any good book; for example, you can go through *Neural Networks* by Simon Haykin.

Where can I download the files?


Fixed broken link to download the file.

Is it applicable to SOMs (self-organising maps)?

Not a proper SOM, but by combining two networks and training one as a trainer, you can achieve something close enough.