Kathleen Maffei
April 2006
CS 361 Emergence
Final Project

Evolving Neural Network Weights: Hunters & Prey

How do you evolve a predator?

I find evolutionary algorithms fascinating, so I wanted to evolve something. I decided to use Pyrobot because it provides a nice graphical interface in which results can be seen. I was interested in competition: comparing not only a traditionally programmed robot with an evolved one, but also different evolved robots against each other. A predator / prey scenario is a straightforward way to test success and to let two different evolved robots compete.

The Plot

Since robots have color sensors, it made sense to create prey and predators of different colors to help them identify one another. I used blue for prey and green for predators because the color sensors return values for red, green, and blue and I wanted the detection to be as clear as possible rather than distributed across two or more color values (like, say, purple). I started with a simple world, chaseWorld.py, with two robots positioned to start at diagonally-opposed corners.

Basic Brains

First I wrote two basic robot brains in python code, brainAvoid.py and brainChase.py. Both robots needed to avoid getting stuck in corners and colliding with the walls. After handling walls and corners, the chase robot needed to head for the prey and the avoid robot needed to head away from the predator. At first I thought the chase robot would be harder to program, since - unlike the avoid robot - it needed to perform two types of tasks: avoid one thing (walls) but aim for another (prey). I soon discovered that the avoid robot was more challenging; once it avoided walls, it had to decide on a direction to go based on a snapshot of the world that only tells it where the predator is, but not which direction it's heading. Real quarry might decide on an escape route based on the predator's course. (Of course, some more complex programming can surmount this issue; the robot could spread out decision-making and take readings across several steps, but this slows down the response time and wasn't necessary for this brain.) After a fair amount of tweaking, I had two reasonably well-performing robots.
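As a rough illustration of the kind of reactive rule brainChase.py applies on each step, here is a sketch written as a plain function from sensor summaries to (translate, rotate) commands. The thresholds, the front/left/right range summaries, and the per-side blue readings are my simplifications for illustration, not the actual code in brainChase.py.

    def chase_step(front, left, right, blue_left, blue_right):
        """Map one snapshot of sensor readings to (translate, rotate) commands.

        front, left, right: smallest range reading in each direction (larger = clearer).
        blue_left, blue_right: how much blue (the prey color) each side's camera sees.
        Thresholds are illustrative, not the values used in brainChase.py.
        """
        if front < 0.5:
            # Obstacle ahead: back up and rotate away from the closer wall.
            return -0.3, (1.0 if left < right else -1.0)
        if blue_left > blue_right:
            # Prey appears to the left: move forward while arcing left.
            return 0.5, 0.3
        # Otherwise arc right toward the prey.
        return 0.5, -0.3

    # Example: walls are far away, prey slightly to the right.
    print(chase_step(front=2.0, left=1.5, right=1.8, blue_left=0.1, blue_right=0.4))
    # -> (0.5, -0.3): drive forward, turning right.

brainAvoid.py follows the same pattern with the sign of the turn reversed (steer away from green rather than toward blue).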

To run them, download the chaseWorld.py, brainAvoid.py, and brainChase.py files. Type "pyrobot &" from a terminal window. Click the Server button, select "PyrobotSimulator" and click OK, then click the Home button and navigate to where you saved the chaseWorld.py file; the Simulator window will pop up with the chase world. Click the Robot button, select "PyrobotRobot60000.py" and click OK. Click the Brain button, click Home to navigate to where you saved the brainAvoid.py file, and click OK. Back in your terminal window, type "pyrobot &" again. This time skip the Server button; click the Robot button, select "PyrobotRobot60001.py" and click OK. Click the Brain button, click Home to navigate to where you saved the brainChase.py file, and click OK. Now you have both robots connected to their chase world, and you can click Run on each Pyrobot interface to get them started. Notice that their behavior is ultimately predictable: after they run for a while, click File->Reset on the Simulator window to reset their positions, and - under the same conditions, allowing for some variation due to sensor noise - the robots perform exactly the same moves.

(More information on running Pyrobot)

Evolving a Neural Network Brain

There are two more files involved in evolving a neural network brain. The first is a neural network brain, brainNN.py, to control a robot. It creates a neural network whose input layer has 14 units corresponding to the sensor values I used with the basic brains: 8 range sensors and 6 color sensors (red, green, and blue on each side). The output layer produces two values, translate and rotate, which control the robot's movement. On each step, the neural network brain grabs sensor data, feeds it into the network, propagates the network for an output, and uses the two output values to move the robot. All of the work of determining the robot's movement is done by the network.
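Concretely, the per-step computation amounts to a forward pass like the one sketched below. brainNN.py builds its network with Pyrobot's own classes; the direct input-to-output topology, the sigmoid activation, and the rescaling to [-1, 1] here are assumptions made purely for illustration.

    import math, random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def propagate(inputs, weights, biases):
        """One forward pass: 14 sensor inputs -> 2 outputs (translate, rotate).

        Assumes the simplest possible topology (inputs wired directly to the two
        outputs, sigmoid activation, outputs rescaled to [-1, 1]); the real
        brainNN.py may use a hidden layer and different scaling.
        """
        outputs = []
        for j in range(2):
            total = biases[j] + sum(w * x for w, x in zip(weights[j], inputs))
            outputs.append(2.0 * sigmoid(total) - 1.0)   # map (0, 1) -> (-1, 1)
        return outputs                                    # [translate, rotate]

    # 14 inputs: 8 range readings + 6 color readings (R, G, B on each side).
    sensors = [random.random() for _ in range(14)]
    weights = [[random.uniform(-1, 1) for _ in range(14)] for _ in range(2)]
    biases = [0.0, 0.0]
    translate, rotate = propagate(sensors, weights, biases)
    print(translate, rotate)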

The second file is a genetic algorithm. I created two of these: GAChaseNN.py to evolve a predator and GAAvoidNN.py to evolve prey. Each file opens a Pyrobot engine and sets up the robots, one with the neural network brain and one as an opponent. In evolving a predator, I used my brainAvoid.py as the opponent; in evolving prey, I used my brainChase.py as the opponent. Each genetic algorithm is designed to evolve a set of weights for the neural network brain to use.

In a genetic algorithm, solutions to a problem are encoded as lists of data (genes), and a population of them is created with random data. During each generation, the individuals are tested for fitness, selected to survive (with a preference for high fitness), mated to fill out the population again, and then randomly mutated. In my GA programs, the genes are sets of network weights. The fitness function sends the weights to the neural network brain, runs the two robots for a while, tests to see how well the neural network robot has done, and awards it a fitness value. The early generations of weights produce some very odd behavior because the algorithm begins with random numbers. By the sixteenth generation, there is some very reasonable and even effective behavior.
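In rough outline, a GA of this kind repeats the loop below. The population size, mutation rate, selection scheme, and the placeholder fitness function are illustrative choices, not the ones in GAChaseNN.py or GAAvoidNN.py, where the fitness evaluation is the expensive part: loading a genome's weights into brainNN.py and running the robot against its opponent.

    import random

    GENES = 2 * 14          # one weight per connection in the sketch network above
    POP_SIZE = 20           # placeholder; not the value used in the actual GA files
    MUTATION_RATE = 0.05

    def fitness(weights):
        # Placeholder. The real fitness function loads these weights into brainNN.py,
        # runs a trial against the opponent robot, and scores the result (for example,
        # how close the predator got, or how long the prey survived).
        return -sum(w * w for w in weights)

    def evolve(generations=16):
        population = [[random.uniform(-1, 1) for _ in range(GENES)]
                      for _ in range(POP_SIZE)]
        for gen in range(generations):
            scored = sorted(population, key=fitness, reverse=True)
            survivors = scored[:POP_SIZE // 2]              # keep the fitter half
            children = []
            while len(survivors) + len(children) < POP_SIZE:
                mom, dad = random.sample(survivors, 2)
                cut = random.randrange(GENES)               # one-point crossover
                children.append(mom[:cut] + dad[cut:])
            population = survivors + children
            for genome in population:                       # random mutation
                for i in range(GENES):
                    if random.random() < MUTATION_RATE:
                        genome[i] += random.gauss(0, 0.3)
        return max(population, key=fitness)

    best = evolve()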

To run a genetic algorithm, make sure that you have all the necessary files (the Pyrobot world file chaseWorld.py, the neural network brain brainNN.py, the genetic algorithm GAChaseNN.py or GAAvoidNN.py, and the opponent brain brainAvoid.py or brainChase.py) in the same folder. At a terminal window from that folder, type

    python GAChaseNN.py

or

    python GAAvoidNN.py

to begin the process. Evolution takes time!

The weights for each generation are saved into files named try1.wts, try2.wts, etc. Any of these can be renamed to save it for testing and comparing later. If you do not rename a weight file, it will be overwritten the next time you run one of these GAs.

Competition

To test the results and compare brains, I created a python program, runtrial.py, that takes three arguments: the name of a chase robot neural network weight file, the name of an avoid robot neural network weight file, and the number of trials you want to run. If you do not supply a weight file name ending in ".wts", the runtrial program will use the simple brain counterpart (brainAvoid.py or brainChase.py) instead.
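The choice between a weight file and a simple brain comes down to a suffix check on each argument. Here is a minimal sketch of that dispatch; the helper name and its defaults are hypothetical, not functions taken from runtrial.py.

    import sys

    def pick_brain(arg, weight_brain="brainNN.py", simple_brain="brainChase.py"):
        """Return (brain file, weight file or None) for one command-line argument.

        If the argument ends in ".wts" it is treated as a weight file for the
        neural network brain; otherwise the traditionally programmed brain is
        used instead. Illustrative only; runtrial.py's internals may differ.
        """
        if arg.endswith(".wts"):
            return weight_brain, arg
        return simple_brain, None

    if __name__ == "__main__":
        # Expects three arguments: chase weights, avoid weights, number of trials.
        chase_arg, avoid_arg, trials = sys.argv[1], sys.argv[2], int(sys.argv[3])
        print(pick_brain(chase_arg, simple_brain="brainChase.py"))
        print(pick_brain(avoid_arg, simple_brain="brainAvoid.py"))
        print("trials:", trials)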

To run this program, save it to the same folder as the other programs. At a terminal window from that folder, type

    python runtrial.py {chase.wts} {avoid.wts} n

where {chase.wts} is the name of the weight file for a chase robot (or anything without ".wts" if you want to use brainChase.py), {avoid.wts} is the name of the weight file for an avoid robot (or anything without ".wts" if you want to use brainAvoid.py), and n is the number of trials you want to run.

Surprises

One thing I noticed is that evolved brains seemed to be less predictable than traditionally programmed brains. When they are reset to their starting positions for trial after trial, they do not repeat the same behavior as consistently.

One of the avoid brains that I evolved came up with what seemed to me an interesting solution: it spent most of its time traveling in tight circles. Since my simple chase brain, brainChase.py, didn't make any sharp turns, the evolved brain was able to avoid it completely with its tight circles and a periodic short reverse whenever the green robot got somewhat close.

In fact, my evolved avoid brains turned out better than my evolved chase brains. I supposed this had to do with the quality of the opponents used to test their fitness: perhaps my traditionally programmed chase brain was a tougher opponent than my traditionally programmed avoid brain, putting more selection pressure on the evolved prey. On the contrary, in a 20-trial comparison between the two traditionally programmed brains, the avoid brain won 15 times. Maybe chasing is just a harder job after all.


* Note: Although GAChaseNN.py, GAAvoidNN.py, and runtrial.py all include the line os.putenv("PYROBOT","/usr/lib/python2.4/site-packages/pyrobot") to set the environment variable before opening Pyrobot, this line of code - for some unknown reason - only works on some workstations. The first time you try to run one of these programs, if the system returns an error about the PYROBOT environment variable, then you will need to set it manually once for that terminal session. Type in

    export PYROBOT=/usr/lib/python2.4/site-packages/pyrobot

(in a bash shell) and hit return, and then you'll be able to run them.