Finally, we have the Java framework for learning controllers ready, complete with some simple examples of hard-coded and neural network-driven controllers, and evolutionary algorithms to go with it.
To try evolving a controller for yourself, first download this Java package: Java client and learning framework (beta). Start TORCS, and start a practice race in results-only mode. Then go to the raceclient/classes directory and type:
java raceclient.HillClimber raceclient.PerceptronController
…and evolution will happen before your eyes.
You might also try substituting raceclient.ES for raceclient.HillClimber, or raceclient.MLPController or raceclient.RMLPController for raceclient.PerceptronController.
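To give a feel for what the hill climber is doing under the hood, here is a minimal, self-contained sketch of a (1+1) hill climber. The toy fitness function and all names here are illustrative assumptions; in the framework, fitness would come from an actual TORCS race evaluation, and the genes would be, for example, perceptron weights.

```java
import java.util.Random;

// Sketch of a (1+1) hill climber: mutate the current best, keep the child
// if it scores at least as well. This is NOT the raceclient code, just an
// illustration of the algorithm it implements.
public class HillClimberSketch {
    static final Random rng = new Random(42);

    // Toy stand-in fitness: a quadratic with its optimum at 1.0 per gene.
    // In the framework, this would be distance raced in a TORCS trial.
    static double fitness(double[] genes) {
        double f = 0;
        for (double g : genes) f -= (g - 1.0) * (g - 1.0);
        return f;
    }

    // Gaussian mutation of every gene.
    static double[] mutate(double[] genes) {
        double[] child = genes.clone();
        for (int i = 0; i < child.length; i++) child[i] += rng.nextGaussian() * 0.1;
        return child;
    }

    public static void main(String[] args) {
        double[] best = new double[5];   // e.g. controller weights, starting at zero
        double bestFit = fitness(best);  // fitness of all-zero genes is -5.0 here
        for (int gen = 0; gen < 1000; gen++) {
            double[] child = mutate(best);
            double childFit = fitness(child);
            if (childFit >= bestFit) {   // keep the child if it is no worse
                best = child;
                bestFit = childFit;
            }
        }
        System.out.println("improved: " + (bestFit > -5.0));
    }
}
```

An evolution strategy (ES) differs mainly in keeping a population of candidates rather than a single incumbent, but the mutate-evaluate-select loop is the same.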
To watch the crown of evolution in action, switch your TORCS display mode back to normal, and type:
java raceclient.Play best-climbed.xml
You could then compare the performance of your first-born controller to a hand-coded one thus:
java raceclient.Play raceclient.SimpleSoloController
The important parameter that determines how long each trial lasts is found in raceclient.SoloDistanceEvaluator. It is currently set to 1000, which is arguably a bit short, but allows for relatively quick evolutionary runs.
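The role of that trial-length parameter can be sketched as follows. Note that the class, method, and constant names here (Car, step(), distanceRaced(), MAX_STEPS) are assumptions for illustration only; consult the actual raceclient.SoloDistanceEvaluator source for the real ones.

```java
// Hypothetical sketch of a distance-based evaluator with a capped trial
// length. Longer trials give a less noisy fitness estimate but slow down
// each evolutionary run.
public class EvaluatorSketch {
    static final int MAX_STEPS = 1000; // the trial-length setting discussed above

    interface Car { void step(); double distanceRaced(); }

    // Fitness = distance covered within the allotted number of steps.
    static double evaluate(Car car) {
        for (int i = 0; i < MAX_STEPS; i++) car.step();
        return car.distanceRaced();
    }

    public static void main(String[] args) {
        // Toy car that covers 1.5 units per step.
        Car toy = new Car() {
            double d = 0;
            public void step() { d += 1.5; }
            public double distanceRaced() { return d; }
        };
        System.out.println("fitness = " + evaluate(toy));
    }
}
```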
This is a beta version, and some things are likely to change, e.g. the way the evaluator handles delays in communicating with the server. We also plan to introduce a multi-car evaluator. The core interfaces, however, will almost certainly stay the same. In other words, you now have what you need to start evolving your own controllers!
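For readers wondering what writing a controller involves, here is a hedged sketch of the general shape such an interface plausibly takes: sensor readings in, driving commands out. The names and signatures below are assumptions, not the real API; the actual interfaces ship with the raceclient package.

```java
// Illustrative sketch only: a controller maps sensor readings to driving
// commands. A perceptron controller does this with a single weight layer.
public class ControllerSketch {
    interface Controller {
        // Returns {steering in [-1,1], throttle in [0,1]} for the given sensors.
        double[] control(double[] sensors);
    }

    // A trivial perceptron-style controller: one weight per sensor per output.
    static class Perceptron implements Controller {
        final double[][] w;
        Perceptron(double[][] w) { this.w = w; }
        public double[] control(double[] s) {
            double[] out = new double[w.length];
            for (int o = 0; o < w.length; o++)
                for (int i = 0; i < s.length; i++)
                    out[o] += w[o][i] * s[i];
            out[0] = Math.tanh(out[0]);               // squash steering to [-1,1]
            out[1] = 1.0 / (1.0 + Math.exp(-out[1])); // squash throttle to [0,1]
            return out;
        }
    }

    public static void main(String[] args) {
        Controller c = new Perceptron(new double[][]{{0.5, -0.5}, {1.0, 0.0}});
        double[] cmd = c.control(new double[]{0.2, 0.4});
        System.out.println("steering in range: " + (cmd[0] >= -1 && cmd[0] <= 1));
        System.out.println("throttle in range: " + (cmd[1] >= 0 && cmd[1] <= 1));
    }
}
```

An MLP or recurrent MLP controller would replace the single weight layer with a multi-layer (or recurrent) network, but present the same control interface to the evolutionary algorithm.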
Please mail us with installation problems, bugs, suggestions etc.