Competitive Relative Performance and Fitness Selection for Evolutionary Robotics

Title: Competitive Relative Performance and Fitness Selection for Evolutionary Robotics
Author: Nelson, Andrew Lincoln
Advisors: Edward Grant, Committee Chair
Mark White, Committee Member
Paul Ro, Committee Member
Wesley E. Snyder, Committee Member
John Muth, Committee Member
Abstract: Evolutionary Robotics (ER) is a field of research that applies evolutionary computing methods to the automated design and synthesis of behavioral robotics controllers. In the general case, reinforcement learning (RL) using high-level task performance feedback is applied to the evolution of controllers for autonomous mobile robots. This form of RL is required for the evolution of complex and non-trivial behaviors because a direct error-feedback signal is generally not available. Only the high-level behavior or task is known, not the complex sensor-motor signal mappings that will generate that behavior. Most work in the field has used evolutionary neural computing methods. Over the course of the preceding decade, ER research has been largely focused on proof-of-concept experiments. Such work has demonstrated both the evolvability of neural network controllers and the feasibility of implementing those evolved controllers on real robots. However, these proof-of-concept results leave important questions unanswered. In particular, no ER work to date has shown that it is possible to evolve complex controllers in the general case. The research described in this work addresses issues relevant to the extension of ER to generalized automated behavioral robotics controller synthesis. In particular, we focus on fitness selection function specification. The case is made that current methods of fitness selection represent the primary factor limiting the further development of ER. We formulate a fitness function that accommodates the Bootstrap Problem during early evolution, but that limits human bias in selection later in evolution. In addition, we apply ER methods to evolve networks that have far more inputs and far greater complexity than those used in other ER work. We focus on the evolution of robot controllers for the competitive team game Capture the Flag. Games are played in a variety of maze environments.
The robots use processed video data requiring 150 or more neural network inputs for sensing of their environment. The evolvable artificial neural network (ANN) controllers are of a general variable-size architecture that allows for arbitrary connectivity. Resulting evolved ANN controllers contain on the order of 5000 weights. The evolved controllers are tested in competitions of 240 games against hand-coded knowledge-based controllers. Results show that evolved controllers are competitive with the knowledge-based controllers and can win a modest majority of games in a large tournament in a challenging world configuration.
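The selection scheme the abstract describes, a hand-designed shaping term that overcomes the bootstrap problem early in evolution and a shift toward competitive relative performance later, can be illustrated with a minimal sketch. Everything here is hypothetical: the stand-in fitness terms, the blending schedule, and the simple truncation-selection loop are illustrative assumptions, not the dissertation's actual fitness function, network encoding, or tournament setup.

```python
import random

def bootstrap_fitness(genome):
    # Hypothetical shaping term: rewards basic competence so that early,
    # essentially random controllers still receive a selection gradient.
    # Stand-in objective: fitness peaks when all weights are near zero.
    return -sum(w * w for w in genome)

def competitive_fitness(genome, opponent):
    # Hypothetical relative-performance term: a win/loss score from a
    # head-to-head contest. Stand-in contest: compare a simple scalar.
    return 1.0 if sum(genome) > sum(opponent) else 0.0

def fitness(genome, opponent, generation, switch_gen=20):
    # Blend the two terms: rely on the shaping term early (bootstrap
    # problem), then shift selection pressure toward relative
    # competitive performance, limiting human bias in later selection.
    alpha = min(1.0, generation / switch_gen)
    return ((1 - alpha) * bootstrap_fitness(genome)
            + alpha * competitive_fitness(genome, opponent))

def evolve(pop_size=20, genome_len=8, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for gen in range(generations):
        # Each genome is scored against one randomly drawn opponent.
        scored = [(fitness(g, rng.choice(pop), gen), g) for g in pop]
        scored.sort(key=lambda t: t[0], reverse=True)
        # Truncation selection: the top half reproduces with
        # Gaussian weight mutation.
        parents = [g for _, g in scored[:pop_size // 2]]
        pop = [[w + rng.gauss(0, 0.1) for w in rng.choice(parents)]
               for _ in range(pop_size)]
    return pop
```

The key design point the sketch captures is the annealed blend: selection is dominated by the shaping term while controllers are incompetent, and by relative (opponent-dependent) performance once basic behaviors have emerged.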
Date: 2003-05-21
Degree: PhD
Discipline: Electrical Engineering

Files in this item

etd.pdf (2.727 MB, PDF)
