


Detailed info for site: www.demo.cs.brandeis.edu

First archived on: 2012-12-24

Page Rank (on archived date): 4    

Favicon: -

Title: DEMO


Original web address: http://www.demo.cs.brandeis.edu/coev_CA/


Keywords:

Description:
... of the rules' density over several generations. It can be observed that diversity is maintained for about 500 generations, during which different alternatives are explored simultaneously by the population of rules. Then a solution is discovered that moves the search into the area where density is close to 0.5. Afterward, several niches might still be explored; however, the coding used to construct the figures doesn't allow the representation of those different niches, since all the rules have a similar density. Usually, this technique results in better performance on average; however, it also takes more time to converge. This is a trade-off between speed and quality of solutions.

Figure 3: Evolution of the distribution of the density of rules over time.

Learning in an Adapting Environment: Coevolution

The idea of using coevolution in search was introduced by Hillis (1992). In his work, a population of parasites coevolves with a population of sorters. The goal of the parasites is to exploit weaknesses of the sorters by discovering input vectors that sorters cannot solve correctly, while sorters evolve strategies to become perfect sorting networks. In coevolution, individuals are evaluated with respect to other individuals instead of a fixed environment or landscape. As a result, agents adapt in response to other agents' behavior. For the work presented in this document, I will define coevolutionary learning as a learning procedure involving the coevolution of two populations: a population of learners and a population of problems. Moreover, the fitness of an individual in a population is defined only with respect to members of the other population. Two cases can be considered in such a framework, depending on whether the two populations benefit from each other or whether they have different interests, i.e., whether they are in conflict. Those two modes of interaction are called cooperative and competitive, respectively. In the following paragraphs, those modes of interaction are described experimentally in order to stress the different issues related to coevolutionary learning.

Cooperation between populations

In this mode of interaction, improvement on one side results in positive feedback on the other side. As a result, there is a reinforcement of the relationship between the two populations. From a search point of view, this can be seen as an exploitative strategy: agents are not encouraged to explore new areas of the search space, but only to perform local search in order to further strengthen the relationship. Following a natural extension of the evolutionary case to the cooperative model, the fitness of rules and ICs can be defined accordingly for the ρ_c = 1/2 task. For our experiments, the same setup as the one described previously is used. The population size is n_R = 200 for rules and n_IC = 200 for ICs. The populations of rules and ICs are initialized according to a uniform distribution over [0.0, 1.0] for the density. At each generation, the top 95% of each population reproduces to the next generation, and the remaining 5% is the result of crossover between parents from the top 95%, selected using a fitness-proportionate rule. Figure 4 presents the evolution of the density of rules and ICs for one run using this cooperative model. Without any surprise, the populations of rules and ICs quickly converge to a domain of the search space where ICs are easy for rules and rules consistently solve ICs. There is little exploration of the search space.

Figure 4: Evolution of the distribution of the density of rules (left) and the density of ICs (right) over time.
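The exact cooperative fitness used in this work is not reproduced in the extract above, so the following is only a minimal sketch, assuming a common shared-interest formulation for the ρ_c = 1/2 task: a rule is scored by the fraction of ICs it classifies correctly, and an IC is scored by the fraction of rules that classify it correctly, so both populations gain from every successful classification. The CA settings (radius 3, 149-cell lattice, 300 synchronous update steps) and all function names below are illustrative assumptions, not values taken from the original experiments.

import random

# Hedged sketch of a cooperative coevolutionary evaluation for the
# density-classification (rho_c = 1/2) task. The CA settings and the
# fitness definitions are assumptions, not the formula used in the
# original experiments.

RADIUS = 3      # neighbourhood radius of each CA rule
LATTICE = 149   # length of an initial configuration (IC)
STEPS = 300     # CA update steps before the final state is read off

def step(rule, state):
    """One synchronous update of a 1-D binary CA with periodic boundaries.
    `rule` is a lookup table with 2**(2*RADIUS + 1) entries."""
    n = len(state)
    new_state = []
    for i in range(n):
        index = 0
        for j in range(-RADIUS, RADIUS + 1):
            index = (index << 1) | state[(i + j) % n]
        new_state.append(rule[index])
    return new_state

def classifies_correctly(rule, ic):
    """True if the CA defined by `rule` relaxes to all 1s when the IC has a
    majority of 1s, and to all 0s otherwise (density classification)."""
    target = 1 if 2 * sum(ic) > len(ic) else 0
    state = ic
    for _ in range(STEPS):
        state = step(rule, state)
    return all(cell == target for cell in state)

def cooperative_fitness(rules, ics):
    """Assumed cooperative scoring: a rule earns the fraction of ICs it
    classifies correctly; an IC earns the fraction of rules that classify
    it correctly, so both sides benefit from successful classifications."""
    results = [[classifies_correctly(r, ic) for ic in ics] for r in rules]
    rule_fitness = [sum(row) / len(ics) for row in results]
    ic_fitness = [sum(results[i][j] for i in range(len(rules))) / len(rules)
                  for j in range(len(ics))]
    return rule_fitness, ic_fitness

# Tiny usage example (the text uses n_R = n_IC = 200; small sizes keep the
# pure-Python simulation fast).
if __name__ == "__main__":
    rules = [[random.randint(0, 1) for _ in range(2 ** (2 * RADIUS + 1))]
             for _ in range(3)]
    ics = [[random.randint(0, 1) for _ in range(LATTICE)] for _ in range(3)]
    print(cooperative_fitness(rules, ics))

Under a scoring of this kind, the two fitness values rise together, which matches the positive-feedback, exploitative dynamics described above.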
Competition between populations

In this mode of interaction, the two populations are in conflict: improvement on one side results in negative feedback for the other population. Two outcomes are possible:

1. Unstable behavior: alternately, agents in one population outperform members of the other population until those discover a strategy to defeat the first one, and vice versa. This results in an unstable behavior where species appear, then disappear, and reappear again in a later generation.

2. A stable state is reached: this happens if one population consistently outperforms the other, for instance when a very strong strategy is discovered by agents in one population.

For the ρ_c = 1/2 task, the fitness defined in the cooperative case can be modified as follows to implement the competitive model: the goal of the population of rules is to discover rules that defeat the ICs in the other population by allowing the CA to relax to the correct state, while the goal of the ICs is to defeat the population of rules by discovering initial configurations that are difficult to classify. Figure 5 describes an example of a run using this definition of the fitness.

In a first stage, the two populations exhibit a cyclic behavior. Rules with low density have an advantage because there are more ICs with low density. In response to this, ICs with high density have an evolutionary advantage and have more representatives in the population. In turn, rules with high density get an evolutionary advantage, and so on. This unstable behavior exhibited by the two populations is an instance of the Red Queen effect (Cliff & Miller, 1995): fitness landscapes change dynamically as agents from each population adapt in response to the evolution of members of the other population. The performance of individuals is evaluated in a changing environment, making continuous progress difficult. A typical consequence is that agents have to learn again what they already knew in the past. In the context of evolutionary search, this means that domains of the state space that have already been explored in the past are searched again. Then a stable state is reached: in this case, the population of rules adapts faster than the population of ICs, resulting in a population focusing only on rules with high density and eliminating all instances of low-density rules (a finite population is considered). Then, low-density ICs exploit those rules and overcome the entire population.

Figure 5: Evolution of the distribution of the ...
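For the competitive variant, a corresponding sketch (again an assumption standing in for the formula that is not reproduced in this extract) reuses classifies_correctly from the cooperative sketch above, but scores an IC by the fraction of rules that fail on it, so that improvement on one side is negative feedback for the other:

# Hedged sketch of the competitive modification, reusing
# classifies_correctly from the cooperative sketch above. The IC scoring
# is the assumed change: an IC is now rewarded for the fraction of rules
# that FAIL to classify it.

def competitive_fitness(rules, ics):
    """Assumed competitive scoring: rules are rewarded for classifying ICs
    correctly, while ICs are rewarded for being misclassified, so the two
    populations have opposing interests."""
    results = [[classifies_correctly(r, ic) for ic in ics] for r in rules]
    rule_fitness = [sum(row) / len(ics) for row in results]
    ic_fitness = [1.0 - sum(results[i][j] for i in range(len(rules))) / len(rules)
                  for j in range(len(ics))]
    return rule_fitness, ic_fitness

With a scoring of this kind, a rule population that starts solving most ICs drives IC fitness down and pushes the IC population toward harder configurations, which is the cyclic Red Queen dynamic described above.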

Other info
   Used charset: UTF-8

   Author: -
   (The author of the document)

   Generator: -
   (Name and version number of the publishing tool used to create the page)

Hosting server info (can be different or more than one):
IP address: 129.64.46.116
Country code: US
Country: UNITED STATES
Region: MASSACHUSETTS
City: WALTHAM
ZIP: 02454
Time Zone: -05:00
Latitude: 42.3765 Longitude: -71.2356