“What I cannot create, I do not understand”
— Richard Feynman
In the Devolab, we use artificial life systems to improve our understanding of evolutionary dynamics. Specifically, we perform experiments on populations of self-replicating digital organisms that evolve in a natural and unconstrained manner. But why did we decide to focus on artificial life? What are the advantages and drawbacks of using these relatively complex computational systems?
Artificial life testbeds sit along a much broader continuum of evolutionary study systems: at one extreme are natural organisms in the real world (field studies) and experimental evolution in laboratories; at the other are traditional simulations and more applied evolutionary computation techniques. There are pros and cons to using each of these types of systems for testing evolutionary hypotheses, but many of them boil down to a tradeoff between “How realistic is evolution in the system?” and “How easy is it to perform controlled experiments in the system?”
Here’s how I’d characterize “typical” instances of these systems:
Clearly, I am overgeneralizing, but this diagram provides reasonable context for how these systems typically relate to each other. Of course, each has its own important uses.
Simulations tend to be programmed to investigate a single, specific dynamic; when you have such a targeted focus, they are usually the right way to go. They are extremely fast (typically on the order of seconds to hours for a run) and produce only a few categories of output, so the results are easy to analyze. The main concern with simulations is their limited scope. It can be easy to miss a change in a dynamic that occurs when an evolving population is given more degrees of freedom. For example, a common criticism of ecological simulations is that they assume a static set of species; the moment evolution is allowed, many of the dynamics will change. However, the core intuition gained from a simulation can be critical for designing good experiments with any of these other approaches.
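To make the “single, specific dynamic” point concrete, here is a minimal sketch of the sort of targeted ecological simulation described above: a discrete-time Lotka-Volterra predator-prey model. All parameter values are illustrative, and note that the species set is fixed — exactly the assumption that breaks down once evolution is allowed.

```python
# Minimal Euler-stepped Lotka-Volterra predator-prey simulation.
# Two fixed species, no evolution -- a "targeted" simulation whose
# entire output is two time series. Parameters are illustrative.

def simulate(steps=1000, dt=0.01,
             prey=8.0, pred=4.0,
             alpha=1.1, beta=0.4, delta=0.1, gamma=0.4):
    """Return the prey and predator abundance trajectories."""
    prey_hist, pred_hist = [prey], [pred]
    for _ in range(steps):
        d_prey = alpha * prey - beta * prey * pred   # growth - predation
        d_pred = delta * prey * pred - gamma * pred  # feeding - death
        prey += d_prey * dt
        pred += d_pred * dt
        prey_hist.append(prey)
        pred_hist.append(pred)
    return prey_hist, pred_hist

prey_hist, pred_hist = simulate()
```

A run like this takes milliseconds, and the results are trivial to analyze — but any question about what happens when the predators themselves can evolve is simply outside the model’s scope.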
Evolutionary algorithms are typically focused on solving a computational or engineering problem, rather than on elucidating principles of the natural world. These systems build a population of random solutions, evaluate the quality of each, and then select the “best” to populate the next generation (with new variation), often while incorporating other evolutionary principles intended to improve the final results. Of course, in order to best harness evolution, one must first understand it. As such, evolutionary computation research treats the process of evolution as an algorithm to be studied, optimizing the underlying representations and parameters. The fact that these systems focus on a targeted result makes them somewhat less open-ended than they might otherwise be — extraneous dynamics are removed if they don’t contribute to that ultimate goal. As such they can be quite fast, with typical experiments taking hours to days.
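The generate-evaluate-select loop described above can be sketched in a few lines. This is a deliberately minimal, hypothetical example (not any particular library’s API), evolving bitstrings toward the all-ones solution, with tournament selection providing the “best”-based reproduction and per-bit mutation providing the new variation:

```python
import random

# Minimal evolutionary algorithm sketch: maximize the number of 1s in
# a bitstring ("OneMax"). The loop mirrors the description above:
# evaluate fitness, select the better individuals, mutate offspring.
# All parameter values are illustrative.

random.seed(1)

POP_SIZE, GENOME_LEN, GENERATIONS, MUT_RATE = 50, 32, 100, 0.02

def fitness(genome):
    return sum(genome)  # count of 1 bits

def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

# Start from a population of random candidate solutions.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    next_gen = []
    for _ in range(POP_SIZE):
        # Tournament selection: the fitter of two random picks reproduces.
        a, b = random.sample(population, 2)
        parent = a if fitness(a) >= fitness(b) else b
        next_gen.append(mutate(parent))
    population = next_gen

best = max(population, key=fitness)
```

Note how the explicit, externally imposed fitness function is what makes this an optimizer rather than an open-ended system: any dynamic that doesn’t contribute to the score is selected away.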
Artificial life is an offshoot of evolutionary computation, with more of a focus on achieving open-ended evolutionary dynamics and realistic outcomes. Many experiments are performed with the goal of teasing apart how evolution works in nature, taking inspiration from experiments on biological organisms. The individuals in artificial life systems tend to be able to interact with each other and to have more realistic life cycles, including resource collection and replication. These extra details can make the systems slower than more traditional evolutionary algorithms; typical experiments take days, though it’s not too uncommon for them to extend into weeks. However, the relatively quick experimental times and more open-ended nature of these systems allow researchers to gain hands-on experience with real evolving systems and develop a broad intuition about the evolutionary process.
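The key structural difference from an evolutionary algorithm can also be sketched in code. In the hypothetical toy system below (not a real Devolab platform — systems like Avida are far richer), there is no fitness function at all: organisms collect resources and replicate once they have gathered enough, so faster collectors simply leave more descendants.

```python
import random

# Toy artificial-life-style system: organisms have a life cycle of
# resource collection and replication, with mutation on reproduction.
# No explicit fitness function -- replication success is emergent.
# All parameter values are illustrative.

random.seed(2)

CAPACITY, THRESHOLD, MUT_STD = 100, 10.0, 0.1

class Organism:
    def __init__(self, rate):
        self.rate = rate          # heritable resource-collection rate
        self.resources = 0.0

    def replicate(self):
        # Offspring inherits a mutated copy of the collection rate;
        # the parent pays by resetting its resource store.
        child_rate = max(0.0, self.rate + random.gauss(0, MUT_STD))
        self.resources = 0.0
        return Organism(child_rate)

population = [Organism(rate=1.0) for _ in range(10)]

for _ in range(200):                      # 200 update cycles
    for org in list(population):
        org.resources += org.rate         # collect resources
        if org.resources >= THRESHOLD:
            population.append(org.replicate())
    while len(population) > CAPACITY:     # finite world: random culling
        population.remove(random.choice(population))

mean_rate = sum(o.rate for o in population) / len(population)
```

Even in something this small, selection is implicit — organisms that collect faster replicate sooner and come to dominate — which is a taste of why these systems feel closer to biology than a fitness-driven optimizer does.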
Experimental evolution brings rapidly replicating organisms from nature into a much more controlled laboratory environment where they can be used to test specific evolutionary hypotheses. Usually these experiments focus on microbial organisms (e.g., viruses or bacteria) where many generations can occur in a single day, but in some cases macroscopic organisms will be used, such as plants, insects, or mice. Experimental evolution systems remain surprisingly tractable: experiments may take months to years, but substantial data can be collected in that time and in the case of microbes, cross-sections of the populations can be frozen at different time points for later revival. Plus, the fact that these are experiments with natural organisms means that there is more of an expectation of open-endedness and any results have a natural link back to the real world.
Field studies are the most direct way of studying evolution in the real world, but unfortunately they are also the most time and labor intensive. Experiments will typically take years — and sometimes even decades — to produce informative results. Even then, the data is labor intensive to collect, researchers must take care not to influence the organisms, and external factors are always altering the system (often in unknown ways). And yet, fundamentally, this is the only way to really learn about how evolution works in the real world. We struggle to obtain a fraction of the incredible complexity of behaviors, structures, and interactions found in nature using any of these other techniques. Of course, the challenges of field work mean that developing our theoretical frameworks in other ways is all the more important to guide these projects into the most productive directions and to maximize the value of any results produced.
As you might be able to tell, I’m a fan of all of these approaches, and I think every one of them is critical, in combination, to further our overall understanding of how evolution works. My personal inclination toward artificial life is threefold — I see it as the perfect mixture: (1) experiments are fast enough to provide real-time interaction with evolving systems, (2) evolution in artificial life systems is sufficiently open-ended to provide a rich set of evolutionary dynamics to learn about, and (3) data collection and simple analyses can be automated (though there’s a ton of data to sift through!). Another plus is that it’s a relatively small field compared to any of the others, and as such there’s still a lot of valuable low-hanging fruit.
I’m also an engineer at heart, and feel like I’m strongest at generating hypotheses when I employ the mindset of “Now, how would I build something that did that?” (hence the Feynman quote at the beginning). Constructing a system teaches you about how that system could work and how it can’t possibly work. For example, if you’re trying to understand how altruism evolves, you must first be able to build a system where altruism does evolve. Every time you try a new environmental configuration, you’re going to learn something regardless of whether or not your experiment works as expected — often the failures give the greatest insights. Once you really understand the underlying system, it will be much easier to build the environment that you’re looking for, which, in turn, will allow you to further explore the broader range of dynamics in question.
Some people don’t like computational approaches of any kind because they feel that a skilled researcher can make a computational system do just about anything they want it to. While this is not true (life would be easier if it were!), the real science comes in doing proper experimental studies to understand why you’re seeing a certain effect. As long as you properly explore your experimental parameters and build proper controls, you should be able to determine the range of conditions necessary to see the targeted effect (I’ll talk about experimental design in a future post).
Even so, once you do witness a dynamic, no matter how well you believe you understand it, you only have evidence that this dynamic works in YOUR system. You might be able to make strong mathematical arguments for why this dynamic should be universal, but the strongest evidence will come when you actually observe it in multiple other systems. The narrowness of a single-system result is true no matter whether your original experiments were in the field, in the lab, or in a computer. As such, one can’t overestimate the value of having good collaborators across fields.
[Edit: This was previously listed as part 1 of a series, but all previously planned posts are now being made stand-alone]