The other day, I was reading a paper by Rayfield et al. (paywall) on using graph and network theory to quantify properties of ecological landscapes. It is a review summarizing: which properties of landscape networks we might want to measure, the structural levels within networks at which we might want to measure these properties (e.g. node, neighborhood, connected component, etc.), and the metrics that can be used to measure a given property at a given structural level. The authors found dramatic variation in the number of metrics available across these categories. I was particularly struck by this comment, offering a potential explanation for the complete lack of component-level route redundancy metrics: “This omission could be attributed to, first, the importation of measures from other disciplines that prioritized network efficiency over network redundancy…”
In a later post I’m going to introduce a new artificial life system which I’m working on. This new system is based on Markov Network Brains, so I figured I’d take a little time to talk about them. Markov Brains use binary variables and arbitrary logic to implement deterministic or probabilistic finite state machines. They have been used to study behavior, character recognition, and game theory, among other topics. The majority of the work with Markov Brains has been done in Chris Adami’s lab. A Markov Brain consists of 3 parts: a set of binary variables called the Brain State, a collection of logic gates, and connections between the variables and the gates.
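To make those three parts concrete, here is a minimal, hypothetical sketch of a deterministic Markov Brain in Python. The `Gate` class, the lookup-table representation of a gate's logic, and the convention of OR-ing together multiple gates that write the same variable are all illustrative assumptions, not the canonical implementation:

```python
import itertools

class Gate:
    """A deterministic logic gate: reads bits at its input indices and
    writes bits at its output indices via an arbitrary lookup table."""
    def __init__(self, inputs, outputs, table):
        self.inputs = inputs    # indices into the Brain State read by this gate
        self.outputs = outputs  # indices into the Brain State written by this gate
        self.table = table      # maps input bit-tuple -> output bit-tuple

    def fire(self, state):
        key = tuple(state[i] for i in self.inputs)
        return dict(zip(self.outputs, self.table[key]))

def update(state, gates):
    """One update step: every gate reads the current Brain State, then all
    outputs are written into a fresh state vector. When several gates write
    the same variable, the bits are OR-ed (one possible convention)."""
    new_state = [0] * len(state)
    for gate in gates:
        for idx, bit in gate.fire(state).items():
            new_state[idx] |= bit
    return new_state

# Example: a single AND gate reading variables 0 and 1, writing variable 2.
and_table = {bits: (int(all(bits)),) for bits in itertools.product((0, 1), repeat=2)}
gates = [Gate(inputs=(0, 1), outputs=(2,), table=and_table)]
print(update([1, 1, 0], gates))  # -> [0, 0, 1]
```

Because the gate logic lives in a lookup table rather than in code, evolution can mutate the table entries (or the input/output connections) to produce arbitrary logic; a probabilistic Markov Brain would replace each table entry with a probability distribution over output bit-tuples.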
One of my stepping stones towards becoming an evolutionary biologist was playing with engaging programs that combine evolution and artificial life. Although these “games” are often intended for neither educational nor research purposes, I found them instrumental in developing an appreciation for the power and creativity of evolution by natural selection. Here is a collection of my favorites, in no particular order: Bitozoa http://www.bpp.com.pl/bitozoa2/bitozoa2.html
I recently discovered the Paper Machines add-on for Zotero, which allows you to perform visualizations and topic modeling analyses on papers in your Zotero collection. I just so happened to have the complete proceedings of both GECCO 2014 and ALife 2014 kicking around in my Zotero database, so I decided to try comparing them. As quick background, GECCO, which focuses on Genetic and Evolutionary Computation, and ALife, which focuses on Artificial Life, are the two main computer science* conferences that we in the Devolab tend to go to. There is substantial overlap between these conferences (GECCO has an Artificial Life track, after all), but there are also some fundamental differences in approach and focus. GECCO
“What I cannot create, I do not understand” — Richard Feynman In the Devolab, we use artificial life systems to improve our understanding of evolutionary dynamics. Specifically, we perform experiments on populations of self-replicating digital organisms that evolve in a natural and unconstrained manner. But why did we decide to focus on artificial life? What are the advantages and drawbacks of using these relatively complex computational systems?