In my quest to create computer programmes that generate aesthetic systems with emergent complexity, I thought it would be interesting to create software that mutates computer code, as opposed to software that “just” mutates data. However, the immediate challenge was that most mutations of computer code cause infinite loops, stack overflows, rainbow spinners, blue screens of death, and core dumps. I had basically stumbled upon the notorious halting problem.
In 2019 I encountered an algorithm developed by Julian F. Miller and Peter Thomson, which was later dubbed “Cartesian Genetic Programming” (1999). After some study I was able to write my own implementation in TypeScript. By defining a visual programme as a two-dimensional grid (layers and nodes) I could safely mutate it without ever getting stuck in an infinite loop.
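To make this concrete, here is a minimal sketch of what such a grid-based genome can look like. This is my own illustrative TypeScript, not Miller’s reference implementation; the names, operations and constants are assumptions. The key property is that every node may only read from the programme inputs or from earlier nodes, so any mutation still yields a finite, loop-free programme.

```typescript
// A CGP-style genome as a feed-forward sequence of nodes. Every node may
// only connect to programme inputs or to earlier nodes, so mutation can
// never introduce a cycle (and hence no infinite loop).

type Op = (a: number, b: number) => number;
const OPS: Op[] = [
  (a, b) => a + b,
  (a, b) => a - b,
  (a, b) => a * b,
];

interface CgpNode { op: number; in1: number; in2: number; }
interface Genome { nodes: CgpNode[]; output: number; }

const N_INPUTS = 2; // addresses 0..1 are programme inputs
const randInt = (max: number) => Math.floor(Math.random() * max);

function randomGenome(nNodes: number): Genome {
  const nodes: CgpNode[] = [];
  for (let i = 0; i < nNodes; i++) {
    const addr = N_INPUTS + i; // only inputs and earlier nodes are visible
    nodes.push({ op: randInt(OPS.length), in1: randInt(addr), in2: randInt(addr) });
  }
  return { nodes, output: randInt(N_INPUTS + nNodes) };
}

// Point mutation: change one gene while respecting the feed-forward rule.
function mutate(g: Genome): Genome {
  const nodes = g.nodes.map((n) => ({ ...n }));
  const i = randInt(nodes.length);
  const addr = N_INPUTS + i;
  switch (randInt(3)) {
    case 0: nodes[i].op = randInt(OPS.length); break;
    case 1: nodes[i].in1 = randInt(addr); break;
    default: nodes[i].in2 = randInt(addr); break;
  }
  return { nodes, output: g.output };
}

// Evaluate with a single left-to-right sweep over the nodes; it always halts.
function evaluate(g: Genome, inputs: number[]): number {
  const values = [...inputs];
  for (const n of g.nodes) {
    values.push(OPS[n.op](values[n.in1], values[n.in2]));
  }
  return values[g.output];
}
```

Because `evaluate` is one pass over the node list, every mutant produced by `mutate` terminates, which is how this representation sidesteps the halting problem described above.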
I was delighted when Julian agreed to an interview!
Dear Julian, could you tell me about your personal, academic and technical background?
When I was a young man, I wanted to be a theoretical physicist. My boyhood hero was Albert Einstein. My bachelor’s degree was in Physics. After that I obtained a PhD in mathematics. The subject of my PhD was the interaction of strange waves called solitons. I thought they were good candidates for elementary particles. The equations describing how solitons interacted could often only be solved numerically on computers. So, I wrote computer programs to do this in Fortran and Basic. After my PhD I grew doubtful that this line of research would be fruitful.
For various reasons, I moved to Scotland and obtained a job at Napier University, helping staff in the electronic engineering department develop computer programs, mainly algorithms for building electronic circuits efficiently. One day in 1994 a colleague mentioned Genetic Algorithms. I liked the idea and tried it on circuit optimization. It immediately worked well and soon led to many publications. This was a life-changing moment for me.
Could you explain what genetic programming is, and more specifically “Cartesian Genetic Programming”?
Genetic programming (and CGP) uses a simplified model of Darwinian evolution to design computational structures that solve many kinds of problems. The designs are represented as a string of information, a bit like a biological chromosome. As in biology, these computational structures are collections of smaller calculations, which are called genes. The artificial chromosomes can be slightly altered – this is called mutation. Also, parts of artificial chromosomes representing two different designs can be mixed, rather like sexual recombination in biology. A population of these chromosomes is created (randomly at first) and at each generation all the chromosomes are tested to see how well they solve a problem. The ones that are not so good are rejected and replaced by fitter chromosomes, some of which have been created by mutation or recombination. Over many generations one obtains chromosomes that can solve problems very well. GP allows one to solve problems using computational structures that have been automatically evolved, and solutions can emerge that humans would not think of. In this sense it is a creative process.
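The generational loop described above can be sketched in a few lines. The toy problem, constants and names below are my own assumptions (evolving a vector of numbers toward a fixed target), chosen only to show the test–reject–refill cycle, not any particular GP system:

```typescript
// A minimal generational evolutionary loop: score everyone, keep the
// fitter half, refill the population with mutated copies of the survivors.

type Chromosome = number[];

const TARGET = [1, 2, 3, 4]; // illustrative goal for the toy fitness test

// Fitness: negative distance to the target (higher is better, 0 is perfect).
function fitness(c: Chromosome): number {
  return -c.reduce((sum, gene, i) => sum + Math.abs(gene - TARGET[i]), 0);
}

// Mutation: a small random change to one gene.
function mutate(c: Chromosome): Chromosome {
  const child = [...c];
  const i = Math.floor(Math.random() * child.length);
  child[i] += Math.random() < 0.5 ? 1 : -1;
  return child;
}

function evolve(generations: number, popSize: number): Chromosome {
  // Random initial population (popSize assumed even).
  let pop: Chromosome[] = Array.from({ length: popSize }, () =>
    TARGET.map(() => Math.floor(Math.random() * 10)),
  );
  for (let gen = 0; gen < generations; gen++) {
    pop.sort((a, b) => fitness(b) - fitness(a)); // best first
    const parents = pop.slice(0, popSize / 2);   // reject the weaker half
    pop = [...parents, ...parents.map(mutate)];  // refill with mutants
  }
  pop.sort((a, b) => fitness(b) - fitness(a));
  return pop[0];
}
```

Because the survivors are carried over unchanged, the best fitness never decreases; the mutants supply the variation from which better chromosomes can emerge.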
There are a number of “flavours” of GP: tree, graph, stack and grammar. Each represents a computational structure in a different way. Cartesian GP is a graph-based form of GP. Graphs are networks of computational nodes (such as add, subtract, multiply, etc.). The calculations done in the nodes can be re-used by other nodes. Also, the graphs can have many outputs, meaning that more than one problem can be solved at the same time. In CGP, there is a decoding process that can ignore some genes (this also happens in biology); these genes are called non-coding and they do not affect the computation carried out in the graph. However, the non-coding genes are still very useful, as mutation changes them and after a few generations new graphs emerge.
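This decoding process can be sketched as a backwards walk from the output gene: every node reached is “active”, and the remaining nodes carry non-coding genes. The node structure and names here are my own illustration, assuming two programme inputs:

```typescript
// Decoding a CGP graph: mark the nodes that actually influence the output.
// Unmarked nodes hold non-coding genes.

interface CgpNode { op: "add" | "mul"; in1: number; in2: number; }

const N_INPUTS = 2; // addresses 0..1 are programme inputs, nodes start at 2

// Collect the indices of nodes reachable backwards from the output address.
function activeNodes(nodes: CgpNode[], output: number): Set<number> {
  const active = new Set<number>();
  const stack = [output];
  while (stack.length > 0) {
    const addr = stack.pop()!;
    if (addr < N_INPUTS) continue;   // a programme input, not a node
    const idx = addr - N_INPUTS;
    if (active.has(idx)) continue;   // already visited
    active.add(idx);
    stack.push(nodes[idx].in1, nodes[idx].in2);
  }
  return active;
}
```

Mutating a gene of an inactive node leaves the output unchanged, yet a later mutation can reconnect that node, which is how, as Julian notes, new graphs can emerge after a few generations.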
Even though CGP is very versatile, were you developing CGP with a specific application in mind?
As I mentioned earlier, when I first heard of genetic algorithms I was working on ways of making electronic circuits using as few logic gates as possible. My colleague Peter Thomson and I came up with a simple way to represent circuits as a graph. I gradually developed this into a mature form that could solve many different kinds of problems. The computational structures used a Cartesian grid of computational nodes so I called the technique Cartesian Genetic Programming.
If we consider your algorithm itself to be part of a lineage, or even genealogy, could you point at its predecessors and descendants? Has your algorithm replaced others, and does CGP still have a place in a world of AI dominated by neural networks?
Genetic programming originated from a number of early pioneers. In 1958, R. Friedberg created an algorithm that could make a few random changes to a computer program and then test the mutated programs to see how they performed. In 1980, S. F. Smith utilized a form of GP in his PhD work. In 1981, Forsyth created a GP system called BEAGLE, which could predict recovery after heart disease as well as British football results. There are many early pioneers. There is a brief account of this lineage in the book Cartesian Genetic Programming, which I edited and which was published in 2011. However, when I proposed CGP, the dominant forms of GP were tree-based, as described by John Koza in 1992, and Linear GP, as described by Peter Nordin and colleagues in 1994–1995. In the 1990s other forms of GP were proposed: Push (1996), Grammatical evolution (1998) and Cartesian (1997). GP took off after John Koza’s book, and the first international conference on GP was held in 1996.
Since its inception CGP has given rise to a number of variants. These have been detailed in a recent paper of mine called “Cartesian Genetic Programming: its status and future”. Self-modifying CGP (SMCGP), where the evolved programs can change themselves, is one; differentiable CGP, modular CGP and positional CGP are others. Thinking about CGP and art, SMCGP could be a method of evolving an entire gallery of art rather than one piece at a time, as SMCGP produces a sequence of programs from one chromosome. Each of these programs could generate art.
CGP is certainly the most widely used graph-based form of GP and its popularity continues to grow. CGP definitely has a place in the neural-network-dominated world of AI. Firstly, CGP can encode neural networks, and there has been quite a bit of recent work on that. Also, I am currently working on a CGP system that builds multiple neural networks so that it can solve a number of problems at the same time. It is called DEMANNED, which stands for designing multiple artificial neural networks using evolutionary development. It evolves CGP chromosomes that represent a neuron and its dendrites, and when the evolved program is run it builds multiple neural networks, each solving a different problem. I believe CGP has a bright future in AI.
As a contemporary fine artist, I’m often asked about the moment the “inspiration” behind the artwork “came to me”. I’m using quotation marks here because I feel that people working in other fields sometimes attribute too much magic to inspiration, while in reality my creative process hinges on the “Try again. Fail again. Fail better.” adage. I personally experience the most creative magic in this feedback loop of trying and failing.
I am convinced that computational science is a highly creative field as well. Could you tell me how the concept of CGP “revealed itself” to you? And more generally, what is your attitude towards freely having novel ideas vs. disciplined implementation?
I totally agree that the creative process is a series of inspirational moments and a huge amount of experimentation, with many cycles of hard work. CGP started with a discussion between myself and my colleague Peter Thomson while we were at Napier University. Initially it was intended only for evolving electronic circuits. The way that non-coding genes are used in CGP was originally there purely because it made it easier for me to write the program. I only discovered the extreme importance of non-coding genes for evolution later on. Within a couple of years, I realised that CGP could be applied to many kinds of problems and that graphs have many advantages over other representations. Over the early years I simplified the representation a bit, and my students, colleagues and I began to analyse many aspects of CGP. There were many “Aha” moments in the development of CGP. The idea behind capturing and evolving subfunctions or modules came to me in a moment of inspiration. How to create self-modifying CGP was also an inspirational moment. Luckily, I am still having these inspirational moments, and a future form of CGP may look quite different. For instance, in a promising new form of CGP all genes are floating-point numbers (positional CGP), and the positions of computational nodes can be moved by evolution.
Science proceeds by occasional inspiration and long periods of analysis and development. The development of CGP followed this model. Thomas Kuhn described eloquently how science happens in his wonderful book “The Structure of Scientific Revolutions”.
The idiom surrounding genetic algorithms and evolutionary programming has imported a lot of jargon from evolutionary biology and genetics. This is helpful, because many terms such as crossover, mutation, breeding, generations, are mechanically similar to how they are coded in algorithms. However, sometimes I feel that by using these terms, we suggest that the systems we build are more “alive” than they actually are. Would you agree with that?
Hmmm, interesting point. Yes, lots of biological jargon has been imported into evolutionary algorithms. A little of this is fine; however, there are many bio-inspired methods these days that obscure the algorithms with too much jargon, so much so that some of them are essentially not very different, but they sound different! I agree it can seem that we are building systems that are alive to some extent. Indeed, in the field of artificial life researchers are deliberately trying to create systems with life-like properties. A classic example of this is the work of Karl Sims in 1994. Some of the evolved “creatures” seemed quite life-like.
Explaining algorithms with these terms from evolutionary biology has a didactic benefit, but there might be more at play ideologically or philosophically. Could our quest for artificial life and intelligence reveal the human desire to create - or at least simulate - life in an effort to deal with our own mortality?
I think that the quest for artificial life is more about trying to understand what life is. Either we can study and analyse biological life (a top-down process) or try to build life-like systems from the bottom up. Personally, my work in the field of evolutionary algorithms has led me to have a huge amount of respect and wonder at all living things. Life is a 3.5 billion year old technology!
Displayed left is the work Mutant Garden Tracer (not shown on mobile devices).
Thank you Wassim Z. Alsindi for proofreading.