A complexity theory of generative art

Beauty lies at the intersection of order and disorder

Is there a theory that guides the (generative) artist to generate interesting artworks?

A first guiding principle for producing interesting generative art might be the following: view an artwork as a message (to beholders) and try to maximize the amount of information that the message conveys. This amounts to minimizing the compressibility of the message: an artwork rich in information admits little compression, while a poorly informative artwork shrinks considerably when compressed.

Claude Shannon, an American mathematician, electrical engineer, and cryptographer known as the father of information theory, founded the discipline with his landmark 1948 paper, A Mathematical Theory of Communication. In particular, Shannon developed information entropy as a measure of the information content of a message.

For instance, consider the two messages:

AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

and

D4E56740F876AEF8C010B86A40D5F56745A118D0906A34E69AEC8C0DB1CB8FA3

Everyone can agree that the first is not particularly informative. It simply repeats the character A 64 times and can be represented in highly compressed form as A^64 (i.e., A repeated 64 times). The second message, of the same length, was produced by my cat walking blindly on my keyboard (read: it's actually the hash of Ethereum's genesis block!). It is a highly informative message that essentially cannot be compressed: its shortest representation is the message itself.
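The intuition can be checked with a general-purpose compressor. A minimal sketch using Python's zlib (compressed size is only a rough, practical proxy for information content):

```python
import zlib

ordered = b"A" * 64
random_looking = b"D4E56740F876AEF8C010B86A40D5F56745A118D0906A34E69AEC8C0DB1CB8FA3"

# Both messages are 64 bytes long, but they compress very differently.
for msg in (ordered, random_looking):
    compressed = zlib.compress(msg, 9)  # level 9 = best compression
    print(len(msg), "->", len(compressed), "bytes")
```

The repetitive message shrinks to a handful of bytes, while the hash-like one barely shrinks at all (it may even grow slightly, because of the compressor's fixed overhead).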

Under Shannon’s theory, we say that the entropy (information content) of the first message is low, while that of the second message is high.
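This can be made quantitative. Below is a minimal sketch that computes the per-character Shannon entropy, in bits, of the two messages above (H = -sum over symbols of p·log2 p, with p the empirical symbol frequency):

```python
from collections import Counter
from math import log2

def entropy(message: str) -> float:
    """Per-character Shannon entropy of a string, in bits."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * -log2(c / n) for c in counts.values())

print(entropy("A" * 64))  # a single repeated symbol: entropy 0.0
print(entropy("D4E56740F876AEF8C010B86A40D5F56745A118D0906A34E69AEC8C0DB1CB8FA3"))
```

The first message scores 0 bits per character; the second scores close to the 4-bit maximum for a 16-symbol (hexadecimal) alphabet.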

Is Shannon’s entropy a good measure to guide the generative artist toward interesting art? In other words, is chaos aesthetically more interesting than order?

The answer is, undoubtedly, negative. Let's see why.

There exists a general consensus in aesthetics - the philosophical study of art, beauty, and taste - that beauty lies at the intersection of order and disorder.

Perfect order is tedious and therefore not attractive. Chaos is incomprehensible to the human brain and therefore equally unappealing. When we depart from order without collapsing into complete chaos, maintaining an unstable balance between regularity and mess, we often get a result that surprises and thrills, one we may call beautiful.

Consider a performance of contemporary dance. Each dancer involved typically follows a specific choreography, determined a priori by the choreographer. On the other hand, each dancer interprets the choreography according to their inclinations, history, and mood. Not infrequently, there is also room for improvisation. These elements - interpretation and improvisation - add a disorderly contribution to the choreographed, pre-given movements. It follows that every staging is the same and yet subtly different from the others; it is (partially) unpredictable.

Architect Richard Padovan describes order and complexity as twin poles of the same phenomenon. Neither can exist without the other - order needs complexity to become manifest; complexity needs order to become intelligible - and aesthetic value is a measure of both.

He beautifully expresses this concept with the following words:

Delight lies somewhere between boredom and confusion. If monotony makes it difficult to attend, a surfeit of novelty will overload the system and cause us to give up; we are not tempted to analyze the crazy pavement.

(Richard Padovan)

There have been attempts to devise a measure of information or complexity that is maximized when order blends with disorder. One notable example is Murray Gell-Mann’s effective complexity.

Murray Gell-Mann was an American physicist who received the 1969 Nobel Prize in Physics for his work on the theory of elementary particles. In 1984 Gell-Mann was one of several co-founders of the Santa Fe Institute, a renowned center for the study of complexity.

In the definition of effective complexity, Gell-Mann refers to algorithmic information content (AIC), also known as Kolmogorov complexity.

In algorithmic information theory, the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object.

Notice that Kolmogorov complexity is in fact related to Shannon’s entropy described above. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source.
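In symbols, this standard result reads as follows, with K(·) denoting the Kolmogorov complexity of a string, X₁X₂⋯Xₙ an output of length n from the source, and H the entropy rate of the source:

```latex
\lim_{n \to \infty} \frac{K(X_1 X_2 \cdots X_n)}{n} = H \quad \text{almost surely}
```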

Murray Gell-Mann puts it like this:

This property of AIC [algorithmic information content] reveals the unsuitability of the quantity as a measure of complexity, since the works of Shakespeare have a lower AIC than random gibberish of the same length that would typically be typed by the proverbial roomful of monkeys. [...]

A measure that corresponds much better to what is usually meant by complexity in ordinary conversation, as well as in scientific discourse, refers not to the length of the most concise description of an entity (which is roughly what AIC is), but to the length of a concise description of a set of the entity’s regularities.

Thus something almost entirely random, with practically no regularities, would have effective complexity near zero. So would something completely regular, such as a bit string consisting entirely of zeroes. Effective complexity can be high only in a region intermediate between total order and complete disorder.

(Murray Gell-Mann)
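Gell-Mann's Shakespeare-versus-monkeys observation is easy to reproduce with a compressor standing in for AIC. A sketch (zlib's output length is only a crude upper bound on the true, uncomputable AIC):

```python
import random
import string
import zlib

def aic(s: bytes) -> int:
    """Compressed length: a crude upper bound on algorithmic information content."""
    return len(zlib.compress(s, 9))

verse = (b"To be, or not to be, that is the question: "
         b"Whether 'tis nobler in the mind to suffer "
         b"The slings and arrows of outrageous fortune, "
         b"Or to take arms against a sea of troubles "
         b"And by opposing end them. To die: to sleep; "
         b"No more; and by a sleep to say we end "
         b"The heart-ache and the thousand natural shocks "
         b"That flesh is heir to.")

# "Monkeys on a keyboard": uniform random printable characters, same length.
random.seed(0)
keys = string.ascii_letters + string.digits + string.punctuation + " "
gibberish = "".join(random.choice(keys) for _ in range(len(verse))).encode()

# Meaningful text has LOWER AIC than random typing of the same length.
print(aic(verse), aic(gibberish))
```

The verse compresses substantially; the gibberish hardly at all. By the AIC yardstick alone, the monkeys out-score Shakespeare, which is exactly why Gell-Mann rejects it as a measure of complexity.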

To measure effective complexity, Gell-Mann proposes to split the description of a given system into two algorithmic terms:

  1. a first algorithm capturing structure, and

  2. a second algorithm capturing random deviation.

The effective complexity is then proportional to the size of the optimally compressed version of the first algorithm - the one that captures structure.
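A toy rendering of this two-part split, as a sketch only: the structure/deviation split is chosen by hand for each system, and zlib-compressed length stands in for "optimally compressed size":

```python
import random
import zlib

def size(s: bytes) -> int:
    """Compressed length: a crude stand-in for optimal compression."""
    return len(zlib.compress(s, 9))

random.seed(0)
N = 400
noise = bytes(random.randrange(256) for _ in range(N))
verse = (b"Shall I compare thee to a summer's day? "
         b"Thou art more lovely and more temperate: "
         b"Rough winds do shake the darling buds of May, "
         b"And summer's lease hath all too short a date. ") * 3

# Hand-made (structure, deviation) splits for three toy systems.
systems = {
    "completely regular": (b"0" * N, b""),  # trivial structure, no deviation
    "completely random":  (b"", noise),     # no regularities at all
    "intermediate":       (verse, b""),     # rich regularities to describe
}

# Effective-complexity proxy: compressed size of the structure term alone.
for name, (structure, deviation) in systems.items():
    print(name, "->", size(structure))
```

The two extremes score near zero, while the text, whose regularities themselves take long to describe, scores high, matching Gell-Mann's intermediate region between total order and complete disorder.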

However, there are objections to this approach. Some maintain that this notion of structure is subjective and remains in the eye of the beholder, as much as beauty is.
