Computing Machinery and Creativity

March 2008

In an article entitled “Computing Machinery and Intelligence,” Alan Turing describes the Turing test, his famous criterion for machine intelligence: a computer can be considered a thinking machine if a human interlocutor, after asking the computer a series of questions, cannot tell whether he is conversing with a machine or with another human. After describing the details of this test, Turing discusses a handful of arguments that deny the possibility of thinking machines. Turing gives special treatment to “Lady Lovelace’s Objection” (450), an argument built on the claim that computers cannot be creative. I will explain Lovelace’s objection, offer an interpretation of what Lovelace means by “creativity,” and argue that computers are not creative in this sense. For another perspective on the possibility of creative machines, I will frame Tom Mueller’s article, “How computer chess programs are changing the game,” as an argument that some computers ought to be considered creative. I will go on to argue that, notwithstanding Lovelace’s objection, Mueller’s examples amount to “Turing tests” for creativity that the chess programs pass. Finally, I will argue that these “Turing tests” for creativity are crucially different from Turing tests simpliciter, and therefore we cannot conclude from Mueller’s article that chess-playing programs think.

Turing introduces Lady Lovelace’s objection by quoting her critique of Babbage’s “Analytical Engine,” an early, fully mechanical computer: “the Analytical Engine has no pretensions to originate anything. It can do [only what] we know how to order it to perform” (450). Lovelace argues that, in the case of Babbage’s engine, we (humans) supply the input, the instructions, and the machine itself, and the role of the machine is simply to run the program on the input, without error, until the program terminates. There is no point in this sequence at which the computer may exercise creativity—it is just blindly running the program. To use Turing’s terms, Lovelace is saying that the total behavior of a discrete state machine is strictly a function of its input.1 It is the absolute predictability of this behavior, and the apparent lack of agency on the part of the computer, that Lovelace means to draw our attention to with her objection. In order for the computer to be creative in Lovelace’s sense of the term, it would have to produce some output that is not wholly determined by its input. I would add, and Turing intimates (459), that this output would have to be meaningful to us, so that it does not appear to be mere randomness or error. Lovelace’s objection holds because Turing’s idealized computer—the Turing machine—by definition cannot produce the creative output that Lovelace is looking for. Turing admits that “it is an essential property of…[discrete state machines] that this phenomenon does not occur… Reasonably accurate knowledge of the state at one moment yields reasonably accurate knowledge any number of steps later” (440).
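To make Lovelace’s point concrete, consider the following sketch of a discrete state machine, written in Python; it is my own toy example, not Turing’s formalism or anything resembling the Analytical Engine’s actual instruction set, and the states and transition table are invented for illustration:

    # A toy discrete state machine: its entire behavior is fixed by its
    # transition table and its input, so every output can be predicted in advance.
    TRANSITIONS = {
        ("idle", "start"): ("running", "ok"),
        ("running", "step"): ("running", "tick"),
        ("running", "stop"): ("idle", "done"),
    }

    def run(inputs, state="idle"):
        outputs = []
        for symbol in inputs:
            state, output = TRANSITIONS[(state, symbol)]
            outputs.append(output)
        return outputs

    print(run(["start", "step", "step", "stop"]))  # ['ok', 'tick', 'tick', 'done']

Identical inputs always yield identical outputs; nothing “originates” inside the machine that was not already implicit in the transition table and the input it was given.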

Turing takes Lovelace’s objection to be one of three claims: that computers cannot learn, that computers cannot produce original works, or that computers cannot surprise us with results we could not have predicted (450). He responds to each of these interpretations in turn, but I believe he must respond differently to Lovelace’s objection as I present it. Turing proposes the Turing test to avoid the problems presented by our tenuous understanding of the nature of thought (433), so it is reasonable to think that he would propose a similar “imitation game” in response to Lovelace: a computer may be considered creative if it can convince a human judge that it is so. Babbage’s Analytical Engine failed to convince Lovelace that it was creative, but more advanced computers might be able to pass such a “Turing test” for creativity. Lovelace has not shown that human creativity is not itself determined strictly by our “inputs,” so she would have a reason to accept this imitation game as a test for machine creativity.

It is difficult to appreciate just how much more powerful modern computers are than Babbage’s Engine or the early digital computers that Turing designed. Tom Mueller’s article, “How computer chess programs are changing the game,” features some of today’s most powerful computers running cutting-edge chess-playing software. These systems are so sophisticated that their programmers are usually unable to explain why the programs make the moves that they do; the article begins, “Chrilly Donniger prefers to watch from a distance when Hydra, his computer chess program, competes… because he rarely understands what Hydra is doing” (1). Mueller’s article presents an argument that Donniger’s Hydra and other chess-playing computers are creative machines.

The argument’s first form begins with the observation that the best human chess players do not calculate their next move the way amateurs do; instead, the most experienced players use “intuition” (2) to exclude weak moves from consideration. Mueller states that “to produce world-class chess of the sort that Hydra [plays]… programmers must somehow teach their machines intuition” (2), and he suggests that the programmers have succeeded in doing so. If these machines use intuition, and intuition is not mere calculation, then their behavior cannot be described solely in terms of discrete state machines and their input; therefore, these machines are creative in the sense Lovelace had in mind. Unfortunately, it is overwhelmingly likely that Hydra and the other chess computers are Turing-equivalent discrete state machines; therefore, either the premise that intuition is not mere calculation is false, or the premise that the machines use intuition to make their moves is false. In either case, the argument is unsound.
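For illustration, one might imagine “intuition” of the sort Mueller describes being implemented as a heuristic filter over candidate moves, something like the following Python sketch; the moves, scores, and cutoff are invented, and this is not Hydra’s actual code:

    # Discard moves whose quick heuristic score falls well below the best
    # candidate, so that the deep search examines only the promising lines.
    def prune_candidates(moves, quick_score, margin=0.5):
        scored = [(quick_score(move), move) for move in moves]
        best = max(score for score, _ in scored)
        return [move for score, move in scored if score >= best - margin]

    fake_scores = {"Nf3": 1.2, "Qh5": 0.1, "a4": 0.3, "e4": 1.4}
    print(prune_candidates(["Nf3", "Qh5", "a4", "e4"], fake_scores.get))  # ['Nf3', 'e4']

Even this kind of “intuitive” narrowing is an ordinary, fully determined computation over the program’s input, which is precisely why the first form of the argument fails.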

The second form of the argument is much more interesting; it deals with the inability of programmers to predict or explain the hyper-sophisticated moves that chess programs make. “Moves and tactics seem to arise spontaneously from intricacies of the computer code, which the programmer himself often cannot explain” (3), Mueller writes. “Hydra, like all other chess software, has hundreds of heuristics woven into the code… How the heuristics interact, reinforcing and overriding one another, is mysterious; even a slight adjustment…can produce side effects that the programmer could not predict” (5). The results of these mysterious interactions are moves that even grandmasters call “outrageously creative” (7). If we agree that chess is a good “Turing test” for machine creativity, and that grandmasters are competent judges of creative chess-playing, then these programs clearly pass such tests and should be considered creative machines.

In response to this argument, consider how Mueller’s intuitions about Hydra as a creative machine would change if Hydra were implemented in a fashion similar to Babbage’s Engine, with electricity, semiconductors, and silicon replaced by cranks, cards, and wood. This is certainly possible, since both Babbage’s Engine and Hydra are universal computers capable (with great effort) of simulating each other. Hydra would not seem very creative if it took years of cranking and metric tons of paper cards to calculate its next move. Mueller might respond that speed is a necessary component of creative responses, but it is not only the slowness of a fully mechanical Hydra that causes us to doubt its creativity; the crude mechanisms of the transformed Hydra would make its operation completely transparent to us, and the previously inexplicable interactions among heuristics would reveal themselves as trivial tabulations that we could roughly follow with our minds. The creativity would be completely absent because we would be incapable of understanding Hydra as anything more than a giant, antiquated calculator.

The modern Hydra takes as its input a set of heuristics for the static evaluation of game states, along with new moves as the opponent makes them. These heuristics are applied rapidly and simultaneously to each state, and they enable Hydra to calculate the utility of possible future states to many decimal places. Human minds cannot consciously follow this complexity as it happens, so we resort to more familiar terms like “creative,” “daring,” and “deceptive” to describe Hydra’s behavior. We read personality, genius, and creativity into Hydra’s moves in order to compensate for our inability to make sense of the gross computations behind them. Furthermore, the human subjects in Mueller’s article are biased towards believing that Hydra is creative. In the original Turing test, the interlocutor has no stake in the outcome of the experiment. When computers are programmed to play chess, it is reasonable to assume that many programmers want to believe that their programs are so spectacular that they exhibit “emergent phenomena” (7). Towards the end of Mueller’s article, a grandmaster recounts his belief that an anonymous computer opponent who beat him was really the chess genius Bobby Fischer—clearly, the grandmaster would rather have been beaten by Fischer than by a chess program.
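The kind of computation at issue can be sketched in Python as a weighted sum of heuristics feeding a short lookahead; the heuristics, weights, and positions below are invented for illustration and bear no relation to Hydra’s actual code:

    # Static evaluation as a weighted sum of heuristic features, plus a small
    # minimax lookahead in which the opponent is assumed to minimize the score.
    WEIGHTS = {"material": 1.0, "mobility": 0.1, "king_safety": 0.4}

    def evaluate(features):
        return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

    def search(node, depth, maximizing):
        if depth == 0 or "children" not in node:
            return evaluate(node.get("features", {}))
        values = [search(child, depth - 1, not maximizing)
                  for child in node["children"].values()]
        return max(values) if maximizing else min(values)

    def best_move(tree, depth):
        moves = tree["children"]
        return max(moves, key=lambda move: search(moves[move], depth - 1, False))

    tree = {"children": {
        "Qxf7": {"children": {"Kxf7": {"features": {"material": -2, "king_safety": 3}}}},
        "Nc3":  {"children": {"d5": {"features": {"mobility": 4}},
                              "e5": {"features": {"mobility": 1}}}},
    }}
    print(best_move(tree, depth=2))  # Nc3

Whether the chosen move strikes an observer as “daring” or “creative” is settled entirely by this sort of arithmetic over the weights and features the programmers supplied.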

I have shown that the stakeholders in computer chess may be too biased for their games to constitute adequate tests of machine creativity, and that their beliefs about machine creativity could change tremendously under different but Turing-equivalent circumstances. Turing states that “intelligent behavior presumably consists in a departure from the completely disciplined behavior involved in computation” (459), so a machine that behaves intelligently enough to pass the Turing test must depart from computation and exhibit the kind of creative behavior that Lady Lovelace is looking for. This does not, however, imply that where there is creative behavior, there is also intelligent behavior. After these considerations, it is clear that chess is a much weaker test for computer intelligence than a properly administered Turing test. On the other hand, a “Turing test” for creativity seems like a useful thing to have in light of our nebulous understanding of human creativity. I still hesitate to equate the apparent creativity of a computer with that of a human, though, because computer creativity is reducible to computation.


  1. Turing and Mueller mention computers that use randomness in their computation, but random numbers are also input.