Google’s Brain AI research team unveiled robot “art” in June last year that shows machine learning has reached a level where it can see one thing as something else, much as human imagination saw earthly shapes in star clusters, which is how constellations came about
By Sarthak Ray
It is becoming increasingly clear that artificial intelligence (AI)-guided automation will make human labour redundant in a number of jobs, from driving to financial advisory services. But can machine learning and robotics, key elements of AI, replace human creativity as well? A highly-cited Oxford University report on automation and job loss, which listed at least 70 jobs at risk of being automated, was sceptical of this, saying occupations that require high creative intelligence are unlikely to be automated in the next couple of decades.
But the MIT Technology Review, in an essay by art historian Martin Gayford, has revived the debate by posing a key question: Can art already being created by machines be treated on a par with art created by humans? That is, are the principles of human creativity becoming programmable?
Competitions of robot art are being held in the hope that they throw up some answers, or perhaps even a “Turing test” of creativity (the Turing test, proposed by Alan Turing in 1950, gauges a computer’s ability to exhibit intelligent behaviour indistinguishable from a human’s). US-based Dartmouth College has instituted the first annual Neukom Institute Prizes in Computational Arts for the 2015-2016 academic year. Its “Turing Tests in Creativity” comprise three competitions: for algorithms that create “human-quality” short stories, sonnets and dance music sets. Similarly, the first annual RobotArt competition, with $100,000 in prizes for paintings in two categories, solely software-generated and tele-robotically generated, is currently under way. What is important here is that the quality of the art generated, while graded, will not be the only or even the most important aspect assessed; equally important will be how a program arrived at a particular piece.
In his essay, Gayford looks at two specific programs that have simulated human creativity in paintings — The Painting Fool, a program developed by Simon Colton, professor of computational creativity at Goldsmiths College, London, and AARON, a program with which Harold Cohen, a painter and professor at the University of California, San Diego, has been collaborating since 1973. Both programs now make pieces of art autonomously.
Interestingly, Colton believes a true test of artificial creative intelligence would lie in how a program would demonstrate being “imaginative” and “appreciative” — in the sense of being responsive to emotions, as Gayford clarifies.
Measured against these abstract metrics, The Painting Fool, programmed to do so, has been pushing the envelope for some time. After reading a piece in the Guardian on the war in Afghanistan, the program picked out words such as “troops”, “Nato” and “bombing” and painted a water-colour composite that adequately reflected the mood of the news report. Similarly, it reproduced various paintings on various media and assessed the results, even commenting “This is a miserable failure” on one particular attempt. At an exhibition of its works in Paris in 2013, the program painted visitors in different moods, correlating their expressions with emotional keywords dug out of 10 Guardian articles. In fact, when the overall tally of negative keywords crossed a threshold, The Painting Fool refused to paint, replicating the temperamental nature artists are often associated with.
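The keyword-driven “mood” mechanism described above can be sketched in a few lines of Python. Everything here, from the keyword list to the threshold and the mood labels, is an illustrative assumption, not The Painting Fool’s actual code.

```python
# Toy sketch of a keyword-driven "mood" for a painting program.
# The keyword set, threshold and mood labels are illustrative
# assumptions, not The Painting Fool's real implementation.

NEGATIVE_KEYWORDS = {"troops", "nato", "bombing", "war", "casualties"}

def assess_mood(articles, threshold=3):
    """Tally negative keywords across a batch of articles; refuse
    to paint when the total crosses the threshold."""
    tally = 0
    for text in articles:
        for word in text.lower().split():
            if word.strip(".,;:\"'") in NEGATIVE_KEYWORDS:
                tally += 1
    if tally > threshold:
        return "refuses to paint"
    return "paints in a sombre mood" if tally > 0 else "paints cheerfully"
```

Feeding it a grim batch of headlines, for example `assess_mood(["Nato troops resume bombing as war casualties mount"])`, trips the threshold and the program declines to paint, which is the behaviour the exhibition anecdote describes.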
While all of this is, of course, programmable, Google’s Brain AI research team unveiled robot “art” in June last year that shows machine learning has reached a level where it can see one thing as something else, much as human imagination saw earthly shapes in star clusters, which is how constellations came about. Brain AI reached this stage after being trained to identify objects from visual clues. From photographs of skies and randomly-shaped objects, the program began generating digital images in which it superimposed other objects that nearly resembled the shape of the photograph’s subject. These images were created autonomously by the program, using an artificial neural net, software that emulates neurons processing information. The program was thus able to “imagine” objects absent from the picture. The resulting “art” genre has been dubbed “Inceptionism” by the Google team, after the 2010 movie Inception.
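The feature-amplifying loop behind “Inceptionism” can be illustrated with a deliberately simplified toy: treat an image as a list of pixel intensities, a “neuron” as a weight vector tuned to a pattern, and repeatedly nudge the image in the direction that raises the neuron’s activation, so the pattern the detector “sees” emerges in the image. This is a minimal sketch of the idea, assuming a single linear detector; Google’s actual system uses deep neural networks, not this code.

```python
# Toy illustration of the "Inceptionism" idea: adjust an image so a
# feature detector's activation grows, making the pattern it "sees"
# emerge. The pattern, image size and step size are illustrative.

def activation(image, weights):
    """Dot product: how strongly the detector 'sees' its pattern."""
    return sum(p * w for p, w in zip(image, weights))

def amplify(image, weights, steps=50, lr=0.1):
    """Gradient ascent on the activation. For a linear detector the
    gradient with respect to each pixel is simply its weight."""
    image = list(image)
    for _ in range(steps):
        image = [p + lr * w for p, w in zip(image, weights)]
    return image

pattern = [1.0, -1.0, 1.0, -1.0]   # detector tuned to light/dark stripes
start = [0.0, 0.0, 0.0, 0.0]       # a featureless grey "image"
dreamed = amplify(start, pattern)  # the stripes now dominate the image
```

The same principle, run through a deep network instead of one linear detector, is what turns clouds and rocks into the dreamlike animals of the Inceptionism images.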
While all this represents vast strides in machine learning, critics of robot art say that machines are far from being truly “creative”; the advances are merely the melding of rules of creativity with available technology. This effectively means that all machine-generated art would have to be seen as the product of the creativity of the human artist who sets the rules for the program. But with Google Brain AI showing that a rudimentary form of imagination can be programmed, replicating the complex interplay of social, emotional, historical and cultural factors that inspire human painters shouldn’t seem as daunting as before.