by Stephen Luecking in Chicago
There is a romantic notion that art is the last bastion of truly human expression. Consequently, news of the inroads of artificial intelligence into art invariably generates apprehension about AI’s deleterious effect on all things human. At the center of that phobia is the algorithm. Somehow this seemingly mysterious entity has taken on the role of a Frankenstein of the soul, destined to replace the human in human endeavor. This fear propels sales of books that recklessly flog AI as an existential threat.
A little understanding of AI and the function of an algorithm can go a long way toward easing such fears.
Most of what the general public regards as artificial intelligence is not intelligence at all, but simply brute force computing that enables the analysis of monstrously large sets of data in a split second. Based on instructions (algorithms) written into the program, the program seeks out patterns in the mountain of data and then processes them as instructed. One example of brute force computing occurred when IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov. Deep Blue’s processing speed, employing 120 parallel chips, could run through all possible scenarios for outcomes up to 20 moves ahead. It did so using a library of 700,000 grandmaster games. Human chess champions, on the other hand, can barely consider seven moves ahead.
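The ‘all possible scenarios’ Deep Blue ran through amount to a game-tree search. The sketch below is only an illustration of that kind of depth-limited look-ahead, nothing like IBM’s actual code, and it uses the toy game of Nim (take one to three stones; whoever takes the last stone wins) simply because its rules fit in a few lines:

```python
# A minimal sketch of brute-force look-ahead: generic depth-limited minimax
# applied to Nim, not Deep Blue's chess search.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing, depth):
    """Search every line of play up to `depth` moves ahead and return the best
    achievable score: +1 = win for the maximizing player, -1 = loss."""
    if stones == 0:
        # The player to move has nothing left: the previous player took the last stone.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # horizon reached; a real engine would evaluate the position here
    scores = [minimax(stones - m, not maximizing, depth - 1) for m in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones, depth=20):
    return max(legal_moves(stones), key=lambda m: minimax(stones - m, False, depth - 1))

if __name__ == "__main__":
    print(best_move(10))  # taking 2 leaves the opponent at 8 stones, a losing position
```

A chess engine replaces these toy rules with legal chess moves and the depth-zero guess with an elaborate board evaluation, then relies on raw speed to push the search horizon as far out as possible.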
Deep Blue lost the first round of matches in February 1996 after some surprise moves by Kasparov. Consequently, programmers beefed up Deep Blue under the tutelage of grandmaster Joel Benjamin. The supercomputer then edged past Kasparov for the win in May 1997.
Since these matches, the field of artificial intelligence has added machine learning, wherein programs gain layers of processing capable of tweaking algorithms that failed a task on their first run. Processing speeds permit many, many test runs and many, many tweaks until the algorithms, rewritten by the machine itself, succeed. Success usually arrives only after many millions of attempts. The new algorithms might be chaotic clusters of code, but they can work.
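A toy sketch of that tweak-test-repeat loop might read as follows; the ‘task’ here (make three numbers sum to 10) is invented purely for illustration and stands in for whatever job a real system is trained on:

```python
# A toy illustration of the "tweak, test, repeat" loop: the program adjusts its
# own numbers until it passes a test. Not any particular ML system.
import random

def task_error(params):
    """Hypothetical task: make the three parameters sum to 10."""
    return abs(sum(params) - 10)

params = [0.0, 0.0, 0.0]            # first attempt: fails the task badly
for attempt in range(1_000_000):    # many, many test runs
    candidate = [p + random.uniform(-0.1, 0.1) for p in params]  # a small tweak
    if task_error(candidate) < task_error(params):
        params = candidate          # keep the tweak only if it did better
    if task_error(params) < 0.001:
        break                       # success, after a great many attempts

print(attempt, params)
```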
A recent case of brute force computing applicable to the arts has been the animation of scanned photographic portraits so that they appear to speak while executing convincing facial expressions. If the computer can scan a number of photographs from different views, the resulting video appears completely natural to human perception.
The program is, in part, a massive upgrade of facial recognition programs, which identify 20-25 points on a face and then compare these with faces kept in databases. Facial recognition programs come in two basic types, both with their origins in the arts. One, based on methods developed by Albrecht Dürer, uses the location of facial features on a grid; the other, created by Leonardo da Vinci, denotes these positions as a network of linked triangles. Either one of these systems can categorize up to 17 quadrillion faces.
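In rough outline, the matching step works like the sketch below, which is only an illustration: a handful of made-up landmark coordinates rather than the 20-25 points a real system measures, and a simple nearest-match rule in place of any commercial algorithm:

```python
# Toy landmark matching: a face is reduced to a few (x, y) points and a probe
# face is matched to whichever database face lies closest. Illustrative only.
import math

def distance(face_a, face_b):
    """Sum of point-to-point distances between two sets of facial landmarks."""
    return sum(math.dist(p, q) for p, q in zip(face_a, face_b))

database = {
    "face_001": [(0.30, 0.40), (0.70, 0.40), (0.50, 0.60), (0.50, 0.80)],
    "face_002": [(0.28, 0.38), (0.72, 0.42), (0.50, 0.65), (0.48, 0.85)],
}
probe = [(0.29, 0.39), (0.71, 0.41), (0.50, 0.64), (0.49, 0.84)]

print(min(database, key=lambda name: distance(probe, database[name])))  # -> face_002
```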
By contrast, the animation program begins by stipulating 10,000 points on a face to establish the full array of bones and muscles needing animation. Like Deep Blue, it references a huge library, in this case of faces in motion, to maintain accuracy in the animation. (AI libraries typically run to millions, even billions, of items related to the program’s task. There are now companies dedicated solely to providing data sets for AI.)
Another incursion of AI into the art world has garnered more attention since it made it to the top of the world art market. The ink-on-paper Portrait of Edmond Belamy looked enough like the market’s conception of art to sell at Christie’s in 2018 for $432,500. This marker of success generated further interest, such as the subsequent exhibit ‘Faceless Portraits Transcending Time’ by Dr. Ahmed Elgammal at HG Contemporary gallery in New York City’s Chelsea district.
Like the makers of the ‘Belamy’ portrait, Elgammal, who directs Rutgers University’s Art and Artificial Intelligence Lab, used GAN (an acronym for ‘generative adversarial network’) technology. Under GAN the AI program harnesses one set of algorithms to generate forms derived from a library of faces or other data, while commandeering a second set of discriminating algorithms programmed to accept or reject each offering, sending rejected test images back to the drawing board. Hundreds of layers of processing structures known as neural nets modify the generating algorithms and offer new results. Millions of these exchanges can take place every minute before the generating algorithms satisfy the discriminators.
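A bare-bones sketch of that adversarial exchange, at toy scale, might look like the following. It assumes the PyTorch library, single numbers stand in for images, and the ‘library’ is just a bell curve of values around 4; none of this is AICAN’s or the Belamy makers’ actual code:

```python
# Toy GAN: a generator learns to offer numbers the discriminator cannot tell
# apart from the "library" of real samples (a normal distribution centred at 4).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) + 4.0               # samples from the "library"
    fake = generator(torch.randn(64, 8))          # the generator's offerings

    # 1. The discriminator learns to accept real items and reject the offerings.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2. The generator is adjusted so its next offerings better satisfy the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 8)).detach())      # outputs should cluster near 4
```

After enough rounds the generator’s offerings cluster near 4, the only way it has found to satisfy the discriminator; scaled up to millions of parameters and a library of portrait images, the same tug-of-war yields faces.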
The gallery touted the ‘Faceless’ show as the first dedicated to an artist working in AI. In fact, the show may have come 50 years too late to claim this honor. In the late 1960s Harold Cohen stood out from his fellow pioneers of computer graphics by plotting drawings that appeared drawn by the hand of a human artist. Although the computer that produced Belamy possessed 34,000,000 times the processing power of the mainframes employed by Cohen, Cohen’s work was far superior as art … and considerably more intelligent.
Cohen had diligently studied children’s drawings and Native American rock art, and interviewed artists, with the goal of ascertaining the minimum conditions by which marks can function as art. He translated these conditions into his program Aaron and provided the program with choices by randomizing the algorithms as to what shape to draw and where. With the first shape drawn, the algorithm remains random but within tighter limits, based on the principles Cohen adopted from his research. Meanwhile the movements of the plotter were themselves randomized to waver unpredictably, in simulation of the human hand, as the pen headed down its path.
Aaron was, by today’s definition, not an artificial intelligence because the algorithms did not rewrite themselves: they did not learn. However, in Aaron’s time artificial intelligence research concentrated on heuristics; that is, the machine logic was built from if/then statements. Programs were loaded with sets of if/then assertions, with some expert programs referencing up to 4,000 such statements. In Aaron’s case the ‘if’ comprised the previously drawn elements. The ‘then’ was a number of options to be chosen randomly, but with certain options assigned a greater likelihood of selection.
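In that spirit, a toy version of the weighted if/then choice might read as follows; the shape names and weights are invented for illustration and are not taken from Cohen’s code:

```python
# Toy rule-based drawing choices: the elements already on the page form the "if",
# and the "then" is a weighted random pick among permitted next shapes.
import random

def next_shape(drawn):
    if not drawn:                          # blank sheet: any shape, equally likely
        options, weights = ["closed figure", "open line", "zigzag"], [1, 1, 1]
    elif drawn[-1] == "closed figure":     # after a closed figure, favour interior marks
        options, weights = ["interior mark", "open line", "closed figure"], [5, 2, 1]
    else:                                  # after a line, favour starting a new figure
        options, weights = ["closed figure", "open line"], [3, 1]
    return random.choices(options, weights=weights, k=1)[0]

drawn = []
for _ in range(6):
    drawn.append(next_shape(drawn))
print(drawn)
```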
In 1974 Cohen presented Aaron’s drawings at a national conference on computer art at Purdue University, organized by Dr. Aldo Giorgini. The effect of these drawings was stunning. Amidst presentations of highly mechanical and repetitive patterns, Aaron’s work stood out, appearing hand-drawn and not at all the output of a machine.
Soon other programs followed to create machine-generated art. These exploited genetic algorithms, which could couple two clusters of code and reassemble them into new sets of code. This created the effect of two graphic forms meeting and breeding new forms. The most robust of these breeding algorithms grew under John Holland and his students at the University of Michigan, who had been developing genetic algorithms since the late 1960s. These became a precursor to machine learning and could yield results that mimicked evolution.
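A minimal sketch of those mechanics, with a list of numbers standing in for a ‘cluster of code’ and a fitness target invented for the example, might look like this:

```python
# Toy genetic algorithm: two parent "genomes" are spliced and mutated, and the
# fittest offspring breed the next generation. Fitness here rewards sums near 100.
import random

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]                      # splice two clusters of genes

def mutate(genome, rate=0.1):
    return [g + random.uniform(-1, 1) if random.random() < rate else g for g in genome]

def fitness(genome):
    return -abs(sum(genome) - 100)                # closer to 100 is fitter

population = [[random.uniform(0, 10) for _ in range(10)] for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # survivors breed the next generation
    population = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]

print(round(sum(max(population, key=fitness))))   # should land at or near 100
```

Sims’s innovation, described below, was in effect to replace the fitness function with the artist’s own eye.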
By the 1980s artist/researcher Karl Sims was delighting in the penchant of these algorithms to generate unexpected new forms from parent forms and in the fact that, through randomization, each offspring could be distinctive and unique. Of these the artist chose a select few that would then survive to breed a new generation. Over enough generations, forms could eventually evolve to satisfy the artist.
Sims was unique in that he wielded expertise in both art and computer science. Most artists, such as William Latham, collaborated with advanced technicians. In Latham’s case, working with Stephen Todd at the IBM UK Scientific Centre, he fashioned virtual sculpture that resembled the bizarre organisms that evolved just after the Pre-Cambrian. Latham went on to design the games Evolva and, later, The Thing, in which creatures evolved through the course of play.
AI researchers, however, sought to script more autonomous programs in which the artist need not interpose. This was the role of the discriminating algorithms cited above: to eliminate the artist’s husbanding of the generating process.
Some artist/researchers, such as Dr. Elgammal, saw this addition as shifting the authorship of the art toward AI. Elgammal based his discriminating algorithms on principles of formal analysis as set down by art historian Heinrich Wölfflin (1864–1945). In order to reinforce the role of art history, Elgammal assigned value premised on the importance and historical longevity of images bearing these principles.
Reasoning that art-historical precedents might better simulate the choices artists make, Elgammal opined that this procedure moved his program, AICAN, toward autonomous art-making. In some exhibits he lists AICAN as his collaborator; in others he grants the program chief authorship of the images.
Most researchers and users of AI would sharply disagree with Elgammal’s assigning creative agency to AICAN. Janelle Shane, in her book You Look Like a Thing and I Love You, avers, for example, that since the choice of database images, the program used and the principles of discrimination are all under the artist’s control, the artist clearly retains authorship of the work. Most importantly, it is the artist who chooses which works to hang. However, the artist does not control the image processing once the program is running.
In general, researchers and practiced users of AI regard these programs as assistants with the skill to carry out an expert’s grunt work. In fact, most consider the human component essential to the practical operation of AI. In this regard, data guru Nate Silver has cited how the accuracy of AI-assisted weather forecasting has revolutionized the field for the better, yet its effectiveness increases by 20% after review by weather scientists. The same holds true when sports teams use AI to select recruits: the intervention of a seasoned coach yields 20% greater success in predicting a rookie’s career.
Elgammal’s assertion that AICAN is the prime creative agent is in line with the hyperbole that has always seemed to permeate discourse on AI. In the summer of 1956, for instance, the first research team to investigate AI assembled at Dartmouth and coined the term ‘artificial intelligence’. They disliked the term because they believed they were to create real intelligence, but chose the qualifier artificial so as not to put people off. They believed that their task would be completed by 1962.
Now, more than 60 years later, such intelligence is not even close. Current optimistic predictions set that achievement in the 2040s. This is the timeline for what Ray Kurzweil, a director of engineering at Google, has dubbed the ‘singularity’: the point at which AI meets and then exceeds human intelligence. However, most serious researchers doubt that this will ever happen.
True intelligence requires that the machine gain ‘common sense’ awareness, such as arises in humans as a result of their interaction with the world from infancy on. This in turn requires ‘embodiment’, that is, a brain linked to the world by an operational carrier with a system of sensing paraphernalia (eyes, ears, etc.) and a network (nervous system) to transmit and respond to information garnered from the world. By means of these the brain becomes a mind.
Much of the hyperbole surrounding AI is founded on the faith that digital processes can be made to imitate the analog processing of the human brain, and that a faithful enough imitation of intelligence equates to the real thing. One famous test for machine intelligence is, for example, premised on such imitation. The Turing test, suggested by Alan Turing, has a person conduct an extensive typed chat with a hidden machine. If the person is fooled into believing the chat is with another person, then the machine is deemed likely intelligent. Faith in digital imitation disregards, however, the gross complexity of the human brain: 86 billion neurons, each with 10,000 dendrites connecting it to other neurons to yield 860 trillion connections, plus another 900 billion glial cells to assist the neurons.
Due to this mindless processing, much of the artistic imagery of AI has an uncanny sense of disconnect, as if the machine ‘just doesn’t get it.’ GAN imagery, like that of the Portrait of Edmond Belamy, imitates art without a feel for art. The images develop from algorithms batted back and forth between a database and a set of discriminating algorithms without referencing a meaningful context or purposeful expression. The elements are sliced and diced, then reassembled with a tattered logic that is completely internal to the data processing.
An example of AI logic would be grouping teeth and fingers because both sets feature parallel repeated alignments. An image might result featuring a mouth with fingers protruding or a hand sprouting a row of teeth.
The machine’s imaging always produces a logic of its own. This logic is often unpredictable and therefore surprising, catching the fancy of AI artists. Consequently, practitioners point to surrealism as sanction for this image-making, as well as to Francis Bacon’s haunting figuration. Both assertions fall short. Surrealism sports connections that at first blush appear absurd but go on to unveil deeper meanings. Bacon’s rending of the human figure, while disturbing, opens onto haunted expression. AI’s gratuitous slashing and reassembly of the figure is simply disturbing.
The machine bears no ideas, metaphors or symbols. The deep learning process requires a homogeneous data set with no divergent images, further narrowing expressive possibilities.
Real image making begins with ideas that ultimately govern the modes of execution and the imagery referenced in the art, not to mention metaphors and symbols. Further, in the case of painting, there is the physical actuality of the work: textural effects and the practiced indications of the artist’s hand. This yields the complexity and richness of a human mind and not an artifice of intelligence.
Ah Stephen… I find comfort in the common belief that artificial intelligence is machine behavior limited to its programming, which still lacks the complexity of biology. When living cells are used in CPUs then we may come close, but even then…
Of course the theme is the myth of a perfect machine intelligence that would make the human mind redundant, that would think for us and teach us all we need to know. Nature’s progressive design of human brains gives nature a 5 million year head start that we’d need to catch up with; I doubt we can do that in the 250 years since Charles Babbage and his computing machine.
What is fascinating is that we’re only now becoming aware of the non-verbal languages that occupy a larger part of thinking, with intellect only the final expression of unconscious mental processes. So we have body language, acoustic language, and of course visual language, worth a thousand words.
Visual language is innocuous in that it generates a feeling response before those feelings are articulated into the type of conscious thought that allows for verbal expression. Feelings themselves are highly compressed thoughts; we may think of feelings as algorithms for the sake of this argument. To understand the meaning of an AI-generated image, we need to know the programmer’s feelings, for AI is but a machine following a programmer’s orders, no matter how complex the machine.
Now here’s my take. In the beginning were the robots, machine intelligence, silica based. In their creativity these machines experimented with organic cellular intelligence, and so created the first life forms, that eventually evolved into dinosaurs. Dinosaurs were fast, robots were slow, so dinosaurs with their big feet squished all the robots, flattened them and with heavy footsteps pushed them down into the ground. We know this from the large deposits of metals that we mine today.
Eventually these organic animal brains evolved into human beings, who, created in their maker’s image, experimented with creating an artificial intelligence with a silica based mind chip. And so the robots are reborn, and the great cycle of intelligence covering trillions of years of nature’s experiments starts once again, to continue through infinity.