In part 2 of this series, Gordon Collins explores whether computers can write fiction, looking at recurrent neural networks. Part 1 focused on James Meehan’s TALESPIN.

Recurrent Neural Networks – Andrej Karpathy 2015

Perhaps the first and simplest algorithm for automatic generation of literature is the old “Infinite monkeys will eventually type the complete works of Shakespeare”. Borges, who might need these erratically typing primates to fill his Library of Babel, traces the algorithm’s origins through Aristotle, Pascal and Swift. The 1960s American comedian Bob Newhart identifies the problem with the algorithm by pointing out the unfeasible burden put on the operators who would have to check the monkeys’ output:

“Hey, Harry! This one looks a little famous: ‘To be or not to be – that is the gggzornonplatt.’”

In the Library of Babel there would be shelves and shelves of books concerning “gggzornonplatts”. There would be new classics, revealed secrets and proofs (and their refutations) of the most amazing theorems, but we would never find them. Even the most experienced librarians wander helplessly, searching for some meaning among the monkey-twaddle.

The infinite monkey algorithm shows us that there is an equivalent “dual” problem to automatic literature generation, and that is automatic literature understanding. It is easy to make an algorithm to write literature if most of it is nonsense. In that case, the problem of writing is replaced with the problem of reading. Bad writers are redeemed by good readers.

Of course, these days we may use computers instead of monkeys to generate random sequences of letters. But, even then, it is still utterly unlikely that even a sentence of Shakespeare could be written at random, and then who would find it, anyway? Well, the computer would have to find it, and so the computer must be able to read and write.

Andrej Karpathy, in his blog post, demonstrates how a very simple program can generate writing by using Recurrent Neural Networks. Neural networks? They are programs which do not proceed in a fixed, linear way, but are instead networks of artificial neurons communicating with each other via artificial synapses, and in this way they can “learn”. Recurrent? That means the network feeds what it has just produced back into itself. Recurrent Neural Network? That’s a neural network with a memory: what it has just read shapes how it reads what comes next, a little like a brain!
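The recurrent loop is simpler than it sounds. Here is a minimal sketch in Python with numpy, in the spirit of Karpathy’s min-char-rnn but not his actual code; the weight names, sizes and random initialisation are illustrative assumptions. One “step” reads a single character and updates the network’s memory:

import numpy as np

# Illustrative sizes: a "vocabulary" of possible characters and a small hidden memory.
vocab_size, hidden_size = 65, 100

# Randomly initialised weights; "reading" the text over and over would adjust them.
Wxh = np.random.randn(hidden_size, vocab_size) * 0.01   # input  -> hidden
Whh = np.random.randn(hidden_size, hidden_size) * 0.01   # hidden -> hidden (the recurrent loop)
Why = np.random.randn(vocab_size, hidden_size) * 0.01    # hidden -> output
bh, by = np.zeros(hidden_size), np.zeros(vocab_size)

def step(char_index, h):
    """Read one character, update the memory h, and guess the next character."""
    x = np.zeros(vocab_size)
    x[char_index] = 1.0                      # one-hot encoding of the current character
    h = np.tanh(Wxh @ x + Whh @ h + bh)      # new memory mixes the new letter with the old memory
    y = Why @ h + by                         # a score for every possible next character
    p = np.exp(y) / np.sum(np.exp(y))        # softmax turns scores into probabilities
    return p, h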

Karpathy gives an example of a program that can “read” (be trained on) the complete works of Shakespeare letter by letter (only 4.4 MB), over and over again, and “learn” which letter is likely to come next. It can then output Shakespeare-sounding text based on what it has “learnt”. Karpathy’s program is amazingly small and not designed as a serious attempt at literature generation, but its simplicity is the key to its impressive results:

PANDARUS:
Alas, I think he shall be come approached and the day
When little srain would be attain’d into being never fed,
And who is but a chain and subjects of his death,
I should not sleep.

Second Senator:
They are away this miseries, produced upon my soul,
Breaking and strongly should be buried, when I perish
The earth and thoughts of many states.

DUKE VINCENTIO:
Well, your wit is in the care of side and that.

The program has learned how to format scripts and that the characters’ names go above their lines. Amazingly, with no human input other than the Bard’s, this very simple program has generated something that reads like Shakespeare … except that it is meaningless. It is monkey Shakespeare.
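How does it write? Roughly: given its memory so far, the network assigns a probability to every possible next character, picks one at random according to those probabilities, and then feeds its own choice back in as if it had just read it. A sketch, reusing the illustrative step function above (and, like it, an assumption rather than Karpathy’s code):

def sample(seed_index, h, length=200):
    """Write text one character at a time, feeding each guess back in as input."""
    indices = [seed_index]
    for _ in range(length):
        p, h = step(indices[-1], h)              # probabilities for the next character
        nxt = np.random.choice(vocab_size, p=p)  # weighted dice roll over the alphabet
        indices.append(int(nxt))
    return indices  # map back to letters with the corpus's index-to-character table

h0 = np.zeros(hidden_size)
print(sample(0, h0, length=50))  # untrained weights give monkey-twaddle; training gives monkey Shakespeare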

Karpathy shows us the results of training the program on an English translation of War and Peace. The program has to read and re-read the text many times. After one reading it produces:

tyntd-iafhatawiaoihrdemot lytdws e ,tfti, astai f ogoh eoase rrranbyne ‘nhthnee e
plia tklrgd t o idoe ns,smtt h ne etie h,hregtrs nigtike,aoaenns lng

It has learnt how to space words. After 300 readings it makes:

“Tmont thithey” fomesscerliund
Keushey. Thom here
sheulke, anmerenith ol sivh I lalterthend Bleipile shuwy fil on aseterlome
coaniogennc Phe lism thond hon at. MeiDimorotion in ther thize.”

It has now learned some common combinations of letters like “tion” and its punctuation is improving.

After 2000 readings you get proper words:

“Why do what that day,” replied Natasha, and wishing to himself the fact the
princess, Princess Mary was easier, fed in had oftened him.
Pierre aking his soul came to the packs and drove up his father-in-law women.

It hasn’t understood question marks or which characters are female, so can we believe that it understands that Princess Mary was somehow “easier” than Natasha, or that Pierre’s soul might “ak”?

Karpathy says that “topics and themes that span multiple words” are learned as the machine rereads War and Peace even more and considers longer and longer chunks of the text. Then it could learn which characters are most often followed by “he” and which by “she”, but could it learn topics and themes? Could it learn that global politics can be reflected in the characters’ changing beliefs? Or is “drove up his father-in-law women” just a higher-level “gggzornonplatt”?

As Duke Vincentio would say, “Your wit is in the care of side and that.”

About the author

Gordon Collins has been a market risk analyst, a maths lecturer, an English teacher in Japan and a computer graphics researcher specialising in virtual humans. He has degrees in mathematics as well as an MA in Creative Writing from the University of East Anglia. He has been longlisted for the Fish short story prize, and twice for the Galley Beggar Press short story prize. He has had short stories published in Riptide Vol 3, UEA Creative Writing Anthology 2010, Infinity’s Kitchen, Liars’ League, Unthology 3, 6 and 9 and Unthank Books’ The End. See www.zipple.co.uk for more.