This is a tool to generate and analyze language used to describe fine art and other cultural expression. It takes a set of words as a prompt and then generates a medium-length set of sentences that approximate the training data: 57 years of actual art reviews from a Famous Art Magazine.
It's not really artificial intelligence, but tools like this get used as examples of AI in mainstream news articles. The interpreter has no idea what the prompt or the training data mean. What it does have is a billion examples of which words get used next to each other in typical sentences. It generates new text based on deep matrices of probability.
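The actual model isn't shown here, but the core idea of generating text from observed next-word probabilities can be sketched as a word-level Markov chain. This is a simplified stand-in, not the real implementation; the tiny corpus and function names are illustrative assumptions:

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Record which words were observed to follow each `order`-word context."""
    chain = defaultdict(list)
    words = corpus.split()
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, seed, length=15, order=2):
    """Walk the chain from a seed context, sampling one next word at a time."""
    out = list(seed)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: this context never appeared in training
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A toy stand-in for decades of art reviews.
corpus = ("the painting evokes a quiet tension between form and feeling "
          "the painting resists easy reading and rewards patient looking")
chain = build_chain(corpus)
print(generate(chain, ("the", "painting")))
```

Because the chain only knows co-occurrence counts, it can recombine the source sentences into plausible new ones without any grasp of what "tension" or "form" mean, which is exactly the behavior described above.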
I chose art reviews to train with because the language is distinctive and uniquely human. The descriptions of art include intent, emotion, technique, and impact, if any. Art reviews can also veer off into heavy academic and esoteric jargon that poses a challenge for traditionally trained language models.
The mistakes it makes are just as interesting as any successes. Generating plausible sentences has been in reach for a while. What I see in the results here is the beginning of larger constructs of language that simulate a thesis and supporting statements. The loops and glitches can be poetic.
There are also new lessons to be learned about the original texts through a tool like this. Bias, prejudice, and judgment can be found in some of the generated results because the originals use the same language. The training process may have its own effect as well. The language of art reviews has changed over the years as much as human culture has changed, and this generator ends up combining perspectives from multiple decades in novel and sometimes problematic ways.