
The Future of Content Is Intelligent: But Is It Artificial?

By Dave Davis


In 1980, Daniel J. Boorstin, a great scholar and historical polymath, held the office of Librarian of Congress. While serving in this capacity, Boorstin gave a speech entitled “Gresham’s Law: Knowledge or Information?” which was printed in a little booklet I still have and treasure. 

In economics, Gresham’s Law refers to the observation that “bad money drives out good,” meaning that — if both kinds are available — a debased currency will tend to win out over a more valuable one, leading to baleful effects for the consuming public. Dr. Boorstin applied this observation to what he saw as the declining value of much that was being written and published in his time. Essentially, the reading public was being “informed” at the expense of gaining reliable, useful knowledge. He argued that the English language itself supported this distinction: “We expect to be entertained and also to be informed. But we cannot be knowledged!” It is a brilliant little speech (only about 10 pages) and rewards re-reading even in our time.

I took the liberty of using the following abstract from Boorstin’s speech:

“There is a distinction to be made between knowledge and information. Knowledge is orderly and cumulative; information is random and miscellaneous and may be collected simply because it is there. The information industry is flourishing, but knowledge institutions — colleges, universities, and libraries — go begging. The knowledge industry is actually being transformed and, to some extent, displaced by the information industry. Libraries must make use of computer technology, but they must also remain fortresses of knowledge which is still preserved mainly in books. Similarly, libraries must be repositories of information but also places of refuge from the tidal waves of information — and misinformation. The autonomous reader, amusing and “knowledging” himself, should be the be-all and end-all of libraries.”

And I supplied it as the seed text to the text generator available at zyro.com, to see whether autonomous AI, particularly when it comes to content automation, is still too far off in the future. Here’s the result:

“He can take care not only about his library collection but also how he treats those who disagree with him on matters important enough that this problem has arisen: what are your views? What kind(s) do you think people ought/need or seek for themselves before considering their questions put forward as matter of discussion at conventions like GenCon? It would have been great if John Hopkins had published such an article as Dr. Bell’s on this subject … The National Library Association has come down very hard against them for their attitude toward free speech. They are not just wrong about censorship (which includes almost all other forms), rather they callously destroy private property … they don’t think that any work will ever stand up at Librarians’ Corner when printed out after having gone through hundreds or thousands test presses before publication!”

The tone and subtlety of Boorstin’s prose stand out in sharp contrast to this essentially robotic, babble-y output. Boorstin wrote in the prophetic mode and offered hard-won wisdom. This AI typically offers nothing but stilted prose. It may inform, but it adds nothing to knowledge.
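
For anyone who wants to run a similar experiment, here is a minimal sketch of feeding seed text to an openly available language model. It uses GPT-2 through the Hugging Face transformers library purely as a stand-in; zyro.com’s generator is a web tool whose internals aren’t public, so the model choice, sampling settings, and output length below are illustrative assumptions rather than a reproduction of that service.

# A minimal sketch of the seed-text experiment, using GPT-2 via the Hugging Face
# "transformers" library as a stand-in for zyro.com's generator (whose internals
# are not public). Model choice and sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The opening lines of Boorstin's abstract serve as the seed text.
seed = ("There is a distinction to be made between knowledge and information. "
        "Knowledge is orderly and cumulative; information is random and "
        "miscellaneous and may be collected simply because it is there.")

# Sample a continuation; like the output quoted above, it tends to wander.
result = generator(seed, max_new_tokens=150, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])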

While I was brainstorming the topic, I threw in another seed text:

“AI is the content writer of the future. Blog posts, baseball game writeups, and marketing collateral are already being written by AI programs, while user manuals and introductory courses can’t be far behind. Copyright is king! I’ll be out of a job.”

And here’s what the text generator at zyro.com offered back:

“The value of information: No other subject has become so vital to us that we’ve turned into mindless consumers because what’s important enough to sell hasn’t been sufficiently valued as an objective source for thinking or buying anything in modern society. In my book “Brain Food” (see link at bottom), Robert Daubert* outlines this relationship between knowledge production — I) more people think there will inevitably come a time when their brain stores data about them without ever requiring further analysis; II) technology makes it possible now for you alone on earth — your child, spouse … to analyze virtually any available dataset with relative ease once they get beyond age 3.”

*Note: As far as I can tell, there is no such person, nor any book by that title written by him.

Although I’ve seen worse — and have possibly written worse — I have certainly seen much better expository prose. At best, this is what I’d call “starter text” — stuff you’d throw up on the whiteboard in a brainstorming session. I’m not knocking it; maybe there’s a nugget or two in there that one could follow up on. Also, the price is right — it is free and open for anyone to use.

I’m obviously not the first person to take up this topic. In August, New Yorker contributor John Seabrook published an intriguing article on whether a machine can learn to write for the New Yorker, which looked at some of the more powerful AI writers, including one called GPT-2 from OpenAI. GPT-2 “trains” itself on extensive datasets — in this case, the non-fiction prose published in the New Yorker over the past few decades — to produce text that closely emulates the writing found in the dataset. In other words, it was trained toward writing second-rate New Yorker articles, ones very much in need of a heavy editorial hand.
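
To make that “train itself” step a bit more concrete, here is a rough sketch of fine-tuning GPT-2 on a folder of plain-text articles with the Hugging Face transformers and datasets libraries, so that sampled text starts to echo the style of the corpus. The corpus file name, model size, and hyperparameters are illustrative assumptions; this is not a reproduction of the New Yorker experiment, whose exact setup isn’t public.

# A rough sketch of adapting GPT-2 to a plain-text corpus so that generated text
# begins to emulate that corpus's style. "corpus.txt" is a hypothetical file with
# one document (or paragraph) per line; hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 ships without a padding token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Load and tokenize the corpus.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# Standard causal language-modeling fine-tune; mlm=False means plain next-token loss.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-emulator", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()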

While ever more powerful tools will continue to be developed, AI will not write me — or anyone else — out of a job. Overall, these examples support the proposition that Boorstin was on to something important regarding whether we can gain knowledge from these machines. Simply put, we can be informed — even by machines — but we cannot gain knowledge, let alone wisdom, from them. Not now, and maybe not ever.
