
Non-Computable You: What you do that artificial intelligence never will, by Robert J. Marks ★★★★
Artificial intelligence (AI) is a hot topic in the news and on the internet, and has been for at least the last fifty years. Now, with more powerful computer systems and increasingly sophisticated algorithms producing programs that feign sentience, the question of AI’s capabilities has become a more serious consideration. As a leading developer of “intelligent” systems, Robert Marks quickly puts to rest any notion that machines could actually think. Simply stated, machines can only process algorithmic instructions, which excludes any ability to show creativity, ingenuity, or thinking “outside of the box”. Thus, the sci-fi fear of Terminator-style robots ruling over humans should remain within the realm of fiction.
Marks does a masterful job of showing how computers will never be able to compete with humans on the thinking tasks that matter most. After dispensing with notions that AI will someday become creative, he offers 12 filters to quell the hype of the AI movement; actually, these 12 filters apply to much of life and to discerning truth from fiction. A section follows in which he discusses the history of AI, which was both informative and enjoyable to read. Next comes a section exploring the thinking of Gödel, Turing, and Chaitin, which is relevant to grasping the more theoretical aspects of AI, though it is sometimes a bit muddy. The discussion of the halting oracle and of elegant programs was intriguing, but not something I would challenge my mind with, even on a rainy day. I felt that Marks’s tax collector example relied on faulty logic that produced an impossible answer.
The ethics of AI was most intriguing to me, and I’m thankful that there are those who are asking these questions. If an AI system makes an “error” (such as a self-driving automobile that hits and kills a pedestrian), who is to blame? The human mind shows a vastly greater ability to manage ambiguous situations than any algorithmic device will ever possess. Thus, caution must be exercised against excessive use of AI. We probably won’t be seeing robots taking over the world and achieving independence from man, but other sorts of challenges can be expected as AI becomes more commonplace in society.
This text brought back to mind a book I read many years ago, and which I hope Marks has read: Technopoly by Neil Postman. Postman describes how technology is used to solve problems that man has posed, such as: how can I travel somewhere faster than at present, or how can I communicate with someone on the other side of the planet? In a technopoly, technology is instead used to create solutions where there is no problem. Postman offers multiple examples in his book. Perhaps AI has migrated from being simply a technological tool to a technopoly issue, providing solutions to issues that are not problems. Perhaps.
This book was a delightful read, and very thought-provoking. For those curious about where AI might be headed, this would be the book of choice for exploring those curiosities.
I have been involved in AI (not so much nowadays), or what I prefer to call “robotics” (with internet-oriented AI being the cognitive part of robotics), and have published a couple of papers on AI and wider issues in the ASA journal in the past.
These kinds of arguments about whether a machine can be made to think have been around since the early days of AI. Joseph Weizenbaum, in the MIT AI Lab in the 1960s, wrote the ELIZA program, which simulated a Rogerian psychotherapist. That line of work has progressed immensely, all the way to ChatGPT. Weizenbaum also wrote a book critical of AI, Computer Power and Human Reason (Freeman, 1976), that brought the issue to wide attention.
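ELIZA’s “therapist” was built on simple pattern-and-substitution rules, not any understanding of language. The toy sketch below is illustrative only (the rules and wording are invented here, not Weizenbaum’s actual script), but it shows the kind of mechanical reflection that made conversations feel intelligent:

```python
import re

# Illustrative ELIZA-style rules: a regex paired with a Rogerian
# reflection template. These are made-up examples, not the original script.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching reflection, or a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I feel anxious about AI"))  # Why do you feel anxious about AI?
print(respond("Nice weather today"))       # Please go on.
```

The program never models what anxiety is; it merely echoes the user’s words back, which is exactly the point Weizenbaum later pressed against overestimating such systems.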
“Simply stated, machines will only be able to process algorithmic instructions”
Looking at this from the other end, the way the life sciences are progressing, it appears that biochemical and genetic descriptions can also be expressed in the form of information, and it is not inconceivable that eventually algorithmic descriptions of the function of the human brain could be given. Such algorithms would be quite different from the ones that implement present-day AI programs.
The deeper issue is not the nuts and bolts needed to sustain intelligence physically, or even at the software or informational level, but the quest for an answer to the question of what constitutes the essence of intelligence, of sentience. We know it involves self-awareness. Can it occur algorithmically? It would involve a higher form of logic than the first-order predicate calculus that is common in AI and in culture generally. Self-reference brings in a different kind of logic – the paradox of the liar. Mathematically, it involves the kind of self-referencing in Gödel’s incompleteness proof. I see no reason, in principle, why self-awareness might not be possible for an artificial intelligence, but to try to envision it in the context of the present state of the art in AI is difficult (or such AI would be here already), and that appears to be the point of this book. If anything, it serves as a counter to a second wave of overblown expectations for the homunculus, the “image of the beast”. Yet AI is advanced enough to have immense social implications, including for deception.
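The self-referential trap mentioned above can be made concrete with Turing’s diagonal argument against a halting decider, a close cousin of Gödel’s construction. This is a minimal sketch, assuming a hypothetical `claimed_halts` predicate supplied by whoever claims to have solved the problem:

```python
def diagonal_against(claimed_halts):
    """Build a program that does the opposite of whatever the claimed
    halting decider predicts about it (Turing's diagonal trick)."""
    def diagonal():
        if claimed_halts(diagonal):
            while True:      # decider predicted "halts" -> loop forever
                pass
        return "halted"      # decider predicted "loops" -> halt at once
    return diagonal

# A toy decider that answers "loops" for every program. The diagonal
# program promptly halts, contradicting it; a decider that answered
# "halts" would be defeated the other way.
d = diagonal_against(lambda prog: False)
print(d())  # halted
```

Whatever the decider answers, the diagonal program contradicts it, so no algorithm can be correct about every program – the same liar-style self-reference the comment points to.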
Another aspect of this is that in the 1990s, artificial neural network (ANN) approaches began to dominate AI research, as people like Geoff Hinton made progress on machine learning with ANNs. This kind of learning, however, is not transparent: unlike the earlier expert system programs, you can’t ask an ANN why or how it knows what it knows. (You can ask an expert system, but such programs are hardly “intelligent”.) So this issue will continue on for some time.