If you were reading the Kitchener Waterloo Record this past weekend, you might have seen a piece by UWaterloo English’s Dr. Marcel O’Gorman. Titled “In defence of uselessness,” the piece builds on his comments at the recent TrueNorth conference. The full text of the keynote is reproduced below:
In Defence of Uselessness
“In such a state, nothing useless exists.”
– Jacques Ellul, The Technological Society
In his 2016 book La mélodie du tic-tac et autres bonnes raisons de perdre son temps, Pierre Cassou-Noguès provides a literary and intellectual history of how to waste time. Or in his words, how to “traîne” — comment traîner. Smoking, daydreaming, listening intently to the tick-tock of a clock. These are all examples he gives of traîner, which is a very difficult word to translate. Slacking or loafing don’t quite fit the bill.
As Cassou-Noguès wrote to me in an e-mail conversation: “I am not sure there is an English equivalent to « traîner ». It would be at the center of the constellation [of] procrastination (not answering an email right now), lounging (doing nothing at home), hanging around (teenagers in the street), going slow (on a bicycle).”
More than a discourse on laziness, this speculative book transforms traîner into a political act, and Cassou-Noguès ultimately calls for a movement to preserve the right to traîne.
Avons-nous toujours traîné et continuerons-nous à traîner? Je n’en suis pas certain. “Traîner” est un phénomène moderne, corrélative d’un certain stade de la technologie, un temps perdu dans une économie des machines et qui tend peut-être à disparaître aujourd’hui. Où d’autres machines se développent qui ne nous laisseront bientôt plus traîner. Il faudrait alors defendre notre droit à traîner.
Have we always traîned and will we continue to traîne? I’m not sure. Traîning is a modern phenomenon that correlates with a certain technological stage, a lost time in an economy of machines, and today it may well be on the verge of disappearance. Or other machines will emerge to keep us from traîning. We must therefore defend our right to traîne.
Cassou-Noguès’ words seem to address Jacques Ellul’s ultimate conviction that in a techno-capitalist state, “nothing useless exists.” Even the traîning of Cassou-Noguès makes uselessness useful by transforming uselessness into a form of resistance against technocapitalism.
Needless to say, uselessness is very difficult to achieve. Take, for example, the useless machine proposed by Marvin Minsky and ultimately fabricated by Claude Shannon while they were working at Bell Labs in 1952. As Minsky describes it, this “ultimate machine” is a small box with a single switch. When the switch is flipped on, a hand emerges from the box and flips it back off.
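The box’s entire behavior reduces to a single rule of negative feedback: whatever state a human sets, the machine negates it. As a toy illustration — a hypothetical Python simulation for the sake of argument, not the wiring or firmware of any actual kit — the whole “ultimate machine” fits in a few lines:

```python
# Toy simulation of Minsky's "ultimate machine": its only behavior
# is to undo whatever a human does to its switch.

class UselessBox:
    def __init__(self):
        self.switch_on = False

    def flip_on(self):
        """A human flips the switch on; the box responds at once."""
        self.switch_on = True
        self._respond()

    def _respond(self):
        """The hand emerges and flips the switch back off."""
        if self.switch_on:
            self.switch_on = False

box = UselessBox()
box.flip_on()
print(box.switch_on)  # prints False: the machine has already switched itself off
```

Read this way, the machine is pure negative feedback with no product: the loop exists only to restore its own off state.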
Minsky’s reference to the box as the “ultimate machine” can only be taken as irony. As he revealed in a 2011 video interview, “Somehow this machine got a lot of publicity, because most people thought that it was perhaps the most useless machine ever made so far.” But indeed, there is something “ultimate” about this machine, which points to a final and fundamental principle. As many observers have pointed out, Minsky’s useless box may be viewed as a machine that thinks for itself, an embodiment of the very field that Minsky pioneered at Bell Labs, the field of Artificial Intelligence.
Minsky’s Ultimate Machine is useful then as a model. But such a view is only possible if one ascribes a certain intelligence to this very simple gizmo. And that requires a leap of the imagination. This is where Arthur C. Clarke comes in. He, more than anyone, has played a crucial role in defining the narrative and legacy of the Ultimate Machine. After encountering the useless box on Shannon’s desk, Clarke wrote the following: “The psychological effect, if you do not know what to expect, is devastating. There is something unspeakably sinister about a machine that does nothing — absolutely nothing — except switch itself off.” It remains a mystery why Clarke saw the box as sinister. Did it strike him as an uncanny breach of the line between living and non-living entities? Or was it simply that the machine seemed to be resisting human will? Or was it both?
The concept of a machine that turns itself off was not new when Arthur C. Clarke encountered the useless box for the first time. Consider for example the bi-metallic thermostat or the steam engine governor, both of which served as examples of negative feedback in Norbert Wiener’s treatise on cybernetics. But what Clarke observed in the Ultimate Machine was more than control and feedback; he thought he was witnessing something sinister, a machine with its own volition.
In fact, Clarke claimed that the sinister box literally drove people mad: “distinguished scientists and engineers have taken days to get over it,” he wrote. “Some retired to professions which still had a future, such as basket-weaving, bee-keeping, truffle-hunting, or water-divining.” Perhaps the Ultimate Machine is simply an object of transference for Clarke’s frustrating attempt to fully grasp cybernetics. As Marvin Minsky put it in a paper entitled “Matter, Mind, and Models,” “If one thoroughly understands a machine or a program, he finds no urge to attribute ‘volition’ to it. If one does not understand it so well, he must supply an incomplete model for explanation.”
What is most pertinent here about Clarke’s tale of the Ultimate Machine driving scientists to distraction is that it addresses, if only obliquely, the usefulness of different types of activities, be they mathematics, tinkering, basket-weaving, or truffle-hunting.
Ironically, designing useless machines was precisely the sort of usefully useless activity that Minsky and Shannon were supposed to be undertaking at Bell Labs, which still supports curiosity-driven research to this day. In his book Kitten Clone: Inside Alcatel-Lucent, Douglas Coupland describes his own visit to Bell Labs, where he observed a small gathering of scientists tackling the complexity of how to win a hot dog eating contest. As Coupland put it, “this is just the sort of problem scientifically gifted people take and solve, and then they extrapolate what they learned from the process and convert the knowledge into a useful project.”
Creating a space for useless pursuits seems to be a good idea. Bell Labs’ commitment to supporting pure research has led to the invention of the transistor, the laser, information theory, the Unix operating system, and the programming languages C and C++.
But Bell Labs is not the only institution that has famously supported what might be viewed as useless research. In 1939, the educational reformer Abraham Flexner published an article entitled “The Usefulness of Useless Knowledge” in Harper’s Magazine, in which he defended the work of the Princeton Institute for Advanced Study. Founded in 1930, the Institute has fostered the work of Albert Einstein, Kurt Gödel, Erwin Panofsky, and Clifford Geertz, among others. Wary of the growing specialization of knowledge, Flexner asks in his essay whether the “conception of what is useful may not have become too narrow to be adequate to the roaming and capricious possibilities of the human spirit” (544).
Flexner provides several examples of how pure research, untrammelled by a specific disciplinary field, has led to groundbreaking results, including the work of Clerk Maxwell and Heinrich Hertz on the mystery of electromagnetic waves. This work allowed Marconi to invent the radio, and Flexner questions where the credit is due. “Hertz and Maxwell could invent nothing,” he wrote, “but it was their useless theoretical work which was seized upon by a clever technician and which has created new means for communication, utility, and amusement by which men whose merits are relatively slight have obtained and earned millions” (544). “Who were the useful men,” Flexner asks. “Not Marconi, but Clerk Maxwell and Heinrich Hertz. Hertz and Maxwell were geniuses without thought of use. Marconi was a clever inventor with no thought but use” (545).
A central problem with Flexner’s treatise on pure research becomes apparent when we consider who directed the Institute for Advanced Study from 1947 to 1966: J. Robert Oppenheimer. Known for his work on The Manhattan Project, Oppenheimer provides a stellar example of how supposedly useless theoretical knowledge – such as the attempt to understand how uranium is converted into barium – can lead to catastrophic forms of utilitarianism.
When Flexner wrote the Harper’s article in 1939, he could not have anticipated how the supposedly useless work of his colleagues at Princeton would lead to the mass destruction at Hiroshima and Nagasaki only six years later. But Flexner was not entirely naïve; he did anticipate the potential for pure research to be applied in warfare. Referring to such discoveries as dynamite and the airplane, Flexner suggests, “the folly of man, not the intention of the scientists, is responsible for the destructive use of the agents employed in modern warfare” (546).
Untroubled by this nagging detail, Flexner concludes his essay by describing the Institute as “a paradise for scholars who, like poets and musicians, have won the right to do as they please and who accomplish most when enabled to do so” (552). Flexner’s comparison of scientists to poets and musicians calls for closer scrutiny.
In his aptly titled book, The Usefulness of the Useless, Nuccio Ordine, inspired by Flexner’s essay, argues that “together with humanists, scientists have also played, and still do, a most important role in the battle against the dictatorship of profit, to defend the liberty and gratuitousness of knowledge and research” (7). Ordine’s book is a small encyclopedic catalog of the value of uselessness as prescribed by such luminaries as Ovid, Plato, Dante, Shakespeare, Baudelaire, Borges, and more. Like Pierre Cassou-Noguès, Ordine understands uselessness as a form of resistance, “an antidote to the barbarism of profit that has gone so far as to corrupt our social relations and our most intimate affections” (23).
What Ordine’s book makes clear is that there is a vast difference between a useless poem and a useless science experiment. In fact, the very idea of a “useless science experiment” seems almost unthinkable in today’s research climate, where science experiments are always conducted in an innovation ecology designed to shepherd research toward the market. My own institution, the University of Waterloo, prides itself on the swiftness of its innovation system, as evidenced in the institution’s massive investment in student-run tech start-ups. You would be hard pressed to find such investments in the arts and humanities, which by comparison, are relatively useless and immune to innovation.
In Part 2 of Ordine’s book, entitled “The University as Company, the Student as Client,” he cites at length the work of Alexis de Tocqueville, who wrote Democracy in America between 1835 and 1840. A key feature of Tocqueville’s evisceration of the Land of the Free is his observation that in this new capitalist democracy, there is no time for theory. “[H]ardly anyone in the United States,” he proclaims, “devotes himself to the essentially theoretical and abstract portion of human knowledge.” They consult only those books which are “speedily read and which require no scholarly investigation to be understood.” This allows them to gain whatever knowledge is useful in the moment for the purpose of economic gain.
For Tocqueville, an antidote for the unreflective, unrestrained life of the capitalist is the meditative life of the humanist, who is more likely to consult less useful books that are not speedily read.
The concept of uselessness is of course culturally constructed. If today the arts and humanities are seen as useless, it is because they do not possess the necessary cultural capital to be considered profitable. This is precisely what makes the arts and humanities usefully useless. By maintaining a theoretical distance from the innovation ecosystem, non-STEM disciplines provide a safe space from which to examine the impacts, actual and potential, of research conducted in the innovation ecology.
The problem, of course, is that the useless mode of reflection afforded by the humanities risks slowing down the innovation ecology. Don Ihde quite aptly anticipated this problem in his book Bodies in Technology, where he writes,
A first response to this proposal might well be: but who wants any philosophers among the generals? the research and development team? the science policy boards? The implication is, of course, that philosophers will simply ‘gum up the works.’ And the excuse will be that philosophers are not technical experts, and any normative considerations this early will certainly slow things down—a sort of Amish effect. Of course, the objections in turn imply the continuance of a status quo among the technocrats, who remain free to develop anything whatsoever and free from reflective considerations (105).
In spite of Ihde’s cynicism, for the past decade in the Critical Media Lab, I have been asking what it would mean to inject “reflective considerations” into the design processes of technological development. This work recently led to an invitation to deliver a keynote address at the True North Conference in Kitchener-Waterloo, a region with aspirations of being Silicon Valley North.
Under the theme of “Tech for Good,” the goal of this conference was for the Canadian tech community to “examine the values that guide technology innovation. And to redefine tech as a force for good.” In this context, I was asked to deliver a very focused talk on “A.I. and Ethics.” I was also asked to finish on a positive note. “Build a bridge,” they said. That was my directive, and they laughed genially when I accused them of being too pushy.
But as a scholar of subjects that are useless to the tech community – e.g., philosophy, rhetoric, literature – I couldn’t help but seize this rare opportunity to address some big questions about tech from the perspective of these seemingly useless disciplines, beginning with the question of uselessness itself. What resulted was not a talk about A.I. and ethics, but a reflection on the ethos of the tech community.
In my talk, I introduced yet another useless box. In a blog post entitled “AI Behaving Badly,” Matthew Biggins looks at the problems that can arise when pure AI research is applied in what seem to be useful contexts. His examples include the following: Microsoft’s Twitter-bot Tay, which almost immediately devolved into a sexist Hitler supporter; autonomous vehicles that must decide which human beings to spare in a crash; and military AI that threatens to start another arms race.
In order to illustrate the problem that each of these examples embodies, Biggins embedded a video into his blog post of the “Arduino Knife-Wielding Tentacle” developed by a mysterious tinkerer known as Outa Space Man. This is a box containing a microcontroller and servo motor that power a mechanical pink tentacle wielding a Swiss army knife. Once the box is turned on, the tentacle flails about erratically, and there is no easy way to turn it off without risking injury. This is, by any measure, a useless box in its own right, but it is also single-mindedly murderous. If Arthur C. Clarke thought Shannon’s box was sinister, what would he think of Outa Space Man’s creation?
Kelsey D. Atherton, in an article written for Popular Science, uses the knife-wielding tentacle as an opportunity to ruminate on the future of AI. Half-jokingly, he asks the following existential question: “Will robots ever really understand the human condition? Is it possible, for example, for a machine to know both terrible purpose and utter futility at the same time?”
Maybe what Atherton has put his finger on here is what Abraham Flexner called “the folly of man,” an all-too-human impulse that allows pure science to be plied for the sake of pure destruction. “Tremble before the knowledge that a human made this rubbery nothing for fun,” writes Atherton.
I would like to think that the “Arduino Knife-Wielding Tentacle” is a contemporary equivalent of Minsky and Shannon’s useless box. But the tentacle seems to be rooted in the uselessness of arts and humanities more than in the uselessness of science. It is a product of theoretical reflection, and it asks us to consider not the ethics of research into generalized A.I., but the personal motivations of the researchers. We need more useless boxes like this, objects-to-think-with that promote speculation about technology while simultaneously adding a little humour into the growing murmur on A.I. and ethics.
It should come as no surprise that Minsky and Shannon’s Ultimate Machine has been rendered useful as a prefab maker kit ready for mass-consumption. After all, nothing is useless if it can generate a profit. I have purchased dozens of these kits myself, which are a perfect way to teach humanities students how to both solder circuits and hack machines, turning them into philosophical objects. But at no time in these lessons do I bring up the concept of ethics.
The problem with ethics is that they can too easily be used as a way of hiding motives or excusing behaviours. Ethics, as they are commonly understood in the tech industry, are something you tack onto a project at the end to make sure it’s socially acceptable. Ethics is a checkbox that someone fills out in an office far away from the engineering, design and marketing people. As a matter of fact, ethics in the tech community might be completely useless, except that ethics can create a barrier to innovation. And it sometimes feels like that is the only reason why people want to talk about ethics.
Rather than talking about ethics then, I prefer to talk about ethos, a concept that is well-known in useless academic disciplines like Philosophy and Rhetoric. Ethos, as the Oxford Online Dictionary tells us, defines the “characteristic spirit of a culture, era, community [or person] as manifested in its attitudes and aspirations.”
Ethos determines why someone is motivated to develop a technology in the first place. What are that person’s attitudes and aspirations? Are they guided by profit, by utility, by a single-minded dedication to innovation for the sake of innovation? Or are they guided by other motives that exceed the boundaries of the tech community? What is the ethos, for example, of a community or person who wants to produce an intelligent non-human agent that might very well have no practical use for human beings?
This question, which is a question of trust, is why the idea of general A.I. should provoke tentacular fear.
In the May 14 issue of The New Yorker, staff writer Tad Friend has a very detailed article about the perils and promise of AI. He quotes chess grandmaster Garry Kasparov, who, in spite of sour grapes about his bout with Deep Blue, predicts wistfully that “using AI for ‘the more menial aspects’ of reasoning will free us, elevating our cognition ‘toward creativity, curiosity, beauty, and joy.’”
The problem with Kasparov’s gambit is that, as a result of our devotion to tech innovation, we might be losing the ability to appreciate, experience or even understand forms of creativity, curiosity, beauty, and joy that we have come to understand as useless. If A.I. frees us from “menial” cognitive tasks, will we spend more time reading Dostoevsky, volunteering at a soup kitchen, and pursuing art lessons? Or will we use that freedom to shop for suggested items on Amazon, browse prefab playlists on YouTube, and play some future version of HQ that taps into our social media preferences?
The most effective way to make a machine that thinks like a human is to redefine what it means to be human in the first place. If humans develop an ethos that ignores useless things like philosophy, art, literature, or social justice – things that are very hard to program into machines — then researchers can take a giant leap toward achieving General AI. But at what cost?
Coincidentally, sandwiched into Friend’s New Yorker article is a cartoon by P.C. Vey. It shows two mobsters strong-arming a student who is wearing a cap and gown. His hands are bound. His feet are stuck in a tub of cement. As they prepare to hurl the student off a dock, one of the thugs explains his fate: “It’s not personal. The boss just doesn’t like seeing people in so much debt for such a useless degree.”
To me, this cartoon is a very happy – and useful — coincidence. What the tech community needs is more uselessness. Rather than putting all its effort into leaping forward, maybe it should slow down and look around. In my own Canadian city, the tech community can start by looking at other communities, from the Grand Valley women’s prison in South Kitchener, to the Old Order Mennonite farms in North Waterloo, keeping in mind that all of this is on land that was once promised to Indigenous people.
There’s a lot of talk about transforming the region where I live into a second Silicon Valley. A Silicon Valley of the North. But why would anyone wish this upon themselves?
A recent story in The Guardian describes why Gregory Stevens, a Palo Alto pastor, resigned from his church. To put it in his words, “I believe Palo Alto is a ghetto of wealth, power, and elitist liberalism by proxy, meaning that many community members claim to want to fight for social justice issues, but that desire doesn’t translate into action.”
The question of what is good, of how to live the good life, is an ancient question asked by many thinkers over the past few thousand years. It’s by asking these sorts of useless questions that a person develops an admirable ethos in the first place, an ethos that is guided by what is good for others and not just for oneself. An ethos that asks, Who is left out? Who is in need? An ethos that asks, What are the consequences – social, psychological, environmental – of my technological innovations? This is an ethos that might truly develop “Tech for Good.” But it’s going to take a lot of useless thinking to get there.