Who Cares?

“Soft computing” from Neo-Surreal by Jenny Odell, produced by Colpa Press.

Dear computer,

Who cares? I have learned to care not only for you but about you. I care about you because I care about your survival, which is also the survival of the humans who make, unmake, and remake machines just like you. I care about your ability to connect me to other people, and to connect other people to me, with and through you. I care about where you came together and where you will come apart. I care because I understand these two things to be deeply intertwined, that the possibility of your being is the possibility of your waste. I care to try, against the forces of capitalism, to not contribute to that possibility of waste which only allows for more possibilities of you. I don’t care about the whole of computers because I care only about you. Yet, in spite of and due to whatever artificial intelligence you may have, you don’t care about me.

I’m reminded of something that John Haugeland wrote in 1979: “the trouble with artificial intelligence is that computers don’t give a damn.”1 That is, computers do not understand that they make and are made by the very worlds they are a part of. Recently I read a book by Brian Cantwell Smith, who builds upon this claim: computers will only give a damn once “they develop committed and deferential existential engagement with the world.”2 Smith proposes that the ability to make judgements is the ability to be committed to the world. Not only must intelligent beings defer to the world, they must also treat the world with humility and compassion.3 In other words, for a computer to give a damn it must follow the lead of its world, it must be humble in the face of this world, and it must be sympathetic towards it. Ultimately, for Smith, computers would have the capacity to care if only they had the capacity to make “good judgements.” Still, if AI systems continue to be in the world, regardless of what they are being towards, it seems important that the ways in which they are thought and built should be deeply rooted in notions of care. By this I mean that the people working across disciplines in AI, and more generally, disciplines in computation, should themselves be making “good” judgements as they continue to prop machines like you up in the world. They should care for their computer because it allows for the possibility of their work in the first place.

The more interesting question for us here is not whether you can care but how you might be in the world were the project of artificial intelligence already rooted in structures of care. In other words, in a world where computers are not always and already subservient to the people that make them, where notions of control and command do not operate on a one-way street, and where computers do not pile up as waste when their material is no longer of use. A complication of John Haugeland’s claim might be that the trouble with artificial intelligence is not only that computers don’t give a damn but also that humans don’t give a damn about computers.

Perhaps it will be helpful to think of you, in María Puig de la Bellacasa’s terms, as a “matter of care.” That is, care not as something I do for you, or something you do for me, but as something that is generated between us. For Puig de la Bellacasa, care is not simple: “thinking of matters of fact as matters of care does not require translation into a fixed explanatory vision or a normative stance (moral or epistemological), it can be a speculative commitment to think about how things would be different if they generated care.”4 Framing artificial intelligence as such might generate new perspectives on how we understand it, what it sees, what we see, and the ways in which it shapes and un-shapes humans into realities with one another. The very word “care” carries with it questions of “who will do the work of care, as well as how to do it and for whom.” You, computer, carry in your construction many participants, but your ubiquity is due to your coming together on the global computing assembly line.

Who else cares about you? Perhaps caring about you is precluded by caring about the industry of computing, an industry built on obsolescence and waste. But Puig de la Bellacasa’s provocation for “matters of care” asks that care “in technoscience and naturecultures could mean more than the responsible maintenance of technology.” Taking up the computer as a matter of care would not only be to put forth a kind of responsibility for its maintenance but also to engage with its coming into being. If computers come together on the global computing assembly line, who can be held accountable? Who is harmed by the coming together of artificial intelligence? Who is involved? Before we think through these things, Puig de la Bellacasa asks us to first think through “what are we encouraging caring for?” How does caring for something also influence how it is perceived, and how can researchers in science and technology studies position a matter so that it is perceived as something that requires attention? To see you as a matter of care is to question the care that has already been built into you.

All of these questions are at odds with the technological moment we occupy, in which devices like you are easily forgotten, replaced by newer versions or even altogether by shinier and more frictionless inventions. What might it look like to build computers that are not meant to be in service of or in place of something human, and instead to build computers which are meant to be in service with or alongside? With you, either I am in complete control, utilizing your every affordance to get something done, or I am submissive to your complications, failures, and errors no matter how many times I click to quit or how hard I slam the mouse on the desk.

In taking up Smith’s remark that computational systems “represent the world in ways that matter to us, not to them,”5 the question remains how computers might represent the world as world if they were able to. And since this is indeed something that computers can’t yet do, the computer has become an object which asks us to “remake our relationship to it.”6 What are the possibilities for caring for you in a way that sustains your “life”? What if you lasted long enough for me to pass you down to a future generation? What if, instead of anthropomorphizing your every part, your mechanics were seen as metaphor to help humans better understand each other? How might the computer be a reminder that people should care, not simply, but in step with Puig de la Bellacasa, generatively, for each other and for the things that they are concerned about? And what would it even mean to care about computers when so much of the project of computation involves having computers “care” about humans?

It seems obvious that as long as computers are built to be in service of, the limits of their intelligence will always be the limits of their suzerain. And if these limits are unavoidable, which is not to say that they can’t be pushed, perhaps the objective should not only be to realize the scope and the limits of what computers can do but, given these limits, to consider what workers across disciplines of computation themselves can do for computers.

Yours,

Emma Rae Bruml Norton

P.S. This letter was made possible by you, computer, but also by the following people who have energized my thinking here from the very start: Elaine Gan, who introduced me to care through feminist technoscience; Joseph Lemelin, who introduced me to the history of artificial intelligence through philosophy; and Meg Miller, who introduced new ways of writing into the letter itself.

This piece appears in the Are.na Annual 2021, themed “tend.” 

[1] Haugeland, John. "Understanding Natural Language." The Journal of Philosophy 76, no. 11 (1979): 619.

[2] Smith, Brian Cantwell. The Promise of Artificial Intelligence: Reckoning and Judgment (Cambridge, Massachusetts: The MIT Press, 2019): 108.

[3] Smith, 146.

[4] Puig de la Bellacasa, María. "Matters of Care in Technoscience: Assembling Neglected Things." Social Studies of Science 41, no. 1 (2011): 96.

[5] Smith, 108.

[6] Puig de la Bellacasa, 100.

Emma practices programming, writing, research, and teaching. Since 2018 Emma has maintained a study of the computer mouse. Put another way, Emma studies the space between humans and computers. Learn more about Emma’s work at https://marceldochamp.net.