Madeleine George on The (curious case of the) Watson Intelligence
When I used to teach playwriting in the New York City public schools, there was one exercise I loved best. It always got the most fabulous writing out of kids, whether I was working with first graders or seniors. It was called "The Object Monologue," and it began with me leading the class through a guided visualization, in which I asked them to imagine themselves inside a room they knew well, and to walk around that room in their mind's eye until an object called out to them. "Now walk over to your object," I would instruct them, "and get right up next to it. Notice everything about it. Now lean in close to your object, so close that your nose is almost touching it. Now... jump inside your object and become your object." There was always a gratifying gasp and recoil at this point, as a roomful of third graders with their eyes closed reacted physically to the impact of becoming hairbrushes, Beanie Babies, and basketballs. "Feel how your body feels now that you are this object," I said. "Now look around you. What do you see through your object's eyes?" After a moment, I told them to write a monologue in which their object expressed its deepest desire. The writing produced by this exercise was invariably hyper-dramatic – verging on Greek in its keening, single-minded intensity. "Use me!" the hairbrushes howled. "All I want is for you to use me!" "When you come home from school today, please don't forget to pick me up and kiss me," begged the Beanie Babies. "If you play with that soccer ball again instead of me, I'm going to let all my air out and lie here dead," the basketballs threatened.
When I talked to my playwright friend about this phenomenon – actually it was Anne Washburn, whose Mr. Burns began this season at Playwrights – she said, "Well, it makes sense that kids are so good at writing objects. Who knows better than kids what it's like to be completely dependent on others?"
But we're living through a sea change in our relationship with objects. Our most cherished objects no longer seem inert and dependent on us, waiting breathlessly in an empty room for us to come and make them useful. Increasingly, our objects seem to dictate the terms of their own use, and we're growing ever more dependent on them for our survival. The (curious case of the) Watson Intelligence is my attempt to puzzle out the problem of dependency – on devices, political institutions, and other people.
Dependency is, of course, an emotional paradox: we can't be fulfilled without making ourselves vulnerable to love and connection with others, but we are bound at some point to hate the ones we love when they inevitably frustrate us, abandon us, or die. In our current technological moment, when nothing can't be made faster, sleeker, and smarter, the temptation can be overwhelming to try to solve this problem with technology. After all, our devices are growing more person-like every day, and it seems like a more and more self-evidently awesome idea to merge the attentive, loving aspects of a human being with the intelligent, reliable aspects of a machine to get a perfect companion. This has been a tempting idea for as long as there have been machines; in The Watson Intelligence, two characters, one contemporary and one Victorian, both find themselves falling down the rabbit hole of this seduction.
Unfortunately for my characters, though, inefficiency, incomprehensibility, and risk are in fact the meaning of human relationships, not their failings. Relationships can't be "improved" without being radically impoverished: a relationship purged of incomprehensibility and risk is, in effect, a mirror. I think about this with some anxiety every time I'm on the subway – that famously unpredictable zone of urban encounter – and I see every single passenger gazing rapturously down into their own palm.
We're only going to rely more and more on our machines. As we become more deeply enmeshed in massive systems whose scope and complexity are beyond the ken of any human mind (big data, global networks), it's going to be up to highly intelligent machines – IBM's dazzling Watson, for one – to navigate and interpret the world for us. Our personal devices have already made themselves indispensable; I like to think that I maintain a healthy mistrust of my phone, but the sounds that have come out of me when it has failed are not so unlike the sounds the hairbrushes and Beanie Babies made back at PS 65. For me, the big question is what it will mean for our commitments to each other – our risky, mysterious, ultimately transformative commitments to each other – if we become dependent on the safe, reflective "commitments" offered to us by our machines. Even if we're able to accept that our devices don't love us, what happens if we fall in love with them? And if we do, what will we mean by love?