Like Watson, Like Us


Let’s say you’re playing Jeopardy!  The category is U.S. Cities.  The clue: “Its largest airport is named for a World War II hero, its second largest for a World War II battle.” 

If you’d asked me, “For whom or what are Chicago’s largest airports named?” I wouldn’t have been able to tell you.  I’d never really thought about where the names “O’Hare” and “Midway” came from.  But when I read the aforementioned category and clue, I thought about pairs of airports in U.S. cities I could name, and when Chicago’s popped into my head, it was clear that one name was a person’s and the other was... and that’s when I remembered my history.  Though I didn’t know the answer, I would have wagered quite a lot on that guess.

Anyone who’s watched Jeopardy! knows that, while knowing facts is certainly a help, the key to performing well is figuring out what the clues are asking for and what hints they might be offering.  That intuitive, improvisational reasoning is the same kind that, say, doctors use in making a diagnosis.  But even the smartest, best-educated doctor with access to every cutting-edge tool and test can’t exercise that kind of clever hypothesizing with the entirety of human medical knowledge stored verbatim in her memory.  What if she could?  Or what if she had a computer that could?

That’s what the people behind IBM’s Watson supercomputer are after.  Though Watson is best known for defeating the two winningest champs in Jeopardy! history in a televised match aired over three nights in February 2011, Jeopardy! was just the practice round.  Watson’s first “real-world” application will be as a diagnostic aid.

But before you start either imagining the friendship you’ll have with your C-3PO-like robo-pal or planning your escape from the Matrix, grok this:  Watson may have won the tournament, but it also wagered nearly $1000 on this response to that airport clue: “What is Toronto?????”

This answer isn’t exactly a dumb one.  It’s just that Watson plays Jeopardy! very differently from you or me.  It scans the vast store of documents in its memory banks, using thousands of different algorithms simultaneously to identify statistical correlations between the words in the clue and the texts it has stored.  The strengths of those correlations point Watson toward a response and give it a quantifiable measure of confidence, which it uses to decide whether or not to answer and, when prompted, how much to wager.
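If it helps to picture that decision loop, here is a minimal sketch in Python.  It is purely illustrative: the word-overlap scoring, the buzz threshold, and the wager formula below are invented stand-ins, not IBM’s actual methods, which relied on thousands of machine-learned scoring algorithms.

```python
# Illustrative toy model of confidence-driven answering and wagering.
# Every name, number, and scoring rule here is invented for illustration;
# Watson's real pipeline combined thousands of scorers and learned models.

def score_candidates(clue_words, candidates):
    """Give each candidate answer a crude 'correlation' score:
    the fraction of clue words that co-occur with it in our toy corpus."""
    scores = {}
    for candidate, associated_words in candidates.items():
        overlap = len(set(clue_words) & associated_words)
        scores[candidate] = overlap / len(clue_words)
    return scores

def decide(scores, buzz_threshold=0.5, bankroll=10000):
    """Pick the best-scoring candidate, then treat its score as a confidence
    value that governs whether to answer and how much to wager."""
    best, confidence = max(scores.items(), key=lambda kv: kv[1])
    should_answer = confidence >= buzz_threshold
    wager = int(bankroll * confidence) if should_answer else 0
    return best, confidence, should_answer, wager

# Toy data standing in for Watson's vast document store.
clue = ["largest", "airport", "named", "world", "war", "hero", "battle"]
candidates = {
    "Chicago": {"airport", "named", "hero", "battle", "midway", "o'hare"},
    "Toronto": {"airport", "largest", "world", "baseball", "canada"},
}

best, confidence, should_answer, wager = decide(score_candidates(clue, candidates))
print(best, round(confidence, 2), should_answer, wager)
```

Run on that toy data, the sketch lands on Chicago with only middling confidence and sizes its wager accordingly; the real system did something analogous at vastly greater scale and with far subtler measures of correlation.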

One thing that statistical approach doesn’t do is allow Watson to identify the rules by which Jeopardy! categories operate.  The relationships between categories and their clues can vary – and the show’s writers come up with novel ones all the time.  “U.S. Cities” clearly restricts the possible responses to cities in the United States.  But a category like “Before and After” implies a rule for transformation (A: “1985 Pulitzer Prize-winning Playwrights Horizons musical featuring the inventor of peanut butter.”  Q: “What is Sunday in the Park with George Washington Carver?”).

With no hard-and-fast rule for how categories relate to clues, IBM couldn’t give Watson a system for navigating them beyond simply folding the categories’ words into its correlation computations.  Since Watson is capable of performing 80 trillion calculations per second, it’s hard for us to identify precisely what leads it to each of its conclusions.  But the “U.S. Cities” category wouldn’t necessarily have led it to rule out Toronto, since the Canadian city (in North America, with an American League baseball team) appears often in close proximity to “U.S.” and related terms, and there is in fact a Toronto, Ohio.

There’s a debate amongst scientists over whether AI will ever duplicate or surpass human intelligence.  What our “smartest” machines do now is clearly not the same thing we do.  Or is it?  That’s the problem.  Though I can describe my mental process as I did above, my subjective experience and the objective reality of my cognition are two different things, and scientists still don’t know exactly how the brain works at its foundations.  We do know how computers work.

Though the extreme complexity of Watson’s software means that it’s impossible for human beings to perfectly predict or explain its “choices,” each of the many bits of code interacting inside it is doing so with mathematical precision, and each calculation it makes is ultimately predetermined by the laws of physics and the architecture of its microprocessors.  We like to think that something different is happening inside us, that our sense of self is real, that we are making choices that are somehow not merely reflexive, electro-chemical responses, even if they are complex ones.  But we don’t know.  Right now, we can say pretty confidently that Watson doesn’t experience consciousness as we do.  But could we build a machine that does?  Or could we build a machine whose behavior is indistinguishable from our own without it actually experiencing subjective consciousness as we do?  Can we achieve such a thing without discovering, once and for all, that we, ourselves, are nothing more than the sum of the physical interactions in our bodies?  If we can’t, is that proof of the existence of the soul?

As Madeleine George and her characters in The (curious case of the) Watson Intelligence intuit, the pursuit of smarter machines may actually be more about finding out who we really are than what they might become.

Alec Strum
Associate Literary Manager
