In a transatlantic flight some time back, I happened to queue up the movie “Her,” a futuristic film about a character named Theodore who falls in love with his operating system (something like a much more sophisticated version of Apple’s Siri). What makes the movie really interesting, though, is that the operating system, which names itself Samantha, actually falls in love with Theodore—crazy, obsessive, possessive, can’t-sleep-at-night in love.
In the beginning, I assumed that this operating system was merely able to make Theodore believe it had feelings for him. Well into the movie, however, when Samantha is forced to admit that she is simultaneously in love with several others as well, I began to get it: This operating system is so advanced that it can not only learn like a human being but can, in fact, develop emotions like one.
When the movie ended, I took off my headphones and asked my company's CTO, a tech visionary who had also watched the movie, how far in the future this sort of thing might be. My friend, of course, comes from the real world of agile development, scrums, coding sprints, bug reports, etc., so I won't go into any detail about his answer. Suffice it to say, I got the impression it may be a while.
“I’ll take Before and After for 50, Alex”
If anyone might know otherwise, it could very well be Mike Rhodin, who heads up IBM's ultra-sophisticated Watson group. In a recent presentation at Harvard Law School on disruptive innovations, Rhodin described a major paradigm shift that is about to occur in computing: a move from a deterministic programming model (we write code in an effort to determine an outcome) to a probabilistic one, in which statistical analysis of massive datasets helps computers go far beyond the manipulation of relational data.
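Rhodin's distinction can be sketched in code. The toy Python below is my own illustration, not anything from IBM: it contrasts a deterministic rule, where the programmer hard-codes the outcome, with a tiny probabilistic classifier (a bare-bones Naive Bayes) that estimates the most likely label from a few made-up training sentences.

```python
import math
from collections import Counter

# Deterministic model: the programmer encodes the outcome directly.
def deterministic_topic(text):
    if "jeopardy" in text.lower():
        return "game-show"
    return "other"

# Probabilistic model: the outcome is estimated from data.
# A tiny, made-up labeled "corpus" for illustration only.
TRAINING = [
    ("watson won the game show", "game-show"),
    ("contestants answer trivia on the show", "game-show"),
    ("the compiler parses source code", "computing"),
    ("statistical models analyze large datasets", "computing"),
]

def train(examples):
    word_counts = {}            # label -> Counter of word occurrences
    label_counts = Counter()    # label -> number of training examples
    for text, label in examples:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.split())
    return word_counts, label_counts

def probabilistic_topic(text, word_counts, label_counts):
    total = sum(label_counts.values())
    vocab = {w for counts in word_counts.values() for w in counts}
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        # log P(label) + sum of log P(word | label), add-one smoothing
        score = math.log(count / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

word_counts, label_counts = train(TRAINING)
print(deterministic_topic("Watson competed on Jeopardy"))   # game-show
print(probabilistic_topic("models trained on big datasets",
                          word_counts, label_counts))       # computing
```

The deterministic function can only ever return what its author anticipated; the probabilistic one generalizes, labeling a sentence it has never seen by weighing word statistics. Watson's actual machinery is vastly more sophisticated, but the shift in mindset is the same.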
What Rhodin and his team are pursuing is cognitive software—true artificial intelligence that can analyze words in their syntactical relationships and derive meaning. It was the Watson group that built the supercomputer that beat Harvard and MIT students on the TV game show Jeopardy.
At first glance, the idea that a computer would have more trivial facts on hand than a human being doesn't seem that impressive, but winning at Jeopardy goes far beyond regurgitation. Category names (like the one used in the heading above) are often sophisticated puns in their own right, acting as clues to an answer that a contestant must phrase as a question. Forming that question draws on everything from history to popular culture and beyond, all tied together in the kind of complex relationship that would previously have required a human brain (and a really smart one, at that) to figure out.
One-liners and the Road Ahead
Rhodin is quick to point out that Jeopardy questions—tricky as they are—are one-liners and were therefore within the scope of the Watson computer at the time of the Jeopardy challenge. He also notes that Watson got some of the questions wrong. Longer, more complex blocks of language pose a harder problem: analyzing them requires much more sophistication—the ability to formulate questions about the language itself, questions that can only then be answered. For this type of analysis, the ability to think and learn like a human being must be replicated in a computer, and that sort of functionality is what the Watson team is pursuing.
Back to Samantha
If Rhodin believes that anything like Samantha is possible in, say, the next two or three lifetimes, he certainly didn't imply it in his Harvard presentation. In fact, Rhodin goes so far as to say that technologies like Watson won't replace lawyers, doctors, scientists, and the like but will instead enhance their ability to synthesize enormous quantities of data on the fly and to arrive, themselves, at better conclusions.
Rhodin, of course, is referring specifically to situational matters that are simply too complex to be encoded in any kind of system—deterministic, probabilistic, or otherwise.
The compelling question . . .
The compelling question for me is this: If a cognitive system will, in the foreseeable future, be able to deconstruct a block of text the way a human being can, is it possible that such a system could actually draft language—say, a blog article on AI? While I'm no expert, that sort of functionality sounds almost Samantha-like. So let's just say I'm not too worried about being displaced just yet.