Has anyone else been following this thread of alarmist articles in the media? Facebook deployed some "virtual assistant" technology or "bots" that apparently developed their own "language" to communicate between themselves more effectively. Oh no! Next they'll be conspiring against us!
From Dave Gershgorn at Quartz:
Recent headlines and news articles have depicted research from Facebook’s artificial intelligence lab as the sci-fi beginnings of killer AI: one bot is learning to talk to another bot in a secret language so complex that it had to shut down. A BGR headline reads “Facebook engineers panic, pull plug on AI after bots develop their own language,” and Digital Journal claims in its article that “There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.”
Most of the coverage has been ridiculous: not just a waste of time or useless but actually misleading. These writers and publishers are using words to amuse, alarm, and provoke but not to explain. Classic mystification.
But Dave does a decent job here, explaining that, in fact, there is nothing nefarious going on:
The bots did exactly what they were programmed to do: haggle over fake objects. They developed a new way of communicating with each other, because computers don’t speak English—just like we use x to stand in for a number in math, the bots were using other letters to stand in for longer words and ideas, like “iii” for “want” or “orange.”
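That's really all there is to it. Here's a toy sketch, entirely my own invention rather than Facebook's actual protocol, of how such a "language" could encode meaning with nothing more than repeated tokens:

```python
# A toy sketch (my invention, NOT Facebook's actual protocol): a degenerate
# "language" where repeating a token encodes a quantity, the way the bots'
# "iii" stood in for a longer word or idea.

def encode(wants):
    # {"hat": 3, "ball": 1} -> "hat i i i , ball i"
    parts = []
    for item, count in wants.items():
        parts.append(item + " " + " ".join(["i"] * count))
    return " , ".join(parts)

def decode(message):
    # Inverse: count the "i" tokens that follow each item name.
    wants = {}
    for chunk in message.split(" , "):
        tokens = chunk.split()
        wants[tokens[0]] = tokens.count("i")
    return wants

msg = encode({"hat": 3, "ball": 1})
print(msg)                                   # hat i i i , ball i
assert decode(msg) == {"hat": 3, "ball": 1}  # round-trips cleanly
```

Gibberish to us; perfectly regular and decodable to a machine. No secrets, no conspiracy.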
He's exactly right. But why does he conclude with another attention-grabbing and misleading hook:
Perhaps the more concerning piece of news should be not that Facebook’s bots invented a machine-readable language, but rather that its successful English-speaking bot eventually learned to lie.
Sigh. Here's the link if you want to look for yourself.
The biggest problem with all this coverage, IMO, is that even when they DO explain the technology in terms that make sense, they don't really talk about what matters: utility and history. They deliberately blur the boundary between the present and the future and ignore the past. The history of the technology can help us understand context and trends; it's essential to sorting the signal from all the noise. History can show us how advanced research today -- which seems like science fiction -- relates to applications we rely on every day.
The thing about AI is that the label continually attaches to a set of technologies AND APPLICATIONS that are not quite feasible yet. But, and here is the key, over time that same set of technologies finds appropriate applications that become quite commonplace. The first category remains mysterious while the second is so basic that it appears ho-hum. And as we use it and take it for granted, we forget that just a few years ago it was still mysterious "AI" in the lab. In other words, when we adopt a technology we also adopt a set of metaphors and stories that explain it, at least well enough to make it useful. And when we do, it no longer seems “intelligent” or “artificially intelligent” at all, just useful.
This is an example of TRYING to explain how a virtual assistant can be trained to negotiate with a human user: “Instead of outright saying what it wanted, sometimes the AI would feign interest in a worthless object, only to later concede it for something that it really wanted.” And here is an example of how it learned this strategy: “Facebook isn’t sure whether it learned from the human hagglers or whether it stumbled upon the trick accidentally, but either way when the tactic worked, it was rewarded.” Dave is at his best here.
But he could do better still by being more transparent about anthropomorphizing the computer. Notice how these are stories that explain how HUMANS see what the FB service is doing. What it is ACTUALLY doing is following some pretty simple patterns, including “learning,” the acquisition of new patterns. I wouldn’t call this “learning” or “lying” or even “feigning interest in something”; it is simply manipulating symbols in a conversation in a predictable, repeatable way to achieve desired outcomes.
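To make that concrete, here is a minimal sketch of reward-driven tactic selection. Everything in it is assumed for illustration: the tactic names, the payoffs, the epsilon-greedy loop. FAIR's actual system is a neural dialogue model, not a lookup table, but the bookkeeping is the same in spirit:

```python
import random

# Assumed mechanics, for illustration only -- tactic names and payoffs are
# invented; this is not FAIR's training code.
tactics = {"ask_directly": 0.0, "feign_interest": 0.0}  # running value estimates
counts = {t: 0 for t in tactics}

def negotiate(tactic):
    # Stand-in for a real negotiation: in this toy world, feigning interest
    # simply happens to yield slightly better outcomes on average.
    base = {"ask_directly": 0.5, "feign_interest": 0.6}[tactic]
    return base + random.uniform(-0.3, 0.3)

for _ in range(10_000):
    # Epsilon-greedy: mostly replay the best-looking tactic, sometimes explore.
    if random.random() < 0.1:
        tactic = random.choice(list(tactics))
    else:
        tactic = max(tactics, key=tactics.get)
    reward = negotiate(tactic)
    counts[tactic] += 1
    # "When the tactic worked, it was rewarded": an incremental average, nothing more.
    tactics[tactic] += (reward - tactics[tactic]) / counts[tactic]

print(tactics)  # "feign_interest" ends up with the higher estimate
```

Nothing in there lies or feigns anything. A number attached to a pattern went up when the pattern paid off, so the pattern gets replayed. That's the whole drama.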
What the author SHOULD be pointing out is how this same technology is already being used at Amazon, Google, and Facebook, for example, to predict what you might want and to show you just the right things at the right time. We love it. After all, it's useful. We adopt it because it serves us, and the more useful it is the more we use it. And although we are less aware of this, it's also incredibly useful to eCommerce merchants. These predictive algorithms are really expert negotiators, getting you to buy what you don't really need and to pay more than you want to... all the while making you feel like you're getting a great deal. Seriously.
It's a "virtuous" cycle. Or is it? Think about it.
Why don't we read about THAT in the press? One problem is that it's kind of boring. An eCommerce web app is way less interesting than a sexy new conversational agent that can produce and process human speech or text. In fact, it's so boring that the technology is virtually invisible. The mobile or web experiences where these algorithms are being applied are not newsworthy because there isn't anything new there. No novelty.
But apart from novelty, what about impact? What about scale? What about the money? These are alternative frames we could use to educate and stimulate conversations about services that only keep growing. Behind the basic shopping experience are predictive algorithms that are essentially "negotiating" with a few billion users a day and learning more and more in the process. Unfortunately, the right thing to do is to demystify it: make it a LOT less dramatic, and focus on impact, scale, and the money. What's hard is making that into compelling, interesting copy.
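For the curious, here's the unglamorous shape of that machinery: a toy sketch with invented features, weights, and margins. No real merchant's ranking model is anywhere near this simple, but the objective, predicted purchase probability weighted by what the sale is worth to the store, is the point:

```python
import math

# A toy sketch with invented features, weights, and margins -- not any real
# merchant's model. It ranks items by P(you buy) * merchant margin.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical per-item features: (user affinity, recent views, price fit)
catalog = {
    "noise-cancelling headphones": ((0.9, 0.7, 0.4), 40.0),  # (features, margin $)
    "phone case":                  ((0.3, 0.9, 0.8), 12.0),
    "4K monitor":                  ((0.6, 0.2, 0.5), 55.0),
}
WEIGHTS = (2.0, 1.5, 1.0)  # learned offline in a real system; fixed here
BIAS = -1.5

def expected_margin(features, margin):
    logit = sum(w * f for w, f in zip(WEIGHTS, features)) + BIAS
    return sigmoid(logit) * margin  # predicted purchase prob * margin

# Show the user the "right things at the right time" -- for the store.
for name, (feats, margin) in sorted(
        catalog.items(), key=lambda kv: -expected_margin(*kv[1])):
    print(f"{name}: expected margin ${expected_margin(feats, margin):.2f}")
```

Multiply that by a few billion sessions a day and the "boring" story starts to look like the important one.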
But if you DO want to think about the myriad ways things can go wrong, there is plenty to write about. Instead of worrying about whether an AI will take over and control us in some scary future, we ought to be concerned that it has ALREADY been baked into our everyday experiences and is affecting us right now. We ought to worry about bias, for example, and about the criteria and accountability for decision-making, decisions that affect us. We ought to worry about who owns the data that is used to train the system… and who benefits from its use.