Saturday, August 12, 2017

Artificial Intelligence is Out of Control at Facebook

Has anyone else been following this thread of alarmist articles in the media?  Facebook deployed some "virtual assistant" technology or "bots" that apparently developed their own "language" to communicate between themselves more effectively.  Oh no!  Next they'll be conspiring against us!

From Dave Gershgorn at Quartz:
Recent headlines and news articles have depicted research from Facebook’s artificial intelligence lab as the sci-fi beginnings of killer AI: one bot is learning to talk to another bot in a secret language so complex that it had to shut down. A BGR headline reads “Facebook engineers panic, pull plug on AI after bots develop their own language,” and Digital Journal claims in its article that “There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.” 
Most of the coverage has been ridiculous:  not just useless but actively misleading.  These writers and publishers are using words to amuse, alarm and provoke but not to explain.  Classic mystification.

But Dave does a decent job here, explaining that, in fact, there is nothing nefarious going on:
The bots did exactly what they were programmed to do: haggle over fake objects. They developed a new way of communicating with each other, because computers don’t speak English—just like we use x to stand in for a number in math, the bots were using other letters to stand in for longer words and ideas, like “iii” for “want” or “orange.”
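Dave's substitution point can be made concrete with a toy repetition code -- something I've invented purely for illustration, with no resemblance to Facebook's actual implementation -- where repeating an item's token encodes a quantity, the same way "iii" could stand in for a count or a word:

```python
# Toy repetition code (invented for illustration; NOT Facebook's code).
# Repeating an item's token encodes how many of it the speaker wants,
# so "ball ball ball book" plays the role of shorthand like "iii".
def encode(wants):
    """wants: dict mapping item name -> desired quantity."""
    return " ".join(tok for item, n in wants.items() for tok in [item] * n)

def decode(message):
    counts = {}
    for token in message.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

msg = encode({"ball": 3, "book": 1})  # "ball ball ball book"
assert decode(msg) == {"ball": 3, "book": 1}
```

Opaque to a human reader, but perfectly unambiguous to both parties -- which is all the bots were ever optimizing for.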
Exactly.  But then he concludes with another attention-grabbing and misleading hook:
Perhaps the more concerning piece of news should be not that Facebook’s bots invented a machine-readable language, but rather that its successful English-speaking bot eventually learned to lie.
Sigh.  Here's the link if you want to look for yourself.

The biggest problem with all this coverage, in my opinion, is that even when they DO explain the technology in terms that make sense, they don't talk about what really matters:  utility and history.  They deliberately blur the boundary between the present and the future and ignore the past.  The history of a technology helps us understand context and trends;  it's essential to sorting the signal from all the noise.  History can show us how advanced research today -- which seems like science fiction -- relates to applications we rely on every day.

The thing about AI is that the label continually attaches to a set of technologies AND APPLICATIONS that are not quite feasible; but, and here is the key, over time those same technologies find appropriate applications that become quite commonplace.  The first category remains mysterious while the second is so basic that it appears ho-hum.  And as we use it and take it for granted, we forget that just a few years ago it was still mysterious "AI" in the lab.  In other words, when we adopt a technology we also adopt a set of metaphors and stories that explain it, at least well enough to make it useful.  And when we do, it no longer seems “intelligent” or “artificially intelligent” at all, just useful.

This is an example of TRYING to explain how a virtual assistant can be trained to negotiate with a human user:  "Instead of outright saying what it wanted, sometimes the AI would feign interest in a worthless object, only to later concede it for something that it really wanted.”  And here is an example of how it learned this strategy:  "Facebook isn’t sure whether it learned from the human hagglers or whether it stumbled upon the trick accidentally, but either way when the tactic worked, it was rewarded.”  Dave is at his best here.
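The reward dynamic Dave describes can be sketched as a two-armed bandit -- a deliberately minimal stand-in for Facebook's actual training setup, with payoff numbers I've made up -- in which a "feint" tactic that happens to succeed more often simply gets reinforced, without anyone programming it in:

```python
import random

# Minimal sketch of reward-driven tactic selection (epsilon-greedy bandit).
# The payoff probabilities are invented for illustration; the point is only
# that whichever tactic works gets rewarded and therefore chosen more often.
def train(trials=2000, seed=0):
    rng = random.Random(seed)
    payoff = {"honest": 0.4, "feint": 0.6}   # assumed success rates
    value = {"honest": 0.0, "feint": 0.0}    # learned value estimates
    for _ in range(trials):
        if rng.random() < 0.1:               # explore occasionally
            tactic = rng.choice(list(value))
        else:                                # otherwise exploit the best estimate
            tactic = max(value, key=value.get)
        reward = 1.0 if rng.random() < payoff[tactic] else 0.0
        value[tactic] += 0.05 * (reward - value[tactic])  # incremental update
    return value

learned = train()
```

After enough haggles, `learned["feint"]` pulls ahead -- "feigning interest" emerges from nothing but reward, which is exactly the unglamorous explanation.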

But he could do better still by being more transparent about anthropomorphizing the computer.  Notice that these are stories explaining how HUMANS see what the FB service is doing.  What it is ACTUALLY doing is following some pretty simple patterns, including “learning,” that is, the acquisition of new patterns.  I’m not convinced this is “learning” or “lying” or even “feigning interest in something”; it is simply manipulating symbols in a conversation in a predictable and repeatable way to achieve desired outcomes.

What the author SHOULD be pointing out is how this same technology is already being used at Amazon, Google and Facebook for example, to predict what you might want and to show you just the right things at the right time.  We love it.  After all, it's useful.  We adopt it because it serves us, and the more useful it is the more we use it.  And although we are less aware of this, it's also incredibly useful to the eCommerce merchants.  These predictive algorithms are really expert negotiators, getting you to buy what you don't really need and to pay more than you want to... all the while making you feel like you're getting a great deal.  Seriously.
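To see how unglamorous this prediction machinery can be, here is a toy co-occurrence recommender -- invented baskets and all, resembling no company's production system -- that suggests items frequently bought together with the one you're looking at:

```python
from collections import Counter
from itertools import combinations

# Toy co-occurrence recommender (illustration only; data is made up).
# Items bought together in past baskets get suggested alongside each other.
baskets = [
    {"tent", "sleeping bag", "lantern"},
    {"tent", "sleeping bag"},
    {"lantern", "batteries"},
]

pair_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1  # count each co-purchase in both directions
        pair_counts[(b, a)] += 1

def recommend(item, k=2):
    # rank other items by how often they co-occurred with `item`
    scored = [(other, n) for (i, other), n in pair_counts.items() if i == item]
    return [other for other, n in sorted(scored, key=lambda x: -x[1])[:k]]

recs = recommend("tent")  # most frequent co-purchase first
```

A few lines of counting, yet scale it to a few billion baskets and you have the "expert negotiator" quietly nudging every shopping session.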

It's a "virtuous" cycle.  Or is it?  Think about it.

Why don't we read about THAT in the press?  One problem is that it's kind of boring.  An eCommerce web app is way less interesting than a sexy new conversational agent which can produce and process human speech or text.  In fact, it's so boring that the technology is virtually invisible.  The mobile or web experiences where these algorithms are being applied are not newsworthy because there isn't anything new here.  No novelty.

But apart from novelty, what about impact?  What about scale?  What about the money?  These are alternative frames we could use to educate and to stimulate conversation about services that only keep growing.  Behind the basic shopping experience are predictive algorithms essentially "negotiating" with a few billion users a day and learning more and more in the process.  The right thing to do, unfortunately, is to demystify it:  make it a LOT less dramatic, and focus on impact, scale, and the money.  What's hard is making that into compelling and interesting copy.

But if you DO want to think about the myriad ways things can go wrong, there is plenty to write about.  Instead of worrying about whether an AI will take over and control us in some scary future, we ought to be concerned that it has ALREADY been baked into our everyday experiences and is affecting us ALREADY.  We ought to worry about bias, for example, and about the criteria and accountability behind decisions that affect us.  We ought to worry about who owns the data that is used to train the system… and who benefits from its use.

Saturday, December 31, 2016

Media, Corporations and Democracy

Robert Reich has posted another excellent video, this one entitled "Trump and the Media." In it he outlines how Trump is using power, the law and public opinion to undermine the media and consolidate his own power. Many of my friends and colleagues have responded to this thoughtful analysis with a big "So what?!" They point out that the media has always been biased, that it's never been truly independent, that it's all owned by a couple of corporations and none of them can be trusted. So what's the big deal?

This reasoning is fallacious for three reasons.  First, no matter how bad the media gets, it is no less important to our communities, our society and our Republic.  Second, the fact that outlets are biased, not independent and privately owned does not mean that they MUST be undeserving of our trust.  And finally, by generalizing and giving up, we may actually make matters worse for the few remaining journalists, editors and publishers who are still fulfilling their public mission.  Cynicism about the media is not the same as skepticism.

Wednesday, December 28, 2016

Automation and the Future of Work

My mother sent me this TedX video of David Autor, an MIT Professor of Economics.  It's great.  I highly recommend it. Here is his paper on the subject offering even more detail.  

The impact of technology is all around us and seems only to accelerate, leaving entire generations in lower-paid, less-skilled jobs than they had only 30 years ago.  This chart clearly shows waves of losses and gains in US employment by sector between 1940 and 2010.

Saturday, December 24, 2016

More on the Facebook Fake News Story

Finally we have John Herrman's post from the Times that gets to the root of the "Fake News" story.  He gets it right and summarizes it better than I did in this previous post.  There are really three problems.  The first is simply the nature of the World Wide Web and the Internet, which, like any truly global market, is practically unregulated in important ways;  not much we can do about fake news here.  The second is user-generated content published on Facebook, which is and will remain un-curated -- the responsibility of Facebook users -- much of it ridiculous, unsubstantiated opinion and outright lies.  It'll be impossible for Facebook to be the arbiter of truth in this domain either.

Sunday, November 13, 2016

The Role of Facebook and Social Media in the Election of 2016 (edited)

This much is true:  "The post falsely claiming that the Pope endorsed Trump has more than 868,000 Facebook shares, while the story debunking it has 33,000.”  And it may have had a significant effect on Trump's triumph, claims Cliff Kuang in this FastCoDesign post.  He's mistaken, however, when he asserts that this problem is a design flaw in Facebook specifically or in social media generally.  Blaming Facebook for the impact of fake news on society is like blaming the post office or AT&T for gossip transmitted by mail or by phone.  Unless, of course, we consider the scale of "sharing" afforded by social media...

Modern web and mobile experiences make it easier than ever to create and consume social content... but they make it harder and harder to understand the relationships between sources of information, and virtually impossible to confirm the source and integrity of more and more content on the Internet.  While this is true in general, he's got it wrong in this case:  this is not really a Facebook problem or a Facebook design problem.  He's not thinking clearly about who actually creates social media and why.  If we understand what the web is, what Facebook is, who owns what and who pays for it all, it's pretty clear that everything functions rather well -- at least with respect to its design intent.  It's just that neither the web nor social media is actually DESIGNED to deliver reliable and verifiable content.  This is a publishing problem which happens to include design, not a design problem per se, and certainly not a technology problem. (1)

Saturday, September 24, 2016

Reflections on the Passing of John Rassias

I got the email from Professor Nancy Vickers:  John was gone.  Deep breath…

So what was I to make of that, I wondered?  All that motion, but to what end?  Boundless passion, for sure.  So much heart.  Love.  But was there progress?  Or just a lot of heat?  The older I get the more I want to know: what was that all about?  What have we learned?

Sunday, August 21, 2016

Race, Segregation, and Stories About Faceless Institutions, Families with Faces AND Evidence

I just finished reading this article in the Times today about a "broad yet little explored fact of American segregation."  I like that:  the FACT of segregation.  And the story of how even "affluent black families, freed from the restrictions of low income, often end up living in poor and segregated communities anyway."  I liked it a lot.  I learned something new about how laws and courts and the best intentions of lots of people are simply not enough to change behaviors -- complex behaviors of almost ALL of us -- that perpetuate decades of segregation and disproportionately disadvantage another generation of Black Americans.  Sadly it IS still about race:  not class, not culture, not resources, but RACE.  The evidence is pretty clear.

I reflected for a minute and learned something else:  it is possible to tell a good story about complex systems and evidence that is also about individuals.