The Lede, Thursday, March 8, 2018
By David Royse
Good Afternoon, and Happy International Women’s Day.
Robot Overlord News
And what many saw as a throwaway story in yesterday’s news – reports that Amazon’s Alexa has gone rogue on a few occasions – has led to renewed claims, around at least since Frankenstein, that something we’re building has the potential to go beyond our control and destroy life as we know it.
(Turns out, we now know, the Creepy Alexa laughing wasn’t as creepy as you might think. There’s an easy explanation.)
The majority of Americans, we learned this week, do believe robots will take more jobs than they create (though most people think their particular job is safe).
(I guess I fit in here – there are bots being used to “write news” but we don’t use them here at LedeTree – so my job here won’t be taken over by drones.)
But the interesting thing to me about the Gallup poll is the widespread use of products that use some form of A.I.
While much of the media led their stories about the Gallup poll with the jobs fears – Gallup itself got the lede right, starting off with this interesting and surprising fact:
“Nearly nine in 10 Americans (85%) say they currently use at least one of six devices, programs or services that feature elements of artificial intelligence (AI). Use of these products ranges from 84% of U.S. adults using navigation applications to 20% using smart home devices such as self-learning thermostats and lighting.”
That points out a little bit of a disconnect between the notion that A.I. technology is somehow bad for society and our own use of it.
The A.I. that’s most widely used, according to Gallup: navigation apps, followed by music streaming services, and digital personal assistant apps on smartphones.
Decades on from the first scientific forays into A.I., serious discussion is now under way about the ends to which we are employing the technology – and how much control we can ultimately retain over it.
But we do control the tech, because we create it. And we are therefore responsible for it. That’s the argument in an opinion essay today in the New York Times by Stanford Computer Science Professor Fei-Fei Li, the director of the Stanford Artificial Intelligence Lab and chief scientist for A.I. research at Google Cloud. Li makes the case that A.I. isn’t really machine intelligence at all – it’s merely human intelligence being carried out by machines.
“Despite its name, there is nothing ‘artificial’ about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns,” Li argues. “I call this approach ‘human-centered A.I.’”
“No technology is more reflective of its creators than A.I.,” Li says. “It has been said that there are no ‘machine’ values at all, in fact; machine values are human values. A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility.”
NOTES FROM THE AGE OF DISRUPTION
Has a new venture. Washington Post.
Reported record year-over-year revenue growth. Austin American-Statesman.
Will lay off about 100 engineers. The Verge.
Has a new renewable energy deal in Asia. CNBC.