Published on October 4th, 2016 | by Emergent Enterprise

Why Deep Learning is Suddenly Changing Your Life (Part II)

E-E says: One scientist says, “AI is the new electricity.” Hyperbole? Perhaps not. Even though the technology seems to change on a daily basis, the potential is unlimited. There is no doubt that deep learning will become part of our everyday interaction with computers both on the job and off. Today’s article is Part II of a long read about deep learning. You can read the entire article here: http://fortune.com/ai-artificial-intelligence-deep-machine-learning/ or check back here at Emergent Enterprise for Part I. Whichever path you take – read and learn about your future. Then, leave your comments below.

Source: Roger Parloff, fortune.com, September 28, 2016

Photo Source: Justin Metz

This is Part II of this article on Emergent Enterprise. You can see Part I at this link.

The first sparks of the impending revolution began flickering in 2009. That summer Microsoft Research invited neural nets pioneer Geoffrey Hinton, of the University of Toronto, to visit. Impressed with Hinton’s research, Lee’s group experimented with neural nets for speech recognition. “We were shocked by the results,” Lee says. “We were achieving more than 30% improvements in accuracy with the very first prototypes.”

In 2011, Microsoft introduced deep-learning technology into its commercial speech-recognition products, according to Lee. Google followed suit in August 2012.

But the real turning point came in October 2012. At a workshop in Florence, Italy, Fei-Fei Li, the head of the Stanford AI Lab and the founder of the prominent annual ImageNet computer-vision contest, announced that two of Hinton’s students had invented software that identified objects with almost twice the accuracy of the nearest competitor. “It was a spectacular result,” recounts Hinton, “and convinced lots and lots of people who had been very skeptical before.” (In last year’s contest a deep-learning entrant surpassed human performance.)

Cracking image recognition was the starting gun, and it kicked off a hiring race. Google landed Hinton and the two students who had won that contest. Facebook signed up French deep-learning innovator Yann LeCun, who, in the 1980s and 1990s, had pioneered the type of algorithm that won the ImageNet contest. And Baidu snatched up Andrew Ng, a former head of the Stanford AI Lab, who had helped launch and lead the deep-learning-focused Google Brain project in 2010.

The hiring binge has only intensified since then. Today, says Microsoft’s Lee, there’s a “bloody war for talent in this space.” He says top-flight minds command offers “along the lines of NFL football players.”

Geoffrey Hinton, 68, first heard of neural networks in 1972 when he started his graduate work in artificial intelligence at the University of Edinburgh. Having studied experimental psychology as an undergraduate at Cambridge, Hinton was enthusiastic about neural nets, which were software constructs that took their inspiration from the way networks of neurons in the brain were thought to work. At the time, neural nets were out of favor. “Everybody thought they were crazy,” he recounts. But Hinton soldiered on.

Neural nets offered the prospect of computers’ learning the way children do—from experience—rather than through laborious instruction by programs tailor-made by humans. “Most of AI was inspired by logic back then,” he recalls. “But logic is something people do very late in life. Kids of 2 and 3 aren’t doing logic. So it seemed to me that neural nets were a much better paradigm for how intelligence would work than logic was.” (Logic, as it happens, is one of the Hinton family trades. He comes from a long line of eminent scientists and is the great-great-grandson of 19th-century mathematician George Boole, after whom Boolean searches, logic, and algebra are named.)

During the 1950s and ’60s, neural networks were in vogue among computer scientists. In 1958, Cornell research psychologist Frank Rosenblatt, in a Navy-backed project, built a prototype neural net, which he called the Perceptron, at a lab in Buffalo. It used a punch-card computer that filled an entire room. After 50 trials it learned to distinguish between cards marked on the left and cards marked on the right. Reporting on the event, the New York Times wrote, “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
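For readers curious about the flavor of what Rosenblatt’s machine was doing, here is a minimal sketch, in modern Python rather than punch cards, of a single-layer perceptron learning a left-versus-right task. The toy “cards” and the 50-trial loop are illustrative assumptions, not a reconstruction of the 1958 hardware.

```python
# A toy single-layer perceptron in the spirit of Rosenblatt's left/right card task.
# The "cards" are made-up 10-element vectors whose marks fall on one half;
# this is an illustrative assumption, not a reconstruction of the 1958 machine.
import numpy as np

rng = np.random.default_rng(4)

def make_card(marked_left):
    """Return a toy 'card' whose mark sits on the left or right half."""
    card = np.zeros(10)
    half = slice(0, 5) if marked_left else slice(5, 10)
    card[half] = rng.random(5)
    return card

w, b = np.zeros(10), 0.0
for trial in range(50):                     # "after 50 trials it learned to distinguish..."
    marked_left = bool(rng.integers(2))
    x = make_card(marked_left)
    target = 1 if marked_left else -1
    guess = 1 if x @ w + b > 0 else -1
    if guess != target:                     # classic perceptron rule: update only on mistakes
        w += target * x
        b += target

print(1 if make_card(True) @ w + b > 0 else -1)    # should print 1 (marked left)
print(1 if make_card(False) @ w + b > 0 else -1)   # should print -1 (marked right)
```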

The Perceptron, whose software had only one layer of neuron-like nodes, proved limited. But researchers believed that more could be accomplished with multilayer—or deep—neural networks.


Hinton explains the basic idea this way. Suppose a neural net is interpreting photographic images, some of which show birds. “So the input would come in, say, pixels, and then the first layer of units would detect little edges. Dark one side, bright the other side.” The next level of neurons, analyzing data sent from the first layer, would learn to detect “things like corners, where two edges join at an angle,” he says. One of these neurons might respond strongly to the angle of a bird’s beak, for instance.

The next level “might find more complicated configurations, like a bunch of edges arranged in a circle.” A neuron at this level might respond to the head of the bird. At a still higher level a neuron might detect the recurring juxtaposition of beaklike angles near headlike circles. “And that’s a pretty good cue that it might be the head of a bird,” says Hinton. The neurons of each higher layer respond to concepts of greater complexity and abstraction, until one at the top level corresponds to our concept of “bird.”
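Stripped to its bones, the layered picture Hinton describes looks something like the following sketch: each layer is a weighted sum of the layer below, passed through a simple nonlinearity, so higher layers can respond to progressively more abstract combinations. The layer sizes and random weights are illustrative assumptions; this is not a trained bird detector.

```python
# A bare-bones version of the layer-by-layer picture Hinton describes:
# pixels -> edges -> corners -> bird-like parts -> a single "bird" unit.
# Layer sizes and random weights are illustrative assumptions, not a trained detector.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of neuron-like units: a weighted sum of the layer below, then a ReLU."""
    w = rng.normal(scale=0.1, size=(x.size, n_out))
    return np.maximum(0.0, x @ w)

pixels = rng.random(28 * 28)     # the input photograph, flattened into a vector
edges = layer(pixels, 128)       # first layer: little edges, dark one side, bright the other
corners = layer(edges, 64)       # next layer: corners, where two edges join at an angle
parts = layer(corners, 32)       # higher layer: beak-like angles near head-like circles
bird = layer(parts, 1)           # top unit: how strongly the whole image says "bird"
print(bird.item())
```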

To learn, however, a deep neural net needed to do more than just send messages up through the layers in this fashion. It also needed a way to see if it was getting the right results at the top layer and, if not, send messages back down so that all the lower neuron-like units could retune their activations to improve the results. That’s where the learning would occur.

In the early 1980s, Hinton was working on this problem. So was a French researcher named Yann LeCun, who was just starting his graduate work in Paris. LeCun stumbled on a 1983 paper by Hinton, which talked about multilayer neural nets. “It was not formulated in those terms,” LeCun recalls, “because it was very difficult at that time actually to publish a paper if you mentioned the word ‘neurons’ or ‘neural nets.’ So he wrote this paper in an obfuscated manner so it would pass the reviewers. But I thought the paper was super-interesting.” The two met two years later and hit it off.


In 1986, Hinton and two colleagues wrote a seminal paper offering an algorithmic solution to the error-correction problem. “His paper was basically the foundation of the second wave of neural nets,” says LeCun. It reignited interest in the field.
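The error-correction recipe that paper formalized, now generally known as backpropagation, can be sketched in a few lines: run the data up through the layers, measure how wrong the top layer is, then push that error back down to retune every weight. The toy data, layer sizes, and squared-error loss below are illustrative assumptions, not Hinton’s original formulation.

```python
# A toy two-layer network trained with the error-correction recipe described above:
# forward pass up through the layers, measure the error at the top, send it back down,
# nudge the weights. Data, sizes, and the squared-error loss are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((100, 4))                                  # 100 toy examples, 4 inputs each
y = (x.sum(axis=1, keepdims=True) > 2).astype(float)      # target: 1 if the inputs sum past 2

w1 = rng.normal(scale=0.5, size=(4, 8))                   # lower-layer weights
w2 = rng.normal(scale=0.5, size=(8, 1))                   # top-layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    h = sigmoid(x @ w1)                                    # messages going up: hidden layer
    out = sigmoid(h @ w2)                                  # top-layer answer
    delta_out = (out - y) * out * (1 - out)                # how wrong the top layer is
    delta_h = (delta_out @ w2.T) * h * (1 - h)             # error sent back down a layer
    w2 -= 0.1 * (h.T @ delta_out)                          # retune the top weights
    w1 -= 0.1 * (x.T @ delta_h)                            # retune the lower weights

print(np.mean((out > 0.5) == (y > 0.5)))                   # fraction of toy examples now correct
```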

After a post-doc stint with Hinton, LeCun moved to AT&T’s Bell Labs in 1988, where during the next decade he did foundational work that is still being used today for most image-recognition tasks. In the 1990s, NCR, which was then an AT&T subsidiary, commercialized a neural-nets-powered device, widely used by banks, which could read handwritten digits on checks, according to LeCun. At the same time, two German researchers—Sepp Hochreiter, now at the University of Linz, and Jürgen Schmidhuber, codirector of a Swiss AI lab in Lugano—were independently pioneering a different type of algorithm that today, 20 years later, has become crucial for natural-language processing applications.

Despite all the strides, in the mid-1990s neural nets fell into disfavor again, eclipsed by what were, given the computational power of the times, more effective machine-learning tools. That situation persisted for almost a decade, until computing power increased another three to four orders of magnitude and researchers discovered GPU acceleration.


But one piece was still missing: data. Although the Internet was awash in it, most data—especially when it came to images—wasn’t labeled, and that’s what you needed to train neural nets. That’s where Fei-Fei Li, a Stanford AI professor, stepped in. “Our vision was that big data would change the way machine learning works,” she explains in an interview. “Data drives learning.”

In 2007 she launched ImageNet, assembling a free database of more than 14 million labeled images. It went live in 2009, and the next year she set up an annual contest to incentivize and publish computer-vision breakthroughs.

In October 2012, when two of Hinton’s students won that competition, it became clear to all that deep learning had arrived.


By then the general public had also heard about deep learning, though due to a different event. In June 2012, Google Brain published the results of a quirky project now known colloquially as the “cat experiment.” It struck a comic chord and went viral on social networks.

The project actually explored an important unsolved problem in deep learning called “unsupervised learning.” Almost every deep-learning product in commercial use today uses “supervised learning,” meaning that the neural net is trained with labeled data (like the images assembled by ImageNet). With “unsupervised learning,” by contrast, a neural net is shown unlabeled data and asked simply to look for recurring patterns. Researchers would love to master unsupervised learning one day because then machines could teach themselves about the world from vast stores of data that are unusable today—making sense of the world almost totally on their own, like infants.
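The difference is easier to see with a toy example than with 10 million YouTube frames. In the sketch below, the supervised pass is handed a label for every point, while the unsupervised pass sees only the unlabeled points and has to find the recurring groupings on its own. The two-dimensional data and the simple k-means-style search are illustrative assumptions, not the Google Brain setup.

```python
# A minimal contrast between supervised and unsupervised learning on toy 2-D points.
import numpy as np

rng = np.random.default_rng(2)
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(50, 2))
points = np.vstack([group_a, group_b])

# Supervised: labels are provided, so a rule can be fit to them directly
# (here, simply the mean point of each labeled class).
labels = np.array([0] * 50 + [1] * 50)
class_means = np.array([points[labels == k].mean(axis=0) for k in range(2)])

# Unsupervised: no labels at all, just search for two recurring clusters (k-means style).
centers = np.array([points.min(axis=0), points.max(axis=0)])
for _ in range(10):
    assign = ((points[:, None, :] - centers) ** 2).sum(axis=2).argmin(axis=1)
    centers = np.array([points[assign == k].mean(axis=0) for k in range(2)])

print(class_means)   # what the labels say the two groups look like
print(centers)       # nearly the same structure, discovered without any labels
```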


In the cat experiment, researchers exposed a vast neural net—spread across 1,000 computers—to 10 million unlabeled images randomly taken from YouTube videos, and then just let the software do its thing. When the dust cleared, they checked the neurons of the highest layer and found, sure enough, that one of them responded powerfully to images of cats. “We also found a neuron that responded very strongly to human faces,” says Ng, who led the project while at Google Brain.

Yet the results were puzzling too. “We did not find a neuron that responded strongly to cars,” for instance, and “there were a lot of other neurons we couldn’t assign an English word to. So it’s difficult.”

The experiment created a sensation. But unsupervised learning remains uncracked—a challenge for the future.

Not surprisingly, most of the deep-learning applications that have been commercially deployed so far involve companies like Google, Microsoft, Facebook, Baidu, and Amazon—the companies with the vast stores of data needed for deep-learning computations. Many companies are trying to develop more realistic and helpful “chatbots”—automated customer-service representatives.


Companies like IBM and Microsoft are also helping business customers adapt deep-learning-powered applications—like speech-recognition interfaces and translation services—for their own businesses, while cloud services like Amazon Web Services provide cheap, GPU-driven deep-learning computation services for those who want to develop their own software. Plentiful open-source software—like Caffe, Google’s TensorFlow, and Amazon’s DSSTNE—has greased the innovation process, as has an open-publication ethic, whereby many researchers publish their results immediately on one database without awaiting peer-review approval.

Many of the most exciting new attempts to apply deep learning are in the medical realm (see sidebar). We already know that neural nets work well for image recognition, observes Vijay Pande, a Stanford professor who heads Andreessen Horowitz’s biological investments unit, and “so much of what doctors do is image recognition, whether we’re talking about radiology, dermatology, ophthalmology, or so many other ‘-ologies.’ ”


While a radiologist might see thousands of images in his life, a computer can be shown millions. “It’s not crazy to imagine that this image problem could be solved better by computers,” Pande says, “just because they can plow through so much more data than a human could ever do.”

The potential advantages are not just greater accuracy and faster analysis, but democratization of services. As the technology becomes standard, eventually every patient will benefit.

The greatest impacts of deep learning may well be felt when it is integrated into the whole toolbox of other artificial intelligence techniques in ways that haven’t been thought of yet. Google’s DeepMind, for instance, has already been accomplishing startling things by combining deep learning with a related technique called reinforcement learning. Using the two, it created AlphaGo, the system that, this past March, defeated the champion player of the ancient Chinese game of go—widely considered a landmark AI achievement. Unlike IBM’s Deep Blue, which defeated chess champion Garry Kasparov in 1997, AlphaGo was not programmed with decision trees, or equations on how to evaluate board positions, or with if-then rules. “AlphaGo learned how to play go essentially from self-play and from observing big professional games,” says Demis Hassabis, DeepMind’s CEO. (During training, AlphaGo played a million go games against itself.)
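The trial-and-error half of that combination, reinforcement learning, can be illustrated with a deliberately tiny “game”: one position, three possible moves, and noisy payoffs. The sketch below is an illustrative assumption of the simplest kind (a bandit problem), nothing like AlphaGo’s actual machinery of deep networks plus tree search, but it shows how repeated play, rather than hand-written if-then rules, steers a system toward the better move.

```python
# Learning a tiny "game" by trial and error: one position, three moves, noisy payoffs.
# The payoff numbers and the simple value-update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
true_payoff = np.array([0.2, 0.5, 0.8])      # hidden average payoff of each move
value = np.zeros(3)                          # the learner's running estimate of each move

for game in range(5000):
    # mostly play the move that currently looks best, sometimes explore at random
    if rng.random() < 0.1:
        move = int(rng.integers(3))
    else:
        move = int(np.argmax(value))
    reward = rng.normal(true_payoff[move], 0.1)        # see how that game turned out
    value[move] += 0.05 * (reward - value[move])       # nudge the estimate toward the outcome

print(int(np.argmax(value)), value)   # the best move, discovered purely from repeated play
```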

A game might seem like an artificial setting. But Hassabis thinks the same techniques can be applied to real-world problems. In July, in fact, Google reported that, by using approaches similar to those used by AlphaGo, DeepMind was able to increase the energy efficiency of Google’s data centers by 15%. “In the data centers there are maybe 120 different variables,” says Hassabis. “You can change the fans, open the windows, alter the computer systems, where the power goes. You’ve got data from the sensors, the temperature gauges, and all that. It’s like the go board. Through trial and error, you learn what the right moves are.

“So it’s great,” he continues. “You could save, say, tens of millions of dollars a year, and it’s also great for the environment. Data centers use a lot of power around the world. We’d like to roll it out on a bigger scale now. Even the national grid level.”

Chatbots are all well and good. But saving power at the scale of a national grid? That would be a cool app.

A version of this article appears in the October 1, 2016 issue of Fortune with the headline “The Deep-Learning Revolution.” This version contains updated figures from the CB Insights research firm.



About the Author

Emergent Enterprise

The Emergent Enterprise (EE) website brings together current and important news in enterprise mobility and the latest in innovative technologies in the business world. The articles are hand-selected by Emergent Enterprise, not the result of automated aggregation. The site is designed to be a one-stop shop for anyone with an ongoing interest in how technology is changing the way the world does business and how it affects the workforce from the shop floor to the top floor. EE encourages visitor contributions and participation through comments, social media activity and ratings.


