

Published on July 30th, 2019 | by Emergent Enterprise


Deep Learning is About to Get Easier — and More Widespread

Emergent Insight:
Artificial intelligence is a promising technology for virtually any business, but actually implementing it can be a huge task. Ben Dickson writes at VentureBeat about the progress being made toward making AI more accessible to businesses that need guidance on generating, organizing, and labeling data. Whatever the desired business goal, there must be sufficient, well-qualified data for the AI to work with. Synthetic data can even be generated to build stronger, more comprehensive data sets: when you don’t have enough data, use AI to create some.
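To make the synthetic-data idea concrete, here is a minimal sketch (not from the article) of the simplest form of data augmentation: expanding a small dataset by adding jittered copies of real samples. The array shapes and noise scale are illustrative assumptions; real synthetic-data pipelines use far more sophisticated generative models.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small "real" dataset: 10 samples with 3 features each (toy numbers).
real = rng.normal(size=(10, 3))

def augment(samples, copies=4, noise_scale=0.05):
    """Expand a small dataset with jittered copies of each sample.

    Perturbing real examples with small random noise is the simplest
    way to give a model more variation to train on.
    """
    jittered = [samples + rng.normal(scale=noise_scale, size=samples.shape)
                for _ in range(copies)]
    return np.concatenate([samples] + jittered)

augmented = augment(real)
print(augmented.shape)  # 5x the original number of samples: (50, 3)
```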

Original Article:
Image Credit: 4X-image/Getty

We’ve seen a big push in recent months to solve AI’s “big data problem.” And some interesting breakthroughs have begun to emerge that could make AI accessible to many more businesses and organizations.

What is the big data problem? It’s the challenge of getting enough data to enable deep learning, a very popular and promising AI technique that allows machines to find relationships and patterns in data by themselves. (For example, after being fed many images of cats, a deep learning program could create its own definition of what constitutes ‘cat’ and use that to identify future images as either ‘cat’ or ‘not cat’. If you change ‘cat’ to ‘customer,’ you can see why many companies are eager to test-drive this technology.)
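The ‘cat or not cat’ idea can be sketched with a much simpler stand-in than a deep network: a one-feature logistic regression trained by gradient descent. The "whisker score" feature and all numbers below are invented for illustration; the point is only that the program derives its own decision boundary from labeled examples rather than from hand-written rules.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy labeled data: one hypothetical feature per image (a "whisker
# score"), label 1 for cat and 0 for not-cat. Real deep learning works
# on raw pixels with millions of parameters; the principle is the same.
x = np.concatenate([rng.normal(2.0, 0.5, 100),    # cats
                    rng.normal(-2.0, 0.5, 100)])  # not-cats
y = np.concatenate([np.ones(100), np.zeros(100)])

# Fit one weight and bias by gradient descent on the logistic loss.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(w * x + b)))  # predicted probability of "cat"
    w -= 0.5 * np.mean((p - y) * x)
    b -= 0.5 * np.mean(p - y)

def is_cat(feature):
    return 1 / (1 + np.exp(-(w * feature + b))) > 0.5

print(is_cat(2.5), is_cat(-2.5))  # the learned boundary separates the classes
```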

Deep learning algorithms often require millions of training examples to perform their tasks accurately. But many companies and organizations don’t have access to such large caches of annotated data to train their models (getting millions of pictures of cats is hard enough; how do you get millions of properly annotated customer profiles — or, considering an application from the health care realm, millions of annotated heart failure events?). On top of that, in many domains, data is fragmented and scattered, requiring tremendous efforts and funding to consolidate and clean for AI training. In other fields, data is subject to privacy laws and other regulations, which may put it out of reach of AI engineers.

This is why AI researchers have been under pressure over the last few years to find workarounds for the enormous data requirements of deep learning. And it’s why there’s been a lot of interest in recent months as several promising solutions have emerged — two that would require less training data, and one that would allow organizations to create their own training examples.

Here’s an overview of those emerging solutions.

Hybrid AI models

For a good part of AI’s six-decade history, the field has been marked by a rivalry between symbolic and connectionist AI. Symbolists believe AI must be based on explicit rules coded by programmers. Connectionists argue that AI must learn through experience, the approach used in deep learning.

But more recently, researchers have found that by combining connectionist and symbolist models, they can create AI systems that require much less training data.
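As a toy illustration of the hybrid idea (this is my own sketch, not the MIT/IBM model described below): a perception module learned from a handful of examples produces symbols, and an explicit hand-written rule reasons over those symbols. A nearest-centroid classifier stands in for the neural network, and all feature values are invented.

```python
import numpy as np

# "Connectionist" part: a perception module learned from a few examples.
# A nearest-centroid classifier stands in for a neural network; the point
# is that it induces categories from data rather than from coded rules.
train_feats = np.array([[0.9, 0.1], [0.8, 0.2],   # examples labeled "red"
                        [0.1, 0.9], [0.2, 0.8]])  # examples labeled "blue"
train_labels = ["red", "red", "blue", "blue"]

centroids = {lab: train_feats[[i for i, l in enumerate(train_labels)
                               if l == lab]].mean(axis=0)
             for lab in set(train_labels)}

def perceive(feat):
    """Map a raw feature vector to a symbol via the learned centroids."""
    return min(centroids, key=lambda lab: np.linalg.norm(feat - centroids[lab]))

# "Symbolic" part: an explicit rule operates on the symbols the
# perception module outputs.
def count_color(scene_feats, color):
    return sum(1 for f in scene_feats if perceive(f) == color)

scene = [np.array([0.85, 0.15]), np.array([0.15, 0.85]), np.array([0.9, 0.05])]
print(count_color(scene, "red"))  # 2
```

Because the rule needs only a few learned concepts rather than end-to-end training, this style of system can get by with far fewer examples.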

In a paper presented at the ICLR conference in May, researchers from MIT and IBM introduced the “Neuro-Symbolic Concept Learner,” an AI model that brings together rule-based AI and neural networks.

To continue reading, go here…


About the Author

The Emergent Enterprise (EE) website brings together current and important news in enterprise mobility and the latest in innovative technologies in the business world. The articles are hand selected by Emergent Enterprise and not the result of automated electronic aggregating. The site is designed to be a one-stop shop for anyone who has an ongoing interest in how technology is changing how the world does business and how it affects the workforce from the shop floor to the top floor. EE encourages visitor contributions and participation through comments, social media activity and ratings.
