
Friday, January 20, 2017

Artificial Intelligence and Machine Learning: What's the Next Step?

It's difficult to describe, in fewer than 1,000 words, the definitive direction artificial intelligence will take over a 12-month span. The year 2016 surprised many people with how quickly certain technologies developed and with the revised timelines for new AI-driven products reaching the public market.
Here are the four trends that will dominate artificial intelligence in 2017.

1. Progress in language processing will continue
We could call this "natural language processing," or NLP, but let's think more broadly about language for a moment. The key to cognition, for you mavens of Psychology 101, is sophisticated communication, including internal abstract thinking. That will continue to prove critical in driving machine learning 'deeper.'

One place to keep track of progress in the space is machine translation, which gives you an idea of how sophisticated and accurate our software currently is at translating some of the nuance and implications of spoken and written language.



That will be the next step in getting personal assistant technology like Alexa, Siri, Google Assistant, or Cortana to interpret our commands and questions just a little bit better.

2. Efforts to square machine learning and big data with different health sectors will accelerate
"I envision a system that still has those predictive data pools. It looks at the different data you obtain that different labs are giving all the time," eBay Director of Data Science Kira Radinsky told an audience at Geektime TechFest 2016 last month, pioneering "automated processes that can lead to those types of discoveries."

Biotech researchers and companies are trying to get programs to automate drug discovery, among other things. Finding correlations in data and extrapolating causation is not the same in every industry, nor in any one sector of medicine. Researchers in heart disease, neurological disorders, and various types of cancer are all organizing different metrics of data. Retrieving that information and programming the proper relationships between all those variables is a major undertaking.



One of the areas where this is evident is in computer vision, exemplified by Zebra Medical Vision, which can detect anomalies in CT scans for a variety of organs including the heart and liver. But compiling patient medical records and hunting for diagnostic clues there, as well as constructing better treatment plans, are also markets machine learning is opening in 2017. Other startups like Israel's HealthWatch are producing smart clothes that constantly feed medical data to doctors to monitor patients.

This developing ecosystem of health trackers should produce enough information about individual patients or groups of people for algorithms to extract new insights.

3. Researchers will probably have to come up with another buzzword to go deeper than 'deep learning'

Machines building machines? Algorithms writing algorithms? Machine learning programs will continue adding more layers of processing units, as well as more sophistication to abstract pattern analysis. Deep neural networks will be expected to draw even more observations from unsorted data, as mentioned above with regard to health care.



That future buzz term might be "generative" or "adversarial," as in generative adversarial networks (GANs). Described by MIT Technology Review as the invention of OpenAI scientist Ian Goodfellow, GANs set up two networks like two people with different approaches to a problem. One network tries to create new data (read "ideas") from a given set of data while the other "tries to discriminate between real and fake data" (let's assume this is the robotic equivalent of a devil's advocate).
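To make the two-network setup concrete, here is a minimal sketch of a GAN training loop in PyTorch. Everything in it (the toy layer sizes, the stand-in "real" data) is an assumption made for illustration, not anything from Goodfellow's work or a production system; the point is only that a generator invents data while a discriminator judges real versus fake.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2   # toy sizes, chosen only for illustration

# Generator: maps random noise to candidate "fake" data points.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: outputs a logit for "this point is real".
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in "real" data
    fake = G(torch.randn(64, latent_dim))          # generator's attempt

    # Train the discriminator: label real samples 1, generated samples 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator call fakes real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

Each network improves only because the other pushes back, which is the "adversarial" part of the name.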

4. Self-driving cars will force an expensive race among automotive companies
I saved this for last because many readers probably consider it patently obvious. However, the surprise that many laypeople, and even people who fancy themselves tech insiders, felt at the speed of the industry's development might be repeated in 2017 for the opposite reason. While a number of companies are testing the technology, it will run into some, pun intended, roadblocks this year.



While talking about an "autonomous" vehicle is all the rage, several companies in the testing stage are not only careful to keep someone behind the wheel if needed, but are also building entire human-administered command centers to guide the cars.
Some companies will likely be able to avoid burning through capital by partnering rather than going it alone: consider how NVIDIA is developing cars with both Audi and Mercedes-Benz, in separate projects. Still, BMW, Mercedes-Benz, Nissan-Renault, Ford, and General Motors are all making very big bets while trying to compress their timelines and hit autonomous vehicle research milestones more quickly.

Even if the entire industry were to be wrong in a cataclysmic way about the unstoppable future of the self-driving car (which it won't be, but bear with me), there will still be more automated features installed in new vehicle models relatively soon. Companies will be forced to spend big, and fast, to match the features offered by their competitors.

By Gedalyah Reback


Wednesday, December 28, 2016

Two prestigious universities set new mark for 'deep learning'

Neuroscience and artificial intelligence experts from Rice University and Baylor College of Medicine have taken inspiration from the human brain in creating a new "deep learning" method that enables computers to learn about the visual world largely on their own, much as human babies do.

In tests, the group's "deep rendering mixture model" largely taught itself how to distinguish handwritten digits using a standard dataset of 10,000 digits written by federal employees and high school students. In results presented this month at the Neural Information Processing Systems (NIPS) conference in Barcelona, Spain, the researchers described how they trained their algorithm by giving it just 10 correct examples of each handwritten digit between zero and nine and then presenting it with several thousand more examples that it used to further teach itself. In tests, the algorithm was more accurate at correctly distinguishing handwritten digits than almost all previous algorithms that were trained with thousands of correct examples of each digit.

"In deep-learning parlance, our system uses a method known as semi supervised learning," said lead researcher Ankit Patel, an assistant professor with joint appointments in neuroscience at Baylor and electrical and computer engineering at Rice. "The most successful efforts in this area have used a different technique called supervised learning, where the machine is trained with thousands of examples: This is a one. This is a two.



"Humans don't learn that way," Patel said. "When babies learn to see during their first year, they get very little input about what things are. Parents may label a few things: 'Bottle, chair, momma.' But the baby can't even understand spoken words at that point. It's learning mostly unsupervised via some interaction with the world."

Patel said he and graduate student Tan Nguyen, a co-author on the new study, set out to design a semi-supervised learning system for visual data that didn't require much "hand-holding" in the form of training examples. For instance, neural networks that use supervised learning would typically be given hundreds or even thousands of training examples of handwritten digits before they would be tested on the 10,000 handwritten digits in the Modified National Institute of Standards and Technology (MNIST) test database.
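To see how small the labeled set described above really is, the following sketch builds that kind of split: ten labeled examples per digit, with everything else treated as unlabeled. It assumes the torchvision copy of MNIST and covers only the data setup, not the deep rendering mixture model itself.

import numpy as np
from torchvision.datasets import MNIST

train = MNIST(root="data", train=True, download=True)
labels = train.targets.numpy()

labeled_idx = []
for digit in range(10):
    # keep only the first 10 examples of each digit as the labeled pool
    labeled_idx.extend(np.where(labels == digit)[0][:10])

labeled_idx = np.array(labeled_idx)
unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)

print(len(labeled_idx), "labeled examples,", len(unlabeled_idx), "unlabeled examples")
# A semi-supervised learner fits on the 100 labeled images and uses the
# unlabeled ones (for example via a generative model or pseudo-labels)
# to sharpen its decision boundaries.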

The semi-supervised Rice-Baylor algorithm is a "convolutional neural network," a piece of software made up of layers of artificial neurons whose design was inspired by biological neurons. These artificial neurons, or processing units, are organized in layers, and the first layer scans an image and does simple tasks like searching for edges and color changes. The second layer examines the output from the first layer and searches for more complex patterns. Mathematically, this nested method of looking for patterns within patterns within patterns is referred to as a nonlinear process.
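A generic convolutional network of the kind described here can be written down in a few lines. The sketch below is an assumed, minimal PyTorch example sized for 28x28 digit images; it is not the Rice-Baylor architecture, only an illustration of how layers stack simple pattern detectors into more abstract ones.

import torch.nn as nn

# Sized for 28x28 grayscale digit images.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # layer 1: edges, simple intensity changes
    nn.ReLU(),                                    # nonlinearity between layers
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: patterns built from layer-1 patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # final layer: one score per digit class
)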

"It's essentially a very simple visual cortex," Patel said of the convolutional neural net. "You give it an image, and each layer processes the image a little bit more and understands it in a deeper way and by the last layer, you've got a really deep and abstract understanding of the image. Every self-driving car right now has convolutional neural nets in it because they are currently the best for vision."

Like human brains, neural networks start out as blank slates and become fully formed as they interact with the world. For example, each processing unit in a convolutional net starts out the same and becomes specialized over time as it is exposed to visual stimuli.



"Edges are very important," Nguyen said. "Many of the lower layer neurons tend to become edge detectors. They're looking for patterns that are both very common and very important for visual interpretation, and each one trains itself to look for a specific pattern, like a 45-degree edge or a 30-degree red-to-blue transition.

"When they detect their particular pattern, they become excited and pass that on to the next layer up, which looks for patterns in their patterns, and so on," he said. "The number of times you do a nonlinear transformation is essentially the depth of the network, and depth governs power. The deeper a network is, the more stuff it's able to disentangle. At the deeper layers, units are looking for very abstract things like eyeballs or vertical grating patterns or a school bus."

Nguyen began working with Patel in January as the latter began his tenure-track academic career at Rice and Baylor. Patel had already spent more than a decade studying and applying machine learning in jobs ranging from high-volume commodities trading to strategic missile defense, and he'd just wrapped up a four-year postdoctoral stint in the lab of Rice's Richard Baraniuk, another co-author on the new study. In late 2015, Baraniuk, Patel and Nguyen published the first theoretical framework that could both derive the exact structure of convolutional neural networks and provide principled solutions to alleviate some of their limitations.

Baraniuk said a solid theoretical understanding is vital for designing convolutional nets that go beyond today's state-of-the-art.

"Understanding video images is a great example," Baraniuk said. "If I am looking at a video, frame by frame by frame, and I want to understand all the objects and how they're moving and so on, that is a huge challenge. Imagine how long it would take to label every object in every frame of a video. No one has time for that. And in order for a machine to understand what it's seeing in a video, it has to understand what objects are, the concept of three-dimensional space and a whole bunch of other really complicated stuff. We humans learn those things on our own and take them for granted, but they are totally missing in today's artificial neural networks."

Patel said the theory of artificial neural networks, which was refined in the NIPS paper, could ultimately help neuroscientists better understand the workings of the human brain.



"There seem to be some similarities about how the visual cortex represents the world and how convolutional nets represent the world, but they also differ greatly," Patel said. "What the brain is doing may be related, but it's still very different. And the key thing we know about the brain is that it mostly learns unsupervised.

"What I and my neuroscientist colleagues are trying to figure out is, what is the semi supervised learning algorithm that's being implemented by the neural circuits in the visual cortex? and How is that related to our theory of deep learning?" he said. "Can we use our theory to help elucidate what the brain is doing? Because the way the brain is doing it is far superior to any neural network that we've designed."

Provided by: Rice University


Wednesday, October 26, 2016

Deep Learning: A Giant Step for Robots

The prospect of robots that can learn for themselves, through artificial intelligence and adaptive learning, has fascinated scientists and movie-goers alike. Films like Short Circuit, Terminator, Bicentennial Man, Chappie and Ex Machina flirt with the idea of a machine intelligence beyond the restricted rules of a set program.



Robots today can be programmed to reliably carry out a straightforward task over and over, such as installing a part on an assembly line. But a robot that can respond appropriately to changing conditions without specific instructions for how to do so has remained an elusive goal.

A robot that could learn from experience would be far more versatile than one needing detailed, baked-in instructions for each new act. It could rely on what artificial intelligence researchers call deep learning and reinforcement learning.



Deep learning enables the robot to perceive its immediate environment, including the location and movement of its limbs. Reinforcement learning means improving at a task by trial and error. A robot with these two skills could refine its performance based on real-time feedback.
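The trial-and-error idea can be shown with a deliberately tiny example. The sketch below is a crude hill-climbing stand-in for reinforcement learning, not BRETT's actual controller: a single control parameter is nudged at random, and a change is kept only when the reward improves.

import numpy as np

rng = np.random.default_rng(0)
target = 0.7        # the "right" setting for the task, unknown to the robot
theta = 0.0         # the robot's current guess
step = 0.1

for trial in range(200):
    action = theta + rng.normal(scale=step)       # explore: perturb the current setting
    reward = -(action - target) ** 2              # feedback: closer to the target is better
    baseline = -(theta - target) ** 2             # reward of the current setting

    if reward > baseline:                         # trial and error: keep what works
        theta = action

print(f"learned setting ~= {theta:.2f} (target was {target})")

Real deep reinforcement learning replaces the single parameter with a neural network and the toy reward with feedback from cameras and sensors, but the keep-what-works loop is the same.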

For the past 15 years, Berkeley robotics researcher Pieter Abbeel has been looking for ways to make robots learn. In 2010 he and his students programmed a robot they named BRETT (Berkeley Robot for the Elimination of Tedious Tasks) to pick up towels of different sizes, figure out their shape and neatly fold them.

The key instructions allowed the robot to visualize the towel's limp shape when held by one gripper and its outline when held by two. It may not seem like much, but the challenge was daunting for the robot. After as many as a hundred trials, holding a towel in different places each time, BRETT knew the towel's size and shape and could start folding. A YouTube video of BRETT's skills was viewed hundreds of thousands of times.

“The algorithms instructed the robot to perform in a very specific set of conditions, and although it succeeded, it took 20 minutes to fold each towel,” laughs Abbeel, associate professor of electrical engineering and computer science.




“We stepped back and asked ‘How can we make it easier to equip robots with the ability to perfect new skills so that we can apply the learning process to many different skills?’”

This year, in a first for the field, Abbeel gave a new version of BRETT the ability to improve its performance through both deep learning and reinforcement learning. The deep learning component employs so-called neural networks to provide moment-to-moment visual and sensory feedback to the software that controls the robot's movements.
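In code, the "camera pixels and joint angles in, motor commands out" idea looks roughly like the following PyTorch sketch. The layer sizes, image resolution, and seven-joint arm are assumptions made for illustration; this is a generic visuomotor policy, not the actual BRETT software.

import torch
import torch.nn as nn

class VisuomotorPolicy(nn.Module):
    def __init__(self, n_joints=7):
        super().__init__()
        self.vision = nn.Sequential(              # process the camera image
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.control = nn.Sequential(             # combine vision with joint angles
            nn.Linear(32 + n_joints, 64), nn.ReLU(),
            nn.Linear(64, n_joints),              # one motor command per joint
        )

    def forward(self, image, joint_angles):
        features = self.vision(image)
        return self.control(torch.cat([features, joint_angles], dim=1))

policy = VisuomotorPolicy()
command = policy(torch.randn(1, 3, 64, 64), torch.randn(1, 7))  # dummy camera frame and joint state

Reinforcement learning then adjusts the network's weights so that the commands it produces earn higher rewards on the task at hand.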

With these programmed skills, BRETT learned to screw a cap onto a bottle, to place a clothes hanger on a rack and to pull out a nail with the claw end of a hammer.

Its onboard camera allowed BRETT to pinpoint the nail to be extracted, as well as the position of its own arms and hands. Through trial and error, it learned to adjust the vertical and horizontal position of the hammer claw as well as maneuver the angle to the right position to pull out the nail.

The deep reinforcement learning strategy opens the way for training robots to carry out increasingly complex tasks. The achievement gained widespread attention, including an article in The New York Times.



BRETT learned to complete his chores in 30 to 40 trials, with each attempt taking only a few seconds. Still, he has more trial and error ahead: learning to screw a cap on a bottle doesn't prepare him to screw a lid on a jar. Instead, he restarts learning as if he had never mastered caps and bottles. Abbeel has begun research aimed at enabling robots to do something humans take for granted: generalize from one task to another.

Starting this year, the Bakar Fellows Program will support Abbeel’s lab with $75,000 a year for five years to help him refine the deep-learning strategy and move the research towards commercial viability. In addition to financial support, the Bakar Fellows Program provides mentoring in such crucial areas as the intricacies of venture capital and strategies to secure intellectual property rights.

“The Bakar support will allow us to improve the robot’s deep-learning ability and to apply a learned skill to new tasks,” Abbeel says.

Applications for such a skilled robot might range from helping humans with tedious housekeeping chores all the way to assisting in highly detailed surgery. In fact, Abbeel says, “Robots might even be able to teach other robots.”
Source: NEUROSCIENCE NEWS


 