
Thursday, March 9, 2017

Understanding the Brain with the Help of Artificial Intelligence

Neurobiologists aim to decode the brain’s circuitry with the help of artificial neural networks. Image credit: Julia Kuhl / NeuroscienceNews.com.

Researchers have trained neural networks to accelerate the reconstruction of neural circuits.



How does consciousness arise? Researchers suspect that the answer to this question lies in the connections between neurons. Unfortunately, however, little is known about the wiring of the brain. This is partly a problem of time: tracking down connections in collected data would require man-hours amounting to many lifetimes, as no computer has so far been able to identify the neural cell contacts reliably enough. Scientists from the Max Planck Institute of Neurobiology in Martinsried plan to change this with the help of artificial intelligence. They have trained several artificial neural networks and thereby enabled the vastly accelerated reconstruction of neural circuits.

Neurons need company. Individually, these cells can achieve little; when they join forces, however, neurons form a powerful network that controls our behaviour, among other things. As part of this process, the cells exchange information via their contact points, the synapses. Knowing which neurons are connected to each other, when and where, is crucial to our understanding of basic brain functions and of higher-order processes such as learning, memory, consciousness and disorders of the nervous system. Researchers suspect that the key to all of this lies in the wiring of the approximately 100 billion cells in the human brain.



To be able to use this key, the connectome, that is, every single neuron in the brain together with its thousands of contacts and partner cells, must be mapped. Only a few years ago, the prospect of achieving this seemed unattainable. However, the scientists in the Electrons – Photons – Neurons Department of the Max Planck Institute of Neurobiology refuse to be deterred by the notion that something seems “unattainable”. Over the past few years, they have developed and improved staining and microscopy methods that can transform brain tissue samples into high-resolution, three-dimensional electron microscope images. Their latest microscope, which the Department is using as a prototype, scans the surface of a sample with 91 electron beams in parallel before exposing the next sample level. Compared to the previous model, this increases the data acquisition rate by a factor of more than 50. As a result, an entire mouse brain could be mapped in just a few years rather than decades.

Although it is now possible to decompose a piece of brain tissue into billions of pixels, the analysis of these electron microscope images takes many years, because standard computer algorithms are often too inaccurate to reliably trace the neurons’ wafer-thin projections over long distances and to identify the synapses. For this reason, people still have to spend hours in front of computer screens identifying the synapses in the piles of images generated by the electron microscope.



Training for neural networks
However, the Max Planck scientists, led by Jörgen Kornfeld, have now overcome this obstacle with the help of artificial neural networks. These algorithms can learn from examples and experience and make generalizations based on this knowledge. They are already applied very successfully in image processing and pattern recognition today. “So it was not a big stretch to conceive of using an artificial network for the analysis of a real neural network,” says study leader Jörgen Kornfeld. Nonetheless, it was not quite as simple as it sounds. For months the scientists worked on training and testing so-called Convolutional Neural Networks to recognize cell extensions, cell components and synapses and to distinguish them from each other.
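The article does not describe SyConn's architecture in detail, but the general recipe of training a convolutional network on labelled image patches is well established. Below is a minimal, hypothetical sketch in PyTorch: a small classifier that learns to assign electron-microscope image patches to classes such as cell extension, cell component or synapse. The network layout, patch size and data are illustrative stand-ins, not the published model.

```python
# Minimal sketch of a convolutional patch classifier (illustrative only, not SyConn).
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 64x64 input -> 16x16 feature maps with 32 channels after two poolings
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One hypothetical training step on labelled 64x64 EM image patches.
model = PatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

patches = torch.randn(8, 1, 64, 64)      # stand-in for electron-microscope patches
labels = torch.randint(0, 3, (8,))       # stand-in for expert annotations
loss = loss_fn(model(patches), labels)
loss.backward()
optimizer.step()
```

In practice, the real task is a segmentation problem over enormous 3-D volumes rather than isolated 2-D patches, but the core loop of presenting labelled examples and adjusting the weights is the same.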

Following a brief training phase, the resulting SyConn network can now identify these structures autonomously and extremely reliably. Its use on data from the songbird brain showed that SyConn is so reliable that there is no need for humans to check for errors. “This is absolutely fantastic, as we did not expect to achieve such a low error rate,” says Kornfeld, clearly delighted at the success of SyConn, which forms part of his doctoral study. He has every reason to be: the newly developed neural networks will relieve neurobiologists of many thousands of hours of monotonous work in the future. As a result, they will also shorten the time needed to decode the connectome, and perhaps consciousness, by many years.
Source: Max Planck Institute / Neuroscience.news


Sunday, February 26, 2017

Artificial Synapse for Neural Networks

Alberto Salleo, associate professor of materials science and engineering, with graduate student Scott Keene characterizing the electrochemical properties of an artificial synapse for neural network computing. They are part of a team that has created the new device. Credit: L.A. Cicero

A new organic artificial synapse could support computers that better recreate the way the human brain processes information. It could also lead to improvements in brain-machine technologies.



For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain's efficient design -- an artificial version of the space over which neurons communicate, called a synapse.

"It works like a real synapse but it's an organic electronic device that can be engineered," said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. "It's an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that's been done before with inorganics."



The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This offers significant energy savings over traditional computing, which processes information and then stores it in memory as separate steps. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain
When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we've learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

"Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time," said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. "Instead of simulating a neural network, our work is trying to make a neural network."



The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
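As a rough illustration of that calibration idea, and not the actual device model from the paper, the sketch below fits a simple state-versus-voltage relationship to hypothetical measurements and then inverts it to predict the programming voltage needed to reach a target state. All numbers are invented for illustration.

```python
# Hedged sketch: calibrate a state-vs-voltage model, then invert it.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration sweep: applied voltage (V) vs. resulting device state (a.u.)
applied_voltage = np.linspace(-0.5, 0.5, 21)
measured_state = 1.0 + 0.8 * applied_voltage + rng.normal(0, 0.02, 21)

# Fit a simple linear model to the calibration data.
slope, intercept = np.polyfit(applied_voltage, measured_state, deg=1)

def voltage_for_state(target_state: float) -> float:
    """Invert the fitted model: which voltage should program the target state?"""
    return (target_state - intercept) / slope

print(voltage_for_state(1.2))   # predicted programming voltage for state 1.2
```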

Testing a network of artificial synapses
Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network's ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.
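Sandia's simulation is not described here in enough detail to reproduce, but the underlying idea can be illustrated with a toy experiment: train an ordinary digit classifier, then restrict its weights to a fixed number of discrete levels to mimic the finite set of programmable conductance states a physical synapse array can hold. The sketch below uses scikit-learn's small 8x8 digits set as a stand-in; the dataset, classifier and 500-level quantization are assumptions for illustration only.

```python
# Toy illustration (not Sandia's simulation): quantize classifier weights to
# a finite number of "device states" and see how digit accuracy holds up.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                      # 8x8 handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("float weights:   ", clf.score(X_test, y_test))

# Mimic a synapse array that supports only a fixed number of programmable states.
n_states = 500
W = clf.coef_
levels = np.linspace(W.min(), W.max(), n_states)
clf.coef_ = levels[np.abs(W[..., None] - levels).argmin(axis=-1)]
print("quantized weights:", clf.score(X_test, y_test))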

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

"More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry," said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. "We've demonstrated a device that's ideal for running these type of algorithms and that consumes a lot less power."



This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.

Organic potential
Every part of the device is made of inexpensive organic materials. These aren't found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain's chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it's possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.



This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia's Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.
Story Source:
Materials provided by Stanford University. Original written by Taylor Kubota


Thursday, February 9, 2017

Incredible Artificial Intelligence Systems: They May See the World as Humans Do

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

"The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition."



The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner's structure-mapping theory. (Gentner received the 2016 David E. Rumelhart Prize for her work on this theory.)

Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern's McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.

The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but could also help shrink the gap between computer and human cognition.

While Forbus and Lovett's system can be used to model general visual problem-solving phenomena, they specifically tested it on Raven's Progressive Matrices, a nonverbal standardized test that measures abstract reasoning. All of the test's problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix. Forbus and Lovett's computational model performed better than the average American.



"The Raven's test is the best existing predictor of what psychologists call 'fluid intelligence, or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships,'" said Lovett, now a researcher at the US Naval Research Laboratory. "Our results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence."

The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as "the clock is above the door" or "pressure differences cause water to flow." These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
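The sketch below is a toy illustration of what a relational representation can look like in code and how two descriptions might be aligned by the relations they share rather than by their surface objects. It is not CogSketch or the structure-mapping engine; the facts and the naive matching rule are invented for the example.

```python
# Toy relational representations: each fact is a (relation, object, object) tuple.
base   = {("above", "clock", "door"),     ("attached_to", "clock", "wall")}
target = {("above", "sign",  "entrance"), ("attached_to", "sign",  "wall")}

def propose_mapping(base, target):
    """Align base and target relations that share a name and record the implied
    object correspondences -- a very crude flavour of analogical mapping."""
    mapping = {}
    for rel, *base_args in base:
        for rel2, *target_args in target:
            if rel == rel2:
                mapping.update(zip(base_args, target_args))
    return mapping

print(propose_mapping(base, target))
# e.g. {'clock': 'sign', 'door': 'entrance', 'wall': 'wall'}
```

A real structure-mapping model enforces consistency across the whole mapping and prefers deeply nested relational structure, but even this toy version shows how the comparison operates over relations rather than pixels or labels.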

"Most artificial intelligence research today concerning vision focuses on recognition, or labeling what is in a scene rather than reasoning about it," Forbus said. "But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly."

Source: Amanda Morris - Journal reference: Psychological Review


Friday, January 6, 2017

Why Artificial Intelligence has not yet revolutionized healthcare

Artificial intelligence and machine learning are predicted to be part of the next industrial revolution and could help business and industry save billions of dollars within the next decade.

The tech giants Google, Facebook, Apple, IBM and others are applying artificial intelligence to all sorts of data.
Machine learning methods are being used in areas such as translating language almost in real time and even identifying images of cats on the internet.

So why haven't we seen artificial intelligence used to the same extent in healthcare?
Radiologists still rely on visual inspection of magnetic resonance imaging (MRI) or X-ray scans – although IBM and others are working on this issue – and doctors have no access to AI for guiding and supporting their diagnoses.



The challenges for machine learning

Machine learning technologies have been around for decades, and a relatively recent technique called deep learning keeps pushing the limit of what machines can do. Deep learning networks comprise neuron-like units arranged in hierarchical layers, which can recognize patterns in data.
This is done by iteratively presenting data along with the correct answer to the network until its internal parameters, the weights linking the artificial neurons, are optimized. If the training data capture the variability of the real world, the network is able to generalize well and provide the correct answer when presented with unseen data.
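A minimal sketch of that training loop, on a synthetic toy problem rather than any medical data, might look like the following: examples and their correct answers are presented repeatedly, and the weights are nudged to reduce the error.

```python
# Minimal sketch of iterative weight optimization on a synthetic toy problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # 200 examples, 5 input features
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = (X @ true_w > 0).astype(float)            # the "correct answers"

w = np.zeros(5)
for _ in range(500):                          # iterative presentation of the data
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # network output for every example
    w -= 0.1 * X.T @ (p - y) / len(y)         # adjust weights to reduce the error

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
print(accuracy)                               # accuracy on the training data
```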

So the learning stage requires very large data sets of cases along with the corresponding answers. Millions of records and billions of computations are needed to update the network parameters, often done on a supercomputer for days or weeks.
Here lie the problems with healthcare: data sets are not yet big enough, and the correct answers to be learned are often ambiguous or even unknown.

We're going to need better and bigger data sets

The functions of the human body, its anatomy and variability, are very complex. The complexity is even greater because diseases are often triggered or modulated by genetic background, which is unique to each individual and therefore hard to train on.
Adding to this, medical data come with specific challenges, including the difficulty of measuring any biological process precisely and accurately, which introduces unwanted variation.

Other challenges include the presence of multiple diseases (co-morbidity) in a patient, which can often confound predictions. Lifestyle and environmental factors also play important roles but are seldom available.
The result is that medical data sets need to be extremely large.



This is being addressed across the world with increasingly large research initiatives. Examples include Biobank in the United Kingdom, which aims to scan 100,000 participants.

Others include the Alzheimer's Disease Neuroimaging Initiative (ADNI) in the United States and the Australian Imaging, Biomarkers and Lifestyle Study of Ageing (AIBL), tracking more than a thousand subjects over a decade.
Government initiatives are also emerging, such as the American Cancer Moonshot program. The aim is to "build a national cancer data ecosystem" so researchers, clinicians and patients can contribute data with the aim to "facilitate efficient data analysis". Similarly, the Australian Genomics Health Alliance aims to pool and share genomic information.

Eventually, the electronic medical record systems being deployed across the world should provide extensive, high-quality data sets. Beyond the expected gain in efficiency, the potential to mine population-wide clinical data using machine learning is tremendous. Some companies, such as Google, are eagerly trying to access those data.

What a machine needs to learn is not obvious

Complex medical decisions are often made by a team of specialists reaching consensus rather than certainty.
Radiologists might disagree slightly when interpreting a scan in which blurring and only very subtle features can be observed. Inferring a diagnosis from measurements with errors, when the disease is modulated by unknown genes, often relies on implicit know-how and experience rather than explicit facts.

Sometimes the true answer cannot be obtained at all. For example, measuring the size of a structure from a brain MRI cannot be validated, even at autopsy, since post-mortem tissues change in their composition and size after death.



So a machine can learn that a photo contains a cat because users have labelled thousands of pictures with certainty through social media platforms, or told Google how to recognize doodles.
It is a much more difficult task to measure the size of a brain structure from an MRI, because no one knows the true answer: at best, a consensus from several experts can be assembled, and at great cost.

Several technologies are emerging to address this issue. Complex mathematical models including probabilities such as Bayesian approaches can learn under uncertainty.
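As a tiny, hypothetical illustration of learning under uncertainty, the sketch below updates a Beta prior over a test's accuracy as labelled outcomes arrive; the posterior keeps explicit track of how uncertain the estimate still is. The prior and the counts are invented for the example.

```python
# Hedged illustration of a Bayesian update under uncertainty (made-up numbers).
from scipy.stats import beta

prior_a, prior_b = 1, 1                  # uniform prior: no opinion yet
correct, incorrect = 18, 2               # hypothetical outcomes observed so far

posterior = beta(prior_a + correct, prior_b + incorrect)
print(posterior.mean())                  # point estimate of the accuracy (~0.86)
print(posterior.interval(0.95))          # 95% credible interval: remaining uncertainty
```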

Unsupervised methods can recognize patterns in data without needing to be told what the actual answers are, albeit with results that can be challenging to interpret.
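A minimal illustration of the unsupervised idea, on synthetic data rather than real patient records: k-means groups similar records together without ever seeing a diagnosis label.

```python
# Minimal unsupervised sketch: cluster synthetic "patient" records with no labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hypothetical patient groups that differ in two measurements.
records = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(records)
print(np.bincount(clusters))             # sizes of the two recovered groups
```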

Another approach is transfer learning, whereby a machine can learn from large, different, but relevant, data sets for which the training answers are known.
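A hedged sketch of transfer learning with standard PyTorch and torchvision components (nothing specific to the projects mentioned in this article): a network pretrained on a large, generic image dataset is reused, its feature extractor is frozen, and only a new final layer is trained on a small labelled task.

```python
# Hedged transfer-learning sketch: reuse an ImageNet-pretrained network,
# retrain only its final layer on a small (stand-in) two-class task.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                  # freeze the pretrained features
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)     # new head for the small task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)              # stand-in batch of labelled images
labels = torch.tensor([0, 1, 0, 1])
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```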

Medical applications of deep learning have already been very successful. They often place first in competitions held at scientific meetings, where data sets are made available and the evaluation of submitted results is revealed during the conference.
At CSIRO, we have been developing CapAIBL (Computational Analysis of PET from AIBL) to analyze 3-D images of brain positron emission tomography (PET).



Using a database with many scans from healthy individuals and patients with Alzheimer's disease, the method is able to learn the pattern characteristics of the disease. It can then identify that signature in an unseen individual's scan. The clinical report generated allows doctors to diagnose the disease faster and with more confidence.
In one example, CapAIBL technology was applied to amyloid plaque imaging in a patient with Alzheimer's disease; red indicates higher amyloid deposition in the brain, a sign of Alzheimer's.

The problem with causation

Probably the most challenging issue is understanding causation. Analyzing retrospective data is prone to learning spurious correlations and missing the underlying causes of diseases or the effects of treatments.
Traditionally, randomized clinical trials provide evidence on the superiority of different options, but they do not yet benefit from the potential of artificial intelligence.

New designs such as platform clinical trials might address this in the future, and could pave the way for machine learning technologies to learn evidence rather than just association. So, large medical data sets are being assembled. New technologies to overcome the lack of certainty are being developed. Novel ways to establish causation are emerging.

This area is moving fast and tremendous potential exists for improving efficiency and health. Indeed, many ventures are trying to capitalize on this. Startups such as Enlitic, large firms such as IBM, or even small businesses such as Resonance Health, are promising to revolutionize health.

Impressive progress is being made but many challenges still exist.

by Olivier Salvado, The Conversation


 