
Thursday, March 9, 2017

Understanding the Brain with the Help of Artificial Intelligence

Neurobiologists aim to decode the brain’s circuitry with the help of artificial neural networks. NeuroscienceNews.com image is credited to Julia Kuhl.

Researchers have trained neural networks to accelerate the reconstruction of neural circuits.



How does consciousness arise? Researchers suspect that the answer to this question lies in the connections between neurons. Unfortunately, however, little is known about the wiring of the brain. One reason is time: tracing the connections in the collected data would require man-hours amounting to many lifetimes, as no computer has so far been able to identify the neural cell contacts reliably enough. Scientists from the Max Planck Institute of Neurobiology in Martinsried plan to change this with the help of artificial intelligence. They have trained several artificial neural networks and thereby enabled the vastly accelerated reconstruction of neural circuits.

Neurons need company. Individually, these cells can achieve little; when they join forces, however, neurons form a powerful network which controls our behaviour, among other things. As part of this process, the cells exchange information via their contact points, the synapses. Information about which neurons are connected to each other, and when and where, is crucial to our understanding of basic brain functions and of higher-order processes such as learning, memory, consciousness and disorders of the nervous system. Researchers suspect that the key to all of this lies in the wiring of the approximately 100 billion cells in the human brain.



To be able to use this key, the connectome, that is every single neuron in the brain with its thousands of contacts and partner cells, must be mapped. Only a few years ago, the prospect of achieving this seemed unattainable. However, the scientists in the Electrons – Photons – Neurons Department of the Max Planck Institute of Neurobiology refuse to be deterred by the notion that something seems “unattainable”. Hence, over the past few years, they have developed and improved staining and microscopy methods which can be used to transform brain tissue samples into high-resolution, three-dimensional electron microscope images. Their latest microscope, which is being used by the Department as a prototype, scans the surface of a sample with 91 electron beams in parallel before exposing the next sample level. Compared to the previous model, this increases the data acquisition rate by a factor of over 50. As a result an entire mouse brain could be mapped in just a few years rather than decades.

Although it is now possible to decompose a piece of brain tissue into billions of pixels, the analysis of these electron microscope images takes many years. This is due to the fact that the standard computer algorithms are often too inaccurate to reliably trace the neurons’ wafer-thin projections over long distances and to identify the synapses. For this reason, people still have to spend hours in front of computer screens identifying the synapses in the piles of images generated by the electron microscope.



Training for neural networks
However, the Max Planck scientists led by Jörgen Kornfeld have now overcome this obstacle with the help of artificial neural networks. These algorithms can learn from examples and experience and make generalizations based on this knowledge. They are already applied very successfully in image processing and pattern recognition today. “So it was not a big stretch to conceive of using an artificial network for the analysis of a real neural network,” says study leader Jörgen Kornfeld. Nonetheless, it was not quite as simple as it sounds. For months the scientists worked on training and testing so-called Convolutional Neural Networks to recognize cell extensions, cell components and synapses and to distinguish them from each other.
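The core operation these networks rely on can be illustrated in a few lines of plain Python. This is a toy sketch, not the actual SyConn code: a single hand-written filter responding to an intensity edge, the kind of local feature a convolutional network learns to detect automatically from labelled examples.

```python
# Toy sketch of the convolution operation at the core of a convolutional
# neural network (not the actual SyConn code). A hand-written filter
# responds strongly where image intensity changes, e.g. at a membrane edge.

def convolve2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Vertical-edge filter: positive weights on the left, negative on the right.
edge_filter = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]

# Synthetic image: a dark region (0) meeting a bright region (1).
image = [[0, 0, 0, 1, 1, 1]] * 4

response = convolve2d(image, edge_filter)
print(response[0])  # nonzero only at the dark-to-bright boundary
```

A real convolutional network stacks many such filters in successive layers and learns their weights from training data rather than fixing them by hand.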

Following a brief training phase, the resulting SyConn network can now identify these structures autonomously and extremely reliably. Its use on data from the songbird brain showed that SyConn is so reliable that there is no need for humans to check for errors. “This is absolutely fantastic as we did not expect to achieve such a low error rate,” says Kornfeld with obvious delight at the success of SyConn, which forms part of his doctoral study. And he has every reason to be delighted as the newly developed neural networks will relieve neurobiologists of many thousands of hours of monotonous work in the future. As a result, they will also reduce the time needed to decode the connectome and, perhaps also, the consciousness, by many years.
Source: Max Planck Institute / Neuroscience.news


Thursday, February 9, 2017

The Incredible Artificial Intelligence Systems: They May See the World as Humans Do

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

"The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition."



The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner's structure-mapping theory. (Gentner received the 2016 David E. Rumelhart Prize for her work on this theory.)

Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern's McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.

The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but it could potentially shrink the gap between computer and human cognition.

While Forbus and Lovett's system can be used to model general visual problem-solving phenomena, they specifically tested it on Raven's Progressive Matrices, a nonverbal standardized test that measures abstract reasoning. All of the test's problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix. Forbus and Lovett's computational model performed better than the average American.



"The Raven's test is the best existing predictor of what psychologists call 'fluid intelligence,' or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships," said Lovett, now a researcher at the US Naval Research Laboratory. "Our results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence."

The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as "the clock is above the door" or "pressure differences cause water to flow." These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
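As a toy illustration of the idea (not the actual CogSketch or structure-mapping implementation), relations can be stored as predicate tuples and a simple analogy read off by aligning facts that share the same relation. The entities and relations below are invented for illustration.

```python
# Hypothetical sketch of relational representations and analogical matching.
# Not CogSketch or structure-mapping theory proper, just an illustration of
# comparing relations between entities rather than raw visual features.

base = {("above", "clock", "door"), ("left_of", "door", "window")}
target = {("above", "lamp", "desk"), ("left_of", "desk", "shelf")}

def match_relations(base, target):
    """Align facts sharing a relation name, building an entity-to-entity
    correspondence (clock -> lamp, door -> desk, ...). Greedy and simplified:
    a real matcher would also enforce one-to-one mappings and score rival
    alignments."""
    mapping = {}
    for rel, a1, a2 in sorted(base):
        for rel2, b1, b2 in sorted(target):
            if rel != rel2:
                continue
            # Only accept pairings consistent with correspondences so far.
            if mapping.get(a1, b1) == b1 and mapping.get(a2, b2) == b2:
                mapping[a1], mapping[a2] = b1, b2
                break
    return mapping

print(match_relations(base, target))
```

The point of the sketch is that the two scenes share no objects at all, yet the correspondence falls out of their shared relational structure.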

"Most artificial intelligence research today concerning vision focuses on recognition, or labeling what is in a scene rather than reasoning about it," Forbus said. "But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly."

Source: Amanda Morris - Journal reference: Psychological Review


Friday, January 6, 2017

Why Artificial Intelligence has not yet revolutionized healthcare

Artificial intelligence and machine learning are predicted to be part of the next industrial revolution and could help business and industry save billions of dollars by the next decade.

The tech giants Google, Facebook, Apple, IBM and others are applying artificial intelligence to all sorts of data.
Machine learning methods are being used in areas such as translating language almost in real time, and even to identify images of cats on the internet.

So why haven't we seen artificial intelligence used to the same extent in healthcare?
Radiologists still rely on visual inspection of magnetic resonance imaging (MRI) or X-ray scans – although IBM and others are working on this issue – and doctors have no access to AI for guiding and supporting their diagnoses.



The challenges for machine learning

Machine learning technologies have been around for decades, and a relatively recent technique called deep learning keeps pushing the limit of what machines can do. Deep learning networks are built from neuron-like units arranged in hierarchical layers, which can recognize patterns in data.
This is done by iteratively presenting data along with the correct answer to the network until its internal parameters, the weights linking the artificial neurons, are optimized. If the training data capture the variability of the real world, the network is able to generalize well and provide the correct answer when presented with unseen data.
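That iterative loop can be sketched with a single linear "neuron" and invented data; this is a generic illustration of supervised training, not any network from the article.

```python
import random

# Minimal sketch of the training loop described above: present inputs with
# their correct answers and nudge the internal parameters until the error
# shrinks. A single linear "neuron" learning the rule y = 2x - 1.

random.seed(0)
data = [(x, 2 * x - 1) for x in [random.uniform(-1, 1) for _ in range(100)]]

w, b = 0.0, 0.0  # the network's internal parameters (its "weights")
lr = 0.1         # learning rate: how strongly each error nudges the weights

for epoch in range(200):
    for x, y in data:
        err = (w * x + b) - y
        # Gradient of the squared error with respect to w and b.
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges near the true values 2.0 and -1.0
```

With noiseless data the parameters settle on the exact rule; with real, noisy medical data the same loop only approximates it, which is why data set size matters so much in what follows.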

So, the learning stage requires very large data sets of cases along with the corresponding answers. Millions of records and billions of computations are needed to update the network parameters, often on a supercomputer for days or weeks.
Here lie the problems with healthcare: data sets are not yet big enough and the correct answers to be learned are often ambiguous or even unknown.

We're going to need better and bigger data sets

The functions of the human body, its anatomy and variability, are very complex. The complexity is even greater because diseases are often triggered or modulated by genetic background, which is unique to each individual and therefore hard to train on.
Adding to this, medical data pose specific challenges of their own. These include the difficulty of measuring any biological process precisely and accurately, which introduces unwanted variation.

Other challenges include the presence of multiple diseases (co-morbidity) in a patient, which can often confound predictions. Lifestyle and environmental factors also play important roles but are seldom available.
The result is that medical data sets need to be extremely large.



This is being addressed across the world with increasingly large research initiatives. Examples include Biobank in the United Kingdom, which aims to scan 100,000 participants.

Others include the Alzheimer's Disease Neuroimaging Initiative (ADNI) in the United States and the Australian Imaging, Biomarkers and Lifestyle Study of Ageing (AIBL), tracking more than a thousand subjects over a decade.
Government initiatives are also emerging such as the American Cancer Moonshot program. The aim is to "build a national cancer data ecosystem" so researchers, clinicians and patients can contribute data with the aim to "facilitate efficient data analysis". Similarly, the Australian Genomics Health Alliance aims at pooling and sharing genomic information.

Eventually the electronic medical record systems that are being deployed across the world should provide extensive high-quality data sets. Beyond the expected gain in efficiency, the potential to mine population-wide clinical data using machine learning is tremendous. Some companies, such as Google, are eagerly trying to access those data.

What a machine needs to learn is not obvious

Complex medical decisions are often made by a team of specialists reaching consensus rather than certainty.
Radiologists might disagree slightly when interpreting a scan in which only blurred and very subtle features can be observed. Inferring a diagnosis from measurements with errors, and when the disease is modulated by unknown genes, often relies on implicit know-how and experience rather than explicit facts.

Sometimes the true answer cannot be obtained at all. For example, measuring the size of a structure from a brain MRI cannot be validated, even at autopsy, since post-mortem tissues change in their composition and size after death.



So a machine can learn that a photo contains a cat because users have confidently labelled thousands of pictures through social media platforms, or told Google how to recognize doodles.
It is a much more difficult task to measure the size of a brain structure from an MRI, because no one knows the true answer: at best a consensus from several experts can be assembled, and at great cost.

Several technologies are emerging to address this issue. Complex mathematical models that incorporate probabilities, such as Bayesian approaches, can learn under uncertainty.
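One classical example of such probabilistic learning is the conjugate Beta-Bernoulli update, sketched here with invented numbers: instead of a single answer, the model keeps a full distribution over how likely an outcome is.

```python
# Sketch of Bayesian updating under uncertainty: estimating the probability
# that a treatment works from noisy yes/no outcomes, keeping a distribution
# (a Beta prior/posterior) rather than a single point estimate.

def update(alpha, beta, successes, failures):
    """Conjugate Beta-Bernoulli update: counts are simply added in."""
    return alpha + successes, beta + failures

alpha, beta = 1, 1  # uniform prior: total uncertainty before seeing data
alpha, beta = update(alpha, beta, successes=7, failures=3)

mean = alpha / (alpha + beta)  # posterior mean estimate
print(round(mean, 3))          # 8 / 12, i.e. 0.667
```

The width of the posterior (not shown) shrinks as more cases arrive, which is exactly the behaviour one wants when the "correct answers" in medicine are themselves uncertain.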

Unsupervised methods can recognize patterns in data without needing to be told the actual answers, albeit at the cost of results that are harder to interpret.
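A standard example of an unsupervised method (one common choice for illustration, not necessarily what any particular medical project uses) is k-means clustering, which groups measurements without ever seeing a correct answer.

```python
# Sketch of an unsupervised method: 1-D k-means separating unlabelled
# measurements into two groups. No "correct answers" are supplied; the
# structure is discovered from the data alone.

def kmeans_1d(data, centers, iters=20):
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for x in data:
            nearest = min(centers, key=lambda c: abs(x - c))
            groups[nearest].append(x)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
found = kmeans_1d(data, centers=[0.0, 10.0])
print(found)  # two cluster centers, near 1.0 and 5.1
```

Notice the interpretation problem the article mentions: the algorithm finds two groups, but nothing in the output says what, medically, those groups mean.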

Another approach is transfer learning, whereby a machine can learn from large, different, but relevant, data sets for which the training answers are known.
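A hedged sketch of that idea, with invented tasks: parameters are first fitted on a large "source" data set where answers are plentiful, then partially frozen and fine-tuned on a tiny "target" data set.

```python
# Sketch of transfer learning with a single linear model. The tasks and
# numbers are invented for illustration only.

def train(data, w, b, lr=0.1, epochs=200, freeze_w=False):
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            if not freeze_w:
                w -= lr * err * x  # slope is only updated when not frozen
            b -= lr * err
    return w, b

# Large "source" data set: plenty of labelled examples of y = 3x.
source = [(x / 10, 3 * x / 10) for x in range(-10, 11)]
w, b = train(source, w=0.0, b=0.0)

# Tiny "target" data set: same slope, shifted by +1 (y = 3x + 1).
# Only the offset b is re-learned; the transferred slope w stays frozen.
target = [(0.5, 2.5), (-0.5, -0.5)]
w, b = train(target, w, b, freeze_w=True)
print(round(w, 2), round(b, 2))  # slope near 3.0 kept, offset near 1.0 learned
```

Two examples would never be enough to learn both parameters from scratch; reusing the slope learned elsewhere is what makes the tiny target set sufficient.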

Medical applications of deep learning have already been very successful. They often take first place in competitions held at scientific meetings, where data sets are made available and the evaluation of submitted results is revealed during the conference.
At CSIRO, we have been developing CapAIBL (Computational Analysis of PET from AIBL) to analyze 3-D images of brain positron emission tomography (PET).



Using a database with many scans from healthy individuals and patients with Alzheimer's disease, the method is able to learn the pattern characteristics of the disease. It can then identify that signature in an unseen individual's scan. The clinical report generated allows doctors to diagnose the disease faster and with more confidence.
In one such case, CapAIBL technology was applied to amyloid plaque imaging in a patient with Alzheimer's disease. Red indicates higher amyloid deposition in the brain, a sign of Alzheimer's.
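The underlying idea can be caricatured with a nearest-centroid classifier. This is only an invented illustration, far simpler than the actual CapAIBL pipeline, and the numbers are made up.

```python
# Hypothetical sketch of signature-based classification: learn the average
# pattern of each labelled group of scans, then assign an unseen scan to the
# nearest learned pattern. Feature values are invented for illustration.

healthy = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2]]    # low tracer uptake
diseased = [[0.9, 0.8, 0.9], [0.8, 0.9, 0.8]]   # high amyloid signal

def centroid(scans):
    """Average pattern (per-feature mean) of a group of scans."""
    return [sum(v) / len(scans) for v in zip(*scans)]

def classify(scan, centroids):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(scan, centroids[label]))

centroids = {"healthy": centroid(healthy), "alzheimers": centroid(diseased)}
print(classify([0.85, 0.9, 0.8], centroids))  # assigned to "alzheimers"
```

Real pipelines work on millions of voxels and account for anatomy and noise, but the learn-a-signature, match-the-signature structure is the same.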

The problem with causation

Probably the most challenging issue is understanding causation. Analyzing retrospective data is prone to learning spurious correlations and missing the underlying causes of diseases or the effects of treatments.
Traditionally, randomized clinical trials provide evidence on the superiority of different options, but they do not yet benefit from the potential of artificial intelligence.

New designs such as platform clinical trials might address this in the future, and could pave the way for machine learning technologies to learn evidence rather than just association. So, large medical data sets are being assembled. New technologies to overcome the lack of certainty are being developed. Novel ways to establish causation are emerging.

This area is moving fast and tremendous potential exists for improving efficiency and health. Indeed, many ventures are trying to capitalize on this. Startups such as Enlitic, large firms such as IBM, or even small businesses such as Resonance Health, are promising to revolutionize health.

Impressive progress is being made but many challenges still exist.

by Olivier Salvado, The Conversation


 