
Friday, January 6, 2017

Why Artificial Intelligence has not yet revolutionized healthcare

Artificial intelligence and machine learning are predicted to be part of the next industrial revolution and could help business and industry save billions of dollars by the next decade.

The tech giants Google, Facebook, Apple, IBM and others are applying artificial intelligence to all sorts of data.
Machine learning methods are being used for tasks such as translating language almost in real time, and even identifying images of cats on the internet.

So why haven't we seen artificial intelligence used to the same extent in healthcare?
Radiologists still rely on visual inspection of magnetic resonance imaging (MRI) or X-ray scans – although IBM and others are working on this issue – and doctors have no access to AI for guiding and supporting their diagnoses.



The challenges for machine learning

Machine learning technologies have been around for decades, and a relatively recent technique called deep learning keeps pushing the limit of what machines can do. Deep learning networks comprise neuron-like units organized into hierarchical layers, which can recognize patterns in data.
This is done by iteratively presenting data along with the correct answer to the network until its internal parameters, the weights linking the artificial neurons, are optimized. If the training data capture the variability of the real world, the network can generalize well and provide the correct answer when presented with unseen data.
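To make that training loop concrete, here is a minimal sketch in Python using NumPy: a single artificial "neuron" (logistic regression) whose weights are nudged toward the correct answers over repeated passes through synthetic data. The data, learning rate and number of passes are made up for illustration; real deep networks stack many such units into layers and train on far larger sets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 cases with 3 features each and a known 0/1 answer.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

w = np.zeros(3)          # the "weights linking the artificial neurons"
b = 0.0
learning_rate = 0.1

for epoch in range(500):                       # iteratively present data + answers
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # the network's current predictions
    grad_w = X.T @ (p - y) / len(y)            # how far off we are, per weight
    grad_b = float(np.mean(p - y))
    w -= learning_rate * grad_w                # nudge weights toward the answers
    b -= learning_rate * grad_b

print("learned weights:", np.round(w, 2))      # roughly aligned with true_w
```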

So the learning stage requires very large data sets of cases along with the corresponding answers. Millions of records and billions of computations are needed to update the network parameters, a job often run on a supercomputer for days or weeks.
Here lie the problems with healthcare: data sets are not yet big enough and the correct answers to be learned are often ambiguous or even unknown.

We're going to need better and bigger data sets

The functions of the human body, its anatomy and its variability are very complex. The complexity is even greater because diseases are often triggered or modulated by genetic background, which is unique to each individual and therefore hard to train on.
On top of this, medical data come with specific challenges. These include the difficulty of measuring any biological process precisely and accurately, which introduces unwanted variation.

Other challenges include the presence of multiple diseases (co-morbidity) in a patient, which can often confound predictions. Lifestyle and environmental factors also play important roles but are seldom available.
The result is that medical data sets need to be extremely large.



This is being addressed across the world with increasingly large research initiatives. Examples include UK Biobank, which aims to scan 100,000 participants.

Others include the Alzheimer's Disease Neuroimaging Initiative (ADNI) in the United States and the Australian Imaging, Biomarkers and Lifestyle Study of Ageing (AIBL), tracking more than a thousand subjects over a decade.
Government initiatives are also emerging, such as the American Cancer Moonshot program. The aim is to "build a national cancer data ecosystem" so researchers, clinicians and patients can contribute data and "facilitate efficient data analysis". Similarly, the Australian Genomics Health Alliance aims to pool and share genomic information.

Eventually the electronic medical record systems being deployed across the world should provide extensive, high-quality data sets. Beyond the expected gain in efficiency, the potential to mine population-wide clinical data using machine learning is tremendous. Some companies, such as Google, are eagerly trying to access those data.

What a machine needs to learn is not obvious

Complex medical decisions are often made by a team of specialists reaching consensus rather than certainty.
Radiologists might disagree slightly when interpreting a scan in which blurring and only very subtle features can be observed. Inferring a diagnosis from measurements with errors, when the disease is also modulated by unknown genes, often relies on implicit know-how and experience rather than explicit facts.

Sometimes the true answer cannot be obtained at all. For example, measuring the size of a structure from a brain MRI cannot be validated, even at autopsy, since post-mortem tissues change in their composition and size after death.



So a machine can learn that a photo contains a cat because users have labelled thousands of pictures with certainty through social media platforms, or have told Google how to recognize doodles.
Measuring the size of a brain structure from an MRI is a much more difficult task, because no one knows the true answer; at best a consensus from several experts can be assembled, and at great cost.

Several technologies are emerging to address this issue. Complex mathematical models that incorporate probabilities, such as Bayesian approaches, can learn under uncertainty.
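As a toy illustration of that idea (assuming SciPy is available, and not drawn from any real study), a Bayesian Beta-Binomial update keeps an explicit measure of remaining uncertainty about a hypothetical detection rate, and tightens it as expert-labelled cases arrive.

```python
from scipy import stats

alpha, beta = 1.0, 1.0            # flat prior: we start knowing nothing about the rate

# Hypothetical batches of expert-labelled scans: (correctly flagged, missed) counts.
batches = [(8, 2), (17, 3), (45, 5)]

for correct, missed in batches:
    alpha += correct              # every observation sharpens the belief a little
    beta += missed
    posterior = stats.beta(alpha, beta)
    low, high = posterior.interval(0.95)
    print(f"estimated rate {posterior.mean():.2f}, "
          f"95% credible interval ({low:.2f}, {high:.2f})")
```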

Unsupervised methods can recognize patterns in data without needing to be told what the actual answers are, although interpreting the results they produce is challenging.
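A minimal sketch of one such unsupervised method, assuming scikit-learn and NumPy: k-means groups unlabelled points purely by similarity, and deciding what each discovered cluster actually means is left to the analyst, which is exactly the interpretation challenge noted above.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical unlabelled measurements drawn from two hidden groups.
data = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
    rng.normal(loc=4.0, scale=1.0, size=(100, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

print("cluster sizes:", np.bincount(model.labels_))   # groups found without any labels
print("cluster centres:\n", model.cluster_centers_)
```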

Another approach is transfer learning, whereby a machine can learn from large, different, but relevant, data sets for which the training answers are known.
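A hedged sketch of transfer learning, assuming PyTorch and torchvision: a network pretrained on ImageNet (a large, different, but relevant data set with known answers) is reused, only its final layer is replaced and trained for a hypothetical two-class task, and the random batch below merely stands in for real images.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse features learned on ImageNet; freeze them so only the new head trains.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

num_classes = 2                                            # hypothetical task: disease vs. healthy
model.fc = nn.Linear(model.fc.in_features, num_classes)    # new, trainable final layer

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a fake batch (real scans would go here).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("one fine-tuning step done, loss =", float(loss))
```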

Medical applications of deep learning have already been very successful. They often place first in competitions held at scientific meetings, where data sets are made available and the evaluation of submitted results is revealed during the conference.
At CSIRO, we have been developing CapAIBL (Computational Analysis of PET from AIBL) to analyze 3-D images of brain positron emission tomography (PET).



Using a database with many scans from healthy individuals and patients with Alzheimer's disease, the method is able to learn the pattern characteristics of the disease. It can then identify that signature in an unseen individual's scan. The clinical report generated allows doctors to diagnose the disease faster and with more confidence.
In the case shown above, CapAIBL technology was applied to amyloid plaque imaging in a patient with Alzheimer's disease. Red indicates higher amyloid deposition in the brain, a sign of Alzheimer's.

The problem with causation

Probably the most challenging issue is understanding causation. Analyzing retrospective data is prone to learning spurious correlations and missing the underlying causes of diseases or the effects of treatments.
Traditionally, randomized clinical trials provide evidence on the superiority of different options, but they do not yet benefit from the potential of artificial intelligence.

New designs such as platform clinical trials might address this in the future, and could pave the way for machine learning technologies to learn evidence rather than just association. So, large medical data sets are being assembled. New technologies to overcome the lack of certainty are being developed. Novel ways to establish causation are emerging.

This area is moving fast and tremendous potential exists for improving efficiency and health. Indeed, many ventures are trying to capitalize on this. Startups such as Enlitic, large firms such as IBM, or even small businesses such as Resonance Health, are promising to revolutionize health.

Impressive progress is being made but many challenges still exist.

by Olivier Salvado, The Conversation

YOUR INPUT IS MUCH APPRECIATED! LEAVE YOUR COMMENT BELOW.

Sunday, December 25, 2016

Why it's dangerous to outsource our critical thinking to computers

It is crucial for a resilient democracy that we better understand how Google and Facebook are changing the way we think, interact and behave



The lack of transparency around the processes of Google’s search engine has been a preoccupation among scholars since the company began. Long before Google expanded into self-driving cars, smartphones and ubiquitous email, the company was being asked to explain the principles and ideologies that determine how it presents information to us. And now, 10 years later, the impact of reckless, subjective and inflammatory misinformation served up on the web is being felt like never before in the digital era.
Google responded to negative coverage this week by reluctantly acknowledging and then removing offensive autocomplete suggestions for certain searches. Type "Jews are" into Google, for example, and until now the site would autofill "Jews are evil" before recommending links to several rightwing anti-Semitic sites.

Then came the misinformation debacle that was the US presidential election. When Facebook CEO Mark Zuckerberg addressed the issue, he admitted that structural issues lie at the heart of the problem: the site financially rewards the kind of sensationalism and fake news likely to spread rapidly through the social network, regardless of its veracity. The site does not identify bad reporting, or even distinguish fake news from satire.
Facebook is now trying to solve a problem it helped create. Yet instead of using its vast resources to promote media literacy, or encouraging users to think critically and identify potential problems with what they read and share, Facebook is relying on developing algorithmic solutions that can rate the trustworthiness of content.



This approach could have detrimental, long-term social consequences. The scale and power with which Facebook operates means the site would effectively be training users to outsource their judgment to a computerized alternative. And it gives even less opportunity to encourage the kind of 21st-century digital skills – such as reflective judgment about how technology is shaping our beliefs and relationships – that we now see to be perilously lacking.

The engineered environments of Facebook, Google and the rest have increasingly discouraged us from engaging in an intellectually meaningful way. We, the masses, aren’t stupid or lazy when we believe fake news; we’re primed to continue believing what we’re led to believe.

The networked info-media environment that has emerged in the past decade – of which Facebook is an important part – is a space that encourages people to accept what’s presented to them without reflection or deliberation, especially if it appears surrounded by credible information or passed on from someone we trust. There’s a powerful, implicit value in information shared between friends that Facebook exploits, but it accelerates the spread of misinformation as much as it does good content.



Every piece of information appears to be presented and assessed with equal weight, a New York Times article followed by some fake news about the pope, a funny dog video shared by a close friend next to a distressing, unsourced and unverified video of an injured child in some Middle East conflict. We have more information at our disposal than ever before, but we’re paralyzed into passive complacency. We’re being engineered to be passive, programmable people.

In the never-ending stream of comfortable, unchallenging, personalized infotainment there's little incentive to break off, to triangulate and fact-check with reliable and contrary sources. Actively choosing what might need investigating feels like too much effort, and even then a quick Google search of a questionable news story on Facebook may turn up a link to a rehashed version of the same fake story.

The "transaction costs" of leaving the site are high: switching gears is fiddly and takes time, and it's also far easier to passively accept what you see than to challenge it. Platforms overload us with information and encourage us to feed the machine with easy, speedy clicks. The media feed our susceptibility to "filter bubbles" and capitalize on contagious emotions such as anger.

It is crucial for a resilient democracy that we better understand how these powerful, ubiquitous websites are changing the way we think, interact and behave. Democracies don’t simply depend on well-informed citizens – they require citizens to be capable of exerting thoughtful, independent judgment.

This capacity is a mental muscle; only repeated use makes it strong. And when we spend a long time in places that deliberately discourage critical thinking, we lose the opportunity to keep building that skill.



Source: Evan Selinger is a professor of philosophy at Rochester Institute of Technology, and Brett Frischmann is the Microsoft visiting professor of information technology policy at Princeton University and professor of law at Benjamin N Cardozo School of Law. Their forthcoming book Being Human in the 21st Century (Cambridge University Press, 2017) examines whether technology is eroding our humanity, and offers new theoretical tools for dealing with it.

YOUR INPUT IS MUCH APPRECIATED! LEAVE YOUR COMMENT BELOW.

Thursday, December 10, 2015

The amazing Google quantum computer

A recent article reported that a much-discussed computer, operated by Google together with NASA, has proved superior to the conventional computers in use today on certain tasks. The claim comes from researchers at Google's Quantum Artificial Intelligence Laboratory.
Google says the machine is built around a chip so powerful and advanced that it could help run artificial intelligence algorithms.
Google's researchers say the computer exploits effects from quantum physics, and that with the right mathematical framing of a problem it could advance the AI field by working much faster than a conventional computer.
Major players in computing such as IBM, Microsoft and Google, as well as government laboratories, are trying to develop a "quantum computer" that uses the concepts of quantum mechanics to work with data in ways conventional machines cannot, including sifting through huge amounts of data that cannot currently be processed at full capacity. These organizations believe a quantum computer could make their artificial intelligence work more capable and also bring advances in fields such as materials science.




NASA hopes that quantum computing can help in future space missions. Deepak Biswas, director of exploration technology at NASA's Ames Research Center in Mountain View, California, called it "a very disruptive technology that could change the way we do things today."
Biswas spoke to the media about the joint work between NASA and Google. He explained that the machine's data are processed by a superconducting chip called a quantum annealer, which is hard-wired to run an algorithm suited to optimization problems of the kind common in machine-learning and artificial intelligence programs.
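To give a feel for the kind of optimization problem an annealer targets, here is a classical simulated-annealing sketch in plain Python on a made-up spin-coupling problem. It only illustrates the general idea of annealing toward a low-energy configuration; it is not D-Wave's hardware, Google's benchmark, or a quantum algorithm.

```python
import math
import random

random.seed(42)

# Hypothetical toy problem: choose +1/-1 "spins" to minimise a random coupling energy.
n = 20
J = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def energy(spins):
    return sum(J[i][j] * spins[i] * spins[j]
               for i in range(n) for j in range(i + 1, n))

spins = [random.choice([-1, 1]) for _ in range(n)]
current = energy(spins)
temperature = 5.0

for step in range(5000):
    i = random.randrange(n)
    spins[i] *= -1                      # propose flipping one spin
    candidate = energy(spins)
    delta = candidate - current
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        current = candidate             # accept the move (occasionally uphill)
    else:
        spins[i] *= -1                  # reject: undo the flip
    temperature *= 0.999                # cool down gradually

print("approximate minimum energy found:", round(current, 3))
```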
Despite this progress, D-Wave's chips still have detractors, particularly among quantum physicists. Several researchers argue it has not been fully proven that the machine really exploits quantum physics or that it can beat conventional computers.
Amid the controversy over the chip's capability, Google's Quantum AI Lab, through its leader Hartmut Neven, said its investigations have provided strong evidence on the matter. Google ran a series of tests comparing the D-Wave machine it operates with NASA against conventional computers, and Neven said that on a specially designed proof-of-concept problem the machine reached speeds up to 100 million times faster.
Google's results are impressive, but even setting aside questions about the test's validity, they are only a partial vindication of D-Wave. The benchmark pitted the machine against a conventional computer running code that solves the problem with an algorithm similar to the one built into the D-Wave chip. An alternative algorithm might have let the conventional computer be more competitive, or even win, by exploiting what Neven called a "bug" in D-Wave's design. Neven said the test is still important because that shortcut should not be available to conventional computers once future quantum annealers can work on larger amounts of data.



Google is not relying on D-Wave alone. Last summer, the company opened a new laboratory in Santa Barbara, led by the academic researcher John Martinis.

Martinis is working on quantum hardware that is not limited to the optimization problems annealers handle. Such a "universal quantum computer" could be programmed to tackle any kind of problem and would be far more useful, but it is expected to take considerably longer to build. IBM and Microsoft, as well as government laboratories, are also working on this technology.



Google's vice-president John Giannandrea, who coordinates the company's research, said that if quantum annealers prove practical, many uses could be found for them in powering more capable machine-learning software. He indicated that Google has already run into problems during product development that it cannot solve with existing computers, and that it may be many years before this research makes a difference to Google products.
For those amazed by this computer, it is worth explaining that its main purpose is to work through enormous amounts of data as quickly as possible. With that problem solved, Google could give programmers and general users access to multiple databases, cross-reference the information and return search results almost immediately, and the computer might even "suggest" how best to use those results.
Remember that your comments are valuable to us.
Source: Tom Simonite, December 9, 2015
 
OUR MISSION