
Sunday, February 26, 2017

Artificial Synapse for Neural Networks

Alberto Salleo, associate professor of materials science and engineering, with graduate student Scott Keene characterizing the electrochemical properties of an artificial synapse for neural network computing. They are part of a team that has created the new device. Credit: L.A. Cicero

A new organic artificial synapse could support computers that better recreate the way the human brain processes information. It could also lead to improvements in brain-machine technologies.



For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain's efficient design -- an artificial version of the space over which neurons communicate, called a synapse.

"It works like a real synapse but it's an organic electronic device that can be engineered," said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. "It's an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that's been done before with inorganics."



The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain
When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we've learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

"Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time," said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. "Instead of simulating a neural network, our work is trying to make a neural network."



The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works like a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in the brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to bring the synapse to a specific electrical state and, once there, it remains in that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
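The program-and-verify loop described above can be sketched as a toy model. The update rule, gain, and 1-percent tolerance below are illustrative assumptions, not the published device physics:

```python
# Toy sketch of programming a synapse-like device to a target state by
# repeated pulses. All constants are illustrative, not measured values.

def program_synapse(target, g=0.0, gain=0.5, tol=0.01, max_pulses=1000):
    """Pulse the device until conductance g is within tol of target."""
    for pulses in range(1, max_pulses + 1):
        error = target - g
        if abs(error) <= tol * abs(target):
            return g, pulses
        g += gain * error   # each pulse nudges the state toward the target
    return g, max_pulses

state, n_pulses = program_synapse(target=1.0)
# Once programmed, the state persists: no separate "save" step is needed.
```

The point of the sketch is the non-volatility: after the loop ends, `state` simply stays put, which is what distinguishes the device from conventional volatile processing.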

Testing a network of artificial synapses
Only one artificial synapse has been produced so far, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network's ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array identified the handwritten digits with an accuracy between 93 and 97 percent.
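One way to picture such a simulation (a hypothetical sketch, not Sandia's code): the devices sit in a crossbar array whose column currents realize a neural network's weighted sums, with each device conductance playing the role of a trained weight.

```python
# Illustrative crossbar sketch: each column current is the sum of input
# voltages weighted by the device conductances in that column, so the
# array computes a matrix-vector product in a single analog step.

def crossbar_output(conductances, inputs):
    """conductances: rows x cols matrix of device states;
    inputs: one voltage per row. Returns one current per column."""
    cols = len(conductances[0])
    return [sum(conductances[r][c] * inputs[r]
                for r in range(len(inputs)))
            for c in range(cols)]

# A 3x2 array: two output "neurons" reading three inputs.
g = [[0.2, 0.9],
     [0.8, 0.1],
     [0.5, 0.5]]
currents = crossbar_output(g, [1.0, 1.0, 0.0])
# → [1.0, 1.0]
```

In a real simulation, the entries of `g` would be drawn from the measured device characteristics rather than chosen by hand.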

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

"More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry," said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. "We've demonstrated a device that's ideal for running these types of algorithms and that consumes a lot less power."



This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
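The jump from 2 states to 500 can be made concrete: a 500-level device holds log2(500) ≈ 9 bits, and quantizing a continuous weight onto that many levels costs very little precision. A small sketch (illustrative numbers only):

```python
# A binary transistor stores 1 bit per device; a 500-state synapse stores
# log2(500) ≈ 9 bits. Quantizing a weight to 500 levels loses at most
# half a level step of precision.

import math

LEVELS = 500

def quantize(w, levels=LEVELS):
    """Map a weight in [0, 1] to the nearest of `levels` device states."""
    step = 1.0 / (levels - 1)
    return round(w / step) * step

bits_per_device = math.log2(LEVELS)   # ≈ 8.97, versus 1 for a digital switch
err = abs(quantize(0.1234) - 0.1234)  # bounded by half a step, ≈ 0.001
```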

Organic potential
Every part of the device is made of inexpensive organic materials. These aren't found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain's chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it's possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven
University of Technology in the Netherlands.



This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia's Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.
Story Source:
Materials provided by Stanford University. Original written by Taylor Kubota

YOUR INPUT IS MUCH APPRECIATED! LEAVE YOUR COMMENT BELOW.

Sunday, January 22, 2017

Minds and Computers: Is there a possibility of a Computational theory of Mind?

There is now a very wide range of sound introductory texts in the philosophy of mind. Matt Carter’s new book offers something rather different. His opening six chapters include material which will be very familiar to any student of the philosophy of mind: dualism, behaviorism, materialism, and functionalism. But his main concern is to outline and defend the possibility of a computational theory of mind. Three chapters outline in a formal, rigorous way a variety of concepts necessary for understanding what computation is, and the remainder of the book aims to show how this formal machinery might be invoked in an explanation of what the mind is and how it works.

Carter’s cautious conclusion is that, on the one hand, there is no objection in principle to the programme of strong artificial intelligence – i.e., that there can be systems which display (and so have) mentality simply in virtue of instantiating certain computer programs – but that, on the other hand, our best available programs are ‘woefully inadequate’ to that task.



Carter succeeds admirably in explaining why this might be so. The opening chapters will be fairly simple for philosophy students, but the material thereafter will be almost wholly new and not available elsewhere in such a user-friendly form. For students of artificial intelligence (AI), the book explains very clearly why the whole artificial intelligence project presupposes substantive and controversial answers to some traditional philosophical questions. The book is a model exercise in interdisciplinarity. It’s also written lucidly, with regular summaries of important points. An appendix supplies a useful glossary of technical terms.

So far, so good – very good, in fact. However, as usual among critics, I want to make a number of critical comments, of increasing weight. The first and least weighty: though Carter sprinkles the text with exercises, he rarely supplies any answers, which students will surely find frustrating.

Next, he doesn’t mention one set of reservations some philosophers have about AI. That is the tendency among cognitive scientists to attribute to the brain certain activities which (so the criticism goes) belong only to the whole person. For example, in accounting for perception, the scientists will speak in terms of the brain receiving messages, interpreting data, constructing hypotheses, drawing inferences, etc., as if the brain were itself a small person. Carter may well find such criticism unpersuasive, but it would have been good to give it an airing, as it has been a significant issue between defenders of AI and their critics.

The third reservation is a matter of substance. Computer programs operate on purely ‘syntactic’ features – ultimately, they depend upon the physical form of the inputs, transformations and outputs. By contrast, human thought is always a thought about something: it represents something, it has content. It displays what philosophers call ‘intentionality’. One central problem for artificial intelligence is how to get aboutness into computer programs – how to get semantics out of syntax.

Carter’s answer is to invoke experience. What enables certain expressions of syntax in our heads to represent features of the world is that they are linked with the external world, and the linkage comes about because we experience the world. “In order for our mental states to have meaning, intentionality, we must have antecedent experience of the world, mediated by our sensory apparatus”, (p.179). Now this response might be helpful for a computational theory of mind if experience could be explained in purely computational terms. Some philosophers and AI theorists believe that this can be done, but arguably the move is not available to Carter. For earlier in the book he committed himself to an account of experience which seems to preclude a computational treatment.



Carter thinks that all our experiences have a qualitative aspect: that they include so-called qualia. There is something it is like to see the color red. Visual experience is more than merely receiving certain physical inputs in the form of light waves, undergoing certain transformations in the brain and producing physical outputs such as speaking the sentence “There is something red.” What it is like to be in any given experiential state, says Carter, “can be known only by having the first person experience of being in the state” (p.43). However, if this is correct, it surely cannot be squared with a computational theory of experience. Carter thinks that detecting qualia requires you to have that experience yourself; but there is no reason to think that detecting a particular computer program requires you to be an embodiment of that program yourself. Therefore experience can’t be like a computer program.

Carter tries to avoid the qualia problem by saying that it is not important that we each have qualitative experience unknowable by other people, so long as we agree on which things are red and which are not. But this seems inadequate. If a computational theory of mind requires getting semantics out of syntax, and this requires a connection with the real world via our sensory experiences, and these experiences essentially involve qualia, we can hardly accept that qualia are unimportant. They are precisely what makes experience experience, and so mind mind.



It seems that Carter is faced with a trilemma. He needs to explain how he thinks a computational account can be provided of qualia; or he needs to abandon a qualia-based account of experience, in favor of some computational account; or he needs to abandon his conclusion that there is no objection in principle to a purely computational account of the mind.

However, it would be unreasonable to expect an introductory text such as this to provide the solutions to these problems. What it needs to do is to give readers a sense of the issues involved. Carter’s text does that extremely well.
Source: Nicholas Everitt


Monday, November 28, 2016

Living Robot with 'Human Brain'

Close to the creation of Super-Computer with AI


Computer scientists attempting to electronically replicate the human brain are close to creating a 'living PC'.



Engineers at the University of Massachusetts are developing microprocessors which mimic biological synapses - the junctions across which nerve cells pass messages throughout the human body.

The science fiction-style project is being undertaken by Joshua Yang and Qiangfei Xia, professors of electrical and computer engineering at the US college.

Their work focuses heavily on ‘memristors’ - a computer component which could change science forever, switching the focus from electronics to ‘ionics’.

Ionics, unlike electronics, does not depend on a continuous power source. An ionic device essentially has a memory: even if it loses power, it remembers what it was doing before and can continue the action.



“The computers will send messages in the same manner as the human brain”
This means computers of the near future will be able to switch on and off like a lightbulb, without losing any data or files in the process.

Different researchers and developers, including Mr. Yang and Mr. Xia, are now racing to be the first to harness this technology and use it to create a new generation of computers.

Professor Jennifer Rupp said: “I think there is a race going on. There is a strong driving force, but at the same time it's very important that there are players like HP, because they want to get to the market, show everyone that this is real.”

Mr. Yang and Mr. Xia explained the process in more detail in their report, describing neuromorphic computing - computers which mimic the human brain.

Computers will soon have memories and be able to operate without power
They said: “Memristors have become a leading candidate to enable neuromorphic computing by reproducing the functions in biological synapses and neurons in a neural network system, while providing advantages in energy and size”.

“This work opens a new avenue of neuromorphic computing hardware based on ‘memristors’”.



“Specifically, we developed a diffusive-type ‘memristor’ where diffusion of atoms offers a similar dynamics and the needed time-scales as its bio-counterpart, leading to a more faithful emulation of actual synapses i.e. a true synaptic emulator”.

“The results here provide an encouraging pathway toward synaptic emulation using diffusive ‘memristors’ for neuromorphic computing."
Source: Joey Millar


Tuesday, October 18, 2016

New devices emulate Human Biological Synapses

A new type of nano device for computer microprocessors is being developed that can mimic the functioning of a biological synapse -- the place where a signal passes from one nerve cell to another in the body, report scientists.



Memristive devices are electrical resistance switches that can alter their resistance based on the history of applied voltage and current. These devices can store and process information and offer several key performance characteristics that exceed conventional integrated circuit technology.
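That history dependence can be captured in a minimal, textbook-style toy (not a model of the UMass Amherst device): the conductance tracks a state variable that integrates the applied voltage over time.

```python
# Toy memristor: the internal state accumulates the applied voltage
# history, so the device's present resistance depends on its past.

def step(state, voltage, dt=1.0, rate=0.1):
    """Advance the internal state by the applied voltage; clamp to [0, 1]."""
    state += rate * voltage * dt
    return min(1.0, max(0.0, state))

def conductance(state, g_min=0.01, g_max=1.0):
    """Map internal state to a conductance between two extremes."""
    return g_min + state * (g_max - g_min)

s = 0.0
for v in [1.0, 1.0, 1.0, -1.0]:   # three positive pulses, one negative
    s = step(s, v)
# s is now 0.2: the device "remembers" the net voltage it has seen
```

Because the state persists when no voltage is applied, the same mechanism gives both storage and processing in one element.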

Engineers at the University of Massachusetts Amherst are leading a research team that is developing a new type of nanodevice for computer microprocessors that can mimic the functioning of a biological synapse -- the place where a signal passes from one nerve cell to another in the body. The work is featured in the advance online publication of Nature Materials.

Such neuromorphic computing in which microprocessors are configured more like human brains is one of the most promising transformative computing technologies currently under study.



J. Joshua Yang and Qiangfei Xia are professors in the electrical and computer engineering department in the UMass Amherst College of Engineering. Yang describes the research as part of collaborative work on a new type of memristive device.

"Memristors have become a leading candidate to enable neuromorphic computing by reproducing the functions in biological synapses and neurons in a neural network system, while providing advantages in energy and size," the researchers say.
Neuromorphic computing -- meaning microprocessors configured more like human brains than like traditional computer chips -- is one of the most promising transformative computing technologies currently under intensive study. Xia says, "This work opens a new avenue of neuromorphic computing hardware based on memristors.”

They say that most previous work in this field with ‘memristors’ has been unable to implement diffusive dynamics without relying on the bulky standard technology found in integrated circuits commonly used in microprocessors, microcontrollers, static random access memory and other digital logic circuits.



The researchers say they proposed and demonstrated a bio-inspired solution to the diffusive dynamics that is fundamentally different from the standard technology for integrated circuits while sharing great similarities with synapses. They say, "Specifically, we developed a diffusive-type ‘memristor’ where diffusion of atoms offers a similar dynamics and the needed time-scales as its bio-counterpart, leading to a more faithful emulation of actual synapses, i.e., a true synaptic emulator."

The researchers say, "The results here provide an encouraging pathway toward synaptic emulations using diffusive ‘memristors’ for neuromorphic computing."
Source: UMass Amherst


Wednesday, June 29, 2016

Computers have the ability to Reason Like Humans

Northwestern University's Ken Forbus is closing the gap between humans and machines.



Using cognitive science theories, Forbus and his collaborators have developed a model that could give computers the ability to reason more like humans and even make moral decisions. Called the structure-mapping engine (SME), the new model is capable of analogical problem solving, including capturing the way humans spontaneously use analogies between situations to solve moral dilemmas.



"In terms of thinking like humans, analogies are where it's at," said Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern's McCormick School of Engineering. "Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas."

The theory underlying the model is psychologist Dedre Gentner's structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychological phenomena.

Structure-mapping argues that analogy and similarity involve comparisons between relational representations, which connect entities and ideas, for example, that a clock is above a door or that pressure differences cause water to flow.

Analogies can be complex (electricity flows like water) or simple (his new cell phone is very similar to his old phone). Previous models of analogy, including prior versions of SME, have not been able to scale to the size of representations that people tend to use. Forbus's new version of SME can handle the size and complexity of relational representations that are needed for visual reasoning, cracking textbook problems, and solving moral dilemmas.
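A drastically simplified sketch of the idea behind structure-mapping (a toy, nowhere near SME itself): represent each domain as (predicate, arg1, arg2) tuples and score candidate entity mappings by how much relational structure they align. All names below are invented for illustration.

```python
# Toy structure-mapping: find the entity correspondence between two
# domains that aligns the most relational statements.

from itertools import permutations

def match_score(base, target, mapping):
    """Count base relations whose predicate and mapped args appear in target."""
    return sum((pred, mapping.get(a), mapping.get(b)) in target
               for pred, a, b in base)

def best_mapping(base, target):
    """Try every pairing of base entities to target entities (tiny domains)."""
    base_ents = sorted({e for _, a, b in base for e in (a, b)})
    targ_ents = sorted({e for _, a, b in target for e in (a, b)})
    best = max(permutations(targ_ents, len(base_ents)),
               key=lambda p: match_score(base, target, dict(zip(base_ents, p))))
    return dict(zip(base_ents, best))

# "Electricity flows like water": align the water domain with a circuit.
water = {("flows_from_to", "high_pressure", "low_pressure")}
circuit = {("flows_from_to", "high_voltage", "low_voltage")}
mapping = best_mapping(water, circuit)
# maps high_pressure → high_voltage, low_pressure → low_voltage
```

SME proper handles vastly larger representations, nested relations, and structural-consistency constraints; the toy only shows why shared relational structure, not surface similarity, drives the match.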

"Relational ability is the key to higher-order cognition," said Gentner, Alice Gabrielle Twight Professor in Northwestern's Weinberg College of Arts and Sciences. "Although we share this ability with a few other species, humans greatly exceed other species in ability to represent and reason with relations."

Supported by the Office of Naval Research, Defense Advanced Research Projects Agency, and Air Force Office of Scientific Research, Forbus and Gentner's research is described in the June 20 issue of the journal Cognitive Science. Andrew Lovett, a postdoctoral fellow in Gentner's laboratory, and Ronald Ferguson, a PhD graduate from Forbus's laboratory, also authored the paper.



Many artificial intelligence systems -- like Google's AlphaGo -- rely on deep learning, a process in which a computer learns by examining massive amounts of data. By contrast, people -- and SME-based systems -- often learn successfully from far fewer examples. In moral decision-making, for example, a handful of stories suffices to enable an SME-based system to learn to make decisions as people do in psychological experiments.

"Given a new situation, the machine will try to retrieve one of its prior stories, looking for analogous sacred values, and decide accordingly," Forbus said.

SME has also been used to learn to solve physics problems from the Advanced Placement test, with a program being trained and tested by the Educational Testing Service. As further demonstration of the flexibility of SME, it also has been used to model multiple visual problem-solving tasks.

To encourage research on analogy, Forbus's team is releasing the SME source code and a 5,000-example corpus, which includes comparisons drawn from visual problem solving, textbook problem solving, and moral decision making.

The range of tasks successfully tackled by SME-based systems suggests that analogy might lead to a new technology for artificial intelligence systems as well as a deeper understanding of human cognition. For example, using analogy to build models by refining stories from multiple cultures that encode their moral beliefs could provide new tools for social science. Analogy-based artificial intelligence techniques could be valuable across a range of applications, including security, health care, and education.



"SME is already being used in educational software, providing feedback to students by comparing their work with a teacher's solution," Forbus said. "But there is a vast untapped potential for building software tutors that use analogy to help students learn."

Source: Northwestern University


Wednesday, June 22, 2016

Computers will Understand Human Language

Researchers at the University of Liverpool have developed a set of algorithms that help teach computers to process and understand the Human Language.



Whilst mastering natural language is easy for humans, it is something that computers have not yet been able to achieve. Humans understand language in a variety of ways: for example, by looking a word up in a dictionary, or by associating it with words in the same sentence in a meaningful way.

The algorithms will enable a computer to act in much the same way as a human would when it encounters an unknown word.

–An algorithm, in mathematics and computer science, is a self-contained, step-by-step set of operations to be performed; algorithms carry out calculation, data processing, and automated reasoning tasks. The word 'algorithm' comes from the name of al-Khwarizmi, a Persian mathematician, geographer and scholar.

When the computer encounters a word it doesn't recognize or understand, the algorithms have it look the word up in a dictionary (such as WordNet) and try to guess what other words should appear with the unknown word in the text.

This gives the computer a semantic representation for the word that is consistent both with the dictionary and with the context in which it appears in the text.
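One way such a blended representation could look (an illustrative sketch under assumptions of my own, not the Liverpool algorithm): mix a vector built from the dictionary gloss with one built from the surrounding context, then compare words by cosine similarity. The words and weights below are invented.

```python
# Blend a dictionary-derived vector with a context-derived vector, so the
# representation is consistent with both sources, then compare words.

from collections import Counter
from math import sqrt

def represent(gloss_words, context_words, alpha=0.5):
    """Weight gloss words by alpha and context words by (1 - alpha)."""
    vec = Counter()
    for w in gloss_words:
        vec[w] += alpha
    for w in context_words:
        vec[w] += 1.0 - alpha
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

cat = represent(["small", "furry", "animal"], ["pet", "purred", "mat"])
dog = represent(["loyal", "furry", "animal"], ["pet", "barked", "yard"])
sim = cosine(cat, dog)   # shared gloss and context words → similarity > 0
```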



To determine whether the algorithm has provided the computer with an accurate representation of a word, the researchers compare similarity scores produced using the word representations learnt by the algorithm against human-rated similarities.
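The standard form of this check is a rank correlation between model scores and human ratings. A small self-contained sketch (Spearman's rho, tie-free case; the scores are invented):

```python
# Spearman rank correlation: 1.0 means the model orders word pairs by
# similarity exactly as humans do. This simple formula assumes no ties.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

model_scores = [0.9, 0.1, 0.5, 0.7]
human_scores = [0.8, 0.2, 0.4, 0.6]   # same ordering as the model's scores
rho = spearman(model_scores, human_scores)
# → 1.0
```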

Liverpool computer scientist, Doctor Danushka Bollegala, said: "Learning accurate word representations is the first step towards teaching languages to computers."

"If we can represent the meaning of a word in a way a computer can understand, then the computer will be able to read texts on behalf of humans and perform potentially useful tasks such as translating a text written in a foreign language, summarizing a lengthy article, or finding other similar documents on the Internet.



"We are excitedly waiting to see the immense possibilities that will be brought about when such accurate semantic representations are used in various language processing tasks by computers."

Source: University of Liverpool


 