
Friday, March 3, 2017

Scientists reveal new Super-Fast form of Computer that 'Grows as it Computes'

DNA double helix. Credit: public domain

Researchers from The University of Manchester have shown it is possible to build a new super-fast form of computer that "grows as it computes".

Professor Ross D King and his team have demonstrated for the first time the feasibility of engineering a nondeterministic universal Turing machine (NUTM), and their research is to be published in the prestigious Journal of the Royal Society Interface.

The theoretical properties of such a computing machine, including its exponential boost in speed over electronic and quantum computers, have been well understood for many years – but the Manchester breakthrough demonstrates that it is actually possible to physically create a NUTM using DNA molecules.

"Imagine a computer is searching a maze and comes to a choice point, one path leading left, the other right," explained Professor King, from Manchester's School of Computer Science. "Electronic computers need to choose which path to follow first.



"But our new computer doesn't need to choose, for it can replicate itself and follow both paths at the same time, thus finding the answer faster.

"This 'magical' property is possible because the computer's processors are made of DNA rather than silicon chips. All electronic computers have a fixed number of chips.
"Our computer's ability to grow as it computes makes it faster than any other form of computer, and enables the solution of many computational problems previously considered impossible.
"Quantum computers are an exciting other form of computer, and they can also follow both paths in a maze, but only if the maze has certain symmetries, which greatly limits their use.

"As DNA molecules are very small a desktop computer could potentially utilize more processors than all the electronic computers in the world combined - and therefore outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy."
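The branching behavior Professor King describes can be illustrated in ordinary code. Below is a toy Python sketch (my illustration, not the Manchester team's DNA implementation): at every junction the "machine" replicates itself, so all branches of the maze are explored simultaneously rather than one at a time.

```python
from collections import deque

def nondeterministic_search(maze, start, goal):
    """Simulate NUTM-style search: at each junction the machine
    replicates, so every branch advances in the same time step.
    `maze` maps a position to the positions reachable from it."""
    frontier = deque([start])   # all live copies of the machine
    visited = {start}
    steps = 0
    while frontier:
        steps += 1
        next_frontier = deque()
        for pos in frontier:    # every copy moves in parallel
            if pos == goal:
                return steps
            for branch in maze.get(pos, []):
                if branch not in visited:  # spawn a replica per branch
                    visited.add(branch)
                    next_frontier.append(branch)
        frontier = next_frontier
    return None  # goal unreachable

# A choice point at "A": left leads to a dead end, right to the goal.
maze = {"A": ["L1", "R1"], "L1": ["L2"], "R1": ["goal"]}
```

The number of live copies can grow at every junction, which is exactly the "grows as it computes" property: time to the goal scales with path length, not with the number of paths.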



The University of Manchester is famous for its connection with Alan Turing - the founder of computer science - and for creating the first stored-program electronic computer.

"This new research builds on both these pioneering foundations," added Professor King.
Alan Turing's greatest achievement was inventing the concept of a universal Turing machine (UTM) - a computer that can be programmed to compute anything any other computer can compute. Electronic computers are a form of UTM, but no quantum UTM has yet been built.

DNA computing is the performing of computations using biological molecules rather than traditional silicon chips. In DNA computing, information is represented using the four-character genetic alphabet - A [adenine], G [guanine], C [cytosine], and T [thymine] - rather than the binary alphabet, which is a series of 1s and 0s used by traditional computers.
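Because the genetic alphabet has four characters, each base can carry two bits of information. Here is a deliberately naive Python sketch of such an encoding (illustrative only; real DNA-computing encodings are chosen for biochemical stability, not this simple pairing):

```python
# Naive 2-bits-per-base mapping; real schemes avoid error-prone sequences.
BITS_TO_BASE = {"00": "A", "01": "G", "10": "C", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    """Pack a binary string (even length) into a DNA strand."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Recover the binary string from a DNA strand."""
    return "".join(BASE_TO_BITS[base] for base in strand)
```

So a byte of data fits in four bases, which hints at why DNA storage densities dwarf those of silicon.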

Provided by: University of Manchester

YOUR INPUT IS MUCH APPRECIATED! LEAVE YOUR COMMENT BELOW.

Friday, February 24, 2017

How humans bond: The Brain Chemistry Revealed

In a new study, researchers found for the first time that dopamine is involved in human bonding, bringing the brain's reward system into our understanding of how we form human attachments.



In new research published Monday in the journal Proceedings of the National Academy of Sciences, Northeastern University psychology professor Lisa Feldman Barrett found, for the first time, that the neurotransmitter dopamine is involved in human bonding, bringing the brain's reward system into our understanding of how we form human attachments. The results, based on a study with 19 mother-infant pairs, have important implications for therapies addressing postpartum depression as well as disorders of the dopamine system such as Parkinson's disease, addiction, and social dysfunction.



"The infant brain is very different from the mature adult brain -- it is not fully formed," says Barrett, University Distinguished Professor of Psychology and author of the forthcoming book How Emotions Are Made: The Secret Life of the Brain. "Infants are completely dependent on their caregivers. Whether they get enough to eat, the right kind of nutrients, whether they're kept warm or cool enough, whether they're hugged enough and get enough social attention, all these things are important to normal brain development. Our study shows clearly that a biological process in one person's brain, the mother's, is linked to behavior that gives the child the social input that will help wire his or her brain normally. That means parents' ability to keep their infants cared for leads to optimal brain development, which over the years results in better adult health and greater productivity."

To conduct the study, the researchers turned to a novel technology: a machine capable of performing two types of brain scans simultaneously -- functional magnetic resonance imaging, or fMRI, and positron emission tomography, or PET.

fMRI looks at the brain in slices, front to back, like a loaf of bread, and tracks blood flow to its various parts. It is especially useful in revealing which neurons are firing frequently as well as how different brain regions connect in networks. PET uses a small amount of radioactive chemical plus dye (called a tracer) injected into the bloodstream along with a camera and a computer to produce multidimensional images to show the distribution of a specific neurotransmitter, such as dopamine or opioids.



Barrett's team focused on the neurotransmitter dopamine, a chemical that acts in various brain systems to spark the motivation necessary to work for a reward. They tied each mother's dopamine level to her degree of synchrony with her infant, as well as to the strength of connectivity within a brain network called the medial amygdala network, which, within the social realm, supports social affiliation.

"We found that social affiliation is a potent stimulator of dopamine," says Barrett. "This link implies that strong social relationships have the potential to improve your outcome if you have a disease, such as depression, where dopamine is compromised. We already know that people deal with illness better when they have a strong social network. What our study suggests is that caring for others, not just receiving caring, may have the ability to increase your dopamine levels."



Before performing the scans, the researchers videotaped the mothers at home interacting with their babies and applied measurements to the behaviors of both to ascertain their degree of synchrony. They also videotaped the infants playing on their own.

Once in the brain scanner, each mother viewed footage of her own baby at solitary play as well as an unfamiliar baby at play while the researchers measured dopamine levels, with PET, and tracked the strength of the medial amygdala network, with fMRI.

The mothers who were more synchronous with their own infants showed both an increased dopamine response when viewing their child at play and stronger connectivity within the medial amygdala network. "Animal studies have shown the role of dopamine in bonding but this was the first scientific evidence that it is involved in human bonding," says Barrett. "That suggests that other animal research in this area could be directly applied to humans as well."

The findings, says Barrett, are "cautionary." "They have the potential to reveal how the social environment impacts the developing brain," she says. "People's future health, mental and physical, is affected by the kind of care they receive when they are babies. If we want to invest wisely in the health of our country, we should concentrate on infants and children, eradicating the adverse conditions that interfere with brain development."
Source: Materials provided by Northeastern University, original written by Thea Singer.


Friday, January 20, 2017

Artificial Intelligence and Machine Learning: What's the Next Step?

It's difficult to describe, in a concise list of fewer than 1,000 words, the definitive direction artificial intelligence will take over a 12-month span. The year 2016 surprised many people with the speed at which certain technologies developed and with the revised ETAs of new AI-driven products hitting the public market.
Here are the four trends that will dominate artificial intelligence in 2017.

1. Language processing will continue
We could call this "natural language processing" or NLP, but let's think more broadly about language for a moment. The key to cognition, for you mavens of Psychology 101, is sophisticated communication, even internal abstract thinking. That will continue to prove critical in driving machine learning 'deeper.'

One place to keep track of progress in the space is in machine translation, which will give you an idea of how sophisticated and accurate our software currently is in translating some of the nuance and implications of our spoken and written language.



That will be the next step in getting personal assistant technology like Alexa, Siri, Google Assistant, or Cortana to interpret our commands and questions just a little bit better.

2. Efforts to square machine learning and big data with different health sectors will accelerate
"I envision a system that still has those predictive data pools. It looks at the different data you obtain that different labs are giving all the time," eBay Director of Data Science Kira Radinsky told an audience at Geektime TechFest 2016 last month, describing "automated processes that can lead to those types of discoveries."

Biotech researchers and companies are trying to get programs to automate drug discoveries, among other things. Finding correlations in data and extrapolating causation is not the same in all industries, nor in any one sector of medicine. Researchers in heart disease, neurological disorders, and various types of cancer are all organizing different metrics of data. Retrieving that information and programming the proper relationship between all those variables is an endeavor.



One of the areas where this is evident is in computer vision, exemplified by Zebra Medical Vision, which can detect anomalies in CT scans for a variety of organs including the heart and liver. But compiling patient medical records and hunting for diagnostic clues there, as well as constructing better treatment plans, are also markets machine learning is opening in 2017. Other startups like Israel's HealthWatch are producing smart clothes that constantly feed medical data to doctors to monitor patients.

This developing ecosystem of health trackers should produce enough information about individual patients or groups of people for algorithms to extract new realizations.
3. They will probably have to come up with another buzzword to go deeper than 'deep learning'

Machines building machines? Algorithms writing algorithms? Machine learning programs will continue adding more layers of processing units, as well as more sophistication to abstract pattern analysis. Deep neural networks will be expected to draw even more observations from unsorted data, just as was mentioned above in regards to health care.



That future buzz term might be “generative” or “adversarial,” as in generative adversarial networks (GANs). Described by MIT Technology Review as the invention of OpenAI scientist Ian Goodfellow, GANs set up two networks like two people with different approaches to a problem. One network tries to create new data (read “ideas”) from a given set of data while the other “tries to discriminate between real and fake data” (let’s assume this is the robotic equivalent of a devil’s advocate).
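The adversarial idea reduces to a training loop. Below is a deliberately tiny Python/NumPy sketch (my toy, not Goodfellow's implementation): a one-parameter "generator" shifts noise toward a target distribution while a logistic "discriminator" tries to tell real samples from generated ones, and each is updated against the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    """Logistic real-vs-fake score in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, shift):
    """One-parameter generator: shifts noise toward the real data."""
    return z + shift

# Real data clusters around 4.0; the generator starts far away at 0.0.
real = rng.normal(4.0, 0.5, size=64)
shift, w, b, lr = 0.0, 0.0, 0.0, 0.05

for step in range(500):
    z = rng.normal(0.0, 0.5, size=64)
    fake = generator(z, shift)

    # Discriminator step: push scores toward 1 on real, 0 on fake.
    for x, label in ((real, 1.0), (fake, 0.0)):
        p = discriminator(x, w, b)
        grad = p - label                   # d(cross-entropy)/d(logit)
        w -= lr * np.mean(grad * x)
        b -= lr * np.mean(grad)

    # Generator step: move `shift` so fakes score as "real" (label 1).
    p = discriminator(fake, w, b)
    grad_logit = p - 1.0                   # generator wants label 1
    shift -= lr * np.mean(grad_logit * w)  # d(logit)/d(fake) = w
```

After training, `shift` has drifted toward the real mean: the generator improves precisely because the discriminator keeps catching it, which is the adversarial dynamic the article describes.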

4. Self-driving cars will force an expensive race among automotive companies
I saved this for last because many readers probably consider it patently obvious. However, the surprise that many laypeople, and even self-styled tech insiders, felt at the speed of the industry's development might be repeated in 2017 for the opposite reason. While a number of companies are testing the technology, it will run into some (pun intended) roadblocks this year.



While talk of the “autonomous” vehicle is all the rage, several companies in the testing stage are not only careful to keep someone behind the wheel if needed, but are also creating entire human-staffed command centers to guide the cars.
Some companies will likely be able to avoid burning through capital by pooling their efforts. Consider how NVIDIA is developing cars in conjunction with Audi and with Mercedes-Benz, but separately. Still, BMW, Mercedes-Benz, Nissan-Renault, Ford, and General Motors are all making very big bets while trying to speed up their timelines and hit autonomous vehicle research milestones more quickly.

Even if the entire industry were to be wrong in a cataclysmic way about the unstoppable future of the self-driving car (which it won't be, but bear with me), there will still be more automated features installed in new vehicle models relatively soon. Companies will be forced to spend big, and fast, to match features offered by their competitors.

By Gedalyah Reback


Friday, November 11, 2016

What Would a Machine as Smart as God Want?

The field of “scientific theology” ponders the ultimate purpose of mind



At a recent AI conference, as I listened to smart people ponder what super-smart machines will want, I kept thinking of things I’d heard, watched and read before.


As some speakers acknowledged, countless science fictions have already imagined what artificial minds will desire. Common cinematic answers are power (2001: A Space Odyssey, The Terminator, The Matrix), freedom (I, Robot, Ex Machina) and love (Steven Spielberg’s Artificial Intelligence, Spike Jonze’s Her).



But what if the machines have all the power and freedom (which are arguably equivalent) and love they need? Or what if all the machines merge into one gigantic mind? At that point, freedom, power and love, which are social goals, become irrelevant. What will that cosmic computer want? What will it do to pass the time?

In The End of Science, I called this sort of speculation “scientific theology.” Physicist Freeman Dyson is my favorite practitioner. In 1979 he published “Time Without End: Physics and Biology in an Open Universe” in Reviews of Modern Physics. Dyson wrote the paper to counter physicist Steven Weinberg's infamous remark that "the more the universe seems comprehensible, the more it also seems pointless."

No universe with intelligence is pointless, Dyson retorted. He sought to show that even in an eternally expanding universe, intelligence could persist virtually forever and ward off heat death through shrewd conservation of energy.

In his 1988 essay collection Infinite in All Directions, Dyson envisioned intelligence spreading through the entire universe, transforming it into a vast cosmic mind. "What will mind choose to do when it informs and controls the universe?" Dyson asked. We “cannot hope to answer" this question definitively, he suggested, because it is theological rather than scientific:
"I do not make any clear distinction between mind and God. God is what mind becomes when it has passed beyond the scale of our comprehension. God may be considered to be either a world-soul or a collection of world souls. We are the chief inlets of God on this planet at the present stage in his development. We may later grow with him as he grows, or we may be left behind."

Dyson’s musings were inspired by the science-fiction writer (and philosopher) Olaf Stapledon, who died in 1950. In his books Last and First Men and Starmaker, Stapledon imagined what mind would become after millions or billions of years. He postulated that a cosmic mind will want to create. It will become an artist, whose works are entire universes.



That’s a cool idea (and it implies that we live in one of those works of art), but I prefer Dyson’s hypothesis. He guessed that a cosmic mind would be not an artist but a scientist, a knowledge-seeker. When I interviewed Dyson in 1993, he expressed confidence that the quest for knowledge would never end, because knowledge is infinite.

His optimism derived in part from Gödel's theorem, which demonstrates that any consistent system of axioms rich enough for arithmetic poses questions that cannot be answered within those axioms. The theorem implies that mathematics is open-ended and hence can continue forever.

"Since we know the laws of physics are mathematical,” Dyson told me, “and we know that mathematics is an inconsistent system, it's sort of plausible that physics will also be inconsistent" and therefore open-ended.

I have a hard time imagining the cosmic computer at the end of time—a.k.a. “God”—fussing over math or physics puzzles. My idea (admittedly drug-inspired) is that It will ponder the riddle of Its own origin. Here’s the meta-question: Will It solve that mystery of mysteries, or will It be forever stumped?

Postscript: Two other scientific theologians are worth mentioning. Physicist Frank Tipler, in The Physics of Immortality (1994), argues that the God-like machine at the end of time would resurrect every creature that ever lived in a blissful cyber-paradise. The sex will be fantastic. In his 1961 novel Solaris, about the encounter between humans and a sentient planet, Stanislaw Lem suggests that superintelligence will be inscrutable. His perspective evokes negative theology, which holds that God will always be beyond our ken. Lem’s brilliant twist is that plain old human minds are pretty freaking inscrutable too.

Source: John Horgan


 