
Saturday, March 18, 2017

New MIT-developed System can make Webpages load 34 per cent Faster in any Browser

Students walk across the MIT campus, where the program was developed Joe Raedle/Getty Images

The 'Polaris' system could help counter the browser-slowing effects of complex webpages



Computer scientists at the world-famous Massachusetts Institute of Technology (MIT) have developed a system that can make websites load an average of 34 per cent faster.

As internet speeds have increased, websites have grown more complex, leaving some pages sluggish and unresponsive. That is a costly problem for companies like Amazon, which has said that every one-second delay in loading time cuts its profits by one per cent.
But a team of researchers, working at the university's Computer Science and Artificial Intelligence Laboratory, may have found the solution.



Named Polaris, the system cuts load-times by determining the best way to 'overlap' the downloading of different parts of a webpage.

When you visit a new page, your browser reaches across the internet to fetch 'objects' like pictures, videos, and HTML files. The browser then evaluates the objects and puts them on the page.

However, some objects are dependent on others, and browsers can't see all of these dependencies until they come across them.

Polaris works by tracking all of these relationships and dependencies between objects on the page and turning the information into a 'dependency graph' that can be interpreted by your browser.

Polaris essentially gives the browser a roadmap of the page, with all the details of the best and quickest way to load it.

PhD student Ravi Netravali, who worked on Polaris, explained: "It can take up to 100 milliseconds each time a browser has to cross a mobile network to fetch a piece of data. As pages increase in complexity, they often require multiple trips that create delays that really add up. Our approach minimizes the number of round trips so that we can substantially speed up a page's load-time."



The researchers tested Polaris across a range of network conditions on some of the world's most popular websites, and found it made them load an average of 34 per cent faster when compared to a normal browser.

Polaris could be used on any website and with unmodified browsers, and with tech companies like Google and Amazon working hard to improve load-times, a similar system might appear on your device soon.

Source: Massachusetts Institute of Technology, Computer Science


Tuesday, March 7, 2017

New Coding Technique unlocks DNA Molecules' nearly full Storage Potential

In a study in Science, researchers Yaniv Erlich and Dina Zielinski describe a new coding technique for maximizing the data-storage capacity of DNA molecules. Credit: New York Genome Center




Humanity may soon generate more data than hard drives or magnetic tape can handle, a problem that has scientists turning to nature's age-old solution for information-storage -- DNA.

In a new study in Science, a pair of researchers at Columbia University and the New York Genome Center (NYGC) show that an algorithm designed for streaming video on a cellphone can unlock DNA's nearly full storage potential by squeezing more information into its four base nucleotides. They demonstrate that this technology is also extremely reliable.



DNA is an ideal storage medium because it's ultra-compact and can last hundreds of thousands of years if kept in a cool, dry place, as demonstrated by the recent recovery of DNA from the bones of a 430,000-year-old human ancestor found in a cave in Spain.

"DNA won't degrade over time like cassette tapes and CDs, and it won't become obsolete -- if it does, we have bigger problems," said study coauthor Yaniv Erlich, a computer science professor at Columbia Engineering, a member of Columbia's Data Science Institute, and a core member of the NYGC.

Erlich and his colleague Dina Zielinski, an associate scientist at NYGC, chose six files to encode, or write, into DNA: a full computer operating system, the 1895 French film "Arrival of a Train at La Ciotat," a $50 Amazon gift card, a computer virus, a Pioneer plaque, and a 1948 study by information theorist Claude Shannon.

They compressed the files into a master file, and then split the data into short strings of binary code made up of ones and zeros. Using an erasure-correcting algorithm called fountain codes, they randomly packaged the strings into so-called droplets, and mapped the ones and zeros in each droplet to the four nucleotide bases in DNA: A, G, C and T. The algorithm deleted letter combinations known to create errors, and added a barcode to each droplet to help reassemble the files later.
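
The paper's actual DNA Fountain implementation is not reproduced here, but a heavily simplified sketch of the droplet idea might look like this (toy segment sizes, a made-up 16-bit barcode format, and a crude homopolymer screen standing in for the real validity checks):

```python
import random

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def to_dna(bits):
    """Map each pair of bits to one nucleotide (2 bits per base)."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def has_homopolymer(seq, run=4):
    """Screen out runs like 'AAAA', which are error-prone to synthesize."""
    return any(seq[i:i + run] == seq[i] * run for i in range(len(seq) - run + 1))

def make_droplet(segments, seed, seg_bits=16):
    """Fountain-style droplet: XOR a random subset of segments, chosen
    deterministically from the seed, and prepend the seed as a 'barcode'
    so a decoder can reproduce the same choice when reassembling."""
    rng = random.Random(seed)
    chosen = rng.sample(range(len(segments)), rng.randint(1, len(segments)))
    payload = 0
    for i in chosen:
        payload ^= segments[i]
    return to_dna(format(seed, "016b") + format(payload, f"0{seg_bits}b"))

segments = [0b1010101011110000, 0b0000111101011010]  # toy 16-bit data segments
seed = 0
while has_homopolymer(make_droplet(segments, seed)):
    seed += 1  # skip droplets that fail the screen, as the algorithm does
print("DNA droplet:", make_droplet(segments, seed))
```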



In all, they generated a digital list of 72,000 DNA strands, each 200 bases long, and sent it in a text file to a San Francisco DNA-synthesis startup, Twist Bioscience, that specializes in turning digital data into biological data. Two weeks later, they received a vial holding a speck of DNA molecules.

To retrieve their files, they used modern sequencing technology to read the DNA strands, followed by software to translate the genetic code back into binary. They recovered their files with zero errors, the study reports. (In this short demo, Erlich opens his archived operating system on a virtual machine and plays a game of Minesweeper to celebrate.)

They also demonstrated that a virtually unlimited number of copies of the files could be created with their coding technique by multiplying their DNA sample through polymerase chain reaction (PCR), and that those copies, and even copies of their copies, and so on, could be recovered error-free.



Finally, the researchers show that their coding strategy packs 215 petabytes of data on a single gram of DNA -- 100 times more than methods published by pioneering researchers George Church at Harvard, and Nick Goldman and Ewan Birney at the European Bioinformatics Institute. "We believe this is the highest-density data-storage device ever created," said Erlich.

The capacity of DNA data-storage is theoretically limited to two binary digits per nucleotide, but the biological constraints of DNA itself, and the need to include redundant information for reassembling and reading the fragments later, reduce its practical capacity to 1.8 binary digits per nucleotide base.

The team's insight was to apply fountain codes, a technique Erlich remembered from graduate school, to make the reading and writing process more efficient. With their DNA Fountain technique, Erlich and Zielinski pack an average of 1.6 bits into each base nucleotide. That's at least 60 percent more data than previously published methods, and close to the 1.8-bit limit.
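
A back-of-the-envelope calculation, using the toy numbers from the sketch above, shows why any realized density lands below the 2-bit ceiling: every synthesized base spent on barcodes or redundancy is a base not carrying payload (the overhead figures below are illustrative assumptions, not the paper's):

```python
# Why the realized rate falls short of 2 bits per base: droplets spend
# bases on non-payload overhead, and decoding needs spare droplets.
seg_bits   = 16    # payload bits per droplet (toy value from above)
seed_bits  = 16    # barcode overhead per droplet (toy value)
redundancy = 1.07  # assume ~7% extra droplets so decoding succeeds

bases_per_droplet = (seg_bits + seed_bits) / 2   # 2 bits -> 1 base
net_bits_per_base = seg_bits / (bases_per_droplet * redundancy)
print(f"{net_bits_per_base:.2f} bits per base")  # ~0.93 in this toy setup
# With DNA Fountain's real parameters (much larger payloads relative to
# the seed), the net rate reaches ~1.6 bits per base.
```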

Cost still remains a barrier. The researchers spent $7,000 to synthesize the DNA they used to archive their 2 megabytes of data, and another $2,000 to read it. Though the price of DNA sequencing has fallen exponentially, there may not be the same demand for DNA synthesis, says Sri Kosuri, a biochemistry professor at UCLA who was not involved in the study. "Investors may not be willing to risk tons of money to bring costs down," he said.



But the price of DNA synthesis can be vastly reduced if lower-quality molecules are produced, and coding strategies like DNA Fountain are used to fix molecular errors, says Erlich. "We can do more of the heavy lifting on the computer to take the burden off time-intensive molecular coding," he said.
Source: Materials provided by Columbia University School of Engineering and Applied Science.


Friday, March 3, 2017

Scientists reveal new Super-Fast form of Computer that 'Grows as it Computes'

DNA double helix. Credit: public domain

Researchers from The University of Manchester have shown it is possible to build a new super-fast form of computer that "grows as it computes".

Professor Ross D King and his team have demonstrated for the first time the feasibility of engineering a nondeterministic universal Turing machine (NUTM), and their research is to be published in the prestigious Journal of the Royal Society Interface.

The theoretical properties of such a computing machine, including its exponential boost in speed over electronic and quantum computers, have been well understood for many years – but the Manchester breakthrough demonstrates that it is actually possible to physically create a NUTM using DNA molecules.

"Imagine a computer is searching a maze and comes to a choice point, one path leading left, the other right," explained Professor King, from Manchester's School of Computer Science. "Electronic computers need to choose which path to follow first.



"But our new computer doesn't need to choose, for it can replicate itself and follow both paths at the same time, thus finding the answer faster.

"This 'magical' property is possible because the computer's processors are made of DNA rather than silicon chips. All electronic computers have a fixed number of chips.
"Our computer's ability to grow as it computes makes it faster than any other form of computer, and enables the solution of many computational problems previously considered impossible.
"Quantum computers are an exciting other form of computer, and they can also follow both paths in a maze, but only if the maze has certain symmetries, which greatly limits their use.

"As DNA molecules are very small a desktop computer could potentially utilize more processors than all the electronic computers in the world combined - and therefore outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy."



The University of Manchester is famous for its connection with Alan Turing - the founder of computer science - and for creating the first stored-program electronic computer.

"This new research builds on both these pioneering foundations," added Professor King.
Alan Turing's greatest achievement was inventing the concept of a universal Turing machine (UTM) - a computer that can be programmed to compute anything any other computer can compute. Electronic computers are a form of UTM, but no quantum UTM has yet been built.

DNA computing is the performing of computations using biological molecules rather than traditional silicon chips. In DNA computing, information is represented using the four-character genetic alphabet - A [adenine], G [guanine], C [cytosine], and T [thymine] - rather than the binary alphabet, which is a series of 1s and 0s used by traditional computers.
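
As a minimal illustration, here is one possible two-bits-per-nucleotide assignment (the mapping any given DNA-computing scheme actually uses may differ):

```python
# The same byte in the binary alphabet vs. the genetic alphabet:
# each nucleotide carries two bits, so the DNA 'word' is half as long.
BASES = "AGCT"  # assumed assignment: A=00, G=01, C=10, T=11

def byte_to_dna(b):
    return "".join(BASES[(b >> shift) & 0b11] for shift in (6, 4, 2, 0))

b = 0b01101100
print(f"binary: {b:08b}  ->  DNA: {byte_to_dna(b)}")  # 01101100 -> GCTA
```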

Provided by: University of Manchester


Thursday, November 3, 2016

Can a brain-computer interface convert your thoughts to text?

Recent research shows a brain-to-text device capable of decoding speech from brain signals, a breakthrough in the field of artificial intelligence.



Ever wonder what it would be like if a device could decode your thoughts into actual speech or written words? While this might enhance the capabilities of already existing speech interfaces with devices, it could be a potential game-changer for those with speech pathologies, and even more so for "locked-in" patients who lack any speech or motor function.

"So instead of saying: 'Siri, what is the weather like today' or 'Ok Google, where can I go for lunch?' I just imagine saying these things," explains Christian Herff, author of a review recently published in the journal Frontiers in Human Neuroscience.



While reading one's thoughts might still belong to the realms of science fiction, scientists are already decoding speech from signals generated in our brains when we speak or listen to speech.

In their review, Herff and co-author Dr. Tanja Schultz compare the pros and cons of using various brain imaging techniques to capture neural signals from the brain and then decode them to text.
The technologies range from functional MRI and near-infrared imaging, which detect neural signals based on the metabolic activity of neurons, to methods such as EEG and magnetoencephalography (MEG), which detect the electromagnetic activity of neurons responding to speech. One method in particular, called electrocorticography, or ECoG, showed promise in Herff's study.

That study presented the Brain-to-Text system, in which epilepsy patients who already had electrode grids implanted for treatment of their condition read out texts presented on a screen while their brain activity was recorded. This formed the basis of a database of patterns of neural signals that could be matched to speech elements, or "phones."



When the researchers also included language and dictionary models in their algorithms, they were able to decode neural signals to text with a high degree of accuracy. "For the first time, we could show that brain activity can be decoded specifically enough to use automatic speech recognition (ASR) technology on brain signals," says Herff. "However, the current need for implanted electrodes renders it far from usable in day-to-day life."
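
The review itself contains no code, but the decoding step it describes, combining how well each window of neural features matches each phone with a language model's view of which phone sequences are plausible, is classically done with the Viterbi algorithm. A toy sketch with made-up numbers:

```python
import numpy as np

phones = ["h", "eh", "l", "ow"]

# frame_scores[t][p]: log-likelihood that the neural features in window t
# match phone p (fabricated values for illustration).
frame_scores = np.log(np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.1, 0.2, 0.6, 0.1],
    [0.1, 0.1, 0.2, 0.6],
]))
# bigram[p][q]: log-probability that phone q follows phone p.
bigram = np.log(np.full((4, 4), 0.25))  # flat language model for the toy

def viterbi(frame_scores, bigram):
    """Find the single most likely phone sequence given both models."""
    T, _ = frame_scores.shape
    score = frame_scores[0].copy()
    back = np.zeros_like(frame_scores, dtype=int)
    for t in range(1, T):
        trans = score[:, None] + bigram      # best way into each phone
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + frame_scores[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):            # walk the backpointers home
        path.append(int(back[t][path[-1]]))
    return [phones[p] for p in reversed(path)]

print("".join(viterbi(frame_scores, bigram)))  # -> 'hehlow'
```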

So, where does the field go from here to a functioning thought detection device? "A first milestone would be to actually decode imagined phrases from brain activity, but a lot of technical issues need to be solved for that," concedes Herff.

Their study results, while exciting, are still only a preliminary step towards this type of brain-computer interface.
Source: Physics Today


Sunday, July 17, 2016

Teaching Machines to Predict the Future

A deep-learning vision system from MIT's Computer Science and Artificial Intelligence Laboratory anticipates human interactions using videos of TV shows.



When we see two people meet, we can often predict what happens next: a handshake, a hug, or maybe even a kiss. Our ability to anticipate actions is thanks to intuitions born out of a lifetime of experiences.

Machines, on the other hand, have trouble making use of complex knowledge like that. Computer systems that predict actions would open up new possibilities ranging from robots that can better navigate human environments, to emergency response systems that predict falls, to Google Glass-style headsets that feed you suggestions for what to do in different situations.



This week researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have made an important new breakthrough in predictive vision, developing an algorithm that can anticipate interactions more accurately than ever before.

Trained on YouTube videos and TV shows such as “The Office” and “Desperate Housewives,” the system can predict whether two individuals will hug, kiss, shake hands or slap five. In a second scenario, it could also anticipate what object is likely to appear in a video five seconds later.

While human greetings may seem like arbitrary actions to predict, the task served as a more easily controllable test case for the researchers to study.

“Humans automatically learn to anticipate actions through experience, which is what made us interested in trying to imbue computers with the same sort of common sense,” says CSAIL PhD student Carl Vondrick, who is first author on a related paper that he will present this week at the International Conference on Computer Vision and Pattern Recognition (CVPR). “We wanted to show that just by watching large amounts of video computers can gain enough knowledge to consistently make predictions about their surroundings.”

Vondrick’s co-authors include MIT Professor Antonio Torralba and former postdoc Hamed Pirsiavash, now a professor at the University of Maryland.

Past attempts at predictive computer-vision have generally taken one of two approaches.

The first method is to look at an image’s individual pixels and use that knowledge to create a photorealistic “future” image, pixel by pixel — a task that Vondrick describes as “difficult for a professional painter, much less an algorithm.” The second is to have humans label the scene for the computer in advance, which is impractical for being able to predict actions on a large scale.



The CSAIL team instead created an algorithm that can predict “visual representations,” which are basically freeze-frames showing different versions of what the scene might look like.

“Rather than saying that one pixel value is blue, the next one is red, and so on, visual representations reveal information about the larger image, such as a certain collection of pixels that represents a human face,” Vondrick says.

The team’s algorithm employs techniques from deep learning, a field of artificial intelligence that uses systems called “neural networks” to teach computers to pore over massive amounts of data to find patterns on their own.

Each of the algorithm’s networks predicts a representation that is automatically classified as one of the four actions — in this case, a hug, handshake, high-five, or kiss. The system then merges those actions into a single prediction. For example, three networks might predict a kiss, while another might use the fact that another person has entered the frame as a rationale for predicting a hug instead.
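
The trained networks and the actual merging rule are not reproduced here; the sketch below only illustrates the ensemble idea with made-up probability vectors, merged by a simple average:

```python
import numpy as np

actions = ["hug", "handshake", "high-five", "kiss"]

# Stand-ins for the networks' outputs: each row is one network's
# probability distribution over the four actions for the same clip.
network_outputs = np.array([
    [0.10, 0.15, 0.05, 0.70],  # three networks lean toward 'kiss'...
    [0.15, 0.10, 0.05, 0.70],
    [0.20, 0.05, 0.05, 0.70],
    [0.60, 0.20, 0.10, 0.10],  # ...one, seeing a new person enter the
])                             # frame, votes 'hug' instead

merged = network_outputs.mean(axis=0)  # simple average as the merge step
print(dict(zip(actions, merged.round(3))))
print("prediction:", actions[int(merged.argmax())])  # -> 'kiss'
```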

“A video isn’t like a ‘Choose Your Own Adventure’ book where you can see all of the potential paths,” says Vondrick. “The future is inherently ambiguous, so it’s exciting to challenge ourselves to develop a system that uses these representations to anticipate all of the possibilities.”



After training the algorithm on 600 hours of unlabeled video, the team tested it on new videos showing both actions and objects.

When shown a video of people who are one second away from performing one of the four actions, the algorithm correctly predicted the action more than 43 percent of the time, compared with existing algorithms that could manage only 36 percent.

In a second study, the algorithm was shown a frame from a video and asked to predict what object will appear five seconds later. For example, seeing someone open a microwave might suggest the future presence of a coffee mug. The algorithm predicted the object in the frame 30 percent more accurately than baseline measures, though the researchers caution that it still only has an average precision of 11 percent.

It’s worth noting that even humans make mistakes on these tasks: for example, human subjects were only able to correctly predict the action 71 percent of the time.

“There’s a lot of subtlety to understanding and forecasting human interactions,” says Vondrick. “We hope to be able to work off of this example to be able to soon predict even more complex tasks.”

While the algorithms aren’t yet accurate enough for practical applications, Vondrick says that future versions could be used for everything from robots that develop better action plans to security cameras that can alert emergency responders when someone has fallen or gotten injured.



“I’m excited to see how much better the algorithms get if we can feed them a lifetime’s worth of videos,” says Vondrick. “We might see some significant improvements that would get us closer to using predictive-vision in real-world situations.”

Source: Adam Conner-Simons | Rachel Gordon | CSAIL


 