
Tuesday, April 11, 2017

Smartphone Usage Linked to Male Infertility

By: Alexandria Addesso

The use of cellular phones has become so common in our day-to-day lives that these inanimate objects have almost become another appendage. Smartphones, the most commonly used type of cellular phone today, rely on electromagnetic frequency (EMF) radiation to receive real-time messaging. But could these frequencies be harmful when they are being transmitted all day?

Because cell phones are portable, people tend to keep them on their person all day. Men, more often than women, usually carry their smartphones in their front pockets. Multiple recent studies have examined whether keeping these devices in such close proximity to a man’s genitalia while they transmit EMF radiation could be harmful.



“Collectively, the research indicates that exposure to cell phone radiation may lead to decreases in sperm count, sperm motility and vitality, as well as increases in indicators of sperm damage such as higher levels of reactive oxygen species (chemically reactive molecules containing oxygen), oxidative stress, DNA damage and changes in sperm morphology,” said the Environmental Working Group (EWG) after publishing a scientific literature review of 10 studies linking smartphone usage and male infertility.

Other studies point more specifically to an 8 percent decrease in sperm motility and an approximately 9 percent decrease in sperm viability.
“Overall, these findings raise a number of related health policy and patient management issues that deserve our immediate attention. Specifically, we recommend that men of reproductive age who engage in high levels of mobile phone use do not keep their phones in receiving mode below waist level,” wrote researcher GN De Iuliis in the 2009 study “Mobile phone radiation induces reactive oxygen species production and DNA damage in human spermatozoa in vitro.”



Even though keeping your cell phone on a belt clip has long been seen as safer, much of the data shows that it is only slightly better than carrying the phone in a front pocket. A man who is trying to conceive a child would do best to reduce his cell phone usage. Data on smartphone usage and female infertility remain scarce.


Tuesday, April 4, 2017

Parallel Computation Provides Deeper Insight into Brain Function

Unlike experimental neuroscientists who deal with real-life neurons, computational neuroscientists use model simulations to investigate how the brain functions. While many computational neuroscientists use simplified mathematical models of neurons, researchers in the Computational Neuroscience Unit at the Okinawa Institute of Science and Technology Graduate University (OIST) develop software that models neurons to the detail of molecular interactions with the goal of eliciting new insights into neuronal function. Applications of the software were limited in scope up until now because of the intense computational power required for such detailed neuronal models, but recently Dr. Weiliang Chen, Dr. Iain Hepburn, and Professor Erik De Schutter published two related papers in which they outline the accuracy and scalability of their new high-speed computational software, "Parallel STEPS". The combined findings suggest that Parallel STEPS could be used to reveal new insights into how individual neurons function and communicate with each other.

The first paper, published in The Journal of Chemical Physics in August 2016, focuses on ensuring that the accuracy of Parallel STEPS is comparable with conventional methods. In conventional approaches, the computations associated with neuronal chemical reactions and molecular diffusion are all calculated sequentially on a single computational processing unit, or 'core'. Dr. Iain Hepburn and colleagues introduced a new approach that performs the reaction and diffusion computations in parallel, so they can be distributed over multiple computer cores while maintaining a high degree of simulation accuracy. The key was to develop an original algorithm separated into two parts - one that computes chemical reaction events and the other diffusion events.
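The separation can be pictured with a toy operator-splitting loop: each time step first handles chemistry inside each voxel (purely local work that could be farmed out to different cores) and then handles diffusion between neighbouring voxels (which only needs boundary exchange). The deterministic Python sketch below is only an illustration of that split under simplified assumptions; it is not the stochastic algorithm used by STEPS, and the grid size, rate constant, and decay reaction are invented for the example.

```python
# Toy operator splitting: one reaction substep + one diffusion substep per time step,
# on a 1D grid of "voxels". Deterministic and simplified, unlike STEPS itself.
import numpy as np

def reaction_step(c, k, dt):
    # First-order decay A -> 0 in every voxel; purely local, so each voxel
    # (or block of voxels) could be handled by a different core.
    return c - k * c * dt

def diffusion_step(c, D, dx, dt):
    # Explicit finite-difference diffusion; only nearest-neighbour exchange,
    # so a parallel version would only need to communicate boundary voxels.
    lap = np.zeros_like(c)
    lap[1:-1] = (c[:-2] - 2 * c[1:-1] + c[2:]) / dx**2
    return c + D * lap * dt

c = np.zeros(100)
c[50] = 1.0                            # all molecules start in the middle voxel
dx, dt, D, k = 1.0, 0.1, 0.5, 0.01
for _ in range(1000):
    c = reaction_step(c, k, dt)        # operator 1: chemistry, voxel-local
    c = diffusion_step(c, D, dx, dt)   # operator 2: transport, neighbour-only

print(f"total mass remaining: {c.sum():.3f}")
```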

"We tested a range of model simulations from simple diffusion models to realistic biological models and found that we could achieve improved performance using a parallel approach with minimal loss of accuracy. This demonstrated the potential suitability of the method on a larger scale," says Dr. Hepburn.



In a related paper published in Frontiers in Neuroinformatics this February, Dr. Weiliang Chen presented the implementation details of Parallel STEPS and investigated its performance and potential applications. By breaking a partial model of a Purkinje cell - one of the largest neurons in the brain - into 50 to 1000 sections and simulating reaction and diffusion events for each section in parallel on the Sango supercomputer at OIST, Dr. Chen and colleagues saw dramatically increased computation speeds. They tested this approach on both simple models and more complicated models of calcium bursts in Purkinje cells and demonstrated that parallel simulation could speed up computations by more than several hundred times that of conventional methods.

"Together, our findings show that Parallel STEPS implementation achieves significant improvements in performance, and good scalability," says Dr. Chen. "Similar models that previously required months of simulation can now be completed within hours or minutes, meaning that we can develop and simulate more complex models, and learn more about the brain in a shorter amount of time."

Dr. Hepburn and Dr. Chen from OIST's Computational Neuroscience Unit, led by Professor Erik De Schutter, are actively collaborating with the Human Brain Project, a world-wide initiative based at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, to develop a more robust version of Parallel STEPS that incorporates electric field simulation of cell membranes.

So far, STEPS is only realistically capable of modeling parts of neurons, but with the support of Parallel STEPS, the Computational Neuroscience Unit hopes to develop a full-scale model of a whole neuron and, subsequently, the interactions between neurons in a network. By collaborating with the EPFL team and by making use of the IBM 'Blue Gene/Q' supercomputer located there, they aim to achieve these goals in the near future.



"Thanks to modern supercomputers we can study molecular events within neurons in a much more transparent way than before," says Prof. De Schutter. "Our research opens up interesting avenues in computational neuroscience that links biochemistry with electrophysiology for the first time."
Source: Journal of Chemical Physics. Provided by: Okinawa Institute of Science and Technology


Tuesday, March 7, 2017

New Coding Technique Unlocks DNA Molecules' Nearly Full Storage Potential

In a study in Science, researchers Yaniv Erlich and Dina Zielinski describe a new coding technique for maximizing the data-storage capacity of DNA molecules. Credit: New York Genome Center

An algorithm designed for streaming video on a cellphone can unlock DNA's nearly full storage potential by squeezing more information into its four base nucleotides, say researchers. They demonstrate that this technology is also extremely reliable.



Humanity may soon generate more data than hard drives or magnetic tape can handle, a problem that has scientists turning to nature's age-old solution for information storage -- DNA.

In a new study in Science, a pair of researchers at Columbia University and the New York Genome Center (NYGC) show that an algorithm designed for streaming video on a cellphone can unlock DNA's nearly full storage potential by squeezing more information into its four base nucleotides. They demonstrate that this technology is also extremely reliable.



DNA is an ideal storage medium because it's ultra-compact and can last hundreds of thousands of years if kept in a cool, dry place, as demonstrated by the recent recovery of DNA from the bones of a 430,000-year-old human ancestor found in a cave in Spain.

"DNA won't degrade over time like cassette tapes and CDs, and it won't become obsolete -- if it does, we have bigger problems," said study coauthor Yaniv Erlich, a computer science professor at Columbia Engineering, a member of Columbia's Data Science Institute, and a core member of the NYGC.

Erlich and his colleague Dina Zielinski, an associate scientist at NYGC, chose six files to encode, or write, into DNA: a full computer operating system, an 1895 French film, "Arrival of a train at La Ciotat," a $50 Amazon gift card, a computer virus, a Pioneer plaque and a 1948 study by information theorist Claude Shannon.

They compressed the files into a master file, and then split the data into short strings of binary code made up of ones and zeros. Using an erasure-correcting algorithm called fountain codes, they randomly packaged the strings into so-called droplets, and mapped the ones and zeros in each droplet to the four nucleotide bases in DNA: A, G, C and T. The algorithm deleted letter combinations known to create errors, and added a barcode to each droplet to help reassemble the files later.
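The two steps described above can be sketched roughly in Python: random subsets of data segments are XOR-combined into "droplets" (the Luby-transform idea behind fountain codes), and each pair of bits is then mapped to one of the four bases. The seed-as-barcode layout, segment sizes, and the homopolymer check below are simplified stand-ins for illustration, not the authors' actual DNA Fountain format.

```python
import random

# Map each 2-bit pair to a base; the reverse map is used when decoding reads.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def make_droplet(segments, seed):
    """XOR a random subset of data segments into one 'droplet' payload."""
    rng = random.Random(seed)
    degree = rng.randint(1, len(segments))           # how many segments to mix
    chosen = rng.sample(range(len(segments)), degree)
    payload = 0
    for i in chosen:
        payload ^= segments[i]                        # bitwise XOR combine
    return seed, payload                              # the seed doubles as the "barcode"

def droplet_to_dna(seed, payload, seed_bits=16, seg_bits=16):
    """Concatenate seed + payload bits and translate 2 bits -> 1 base."""
    bits = format(seed, f"0{seed_bits}b") + format(payload, f"0{seg_bits}b")
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

# Toy "file": four 16-bit segments.
segments = [0b1010101010101010, 0b1111000011110000,
            0b0000111100001111, 0b0011001100110011]

for seed in range(3):
    s, p = make_droplet(segments, seed)
    strand = droplet_to_dna(s, p)
    # Strands with long single-base runs would be screened out in practice.
    print(seed, strand, "has 3-base run:", any(b * 3 in strand for b in "ACGT"))
```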



In all, they generated a digital list of 72,000 DNA strands, each 200 bases long, and sent it in a text file to a San Francisco DNA-synthesis startup, Twist Bioscience, that specializes in turning digital data into biological data. Two weeks later, they received a vial holding a speck of DNA molecules.

To retrieve their files, they used modern sequencing technology to read the DNA strands, followed by software to translate the genetic code back into binary. They recovered their files with zero errors, the study reports. (In this short demo, Erlich opens his archived operating system on a virtual machine and plays a game of Minesweeper to celebrate.)

They also demonstrated that a virtually unlimited number of copies of the files could be created with their coding technique by multiplying their DNA sample through polymerase chain reaction (PCR), and that those copies, and even copies of their copies, and so on, could be recovered error-free.



Finally, the researchers show that their coding strategy packs 215 petabytes of data on a single gram of DNA -- 100 times more than methods published by pioneering researchers George Church at Harvard, and Nick Goldman and Ewan Birney at the European Bioinformatics Institute. "We believe this is the highest-density data-storage device ever created," said Erlich.

The capacity of DNA data storage is theoretically limited to two binary digits for each nucleotide, but the biological constraints of DNA itself and the need to include redundant information to reassemble and read the fragments later reduce its capacity to 1.8 binary digits per nucleotide base.

The team's insight was to apply fountain codes, a technique Erlich remembered from graduate school, to make the reading and writing process more efficient. With their DNA Fountain technique, Erlich and Zielinski pack an average of 1.6 bits into each base nucleotide. That's at least 60 percent more data than previously published methods, and close to the 1.8-bit limit.

Cost still remains a barrier. The researchers spent $7,000 to synthesize the DNA they used to archive their 2 megabytes of data, and another $2,000 to read it. Though the price of DNA sequencing has fallen exponentially, there may not be the same demand for DNA synthesis, says Sri Kosuri, a biochemistry professor at UCLA who was not involved in the study. "Investors may not be willing to risk tons of money to bring costs down," he said.



But the price of DNA synthesis can be vastly reduced if lower-quality molecules are produced, and coding strategies like DNA Fountain are used to fix molecular errors, says Erlich. "We can do more of the heavy lifting on the computer to take the burden off time-intensive molecular coding," he said.
Source: Materials provided by Columbia University School of Engineering and Applied Science.


Sunday, February 26, 2017

Artificial Synapse for Neural Networks

Alberto Salleo, associate professor of materials science and engineering, with graduate student Scott Keene characterizing the electrochemical properties of an artificial synapse for neural network computing. They are part of a team that has created the new device. Credit: L.A. Cicero

A new organic artificial synapse could support computers that better recreate the way the human brain processes information. It could also lead to improvements in brain-machine technologies.



For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain's efficient design -- an artificial version of the space over which neurons communicate, called a synapse.

"It works like a real synapse but it's an organic electronic device that can be engineered," said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. "It's an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that's been done before with inorganics."



The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain
When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we've learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

"Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time," said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. "Instead of simulating a neural network, our work is trying to make a neural network."



The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to bring the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.

Testing a network of artificial synapses
Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network's ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

"More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry," said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. "We've demonstrated a device that's ideal for running these type of algorithms and that consumes a lot less power."



This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
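A hedged toy sketch of what a limited palette of device states means for a network: continuous "ideal" weights are snapped to the nearest of 500 evenly spaced conductance levels, and the worst-case rounding error stays tiny. The weight range and the uniform spacing are assumptions made for illustration; they are not taken from the Sandia simulations.

```python
import numpy as np

def quantize(w, levels=500, w_min=-1.0, w_max=1.0):
    """Snap each weight to the nearest of `levels` evenly spaced device states."""
    states = np.linspace(w_min, w_max, levels)
    idx = np.abs(w[:, None] - states[None, :]).argmin(axis=1)
    return states[idx]

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=1000)    # "ideal" continuous weights
wq = quantize(w, levels=500)         # weights realizable on a 500-state device
print("max quantization error:", np.max(np.abs(w - wq)))
# With 500 states over [-1, 1] the spacing is 2/499 ≈ 0.004, so the worst-case
# rounding error is ≈ 0.002 — small enough that a classifier built on such
# weights would barely notice, which is the point of having many states.
```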

Organic potential
Every part of the device is made of inexpensive organic materials. These aren't found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain's chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it's possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.



This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia's Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.
Story Source:
Materials provided by Stanford University. Original written by Taylor Kubota


Saturday, February 18, 2017

The Internet and your brain are more alike than you think

Salk scientist finds similar rule governing traffic flow in engineered and biological systems. Credit: Salk Institute

A similar rule governs traffic flow in engineered and biological systems, reports a researcher. An algorithm used for the Internet is also at work in the human brain, says the report, an insight that improves our understanding of engineered and neural networks and potentially even learning disabilities.



Although we spend a lot of our time online nowadays -- streaming music and video, checking email and social media, or obsessively reading the news -- few of us know about the mathematical algorithms that manage how our content is delivered. But deciding how to route information fairly and efficiently through a distributed system with no central authority was a priority for the Internet's founders. Now, a Salk Institute discovery shows that an algorithm used for the Internet is also at work in the human brain, an insight that improves our understanding of engineered and neural networks and potentially even learning disabilities.



"The founders of the Internet spent a lot of time considering how to make information flow efficiently," says Salk Assistant Professor Saket Navlakha, coauthor of the new study that appears online in Neural Computation on February 9, 2017. "Finding that an engineered system and an evolved biological one arise at a similar solution to a problem is really interesting."
In the engineered system, the solution involves controlling information flow such that routes are neither clogged nor underutilized by checking how congested the Internet is. To accomplish this, the Internet employs an algorithm called "additive increase, multiplicative decrease" (AIMD) in which your computer sends a packet of data and then listens for an acknowledgement from the receiver: If the packet is promptly acknowledged, the network is not overloaded and your data can be transmitted through the network at a higher rate. With each successive successful packet, your computer knows it's safe to increase its speed by one unit, which is the additive increase part. But if an acknowledgement is delayed or lost your computer knows that there is congestion and slows down by a large amount, such as by half, which is the multiplicative decrease part. In this way, users gradually find their "sweet spot," and congestion is avoided because users take their foot off the gas, so to speak, as soon as they notice a slowdown. As computers throughout the network utilize this strategy, the whole system can continuously adjust to changing conditions, maximizing overall efficiency.
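A minimal sketch of that AIMD rule follows, with a made-up congested() probe standing in for a delayed or lost acknowledgement; the rate units and congestion model are invented purely to show the characteristic sawtooth behaviour.

```python
import random

def congested():
    # Placeholder for "acknowledgement delayed or lost"; here congestion is
    # simply more likely once the sending rate gets high.
    return random.random() < rate / 50.0

random.seed(1)
rate = 1.0                          # packets per round-trip (the "speed")
for step in range(40):
    if congested():
        rate = max(1.0, rate / 2)   # multiplicative decrease: back off sharply
    else:
        rate += 1.0                 # additive increase: probe gently for spare capacity
    print(f"step {step:2d}  rate {rate:5.1f}")
```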

Navlakha, who develops algorithms to understand complex biological networks, wondered if the brain, with its billions of distributed neurons, was managing information similarly. So, he and coauthor Jonathan Suen, a postdoctoral scholar at Duke University, set out to mathematically model neural activity.



Because AIMD is one of a number of flow-control algorithms, the duo decided to model six others as well. In addition, they analyzed which model best matched physiological data on neural activity from 20 experimental studies. In their models, AIMD turned out to be the most efficient at keeping the flow of information moving smoothly, adjusting traffic rates whenever paths got too congested. More interestingly, AIMD also turned out to best explain what was happening to neurons experimentally.

It turns out the neuronal equivalent of additive increase is called long-term potentiation. It occurs when one neuron fires closely after another, which strengthens their synaptic connection and makes it slightly more likely the first will trigger the second in the future. The neuronal equivalent of multiplicative decrease occurs when the firing of two neurons is reversed (second before first), which weakens their connection, making the first much less likely to trigger the second in the future. This is called long-term depression. As synapses throughout the network weaken or strengthen according to this rule, the whole system adapts and learns.
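The same additive-increase/multiplicative-decrease shape can be written down for a synaptic weight, with spike timing playing the role of the acknowledgement. The step size and decay factor below are illustrative placeholders, not parameters from the Neural Computation paper.

```python
def update_weight(w, pre_spike_time, post_spike_time,
                  ltp_step=0.05, ltd_factor=0.5, w_max=1.0, w_min=0.0):
    """AIMD-style plasticity: pre-before-post strengthens additively (LTP),
    post-before-pre weakens multiplicatively (LTD)."""
    if pre_spike_time < post_spike_time:      # pre fired first -> potentiation
        w = min(w_max, w + ltp_step)          # additive increase
    else:                                     # post fired first -> depression
        w = max(w_min, w * ltd_factor)        # multiplicative decrease
    return w

w = 0.2
for pre, post in [(1.0, 1.5), (2.0, 2.4), (3.6, 3.1), (4.0, 4.2)]:
    w = update_weight(w, pre, post)
    print(f"pre {pre}  post {post}  ->  w = {w:.3f}")
```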

"While the brain and the Internet clearly operate using very different mechanisms, both use simple local rules that give rise to global stability," says Suen. "I was initially surprised that biological neural networks utilized the same algorithms as their engineered counterparts, but, as we learned, the requirements for efficiency, robustness, and simplicity are common to both living organisms and the networks we have built."



Understanding how the system works under normal conditions could help neuroscientists better understand what happens when these rules are disrupted, for example in learning disabilities. "Variations of the AIMD algorithm are used in basically every large-scale distributed communication network," says Navlakha. "Discovering that the brain uses a similar algorithm may not be just a coincidence."
Story Source:
Materials provided by Salk Institute.


Thursday, February 9, 2017

Incredible Artificial Intelligence Systems May See the World as Humans Do

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

"The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition."



The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner's structure-mapping theory. (Gentner received the 2016 David E. Rumelhart Prize for her work on this theory.)

Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern's McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.

The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but it could potentially shrink the gap between computer and human cognition.

While Forbus and Lovett's system can be used to model general visual problem-solving phenomena, they specifically tested it on Raven's Progressive Matrices, a nonverbal standardized test that measures abstract reasoning. All of the test's problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix. Forbus and Lovett's computational model performed better than the average American.



"The Raven's test is the best existing predictor of what psychologists call 'fluid intelligence, or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships,'" said Lovett, now a researcher at the US Naval Research Laboratory. "Our results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence."

The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as "the clock is above the door" or "pressure differences cause water to flow." These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
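As a rough illustration of what such relational representations look like in code, the sketch below encodes each scene as a set of (predicate, entity, entity) tuples and scores an analogy by the best one-to-one entity mapping that preserves relations. This is a drastic simplification of structure-mapping and of CogSketch, intended only to make the idea concrete; the scenes and the scoring rule are invented.

```python
from itertools import permutations

# Each scene is a set of relations: (predicate, entity1, entity2).
scene_a = {("above", "clock", "door"), ("left_of", "window", "door")}
scene_b = {("above", "lamp", "desk"), ("left_of", "plant", "desk")}

def match_score(a, b):
    """Count relations of `a` that map onto relations of `b` under the best
    one-to-one correspondence between their entities (brute force)."""
    ents_a = sorted({e for _, x, y in a for e in (x, y)})
    ents_b = sorted({e for _, x, y in b for e in (x, y)})
    best = 0
    for perm in permutations(ents_b, len(ents_a)):
        mapping = dict(zip(ents_a, perm))
        mapped = {(p, mapping[x], mapping[y]) for p, x, y in a}
        best = max(best, len(mapped & b))
    return best

# Scores 2: clock->lamp, window->plant, door->desk preserves both relations.
print(match_score(scene_a, scene_b))
```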

"Most artificial intelligence research today concerning vision focuses on recognition, or labeling what is in a scene rather than reasoning about it," Forbus said. "But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly."

Source: Amanda Morris - Journal reference: Psychological Review


Saturday, January 28, 2017

New Laser Based on Unusual Physics Phenomenon Could Improve Telecommunications and Computing Applications

This is a schematic of the BIC laser: a high frequency laser beam (blue) powers the membrane to emit a laser beam at telecommunication frequency (red). Credit: Kanté group, UC San Diego

Researchers at the University of California San Diego have demonstrated the world's first laser based on an unconventional wave physics phenomenon called bound states in the continuum. The technology could revolutionize the development of surface lasers, making them more compact and energy-efficient for communications and computing applications. The new BIC lasers could also be developed as high-power lasers for industrial and defense applications.

"Lasers are ubiquitous in the present-day world, from simple everyday laser pointers to complex laser interferometers used to detect gravitational waves. Our current research will impact many areas of laser applications," said Ashok Kodigala, an electrical engineering Ph.D. student at UC San Diego and first author of the study.



"Because they are unconventional, BIC lasers offer unique and unprecedented properties that haven't yet been realized with existing laser technologies," said Boubacar Kanté, electrical engineering professor at the UC San Diego Jacobs School of Engineering who led the research.

For example, BIC lasers can be readily tuned to emit beams of different wavelengths, a useful feature for medical lasers made to precisely target cancer cells without damaging normal tissue. BIC lasers can also be made to emit beams with specially engineered shapes (spiral, donut or bell curve) -- called vector beams -- which could enable increasingly powerful computers and optical communication systems that can carry up to 10 times more information than existing ones.

"Light sources are key components of optical data communications technology in cell phones, computers and astronomy, for example. In this work, we present a new kind of light source that is more efficient than what's available today in terms of power consumption and speed," said Babak Bahari, an electrical engineering Ph.D. student in Kanté's lab and a co-author of the study.

Bound states in the continuum (BICs) are phenomena that have been predicted to exist since 1929. BICs are waves that remain perfectly confined, or bound, in an open system. Conventional waves in an open system escape, but BICs defy this norm -- they stay localized and do not escape despite having open pathways to do so.

In a previous study, Kanté and his team demonstrated, at microwave frequencies, that BICs could be used to efficiently trap and store light to enable strong light-matter interaction. Now, they're harnessing BICs to demonstrate new types of lasers. The team published the work Jan. 12 in Nature.



Making the BIC laser
The BIC laser in this work is constructed from a thin semiconductor membrane made of indium, gallium, arsenic and phosphorus. The membrane is structured as an array of nano-sized cylinders suspended in air. The cylinders are interconnected by a network of supporting bridges, which provide mechanical stability to the device.

By powering the membrane with a high frequency laser beam, researchers induced the BIC system to emit its own lower frequency laser beam (at telecommunication frequency).
"Right now, this is a proof of concept demonstration that we can indeed achieve lasing action with BICs," Kanté said.

"And what's remarkable is that we can get surface lasing to occur with arrays as small as 8 × 8 particles," he said. In comparison, the surface lasers that are widely used in data communications and high-precision sensing, called VCSELs (vertical-cavity surface-emitting lasers), need much larger (100 times) arrays -- and thus more power -- to achieve lasing.

"The popular VCSEL may one day be replaced by what we're calling the 'BICSEL' -- bound state in the continuum surface-emitting laser, which could lead to smaller devices that consume less power," Kanté said. The team has filed a patent for the new type of light source.

The array can also be scaled up in size to create high power lasers for industrial and defense applications, he noted. "A fundamental challenge in high power lasers is heating and with the predicted efficiencies of our BIC lasers, a new era of laser technologies may become possible," Kanté said.

The team's next step is to make BIC lasers that are electrically powered, rather than optically powered by another laser. "An electrically pumped laser is easily portable outside the lab and can run off a conventional battery source," Kanté said.
Story Source:
Materials provided by University of California - San Diego. Original written by Liezel Labios.


Friday, January 20, 2017

Artificial Intelligence and Machine Learning: What's the Next Step?

It's difficult to describe in a concise list of fewer than 1,000 words what the definitive direction of artificial intelligence will be over a 12-month span. The year 2016 surprised many people with the speed at which certain technologies developed and with the revised ETAs of new AI-driven products hitting the public market.
Here are the four trends that will dominate artificial intelligence in 2017.

1. Language processing will continue
We could call this "natural language processing" or NLP, but let's think more broadly about language for a moment. The key to cognition, for you mavens of Psychology 101, is sophisticated communication, even internal abstract thinking. That will continue to prove critical in driving machine learning 'deeper.'

One place to keep track of progress in the space is in machine translation, which will give you an idea of how sophisticated and accurate our software currently is in translating some of the nuance and implications of our spoken and written language.



That will be the next step in getting personal assistant technology like Alexa, Siri, Google Assistant, or Cortana to interpret our commands and questions just a little bit better.

2. Efforts to square machine learning and big data with different health sectors will accelerate
"I envision a system that still has those predictive data pools. It looks at the different data you obtain that different labs are giving all the time," eBay Director of Data Science Kira Radinsky told an audience at Geektime TechFest 2016 last month, pioneering "automated processes that can lead to those types of discoveries."

Biotech researchers and companies are trying to get programs to automate drug discoveries, among other things. Finding correlations in data and extrapolating causation is not the same in all industries, nor in any one sector of medicine. Researchers in heart disease, neurological disorders, and various types of cancer are all organizing different metrics of data. Retrieving that information and encoding the proper relationships between all those variables is a major undertaking.



One of the areas where this is evident is in computer vision, exemplified by Zebra Medical Vision, which can detect anomalies in CT scans for a variety of organs including the heart and liver. But compiling patient medical records and hunting for diagnostic clues there, as well as constructing better treatment plans, are also markets machine learning is opening in 2017. Other startups like Israel's ‘HealthWatch’ are producing smart clothes that constantly feed medical data to doctors to monitor patients.

This developing ecosystem of health trackers should produce enough information about individual patients or groups of people for algorithms to extract new realizations.
3. They will probably have to come up with another buzzword to go deeper than 'deep learning'

Machines building machines? Algorithms writing algorithms? Machine learning programs will continue adding more layers of processing units, as well as more sophistication to abstract pattern analysis. Deep neural networks will be expected to draw even more observations from unsorted data, just as was mentioned above in regards to health care.



That future buzz term might be “generative” or “adversarial,” as in generative adversarial networks (GANs). Described by MIT Technology Review as the invention of OpenAI scientist Ian Goodfellow, GANs will set up two networks like two people with different approaches to a problem. One network will try to create new data (read “ideas”) from a given set of data while the other “tries to discriminate between real and fake data” (let’s assume this is the robotic equivalent of a devil’s advocate).
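A deliberately tiny sketch of that two-player setup: a one-dimensional generator tries to produce samples that look like a target Gaussian, while a logistic-regression discriminator tries to tell real from fake, each updated with hand-derived gradients. Real GANs use deep networks and careful tuning; the distributions, learning rate, and parameter counts here are assumptions chosen only to expose the adversarial structure.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + b maps noise to fake samples; it starts far from the target.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) outputs the probability that x is real.
w, c = 0.1, 0.0
lr, batch = 0.01, 64
target_mu, target_sigma = 4.0, 0.5        # the "real data" distribution

for step in range(5000):
    real = rng.normal(target_mu, target_sigma, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # --- discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator update: push D(fake) toward 1 (non-saturating loss) ---
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

print(f"fake samples are now centred near {b:.2f} (target mean {target_mu})")
```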

4. Self-driving cars will force an expensive race among automotive companies
I saved this for last because many readers probably consider it patently obvious. However, the surprise that many laypeople, and even people who fancy themselves tech insiders, felt at the speed of the industry’s development might be repeated in 2017 for the opposite reason. While a number of companies are testing the technology, it will run into some pun-intended roadblocks this year.



While talking about an “autonomous” vehicle is all the rage, several companies in the testing stage not only are cautious to keep someone behind the wheel if needed, but are also creating entire human-administered command centers to guide the cars.
There are some companies that will likely be able to avoid burning capital because of competition. Consider how NVIDIA is developing cars in conjunction with Audi and Mercedes-Benz, but separately. Still, BMW, Mercedes-Benz, Nissan-Renault, Ford, and General Motors are all making very big bets while trying to speed up their timelines and hit autonomous vehicle research milestones more quickly.

Even if the entire industry were to be wrong in a cataclysmic way about the unstoppable future of the self-driving car (which it won't be, but bear with me), there will still be more automated features installed in new vehicle models relatively soon. Companies will be forced to spend big, and fast, to match features offered by their competitors.

By Gedalyah Reback


 