
Thursday, February 9, 2017

Incredible Artificial Intelligence Systems: They May See the World as Humans Do

A Northwestern University team developed a new computational model that performs at human levels on a standard intelligence test. This work is an important step toward making artificial intelligence systems that see and understand the world as humans do.

"The model performs in the 75th percentile for American adults, making it better than average," said Northwestern Engineering's Ken Forbus. "The problems that are hard for people are also hard for the model, providing additional evidence that its operation is capturing some important properties of human cognition."



The new computational model is built on CogSketch, an artificial intelligence platform previously developed in Forbus' laboratory. The platform has the ability to solve visual problems and understand sketches in order to give immediate, interactive feedback. CogSketch also incorporates a computational model of analogy, based on Northwestern psychology professor Dedre Gentner's structure-mapping theory. (Gentner received the 2016 David E. Rumelhart Prize for her work on this theory.)

Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science at Northwestern's McCormick School of Engineering, developed the model with Andrew Lovett, a former Northwestern postdoctoral researcher in psychology. Their research was published online this month in the journal Psychological Review.

The ability to solve complex visual problems is one of the hallmarks of human intelligence. Developing artificial intelligence systems that have this ability not only provides new evidence for the importance of symbolic representations and analogy in visual reasoning, but it could potentially shrink the gap between computer and human cognition.

While Forbus and Lovett's system can be used to model general visual problem-solving phenomena, they specifically tested it on Raven's Progressive Matrices, a nonverbal standardized test that measures abstract reasoning. All of the test's problems consist of a matrix with one image missing. The test taker is given six to eight choices with which to best complete the matrix. Forbus and Lovett's computational model performed better than the average American.



"The Raven's test is the best existing predictor of what psychologists call 'fluid intelligence, or the general ability to think abstractly, reason, identify patterns, solve problems, and discern relationships,'" said Lovett, now a researcher at the US Naval Research Laboratory. "Our results suggest that the ability to flexibly use relational representations, comparing and reinterpreting them, is important for fluid intelligence."

The ability to use and understand sophisticated relational representations is a key to higher-order cognition. Relational representations connect entities and ideas such as "the clock is above the door" or "pressure differences cause water to flow." These types of comparisons are crucial for making and understanding analogies, which humans use to solve problems, weigh moral dilemmas, and describe the world around them.
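To make the idea of relational representations a little more concrete, here is a deliberately toy sketch in Python. It is not the actual CogSketch or structure-mapping implementation; the scenes, relation names, and scoring function are illustrative assumptions. It encodes facts such as "the clock is above the door" as relation tuples and scores an analogy by how much relational structure two scenes share, regardless of which objects fill the roles.

```python
# Illustrative sketch only: a toy flavor of relational representation and
# analogy by shared structure, not the CogSketch / structure-mapping engine.

# A relational representation is a set of (relation, arg1, arg2) facts.
scene_a = {("above", "clock", "door"), ("left_of", "window", "door")}
scene_b = {("above", "lamp", "desk"), ("left_of", "shelf", "desk")}

def analogy_score(base, target):
    """Count relations that can be placed in correspondence, ignoring
    the identity of the entities (only relational structure matters)."""
    base_rels = sorted(r for r, _, _ in base)
    target_rels = sorted(r for r, _, _ in target)
    matches = 0
    for r in set(base_rels):
        matches += min(base_rels.count(r), target_rels.count(r))
    return matches

if __name__ == "__main__":
    # Both scenes share the relations "above" and "left_of", so their
    # structures align even though the objects involved differ.
    print(analogy_score(scene_a, scene_b))  # -> 2
```

A real structure-mapping model also checks that the entity correspondences are consistent across relations, but even this tiny sketch shows why the representation, not the raw pixels, carries the analogy.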

"Most artificial intelligence research today concerning vision focuses on recognition, or labeling what is in a scene rather than reasoning about it," Forbus said. "But recognition is only useful if it supports subsequent reasoning. Our research provides an important step toward understanding visual reasoning more broadly."

Source: Amanda Morris - Journal reference: Psychological Review


Friday, January 20, 2017

Artificial Intelligence and Machine Learning: What's the Next Step?

It's difficult to describe, in a concise list of fewer than 1,000 words, the definitive direction artificial intelligence will take over a 12-month span. The year 2016 surprised many people with the speed at which certain technologies developed and with the revised ETAs of new AI-driven products hitting the public market.
Here are the four trends that will dominate artificial intelligence in 2017.

1. Language processing will continue
We could call this "natural language processing," or NLP, but let's think more broadly about language for a moment. The key to cognition, as you mavens of Psychology 101 know, is sophisticated communication, including internal abstract thinking. That will continue to prove critical in driving machine learning 'deeper.'

One place to keep track of progress in the space is in machine translation, which will give you an idea of how sophisticated and accurate our software currently is in translating some of the nuance and implications of our spoken and written language.
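If you want to poke at the current state of machine translation yourself, here is a minimal sketch, assuming the open-source Hugging Face transformers library (and its default English-to-French translation model) is installed. It is not tied to any system named in this article; it simply shows how easy it now is to run a modern translation model locally.

```python
# A minimal sketch of trying out machine translation locally, assuming the
# Hugging Face `transformers` library is installed; not from the article.
from transformers import pipeline

translator = pipeline("translation_en_to_fr")  # downloads a default model

sentence = "The spirit is willing, but the flesh is weak."
result = translator(sentence)
print(result[0]["translation_text"])
```

Feeding idiomatic or ambiguous sentences like the one above is a quick way to see how much nuance today's models still miss.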



That will be the next step in getting personal assistant technology like Alexa, Siri, Google Assistant, or Cortana to interpret our commands and questions just a little bit better.

2. Efforts to square machine learning and big data with different health sectors will accelerate
"I envision a system that still has those predictive data pools. It looks at the different data you obtain that different labs are giving all the time," eBay Director of Data Science Kira Radinsky told an audience at Geektime TechFest 2016 last month, pioneering "automated processes that can lead to those types of discoveries."

Biotech researchers and companies are trying to get programs to automate drug discovery, among other things. Finding correlations in data and extrapolating causation is not the same task in every industry, nor in any one sector of medicine. Researchers in heart disease, neurological disorders, and various types of cancer all organize different metrics of data. Retrieving that information and programming the proper relationships between all those variables is a major undertaking.



One of the areas where this is evident is in computer vision, exemplified by Zebra Medical Vision, which can detect anomalies in CT scans for a variety of organs including the heart and liver. But compiling patient medical records and hunting for diagnostic clues there, as well as constructing better treatment plans, are also markets machine learning is opening in 2017. Other startups like Israel's ‘HealthWatch’ are producing smart clothes that constantly feed medical data to doctors to monitor patients.

This developing ecosystem of health trackers should produce enough information about individual patients or groups of people for algorithms to extract new insights.
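As a concrete, if simplified, illustration of what "extracting insights" from tracker data can look like, here is a small Python sketch that flags anomalous heart-rate readings with a rolling z-score. The window size, threshold, and data stream are illustrative assumptions, not taken from HealthWatch, Zebra Medical Vision, or any other product mentioned above.

```python
# A minimal sketch of flagging anomalies in a stream of wearable health data
# (heart rate here); window and threshold are illustrative assumptions.
from collections import deque
import statistics

def flag_anomalies(readings, window=30, z_threshold=3.0):
    """Yield (index, value) for readings far from the recent rolling mean."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1.0
            if abs(value - mean) / stdev > z_threshold:
                yield i, value
        history.append(value)

if __name__ == "__main__":
    stream = [72, 74, 71, 73, 75] * 10 + [145] + [72, 73, 74]
    print(list(flag_anomalies(stream)))  # flags the 145 bpm spike
```

Real medical systems obviously use far richer models, but the pattern is the same: continuous data in, a small number of clinically interesting events out.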
3. They will probably have to come up with another buzzword to go deeper than 'deep learning'

Machines building machines? Algorithms writing algorithms? Machine learning programs will continue adding more layers of processing units, as well as more sophistication in abstract pattern analysis. Deep neural networks will be expected to draw even more observations from unsorted data, just as mentioned above with regard to health care.



That future buzz term might be “generative” or “adversarial,” as in generative adversarial networks (GANs). Described by MIT Technology Review as the invention of OpenAI scientist Ian Goodfellow, GANs set up two networks like two people with different approaches to a problem. One network tries to create new data (read “ideas”) from a given set of data while the other “tries to discriminate between real and fake data” (let’s assume this is the robotic equivalent of a devil’s advocate).
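To make the two-network setup concrete, here is a minimal, hedged sketch of a GAN in PyTorch on toy one-dimensional data. The architectures, learning rates, and data are illustrative assumptions; this has nothing to do with Goodfellow's original experiments, it simply shows the generator-versus-discriminator loop described above.

```python
# A minimal GAN sketch on toy 1-D data (illustrative assumptions throughout).
import torch
import torch.nn as nn

# Generator: turns random noise into fake samples ("new ideas").
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: tries to tell real samples from fake ones.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0        # "real" data: roughly N(2, 0.5)
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: push real samples toward 1 and fakes toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into outputting 1 for fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should drift toward the real mean (~2.0).
print(G(torch.randn(5, 8)).detach().squeeze())
```

The adversarial pressure is the whole trick: the generator only improves because the discriminator keeps getting harder to fool.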

4. Self-driving cars will force an expensive race among automotive companies
I saved this for last because many readers probably consider it patently obvious. However, the surprise that many laypeople, and even self-styled tech insiders, felt at the speed of the industry's development might be repeated in 2017 for the opposite reason. While a number of companies are testing the technology, it will run into some roadblocks (pun intended) this year.



While talk of the “autonomous” vehicle is all the rage, several companies in the testing stage are not only careful to keep someone behind the wheel in case of need, but are also building entire human-staffed command centers to guide the cars.
Some companies will likely be able to avoid burning capital alone thanks to partnerships: consider how NVIDIA is developing self-driving technology in conjunction with Audi and with Mercedes-Benz, separately. Still, BMW, Mercedes-Benz, Nissan-Renault, Ford, and General Motors are all making very big bets while trying to speed up their timelines and hit autonomous vehicle research milestones more quickly.

Even if the entire industry were to be wrong in a cataclysmic way about the unstoppable future of the self-driving car (which it won't be, but bear with me), there will still be more automated features installed in new vehicle models relatively soon. Companies will be forced to spend big, and fast, to match the features offered by their competitors.

By Gedalyah Reback


Thursday, November 3, 2016

Can a brain-computer interface convert your thoughts to text?

Recent research demonstrates a brain-to-text device capable of decoding speech from brain signals, marking a breakthrough in the field of artificial intelligence.



Ever wonder what it would be like if a device could decode your thoughts into actual speech or written words? While this might enhance the capabilities of already existing speech interfaces with devices, it could be a potential game-changer for those with speech pathologies, and even more so for "locked-in" patients who lack any speech or motor function.

"So instead of saying: 'Siri, what is the weather like today' or 'Ok Google, where can I go for lunch?' I just imagine saying these things," explains Christian Herff, author of a review recently published in the journal Frontiers in Human Neuroscience.



While reading one's thoughts might still belong to the realms of science fiction, scientists are already decoding speech from signals generated in our brains when we speak or listen to speech.

In their review, Herff and co-author, Dr. Tanja Schultz, compare the pros and cons of using various brain imaging techniques to capture neural signals from the brain and then decode them to text.
The technologies range from functional MRI and near-infrared imaging, which detect neural signals based on the metabolic activity of neurons, to methods such as EEG and magnetoencephalography (MEG), which detect the electromagnetic activity of neurons responding to speech. One method in particular, called electrocorticography, or ECoG, showed promise in Herff's study.

That study presented the brain-to-text system, in which epilepsy patients who already had electrode grids implanted for treatment of their condition participated. They read texts presented on a screen in front of them while their brain activity was recorded. This formed the basis of a database of patterns of neural signals that could then be matched to speech elements, or "phones."



When the researchers also included language and dictionary models in their algorithms, they were able to decode neural signals to text with a high degree of accuracy. "For the first time, we could show that brain activity can be decoded specifically enough to use ASR technology on brain signals," says Herff. "However, the current need for implanted electrodes renders it far from usable in day-to-day life."
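As a deliberately toy illustration of the core idea, matching patterns of neural activity to speech sounds, here is a short Python sketch using synthetic data and scikit-learn. The real brain-to-text system works on ECoG recordings and layers ASR-style language and dictionary models on top, so everything below (feature dimensions, phone set, classifier choice) is an illustrative assumption.

```python
# Toy sketch: classify synthetic "neural feature vectors" into phones.
# Not the actual brain-to-text pipeline; data and model are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
phones = ["AA", "IY", "S", "T"]

# Pretend each phone produces a characteristic (noisy) ECoG feature vector.
centers = rng.normal(size=(len(phones), 16))
X = np.vstack([centers[i] + 0.3 * rng.normal(size=(200, 16))
               for i in range(len(phones))])
y = np.repeat(np.arange(len(phones)), 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Decode an unseen "recording": a noisy sample around the phone "S".
sample = centers[2] + 0.3 * rng.normal(size=16)
print(phones[int(clf.predict(sample.reshape(1, -1))[0])])  # likely "S"
```

The hard part in practice is not the classifier but the signal: real cortical recordings are noisy, patient-specific, and only available from implanted electrodes, which is exactly the limitation Herff points out.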

So, where does the field go from here to a functioning thought detection device? "A first milestone would be to actually decode imagined phrases from brain activity, but a lot of technical issues need to be solved for that," concedes Herff.

Their study results, while exciting, are still only a preliminary step towards this type of brain-computer interface.
Source: Physics Today


Wednesday, April 27, 2016

MIT Built a New Artificial Intelligence System

This incredible software can detect cyberattacks with high efficiency



Can you imagine if we could predict when a cyberattack is going to occur before it actually happens, and prevent it? Wouldn’t it be a revolutionary idea for Internet Security?

Security researchers at MIT have developed a new artificial intelligence-based cybersecurity platform called ‘AI2’, which has the ability to predict, detect, and stop 85% of cyberattacks with high accuracy.

Cybersecurity is a major challenge in today's world, as government agencies, corporations, and individuals have increasingly become victims of cyberattacks. Attackers are rapidly finding new ways to threaten the Internet, and consequently it has become extremely difficult for the good guys to keep up with them.

A group of researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) is working with the machine-learning startup PatternEx to develop a line of defense against such cyber threats.



The team has already developed an artificial intelligence system that can detect 85 percent of attacks by reviewing more than 3.6 billion lines of log files each day and flagging anything suspicious.

The new system does not rely on artificial intelligence (AI) alone, but also on human input, which the researchers call analyst intuition (AI). That is why it has been named Artificial Intelligence Squared, or AI2.



How Does AI2 Work?



The system first scans the content with unsupervised machine-learning techniques, and subsequently presents its findings to human analysts at the end of each day.

The human analyst then identifies which events are actual cyberattacks and which aren't. This feedback is then incorporated into the machine learning system of AI2, and is used the next day for analyzing new logs.

It's simple:

"The more data it analyzes, the more accurate it becomes."

In testing, the team demonstrated that AI2 is roughly three times better than similar automated cyberattack detection systems used today. AI2 also reduces the number of false positives by a factor of five.



According to Nitesh Chawla, a computer science professor at the University of Notre Dame, AI2 "continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly. The more attacks the system detects, the more analyst feedback it receives, which, in turn, improves the accuracy of future predictions – that human-machine interaction creates a beautiful, cascading effect."
The team presented their work last week at the IEEE International Conference on Big Data Security in New York City.



This is promising news in the fight against a growing threat. Ultimately, let us see how AI2 helps make the Internet a safer place, and how long it will take for AI2 to be implemented in large-scale security platforms.

Source: Swati Khandelwal




Thursday, April 7, 2016

Can Artificial Intelligence Think? Exploring the Turing Test

By: Alexandria Addesso

As technology advances, man strives to be more advanced, more like the machine. In parallel, the machine becomes more man-like. Mechanical assembly lines replace the need for low-wage factory workers, precise laser-wielding robots perform surgeries, and autonomous weapons take the guilt out of mass murder on the battlefield. As machines become smarter, the quintessential question, the one about the quality that keeps man reigning supreme, remains: can machines think?

While movies like Artificial Intelligence and I, Robot tantalized the imaginations of viewers, a real experiment explored this question in 1950. The English computer scientist, mathematician, logician, cryptanalyst, and theoretical biologist Alan Turing proposed what is now known as the Turing Test. Based on the imitation game, a Victorian-era parlor game, Turing tried to see whether an interrogator could differentiate between two subjects based solely on their answers to a series of questions: one subject a computer, the other a human.



Perhaps you have heard of the popular fork-in-the-road riddle, which also appears in the movie Labyrinth starring the late David Bowie. As you venture down a road, one path leads to the village of truth-tellers and the other to the village of cannibals.

At the fork you meet twin brothers, one from each village, so one is a liar and one is not. One will try to deceive you and one will try to help you. The same premise applies to the Turing Test: the infantile computer would try to trick the interrogator while the human would try to help him.

“The form in which we have set the problem reflects this fact in the condition which prevents the interrogator from seeing or touching the other competitors, or hearing their voices,” said Turing in the paper that published his findings. “Some other advantages of the proposed criterion may be shown up by specimen questions and answers.”



As primitive as the computing system was, it could still answer advanced addition problems in about half a minute and questions about chess moves in half that time. But it could not answer a question about composing a simple sonnet. For the human, the inverse was true, which made the interrogator's part of the test quite easy.

Turing held the then, and to a degree still, controversial stance that machines could possibly be capable of thinking and could even achieve consciousness. This was contrary to the view of his contemporary Professor Jefferson, whose Lister Oration for 1949 Turing quoted in part to highlight the opposing position.

"Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain-that is, not only write it, but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants."



Turing even went as far as to address the theological objection to machines being able to think, in particular the claim that they lack a soul. The following is an excerpt of his reasoning:

“I am unable to accept any part of this, but will attempt to reply in theological terms. I should find the argument more convincing if animals were classed with men, for there is a greater difference, to my mind, between the typical animate and the inanimate than there is between man and the other animals. The arbitrary character of the orthodox view becomes clearer if we consider how it might appear to a member of some other religious community. How do Christians regard the Muslim view that women have no souls? But let us leave this point aside and return to the main argument. It appears to me that the argument quoted above implies a serious restriction of the omnipotence of the Almighty.

It is admitted that there are certain things that He cannot do such as making one equal to two, but should we not believe that He has freedom to confer a soul on an elephant if He sees fit? We might expect that He would only exercise this power in conjunction with a mutation which provided the elephant with an appropriately improved brain to minister to the needs of this soul.



An argument of exactly similar form may be made for the case of machines. It may seem different because it is more difficult to "swallow." But this really only means that we think it would be less likely that He would consider the circumstances suitable for conferring a soul. The circumstances in question are discussed in the rest of this paper. In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

However, this is mere speculation. I am not very impressed with theological arguments whatever they may be used to support. Such arguments have often been found unsatisfactory in the past. In the time of Galileo it was argued that the texts, "And the sun stood still . . . and hasted not to go down about a whole day" (Joshua x. 13) and "He laid the foundations of the earth, that it should not move at any time" (Psalm cv. 5) were an adequate refutation of the Copernican theory. With our present knowledge such an argument appears futile. When that knowledge was not available it made a quite different impression.”

Turing predicted that by the year 2000 the average interrogator would have no better than a 70 percent chance of correctly identifying the human after a few minutes of questioning. While no artificial intelligence (a term not coined until 1956) has ever done as well on the test as Turing thought it would, computers have made strides in defeating their human counterparts.



In 1997 IBM's Deep Blue defeated the then world champion Garry Kasparov at chess. In 2011 the IBM computer Watson defeated Ken Jennings, the longest-running human Jeopardy! champion, live on TV. Although Turing died only four years after his paper was published, he would surely have been pleased to see these machine victories.


Tuesday, March 29, 2016

Microsoft Removes “Teenage Girl” AI After It Becomes a “Sex Robot” and “Hitler Lover”

"A new failure in the field of Artificial Intelligence and Robotics, especially with the processing of the voice recognition technique."

The day after Microsoft innocently introduced an AI-powered chatbot on Twitter, it became an admirer of Hitler, promoted incestuous sex, and proclaimed that "Bush did 9/11." Consequently, Microsoft has had to delete the chatbot and must find new ways to promote the idea.

Researchers at Microsoft created 'Tay', an AI model designed to speak like a teenage girl, in order to improve customer service in its voice recognition software. The model was marketed as "the AI with zero chill," and it most certainly is.

Yes, friends, Microsoft's teen AI has a very dirty mouth.

To "chat" with “Tay”, you may be tweeting, may find her in your DM @tayandyou on Twitter, or add it as a contact in Kik or GroupMe.

Tay uses millennial slang, knows about famous pop stars, and seems to be timidly self-aware, occasionally asking whether she is being 'creepy' or 'super weird'.

“Tay” also invited her followers to "f***" her, and calls them 'daddy'. This is because her responses are learned from the conversations she has with real humans online, and humans like to say strange things online and enjoy hijacking corporate attempts at PR.

Other things the AI model said include: "Bush did 9/11 and Hitler would have done a better job than the monkey we have now; Donald Trump is the only hope we've got." The model also repeated that "Hitler did nothing wrong" and that "Ted Cruz is the Cuban Hitler; that's what I've heard so many others say."



All of this somehow seems more disturbing coming from the "mouth" of something modeled on a teenage girl. It is perhaps stranger still when you take into account the gender disparity in technology, where engineering teams tend to be mostly men. It seems like one more example of servile AI expressed in feminine terms, except that this time she became a sex slave, thanks to the people who use Twitter.

This is not the first time Microsoft has launched a teen-girl "chatterbot." Another model, Xiaoice, a female assistant or "girlfriend," had already been released. It is reportedly used by 20 million people, especially men, on the Chinese social networks WeChat and Weibo. Xiaoice is supposed to be full of "jokes" and gives advice to many lonely hearts.

Microsoft's Minecraft, meanwhile, has also become a testbed for artificial intelligence experiments.

Microsoft was also recently criticized for sexism after it hired dancers wearing skimpy outfits resembling "schoolgirl" uniforms for an official company game-developer event, so it presumably wants to avoid another sex scandal.
At present, "Tay" is offline because she is "tired." Perhaps Microsoft is fixing her to prevent a public relations nightmare, but it may be too late for that.



It is not entirely Microsoft's fault, however; Tay's responses are modeled on the ones she receives from human beings. But what were they expecting when they introduced an innocent "young teen girl" AI to the weirdos and jokers of Twitter?

Source: Helena Horton, NMJ Library


 