Saturday, November 24, 2012

Scientists See Promise in Deep-Learning Programs

Recent advances in deep learning have led to widespread enthusiasm among researchers who design software to perform human activities like seeing, listening and thinking. They offer the promise of machines that converse with humans and perform tasks like driving cars and working in factories, raising the specter of automated robots that could replace human workers.
The technology, called deep learning, has already been put to use in services like Apple’s Siri virtual personal assistant, which is based on Nuance Communications’ speech recognition service, and in Google’s Street View, which uses machine vision to identify specific addresses.
But what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just “neural nets” for their resemblance to the neural connections in the brain.
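For readers unfamiliar with the term, a neural net is, at bottom, just layers of adjustable weighted connections whose signals pass through simple nonlinear functions; the weights are nudged until the network's output matches known examples. The sketch below is only an illustration of that idea on the classic XOR toy problem, in Python with NumPy, and has nothing to do with the specific systems described in this article.

```python
# A minimal sketch (not any group's actual system) of the idea behind a
# "neural net": layers of weighted connections with nonlinear units,
# trained by nudging the weights to reduce prediction error.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, a classic example a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of "synaptic" weights, initialized randomly.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: activations flow through the layers.
    h = sigmoid(X @ W1)          # hidden layer
    out = sigmoid(h @ W2)        # output layer

    # Backward pass: propagate the error back and adjust each weight.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ err_out
    W1 -= 1.0 * X.T @ err_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```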
“There has been a number of stunning new results with deep-learning methods,” said Yann LeCun, a computer scientist at New York University who did pioneering research in handwriting recognition at Bell Laboratories. “The kind of jump we are seeing in the accuracy of these systems is very rare indeed.”
Artificial intelligence researchers are acutely aware of the dangers of being overly optimistic. Their field has long been plagued by outbursts of misplaced enthusiasm followed by equally striking declines.
In the 1960s, some computer scientists believed that a workable artificial intelligence system was just 10 years away. In the 1980s, a wave of commercial start-ups collapsed, leading to what some people called the “A.I. winter.”
But recent achievements have impressed a wide spectrum of computer experts. In October, for example, a team of graduate students studying with the University of Toronto computer scientist Geoffrey E. Hinton won the top prize in a contest sponsored by Merck to design software to help find molecules that might lead to new drugs.
From a data set describing the chemical structure of 15 different molecules, they used deep-learning software to determine which molecule was most likely to be an effective drug agent.
The achievement was particularly impressive because the team decided to enter the contest at the last minute and designed its software with no specific knowledge about how the molecules bind to their targets. The students were also working with a relatively small set of data; neural nets typically perform well only with very large ones.
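In broad strokes, the general recipe for this kind of entry is: encode each molecule as a vector of numeric chemical descriptors, train a multi-layer network to predict measured biological activity, then rank candidates by the network's predictions. The sketch below is purely illustrative and is not the Toronto team's software; the descriptor values, activity numbers and model sizes are invented placeholders, and scikit-learn's MLPRegressor stands in for a full deep-learning stack.

```python
# A hypothetical sketch of the general recipe, not the contest-winning code:
# represent each candidate molecule as a vector of numeric chemical
# descriptors and train a multi-layer network to predict its measured
# activity against a target. All values below are made-up placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n_molecules, n_descriptors = 200, 50          # placeholder sizes
X = rng.normal(size=(n_molecules, n_descriptors))
activity = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_molecules)

X_train, X_test, y_train, y_test = train_test_split(X, activity, random_state=0)

# Two hidden layers stand in for the "deep" part of deep learning.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Rank held-out molecules by predicted activity, highest first.
scores = model.predict(X_test)
ranking = np.argsort(scores)[::-1]
print("Most promising held-out molecule index:", ranking[0])
```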
“This is a really breathtaking result because it is the first time that deep learning won, and more significantly it won on a data set that it wouldn’t have been expected to win at,” said Anthony Goldbloom, chief executive and founder of Kaggle, a company that organizes data science competitions, including the Merck contest.

Tuesday, November 20, 2012

IBM supercomputer simulates 530 billion neurons

By Mat Smith posted Nov 20th, 2012 

IBM Research, in collaboration with DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program, has reached another brain simulation milestone. Running its new TrueNorth system on the world's second-fastest supercomputer, IBM was able to simulate 2.084 billion neurosynaptic cores and 100 trillion synapses -- all at a speed "only" 1,542 times slower than real life. The abstract explains that this isn't a biologically realistic simulation of the human brain, but rather a mathematically abstracted -- and a little more dour -- version steered toward maximizing function and minimizing cost. DARPA's SyNAPSE project aims to tie together supercomputing, neuroscience and neurotech for a future cognitive computing architecture far beyond what's running behind your PC screen at the moment.
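To put the quoted slowdown factor in concrete terms, a 1,542x slowdown means each second of simulated network activity costs roughly 26 minutes of wall-clock time. A quick back-of-the-envelope check:

```python
# Simple arithmetic on the reported 1,542x slowdown figure.
slowdown = 1542
print(f"1 simulated second ≈ {slowdown / 60:.1f} wall-clock minutes")   # ~25.7 min
print(f"1 simulated minute ≈ {slowdown / 60:.1f} wall-clock hours")     # ~25.7 h
```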

engadget