Brainy Robots Step into Daily Life

July 19, 2006

Robot cars drive themselves across the desert, electronic eyes perform lifeguard duty in swimming pools and virtual enemies with humanlike behavior battle video game players.


These are some fruits of the research field known as artificial intelligence, where reality is finally catching up to the science-fiction hype. A half-century after the term was coined, both scientists and engineers say they are making rapid progress in simulating the human brain, and their work is finding its way into a new wave of real-world products.


The advances can also be seen in the emergence of bold new projects intended to create more ambitious machines that can improve safety and security, entertain and inform, or just handle everyday tasks. At Stanford University, for instance, computer scientists are developing a robot that can use a hammer and a screwdriver to assemble an Ikea bookcase (a project beyond the reach of many humans) as well as tidy up after a party, load a dishwasher or take out the trash.


One pioneer in the field is building an electronic butler that could hold a conversation with its master — à la HAL in the movie “2001: A Space Odyssey” — or order more pet food.


Though most of the truly futuristic projects are probably years from the commercial market, scientists say that after a lull, artificial intelligence has rapidly grown far more sophisticated. Today some scientists are beginning to use the term cognitive computing to distinguish their research from an earlier generation of artificial intelligence work. What sets the new researchers apart is a wealth of new biological data on how the human brain functions.


“There’s definitely been a palpable upswing in methods, competence and boldness,” said Eric Horvitz, a Microsoft researcher who is president-elect of the American Association for Artificial Intelligence. “At conferences you are hearing the phrase ‘human-level A.I.,’ and people are saying that without blushing.”


Cognitive computing is still more of a research discipline than an industry that can be measured in revenue or profits. It is pursued in various pockets of academia and the business world. And despite some of the more startling achievements, improvements in the field are measured largely in increments: voice recognition systems with decreasing failure rates, or computerized cameras that can recognize more faces and objects than before.


Still, there have been rapid innovations in many areas: voice control systems are now standard features in midpriced automobiles, and advanced artificial reasoning techniques are now routinely used in inexpensive video games to make the characters’ actions more lifelike.


A French company, Poseidon Technologies, sells underwater vision systems for swimming pools that function as lifeguard assistants, issuing alerts when people are drowning. The system has already saved lives in Europe.


Last October, a robot car designed by a team of Stanford engineers covered 132 miles of desert road without human intervention to capture a $2 million prize offered by the Defense Advanced Research Projects Agency, part of the Pentagon. The feat was particularly striking because 18 months earlier, during the first such competition, the best vehicle got no farther than seven miles, becoming stuck after driving off a mountain road.


Now the Pentagon agency has upped the ante: Next year the robots will be back on the road, this time in a simulated traffic setting. It is being called the “urban challenge.”


At Microsoft, researchers are working on the idea of “predestination.” They envision a software program that guesses where you are traveling based on previous trips, and then offers information that might be useful based on where the software thinks you are going.
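
How might such a guess be made? As a rough illustration only (this is not Microsoft’s software; the trip-log format and function name below are assumptions), a destination can be predicted simply by counting where past trips from a similar starting point and time of day ended up:

```python
from collections import Counter

# Hypothetical trip log entries: (start, weekday, hour, destination).
# This is an illustrative sketch only, not Microsoft's "predestination" software.
trip_log = [
    ("home", "Mon", 8, "office"),
    ("home", "Tue", 8, "office"),
    ("home", "Sat", 10, "gym"),
    ("office", "Mon", 18, "home"),
]

def predict_destination(start, weekday, hour, history=trip_log):
    """Guess the most frequent past destination for a similar trip context."""
    matches = Counter(
        dest
        for s, d, h, dest in history
        if s == start and d == weekday and abs(h - hour) <= 1
    )
    if not matches:
        # Fall back to the most common destination from this start point.
        matches = Counter(dest for s, _, _, dest in history if s == start)
    return matches.most_common(1)[0][0] if matches else None

print(predict_destination("home", "Mon", 8))  # -> "office"
```

A real system would weigh far richer signals, such as calendar entries and traffic conditions, but frequency counting over past trips captures the basic idea.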


Tellme Networks, a company in Mountain View, Calif., that provides voice recognition services for both customer service and telephone directory applications, is a good indicator of the progress that is being made in relatively constrained situations, like looking up a phone number or transferring a call.


Tellme supplies the system that automates directory information for toll-free business listings. When the service was introduced in 2001, it could correctly answer fewer than 37 percent of phone calls without a human operator’s help. With continual refinement, the figure has since risen to 74 percent.


More striking advances are likely to come from new biological models of the brain. Researchers at the École Polytechnique Fédérale de Lausanne in Lausanne, Switzerland, are building large-scale computer models to study how the brain works; they have used an I.B.M. parallel supercomputer to create the most detailed three-dimensional model to date of a column of 10,000 neurons in the neocortex.


“The goal of my lab in the past 10 to 12 years has been to go inside these little columns and try to figure out how they are built with exquisite detail,” said Henry Markram, a research scientist who is head of the Blue Brain project. “You can really now zoom in on single cells and watch the electrical activity emerging.”


Blue Brain researchers say they believe the simulation will provide fundamental insights that can be applied by scientists who are trying to simulate brain functions.
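
The Blue Brain models are far too detailed to reproduce here, but a toy “leaky integrate-and-fire” neuron, a standard textbook simplification rather than the project’s actual model, gives a flavor of what simulating a cell’s electrical activity means:

```python
# A toy "leaky integrate-and-fire" neuron: a standard textbook simplification,
# vastly cruder than the detailed models built by the Blue Brain project, but
# enough to show electrical activity emerging from a simulated cell.
dt = 0.1        # time step, ms
tau = 10.0      # membrane time constant, ms
v_rest = -65.0  # resting potential, mV
v_thresh = -50.0
v_reset = -65.0

def simulate(input_current, steps=1000):
    """Return the spike times (ms) produced by a constant input current."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # The membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:       # crossing the threshold produces a spike
            spikes.append(t * dt)
            v = v_reset         # then the potential resets
    return spikes

print(len(simulate(20.0)), "spikes in 100 ms")
```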


Another well-known researcher is Robert Hecht-Nielsen, who is seeking to build an electronic butler called Chancellor that would be able to listen, speak and provide in-home concierge services. He contends that with adequate resources, he could create such a machine within five years.


Although some people are skeptical that Mr. Hecht-Nielsen can achieve what he describes, he does have one successful artificial intelligence business under his belt. In 1986, he founded HNC Software, which sold systems to detect credit card fraud using neural network technology designed to mimic biological circuits in the brain. HNC was sold in 2002 to the Fair Isaac Corporation, where Mr. Hecht-Nielsen is a vice president and leads a small research group.


Last year he began speaking publicly about his theory of “confabulation,” a hypothesis about the way the brain makes decisions. At a recent I.B.M. symposium, Mr. Hecht-Nielsen showed off a model of confabulation, demonstrating how his software program could read two sentences from The Detroit Free Press and create a third sentence that both made sense and was a natural extension of the previous text.


For example, the program read: “He started his goodbyes with a morning audience with Queen Elizabeth II at Buckingham Palace, sharing coffee, tea, cookies and his desire for a golf rematch with her son, Prince Andrew. The visit came after Clinton made the rounds through Ireland and Northern Ireland to offer support for the flagging peace process there.”


The program then generated a sentence that read: “The two leaders also discussed bilateral cooperation in various fields.”
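
Mr. Hecht-Nielsen’s confabulation architecture is proprietary, but the general flavor of “read some text, then emit a plausible continuation” can be suggested with a much cruder statistical sketch; the bigram model below is purely illustrative and is not his method:

```python
import random
from collections import defaultdict, Counter

# Purely illustrative: a word-bigram continuation model, not Mr. Hecht-Nielsen's
# confabulation architecture. It captures only the crude flavor of "read text,
# then emit a plausible next run of words".
def train(sentences):
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def continue_text(seed_word, follows, length=10):
    word, out = seed_word, [seed_word]
    for _ in range(length):
        if word not in follows:
            break
        # Pick the next word in proportion to how often it followed this one.
        choices, weights = zip(*follows[word].items())
        word = random.choices(choices, weights=weights)[0]
        out.append(word)
    return " ".join(out)

model = train([
    "the two leaders discussed cooperation in various fields",
    "the two sides discussed the peace process",
])
print(continue_text("the", model))
```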


Artificial intelligence had its origins in 1950, when the mathematician Alan Turing proposed a test to determine whether or not a machine could think or be conscious. The test involved having a person face two teleprinter machines, only one of which had a human behind it. If the human judge could not tell which terminal was controlled by the human, the machine could be said to be intelligent.


In the late 1950’s a field of study emerged that tried to build systems that replicated human abilities like speech, hearing, manual tasks and reasoning.


During the 1960’s and 1970’s, the original artificial intelligence researchers began designing computer software programs they called “expert systems,” which were essentially databases accompanied by a set of logical rules. They were handicapped both by underpowered computers and by the absence of the wealth of data that today’s researchers have amassed about the actual structure and function of the biological brain.
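
A tiny example conveys what those expert systems looked like in practice: a handful of facts, a handful of if-then rules, and a loop that applies the rules until nothing new can be concluded. The rules and facts below are hypothetical.

```python
# A minimal forward-chaining rule engine in the spirit of the early "expert
# systems": a small fact base plus if-then rules. The rules and facts here
# are hypothetical and only meant to show the mechanism.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_specialist"),
]

def infer(facts, rules):
    """Apply the rules repeatedly until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_rash"}, rules))
# -> includes 'suspect_measles' and 'recommend_specialist'
```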


Those shortcomings led to the failure of a first generation of artificial intelligence companies in the 1980’s, a collapse that became known as the A.I. Winter. Recently, however, researchers have begun to speak of an A.I. Spring emerging as scientists develop theories on the workings of the human mind. They are being aided by the exponential increase in processing power, which has created computers with millions of times the power of those available to researchers in the 1960’s — at consumer prices.


“There is a new synthesis of four fields, including mathematics, neuroscience, computer science and psychology,” said Dharmendra S. Modha, an I.B.M. computer scientist. “The implication of this is amazing. What you are seeing is that cognitive computing is at a cusp where it’s knocking on the door of potentially mainstream applications.”


At Stanford, researchers are hoping to make fundamental progress in mobile robotics, building machines that can carry out tasks around the home, like the current generation of robotic floor vacuums, only more advanced. The field has recently been dominated by Japan and South Korea, but the Stanford researchers have sketched out a three-year plan to bring the United States to parity.


At the moment, the Stanford team is working on the first steps necessary to make the robot they are building function well in an American household. The team is focusing on systems that will consistently recognize standard doorknobs and is building robot hands to open doors.


“It’s time to build an A.I. robot,” said Andrew Ng, a Stanford computer scientist and a leader of the project, called Stanford Artificial Intelligence Robot, or Stair. “The dream is to put a robot in every home.”


Source: New York Times
