Monday, January 8, 2007

Artificial Intelligence: IIIT comes out with intelligent robots

By Syed Akbar
Hyderabad, Jan 8: Dozens of robots vied with one another on Monday, separating ferocious "tigers" from large but humble "elephants" and leisurely "eating" fruits, in a test of their artificial intelligence.
The Robo Safari, held at the International Institute of Information Technology here, brought to the fore the advances India has made in artificial intelligence. Robots of different shapes and sizes took part in the Robo Safari, the first competition of its kind to be held south of the Vindhyas. The competition coincides with the five-day International Joint Conference on Artificial Intelligence, which began on Monday.
The Robo Safari began with robots entering a room where several objects shaped like elephants, tigers, dogs, zebras and lions, as well as apples, bananas, citrus fruits, grapes and pomegranates, were placed. The robots vied with one another to recognise these artificial objects using their artificial brains.
The task before the robots was not simple. They had to identify each object and record its image as well. This twin task of identification and recording is far from easy for a robot. The designers racked their brains and pooled their natural intelligence to make the robots work with artificial intelligence.
"The contest is aimed at testing visual intelligence and mechanical strength. The idea is to look at the development of mobile robots that can effectively work in a wide variety of situations and environments. The challenge is to build robots that can perform tasks in realistic environments exploiting the recent developments in the areas of computer vision and mobile robotics," says one of the organisers.
The robots were built around a real-time vision system, with a laptop and a camera mounted on a given mobile platform. The vision system had to detect, recognise and locate all instances of the target objects along the path of the robot. The Safari was held in two categories: identification and localisation.
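The article does not say how the entries implemented this detect-and-recognise step, but a minimal sketch of one common approach is shown below: matching camera frames against the reference images of the target objects using ORB features in OpenCV. The file names, camera index and match thresholds are illustrative assumptions, not part of the contest specification.

```python
# Hypothetical sketch: recognise contest objects in a laptop camera frame by
# matching ORB features against reference images supplied in advance.
import cv2

# Reference images of target objects (file names are made up for illustration).
TEMPLATES = {"tiger": "tiger.jpg", "elephant": "elephant.jpg", "apple": "apple.jpg"}

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Pre-compute descriptors for each reference image.
reference = {}
for name, path in TEMPLATES.items():
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    reference[name] = des

def recognise(frame, min_matches=25):
    """Return names of reference objects with enough good feature matches."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, des = orb.detectAndCompute(gray, None)
    if des is None:
        return []
    found = []
    for name, ref_des in reference.items():
        matches = matcher.match(ref_des, des)
        good = [m for m in matches if m.distance < 40]   # assumed threshold
        if len(good) >= min_matches:
            found.append(name)
    return found

cap = cv2.VideoCapture(0)          # camera mounted on the mobile platform
ok, frame = cap.read()
if ok:
    print("Objects seen:", recognise(frame))
cap.release()
```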
The robots moved over five different types of terrain, racing along the circuit one at a time, and were scored on their performance. Each robot was started with a button or switch at the beginning of the race once the judge pressed the timer. After starting, the robots worked completely autonomously, moving along the track and negotiating obstacles without external control or intervention.
While doing so, the robots recognised all the objects in their path. 3D models of the specified objects were placed in different poses. Room lighting was kept normal, with no deep shadows.
The contestants were supplied with a set of images of the target objects a month in advance, while another set was released only on Sunday to make the competition even tougher.
The robots also counted the objects, reporting, for instance, one apple, two tigers and an elephant.
In the second competition, based on localisation, the robot entered a room where simple 3D geometric objects were kept. It stopped at certain positions unknown to the participants, and the vision task was to localise the robot at each such position. The starting location of the robot was made available to the participants.
At the positions where the robot stopped, optional landmarks (at least three in number) were placed to aid localisation. These landmarks were made known to participants in advance, and their positions were given at the start of the competition in the same frame of reference as the robot's starting position.
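With at least three landmarks at known coordinates and distance estimates to them from the camera, the robot's position can be recovered by standard trilateration. The sketch below is one such least-squares formulation, under the assumption that range measurements are available; the landmark coordinates and distances are made-up example values, and this is not claimed to be any contestant's actual method.

```python
# Hypothetical sketch: least-squares trilateration from >= 3 known landmarks.
import numpy as np

def localise(landmarks, distances):
    """Estimate (x, y) from landmark coordinates and measured ranges."""
    landmarks = np.asarray(landmarks, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, y0 = landmarks[0]
    # Subtract the first range equation from the rest to get a linear system.
    A = 2.0 * (landmarks[1:] - landmarks[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(landmarks[1:] ** 2, axis=1) - (x0 ** 2 + y0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Example with three landmarks and noisy range estimates (values are made up).
landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
distances = [2.9, 3.2, 2.1]
print("Estimated position:", localise(landmarks, distances))
```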
When the robot had to stop, it was notified through a microphone input to the laptop in the form of a beep or tone. Uncertainty in the robot's position up to a certain error was allowed.
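One simple way such a stop tone could be picked out of the microphone signal is to look for a dominant spectral peak near an agreed frequency. The sketch below assumes a 2 kHz beep and a 16 kHz sample rate, neither of which is specified in the article, and uses a synthesised buffer so it runs without audio hardware.

```python
# Hypothetical sketch: detect an agreed beep frequency in a short audio buffer.
import numpy as np

FS = 16000          # sample rate in Hz (assumed)
BEEP_HZ = 2000      # nominal beep frequency (assumed)

def beep_detected(buffer, fs=FS, target=BEEP_HZ, tol=50, ratio=10.0):
    """True if the spectrum has a dominant peak within `tol` Hz of `target`."""
    spectrum = np.abs(np.fft.rfft(buffer * np.hanning(len(buffer))))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / fs)
    peak = freqs[np.argmax(spectrum)]
    return abs(peak - target) < tol and spectrum.max() > ratio * np.median(spectrum)

# Synthetic test: a 2 kHz tone buried in noise should trigger detection.
t = np.arange(0, 0.25, 1.0 / FS)
buffer = np.sin(2 * np.pi * BEEP_HZ * t) + 0.2 * np.random.randn(t.size)
print("Beep detected:", beep_detected(buffer))
```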
