

Talks and Poster Presentations (with Proceedings-Entry):

M. Vincze, D. Legenstein, W. Frühwirt, W. Ponweiser:
"Measuring Local Illumination to Improve Object Finding Tasks in Realistic Environments";
Talk: 1st International Workshop on Advances in Service Robotics, Bardolino, Italy; 2003-03-13 - 2003-03-15; in: "Proceedings", (2003), ISBN: 3-8167-6268-9; 7 - 14.



English abstract:
A common task of a service robot in a home environment is to go and fetch an object. Colour
information is one of the most relevant cues for this task: humans naturally use colour when
describing a target object, for example with a phrase such as "bring me the red mug". However,
the colour perceived with cameras depends strongly on the scene illumination, the environment,
and the specific sensor. Human eyesight exhibits good colour constancy over a wide range of
these conditions, and a similar ability is required if computer vision systems are to recognize
objects in uncontrolled or partially controlled environments. We propose a pragmatic yet simple
and efficient approach that measures the local illumination, uses a parameter-free cue
integration method to find different reddish objects, and evaluates the adaptation to the
ambient conditions in situations ranging from a single light bulb over a table to outside
sunlight with and without snow on the ground.
A recent comparison of colour constancy algorithms (Barnard, IEEE Trans. Image Processing
11(9), 2002) demonstrates that the average chromaticity error can be reduced from 12 percent
(without correction) to 4 percent; no maximum error is reported. Achieving this, however,
requires an elaborate scheme of calibrating the camera’s spectral response and simulating its
responses to different lighting situations, and the result is sensitive to the image
segmentation method. Practical tests in indoor robotic settings indicate that lighting
variations introduce a much larger error and that errors of up to 40 percent can remain after
correction.
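
For orientation, chromaticity is typically obtained by normalising RGB for intensity,
r = R/(R+G+B) and g = G/(R+G+B). The exact error metric behind the figures above is not given
here, so the following minimal Python sketch (Euclidean distance in (r, g) space, expressed in
percent, with illustrative pixel values) is only one plausible reading:

import numpy as np

def chromaticity(rgb):
    # Project an RGB triple onto (r, g), discarding overall intensity.
    rgb = np.asarray(rgb, dtype=float)
    return rgb[:2] / rgb.sum()

def chromaticity_error_percent(measured_rgb, true_rgb):
    # Euclidean distance between the two (r, g) chromaticities, in percent.
    return 100.0 * float(np.linalg.norm(
        chromaticity(measured_rgb) - chromaticity(true_rgb)))

# A bluish colour cast shifts the perceived chromaticity of a red patch by
# roughly the uncorrected 12 percent quoted above (values illustrative):
print(chromaticity_error_percent([180, 60, 90], [200, 50, 50]))  # ~12.2
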
To find an object based on colour cues in a service robotics scenario, the following approach
is proposed. A reference colour template is mounted in the view of the fixed or active robot’s
camera, e.g., parallel to the lower image edge. Whenever an object is trained, the reference
colours on the template are used to transform the perceived colour space into a reference
colour space. In a new situation with different illumination, the ambient lighting is measured
with the template and the target’s colour is transformed into the new space. Although a
diagonal colour mapping is theoretically sufficient, it can be shown that actual camera
characteristics and the inhomogeneous light distribution are better compensated by using the
full matrix. With this colour transformation matrix only the target’s colour is adapted,
making the approach extremely efficient.
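
As an illustration of how such a full-matrix adaptation could be computed, the numpy sketch
below fits the matrix to the template patches by least squares and contrasts it with a purely
diagonal model; the patch values and variable names are illustrative assumptions, not taken
from the paper:

import numpy as np

# Hypothetical RGB values of the template patches as recorded at training
# time (reference space) and as observed under the current illumination.
template_ref = np.array([[200.,  40.,  40.],   # red patch
                         [ 40., 200.,  40.],   # green patch
                         [ 40.,  40., 200.],   # blue patch
                         [180., 180., 180.]])  # grey patch
template_now = np.array([[160.,  50.,  70.],
                         [ 35., 170.,  60.],
                         [ 30.,  45., 230.],
                         [140., 160., 210.]])

# Full 3x3 mapping from the reference space into the current space,
# fitted over all patches at once: template_ref @ M ~ template_now.
M, *_ = np.linalg.lstsq(template_ref, template_now, rcond=None)

# A diagonal (von Kries-style) model only rescales each channel; here it
# is estimated crudely as the mean per-channel ratio over the patches.
d = (template_now / template_ref).mean(axis=0)

# Only the trained target colour is adapted, not the whole image, which
# is what keeps the run-time cost low.
target_ref = np.array([190., 45., 55.])   # target colour stored at training
target_now_full = target_ref @ M          # full-matrix prediction
target_now_diag = target_ref * d          # diagonal prediction

The diagonal model can only rescale each channel independently, while the full matrix also
absorbs cross-talk between channels, which is how it can adjust better to the actual camera
characteristics and uneven lighting mentioned above.
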
In the experiments, the coloured object is found using a multi-spectral classification method
that integrates the colour (and texture) cues. The original method (Kraus, Photogrammetry,
1990) has been improved with a Gaussian probabilistic weighting, which removes the need to set
any parameters.
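
The weighting scheme is not spelled out here, so the following Python sketch is only one
plausible reading of it: each cue responds with a per-pixel Gaussian likelihood around the
adapted target value, with the spread estimated from training pixels rather than set by hand,
and cues are fused multiplicatively. All names, the covariance choice, and the stand-in image
are assumptions:

import numpy as np

def gaussian_likelihood(values, mean, cov):
    # Per-pixel Gaussian likelihood of an (H, W, D) array under N(mean, cov).
    diff = values - mean
    inv = np.linalg.inv(cov)
    mahal = np.einsum('hwi,ij,hwj->hw', diff, inv, diff)
    norm = np.sqrt((2 * np.pi) ** mean.size * np.linalg.det(cov))
    return np.exp(-0.5 * mahal) / norm

def integrate_cues(cue_maps):
    # Fuse per-cue likelihood maps multiplicatively (independence assumption),
    # so no hand-set thresholds or cue weights are needed.
    fused = np.ones_like(cue_maps[0])
    for m in cue_maps:
        fused *= m
    return fused / fused.max()

image = np.random.rand(120, 160, 3) * 255      # stand-in camera image
target_mean = np.array([175., 55., 65.])       # adapted target colour
target_cov = np.diag([15., 10., 10.]) ** 2     # spread from training pixels
colour_map = gaussian_likelihood(image, target_mean, target_cov)
score = integrate_cues([colour_map])           # a texture map could be added
i, j = np.unravel_index(np.argmax(score), score.shape)
print("most likely target pixel:", i, j)
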
A series of tests has been carried out to distinguish different colours and different shades
of red. The illuminations tested range from a single light bulb over a table to images taken
against a window, with and without snow or sun. The approach has proven robust when evaluating
the sensitivity of the colour transformation to changes in the distance and angle to the
target. The maximum error, found when viewing against the illumination, is 32 percent; the
average case is about 8 percent, with a standard deviation of under 7 percent in the worst
case and 3 percent on average.


Online library catalogue of the TU Vienna:
http://aleph.ub.tuwien.ac.at/F?base=tuw01&func=find-c&ccl_term=AC04404085


Created from the Publication Database of the Vienna University of Technology.