Correcting Far Field Visual Differences in Robotic Navigation

Problems arise in vision-based robotic navigation when terrain and obstacles look different at a distance because of changes in the incident angle of light. This makes it difficult to classify paths and obstacles in the far field using samples learned at near distance.

To address this problem under simple outdoor lighting conditions (no artificial light), I propose applying known lighting models (based on simple variables of material diffusiveness and reflectivity) to compute how lighting changes with distance. From visual images collected during live navigation, the incident angle and the directed and ambient intensities of the light will be periodically recomputed. These are used, in conjunction with off-line lighting models of different materials, to predict the far-field view of the terrain. The stored material models are corrected using image samples of materials taken at known incident angle and distance, based on available stereo disparity data.
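
To make the correction concrete, here is a minimal sketch in Python/NumPy of how the lighting terms could be fit and a near-field patch re-lit for a predicted far-field incident angle. The simple Lambert-plus-ambient model and all function names are illustrative assumptions, not the actual pipeline; incident-angle cosines are presumed to come from stereo-derived surface normals and the known sun direction.

```python
import numpy as np

def estimate_lighting(intensity, cos_theta):
    """Fit ambient and directed light intensity from near-field samples
    of a single known material.

    intensity : observed pixel intensities (1-D array)
    cos_theta : cosine of the light incident angle for each sample (1-D array)

    Assumes a simple Lambertian model,
        I = k_d * (I_ambient + I_directed * max(cos_theta, 0)),
    and solves for the two lighting terms by least squares; the material's
    diffuse reflectivity k_d is folded into both fitted terms.
    """
    A = np.column_stack([np.ones_like(cos_theta), np.maximum(cos_theta, 0.0)])
    ambient, directed = np.linalg.lstsq(A, intensity, rcond=None)[0]
    return ambient, directed

def predict_far_field(near_patch, cos_theta_near, cos_theta_far,
                      ambient, directed):
    """Re-light a near-field image patch to its expected far-field appearance.

    Taking the ratio of the two lighting terms cancels the material
    reflectivity, so only the change in incident angle is needed.
    """
    gain = ((ambient + directed * max(cos_theta_far, 0.0)) /
            (ambient + directed * max(cos_theta_near, 0.0)))
    return np.clip(near_patch * gain, 0.0, 255.0)
```

The relit patch can then be compared against far-field image cells with whatever classifier is already trained on near-field samples, rather than retraining for each lighting condition.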

Experiments will compare the accuracy of far-field prediction on a number of typical terrain surfaces (including asphalt, grass, and mulch). Several sampling/correction intervals will be tested on the LAGR robot platform to determine the best trade-off between speed and temporal accuracy of the prediction; a sketch of such a sweep follows.
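
Below is a minimal sketch of how the interval sweep could be harnessed. The frame attributes (`near_patch`, `far_patch`) and the injected `estimate`, `relight`, and `compare` hooks are hypothetical placeholders for whatever the actual evaluation uses; the point is only to show the runtime-versus-error trade-off being measured per interval.

```python
import time

def sweep_intervals(frames, intervals, estimate, relight, compare):
    """For each lighting-recomputation interval, measure mean prediction
    error and mean per-frame runtime to expose the speed/accuracy trade-off.

    frames    : sequence of objects with .near_patch and .far_patch
    intervals : recomputation periods to test, in frames (e.g. [1, 5, 20])
    estimate  : frame -> lighting parameters
    relight   : (near_patch, lighting) -> predicted far-field patch
    compare   : (predicted, actual) -> scalar error
    """
    results = {}
    for interval in intervals:
        errors, lighting = [], None
        start = time.perf_counter()
        for i, frame in enumerate(frames):
            if i % interval == 0:          # periodically re-estimate lighting
                lighting = estimate(frame)
            predicted = relight(frame.near_patch, lighting)
            errors.append(compare(predicted, frame.far_patch))
        elapsed = time.perf_counter() - start
        results[interval] = (sum(errors) / len(errors),   # mean error
                             elapsed / len(frames))        # mean runtime
    return results
```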
