Joel's Assignment 14

Currently I am fairly uncertain about the exact topic my thesis will be on; however, I do have an idea of the general area it will be in: robotics. I'm still torn between machine learning and vision, but for the purpose of this assignment I will write about an idea I had concerning vision.

The basic approach to short-range vision (at least, as far as I've learned) is to measure the disparity between two eyes and treat sufficiently high values as an "object" to be avoided. We then use A* (or some other search algorithm) to find the shortest route to the goal that avoids that disparity. I propose a different strategy, one I thought up at Alexander Repenning's colloquium last week. He introduced the concept of antiobjects, where the object itself holds a high value and every other square on the grid takes its value from a summation over its neighbors. It seems straightforward to adapt this method to robotics, and I would think the neighborhood computation might also smooth out noise in our disparity values.
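To make the idea concrete, here is a minimal Python sketch of the adaptation I have in mind. It assumes we already have a boolean grid of "obstacle" cells (e.g. from thresholding disparity); the goal holds a fixed high scent value, every free cell repeatedly takes a discounted average of its four neighbors, obstacle cells stay at zero so they block the scent, and the robot simply climbs the gradient. The grid, the decay coefficient, and the iteration count are all my own placeholder choices, not Repenning's exact formulation.

```python
import numpy as np

def diffuse_goal(obstacles, goal, iters=300, decay=0.9):
    """Antiobject-style diffusion: the goal is a fixed source of 'scent',
    free cells take a discounted average of their 4 neighbors, and
    obstacle cells (high disparity) are clamped to zero so they absorb
    the scent rather than pass it along."""
    h, w = obstacles.shape
    scent = np.zeros((h, w))
    for _ in range(iters):
        padded = np.pad(scent, 1)  # zero border acts like a wall
        nbr = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        scent = decay * nbr
        scent[obstacles] = 0.0     # obstacles block the diffusion
        scent[goal] = 1.0          # the goal re-emits every step
    return scent

def greedy_step(scent, pos):
    """Move one cell toward the neighbor with the highest scent."""
    h, w = scent.shape
    r, c = pos
    best = pos
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < h and 0 <= nc < w and scent[nr, nc] > scent[best]:
            best = (nr, nc)
    return best

# Toy demo (made-up numbers): a 5x5 grid with a wall of high-disparity
# cells in column 2, leaving a gap at the bottom row.
obstacles = np.zeros((5, 5), dtype=bool)
obstacles[0:4, 2] = True
scent = diffuse_goal(obstacles, goal=(0, 4))
pos = (0, 0)
for _ in range(30):
    pos = greedy_step(scent, pos)
# The greedy walker climbs the scent gradient around the wall to the goal.
```

Because every free cell's value is a strict discount of its neighbors' average, the field has no local maxima other than the goal, so the simple hill-climbing step cannot get stuck; that is the property that lets the background cells do the planning instead of an explicit A* search.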

I believe that many more opportunities lie at the intersection of robotics and antiobjects, although I'm not yet sure how to address all of them. For instance, instead of planning in Cartesian space, what about planning directly in image space? If our goal is located directly behind an object of high disparity, how do we plan around it? What would we do when our field of vision is blocked entirely; does the antiobject method offer a way around this? There is also the inherent problem that disparity is unreliable at long range; is there some way to adapt the method for farther-reaching effects?