It should be pointed out that the overall system (that is, the Navigation, Pilot and Vision systems) used in the simulations is not exactly the same as the one described in the previous chapter (also described in [13]). Since the beginning of this research, four years ago, the Navigation, Pilot and Vision systems have evolved (agents of the Navigation system have been added, modified and removed, and the capabilities of the Pilot and Vision systems have also changed) until reaching what is, by now, the definitive version, which has just been described. This evolution has been guided by experimentation, both in simulation and with the real robot. The simulation experiments described in this chapter show the performance of a previous version of our system [59,12].
One of the main differences between the simulated system and the definitive one is that the simulated Vision system did not provide information about the distance to the visible landmarks; it provided the Navigation system with angular information only. Moreover, the simulated Vision system had no range limitation, that is, it could identify any landmark, no matter how far away it was, as long as it was within the view field of the camera. Obviously, this does not hold for the real Vision system.
Because of this lack of distance information, the Map Manager
agent had to compute the distance to the landmarks using the change
in angle of each landmark between successive viewframes.
Since the change in angle may be very small for the landmark the
robot is heading towards (i.e. the target), it was very difficult to
accurately compute the distance to the target. The simulated system
therefore included an additional
agent, the Distance Estimator, that helped in computing the
distance to the target. The role of this agent was to move the robot
orthogonally with respect to the line connecting the robot and the
target landmark, while pointing the camera in the direction of the
target, so that the change in angle was maximal, permitting the
Map Manager to compute the distance accurately.
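The geometry behind the orthogonal move can be sketched as a simple triangulation. The snippet below is an illustration, not the thesis's implementation: it assumes the robot performs a sideways move of known length, perpendicular to the robot-landmark line, and measures the resulting change in the landmark's bearing.

```python
import math

def estimate_distance(baseline, delta_theta):
    """Triangulate the distance to a landmark from the change in its
    bearing after a sideways move of length `baseline`, taken
    perpendicular to the robot-landmark line.

    For such a move, d = baseline / tan(delta_theta). A move of the
    same length towards the landmark would produce a much smaller
    bearing change, which is why moving orthogonally makes the
    distance computation far more accurate.
    """
    if abs(delta_theta) < 1e-9:
        raise ValueError("bearing change too small to triangulate")
    return baseline / math.tan(abs(delta_theta))

# A landmark 10 m away, observed after a 1 m sideways move:
# the bearing shifts by atan(1/10), roughly 5.7 degrees.
d = estimate_distance(1.0, math.atan2(1.0, 10.0))
```

The same formula applied to the bearing change of a landmark straight ahead would divide by a near-zero tangent, which is exactly the ill-conditioned case the Distance Estimator avoids.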
The Distance Estimator agent also computed the imprecision associated
with the distance to the target. This imprecision is a function of
the error in distance, with a parameter controlling the shape of the
function; the error in distance, similarly to what the Target Tracker
does, is computed as the size of the interval corresponding to the
70% α-cut of the fuzzy number representing the distance to the target.
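As a concrete illustration of the 70% α-cut, the sketch below assumes the distance is represented by a triangular fuzzy number (the thesis's actual membership function may differ); the α-cut is the interval of distances whose membership degree is at least α, and its width serves as the error measure.

```python
def alpha_cut_width(a, m, b, alpha=0.7):
    """Width of the alpha-cut of a triangular fuzzy number with
    support [a, b] and peak m: the interval of values whose
    membership is at least `alpha`. For a triangle the cut is
    [a + alpha*(m - a), b - alpha*(b - m)], so the width shrinks
    linearly, to zero, as alpha approaches 1."""
    lo = a + alpha * (m - a)
    hi = b - alpha * (b - m)
    return hi - lo

# Distance believed to be "around 8 m, somewhere between 5 and 12 m":
# at alpha = 0.7 the cut width is 0.3 * (12 - 5) = 2.1 m of error.
err = alpha_cut_width(5.0, 8.0, 12.0)
```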
The Distance Estimator agent's bids were a function of this imprecision. If
the imprecision was high, it bid high to move the robot orthogonally,
so that the distance to the target could be computed with a lower error.
On the other hand, if the imprecision was low, so were the bids.
This agent played a very important role at the beginning of the
navigation, since the distance to the target was unknown and,
therefore, the imprecision was maximal. Thus, the Distance Estimator
would bid very high in order to let the Map Manager
get a first estimate of the distance.
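One plausible shape for this behavior is sketched below; the exact imprecision formula and bid function are not reproduced here, so both the saturating exponential and the parameter `k` are assumptions for illustration.

```python
import math

def imprecision(error, k=0.5):
    """A plausible imprecision measure: zero for a perfectly known
    distance, growing with the distance error, and saturating at 1
    for an unknown distance; `k` controls the shape of the curve."""
    return 1.0 - math.exp(-k * error)

def distance_estimator_bid(error, k=0.5, max_bid=1.0):
    """Bid high when the distance estimate is imprecise, low when it
    is precise. With the distance still unknown (very large error)
    the bid approaches `max_bid`, so the agent wins the bidding early
    on and triggers the orthogonal move."""
    return max_bid * imprecision(error, k)
```

Any monotonically increasing, bounded function of the error would produce the qualitative behavior described above: maximal bids at the start of the navigation, vanishing bids once the estimate is accurate.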
This agent was also responsible for deciding whether the robot had reached the
target, since it had the distance information. In the definitive
system, this is the responsibility of the Target Tracker.
Another important difference is that the simulated system did not use Visual Memory; that is, the Navigation system was only informed about the landmarks currently visible within the view field of the camera. This restriction made it difficult to create ``good'' beta-units, since all the visible landmarks lay within a narrow view field and were thus highly collinear.
The Rescuer agent also had some differences: apart from becoming active when the robot was blocked and when the imprecision in the target's location was too high, it also became active when the risk (computed and broadcast by the Risk Manager) was over a threshold. Furthermore, its behavior was always to visually scan the surroundings of the robot and, after that, ask for a diverting target, regardless of the reason for its activation.
There were also differences in the Pilot system. Another partner in the project we are involved in was responsible for building the Pilot system. Therefore, we initially did not focus on this system, nor worry about how it was designed; as long as it was able to avoid the obstacles encountered on its way, its design affected neither our coordination mechanism nor the design of the agents. For this reason, we started using a built-in pilot system of the Webots simulator that used simulated sonar sensors to avoid obstacles. On the real robot, however, such sonar sensors are not available, and, as explained in the previous chapter, the Pilot system we finally implemented is only able to detect obstacles by bumping into them.
A final difference is that the mapping and navigation method used was not the one explained in Chapter 3. Firstly, the criterion used to select topological regions was based only on the collinearity of the region and its size, thus permitting overlapping regions and not ensuring a complete representation of the environment. Secondly, the computed diverting targets were always single landmarks; the computation of edges as diverting targets was introduced after experimenting with the real robot.
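A collinearity criterion of this kind can be illustrated with a simple geometric score; the function below is a hypothetical stand-in, not the thesis's actual measure, and assumes a region defined by three landmark positions in the plane.

```python
def collinearity_score(p1, p2, p3):
    """A simple spread measure for a three-landmark region: twice the
    area of the triangle they form, divided by the squared longest
    side. The score is 0 when the landmarks are collinear and grows
    as they spread out, so a region-selection criterion of the kind
    described would reject regions whose score falls below a
    threshold."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    twice_area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    longest_sq = max((x2 - x1) ** 2 + (y2 - y1) ** 2,
                     (x3 - x2) ** 2 + (y3 - y2) ** 2,
                     (x3 - x1) ** 2 + (y3 - y1) ** 2)
    return twice_area / longest_sq if longest_sq else 0.0
```

Such a score also makes concrete the Visual Memory problem mentioned above: landmarks seen through a narrow view field tend to be nearly collinear and so yield scores close to zero.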
Despite all these differences, the basic elements of our approach have not been drastically modified during the evolution of the system: the bidding coordination mechanism has not been changed at all, and the mapping method has undergone only slight modifications.
© 2003 Dídac Busquets