Discussion and Future Work

The results obtained confirm that, as already seen in simulation, the bidding coordination mechanism and the mapping and navigation methods work appropriately. The bidding mechanism achieves the desired effect of combining the simple behaviors of the agents into an overall behavior that executes the most appropriate action at each moment and leads the robot to the target destination. As for the mapping and navigation method, we have seen that it is able to build a map of the environment, and that this map is used for two different purposes: on the one hand, to compute diverting targets when the robot finds the path to the target blocked, and, on the other hand, to compute the location of the target when it is not visible.

Regarding this latter use of the map, Table 6.3 shows the statistics of how the target's location is computed. The sources of this computation are the following: (1) the real Vision system, that is, the target is recognized and its location computed from the images; (2) the Visual Memory (described in Chapter 4); and (3) the Map Manager, that is, the location of the target is computed using the beta-coefficient system and the locations of other landmarks. As the statistics show, most of the time (76.1%) the location is computed using the Visual Memory; however, sometimes (11.2%) the Navigation system must make use of its ``orientation sense'' in order to figure out where the target is. Figure 6.14 shows the evolution of the imprecision of the target's location and the different sources used (the colored band at the bottom of the graphic). Although the robot usually realizes that it has reached the target by obtaining its location from the Visual Memory, it sometimes realizes it through the orientation sense.
However, since the computation of the target's location through the orientation sense is more imprecise than that of the Visual Memory (because it accumulates the imprecision of several landmarks' locations), the robot sometimes reports having reached the target when it has not actually done so, thus failing in its mission.


Table 6.3: Sources of computation of the target's location
\begin{tabular}{lr}
Vision System & 12.7\% \\
Visual Memory & 76.1\% \\
Map Manager   & 11.2\% \\
\end{tabular}

Figure 6.14: Evolution of the target's location imprecision and sources of computation
\includegraphics[width=12cm]{figures/target-impr-colors}
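The fallback among the three sources described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name `locate_target` is invented here, and each source is modelled simply as a dictionary mapping target identifiers to (position, imprecision) estimates.

```python
# Hypothetical sketch of the choice among the three location sources.
# Assumption: each source is a dict of target_id -> (position, imprecision).

def locate_target(vision, visual_memory, map_manager, target_id):
    """Return (position, imprecision, source), preferring the most
    precise source available, as in the statistics of Table 6.3."""
    # 1. Real Vision system: the target is recognized directly in the
    #    camera images, giving the most reliable estimate.
    if target_id in vision:
        pos, impr = vision[target_id]
        return pos, impr, "vision"
    # 2. Visual Memory: recently seen landmarks and their estimated
    #    locations (see Chapter 4).
    if target_id in visual_memory:
        pos, impr = visual_memory[target_id]
        return pos, impr, "visual_memory"
    # 3. Map Manager ("orientation sense"): the target's location is
    #    derived from other landmarks via the beta-coefficient system,
    #    so the imprecisions of those landmarks accumulate and the
    #    estimate is the least reliable.
    pos, impr = map_manager[target_id]
    return pos, impr, "map_manager"

# Example: the target "goal" is neither in view nor in memory, so the
# Map Manager's estimate is used.
vision = {}
visual_memory = {"door": ((2.0, 1.0), 0.3)}
map_manager = {"goal": ((5.0, 4.0), 1.2), "door": ((2.1, 1.1), 0.9)}
print(locate_target(vision, visual_memory, map_manager, "goal"))
# -> ((5.0, 4.0), 1.2, 'map_manager')
print(locate_target(vision, visual_memory, map_manager, "door"))
# -> ((2.0, 1.0), 0.3, 'visual_memory')
```

Note how the ordering itself encodes the reliability ranking: the Map Manager is consulted only when both more precise sources fail, which is exactly the situation in which the robot may wrongly report having reached the target.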

The scenarios used in the real experiments were not very complex. Therefore, further experimentation on more complex scenarios should be performed. These new scenarios should include more blocking obstacles, possibly with some cul-de-sacs, so that the robot would need to backtrack along the path already traversed.

Although the good results obtained indicate that the agents are well designed, we could still improve them and, hopefully, improve the performance of the overall robotic system. In fact, during the experimentation with the real robot we already made some refinements. However, such refinement can be a never-ending task, and for this reason we decided to stop and run the real experiments with the version of the agents described in Chapter 4. Further refinement of some of the agents could go in the following directions:

Some improvements could also be made to the Pilot and Vision systems. Regarding the Pilot, we could use a better obstacle avoidance algorithm. With the current algorithm, only the closest obstacle is considered when computing the avoidance path. We could improve the robot's performance if the Pilot took into account all the obstacles and landmarks stored in the Visual Memory, thus producing better avoidance paths. We are also planning to equip the robot with a laser scanner. This laser would continuously scan a 180-degree area in front of the robot, accurately detecting obstacles several meters away. With this new sensor, the Pilot could avoid obstacles before bumping into them, thus generating better paths.

Regarding the Vision system, we plan several improvements. The first is to finish the stereo algorithm, so that the two available cameras can be used to compute the distance to the landmarks. Another very important improvement is to make the Vision system robust enough that it does not need to check the recognized landmarks against the Visual Memory; in fact, such a robust Vision system should be used to correct the imprecisions of the Visual Memory. We also plan to convert the Vision system into a Multiagent Vision system, in which several agents would process the camera images with different algorithms and agree on what constitutes a good landmark candidate (salient enough, robust, static, etc.). A final improvement would be to let the Vision system bid for services from other systems (either the Pilot system or itself). With this bidding capability, it could request the Pilot to approach a landmark in order to recognize it better, or even ``request itself'' to slightly move the camera so that a partially visible landmark fully enters the field of view.
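The proposed improvement of considering all obstacles stored in the Visual Memory, rather than only the closest one, could be sketched roughly as below. This is an illustrative assumption, not the Pilot's actual algorithm: the names `free_distance` and `best_heading`, the candidate-heading discretization, and the safety radius are all hypothetical.

```python
import math

SAFETY = 0.5  # assumed safety radius around each obstacle (meters)

def free_distance(heading, obstacles, max_range=5.0):
    """How far the robot (at the origin) can move along `heading`
    (radians) before passing within SAFETY of ANY stored obstacle,
    capped at the sensing range."""
    limit = max_range
    ux, uy = math.cos(heading), math.sin(heading)
    for ox, oy in obstacles:
        along = ox * ux + oy * uy            # projection on the heading
        if along <= 0:
            continue                          # obstacle is behind us
        lateral = abs(-ox * uy + oy * ux)     # perpendicular offset
        if lateral < SAFETY:
            # The ray enters this obstacle's safety disc.
            limit = min(limit, along - math.sqrt(SAFETY**2 - lateral**2))
    return max(limit, 0.0)

def best_heading(goal_heading, obstacles, candidates=36):
    """Among discretized candidate headings, pick the one with the
    largest free distance, breaking ties toward the goal direction.
    Because every obstacle contributes to the score, the chosen path
    avoids all of them at once, not just the closest."""
    headings = [2 * math.pi * i / candidates for i in range(candidates)]
    def angular_dist(h):
        return abs(math.atan2(math.sin(h - goal_heading),
                              math.cos(h - goal_heading)))
    return max(headings,
               key=lambda h: (free_distance(h, obstacles), -angular_dist(h)))

# Example: the goal lies straight ahead but an obstacle blocks the way,
# so the chosen heading deviates from the goal direction.
h = best_heading(0.0, [(1.0, 0.0)])
```

A laser scanner covering a 180-degree frontal area, as mentioned above, would simply feed this kind of computation with more, and more accurate, obstacle positions, letting the Pilot commit to a clear heading before getting close to any obstacle.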

© 2003 Dídac Busquets