Graphical Interface

In order to carry out the experiments, we have developed a graphical interface through which a human operator can give orders to the robot. The interface, shown in Figure 6.6, allows the operator to manually control the robot's motion (translational and rotational speeds) and the movements of the pan and tilt unit. The interface includes a three-dimensional representation of the environment, showing the robot and the detected landmarks and obstacles (including those stored in the Visual Memory). It also displays the images gathered from the cameras and a list of the detected landmarks.
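
As a rough illustration of the manual-control part of the interface, the following sketch shows how the operator's speed and pan/tilt settings could be packaged and sent to the robot. All names here (MotionCommand, PanTiltCommand, RobotLink and its methods) are hypothetical and are not taken from the actual system.

```python
# Hedged sketch of the manual-control commands the interface could issue;
# the class and method names are assumptions made for this example.
from dataclasses import dataclass


@dataclass
class MotionCommand:
    translational: float  # forward speed set by the operator (e.g. m/s)
    rotational: float     # turning speed set by the operator (e.g. rad/s)


@dataclass
class PanTiltCommand:
    pan: float   # horizontal camera angle (degrees)
    tilt: float  # vertical camera angle (degrees)


class RobotLink:
    """Hypothetical link between the graphical interface and the robot."""

    def set_speeds(self, cmd: MotionCommand) -> None:
        # Forward the operator's speed settings to the robot.
        print(f"speeds: v={cmd.translational} w={cmd.rotational}")

    def set_pan_tilt(self, cmd: PanTiltCommand) -> None:
        # Move the pan and tilt unit holding the cameras.
        print(f"pan/tilt: pan={cmd.pan} tilt={cmd.tilt}")
```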

Figure 6.6: Graphical control interface

The operator can select the type of landmarks to be recognized. In our case, we could only use the bar-coded landmarks described in the previous section. Once the landmark type has been selected, the Vision system starts processing the images coming from the cameras, and the detected landmarks are displayed in the interface. The operator can then select one of the detected landmarks and set it as the target to be reached. Once the target is selected, the operator can instruct the robot to go to it. From this point on, the robot navigates autonomously towards the target until it either reaches it or is instructed to stop.
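
The following sketch summarizes this operator workflow in code. It is only illustrative: the objects and methods it assumes (vision, navigator, ui and their calls) do not come from the thesis and stand in for the actual Vision system, Navigation system and interface.

```python
# Illustrative workflow: select landmark type, let the Vision system detect
# landmarks, pick a target, then navigate until it is reached or stopped.
# All object and method names are assumptions made for this sketch.
import time


def run_trial(vision, navigator, ui):
    vision.set_landmark_type("bar-coded")       # only type available in our case
    while not ui.target_selected():
        # Show whatever the Vision system has detected so far.
        ui.show_landmarks(vision.detected_landmarks())
        time.sleep(0.1)
    navigator.set_target(ui.selected_landmark())
    navigator.start()                           # autonomous navigation begins
    while not navigator.target_reached() and not ui.stop_requested():
        time.sleep(0.1)
    navigator.stop()
```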

The interface also gives information about the Navigation system, such as the current target, the number of object, beta and topological units stored by the Map Manager, and a graphical representation of the topological map. When the target is reached, the relevant information about the trial is reported: trial duration, total length of the path, distribution of winning bids among the agents, and number of diverting targets computed. This information can also be stored for later statistical analysis.
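
A simple record of this per-trial information could look like the sketch below; the field names mirror the quantities listed above but are otherwise our own choice, as is the CSV-based storage.

```python
# Possible per-trial summary record and a helper to store it for later
# statistical analysis; field and file names are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict
import csv


@dataclass
class TrialStats:
    duration_s: float             # trial duration
    path_length_m: float          # total length of the path
    winning_bids: Dict[str, int]  # winning bids per agent
    diverting_targets: int        # number of diverting targets computed


def append_to_log(stats: TrialStats, path: str = "trials.csv") -> None:
    # Append one row per trial so the results can be analysed afterwards.
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            stats.duration_s,
            stats.path_length_m,
            stats.diverting_targets,
            ";".join(f"{agent}={n}" for agent, n in stats.winning_bids.items()),
        ])
```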

Although the interface has been used only with our robot, we have designed it so that it can work with any robotic system, avoiding the need for a specific control interface for each robot we may have in the lab. The idea is to let the operator configure a specific system by choosing a robot platform (be it wheeled, legged, or any other kind of autonomous robot), the type of landmarks to be used (which may imply running more than one Vision system in parallel), and the Pilot and Navigation systems that will control the robot. Once the robotic system has been configured, it can be controlled as described above.
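
To make the configuration step concrete, the sketch below shows one way such a setup could be expressed. Every name in it (RobotConfig and its fields, and the example values) is hypothetical and only reflects the choices described above: a platform, one or more landmark types (each possibly served by its own Vision system), and the Pilot and Navigation systems.

```python
# Hedged sketch of a robot-system configuration; all names and values
# are assumptions made for this example.
from dataclasses import dataclass
from typing import List


@dataclass
class RobotConfig:
    platform: str              # e.g. "wheeled" or "legged"
    landmark_types: List[str]  # may imply one Vision system per type
    pilot: str                 # Pilot system controlling low-level motion
    navigation: str            # Navigation system driving the robot to targets


# Example configuration for the setup described in this section.
config = RobotConfig(
    platform="wheeled",
    landmark_types=["bar-coded"],
    pilot="default-pilot",
    navigation="bidding-navigation",
)
```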