Vision-based autonomous navigation for aerial robots (UAVs/drones)
Unmanned aerial vehicles (UAVs or drones) can be used for various applications such as remote sensing, cargo transportation, and search and rescue, among others. These problems can be addressed with a variety of sensors. In outdoor environments, the vehicle's position can be determined using GPS; indoors, however, it is necessary to resort to localization and mapping techniques based on sensors such as cameras, laser rangefinders, etc. This line of research focuses on the study of techniques for navigation, control, localization and map building based primarily on the use of cameras, combined with other low-level sensors (gyroscopes, accelerometers, etc.).
Stereo Visual SLAM for large-scale environments
To solve the problem of autonomous navigation it is necessary to estimate the robot's position with respect to the environment. When an a priori map of the environment is not available, the problems of localization and mapping need to be solved simultaneously. This problem is known in the literature as SLAM (Simultaneous Localization and Mapping). Among the approaches presently studied, the use of cameras as the primary sensor is widespread. When a robot traverses large-scale environments, efficient methods that can operate in real time are required. The aim of this line of research is to develop a stereo-vision SLAM method that divides the tasks of localization and mapping into separate threads, exploiting the multi-core architectures available today.
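The thread split can be sketched as follows. This is only an illustrative skeleton in the style of keyframe-based systems (a tracking front-end at frame rate, a mapping back-end consuming keyframes); the names (run_slam, mapping_worker) and the placeholder pose/landmark values are assumptions, not part of the actual system:

```python
import queue
import threading

def run_slam(frames, keyframe_stride=3):
    """Toy tracking/mapping split: track every frame, map only keyframes."""
    keyframes = queue.Queue()
    map_points = []

    def mapping_worker():
        # Mapping thread: consumes keyframes in the background and
        # (trivially, here) adds one landmark per keyframe to the map.
        while True:
            kf = keyframes.get()
            if kf is None:              # sentinel: shut down the thread
                break
            map_points.append(("landmark_from_frame", kf))

    mapper = threading.Thread(target=mapping_worker)
    mapper.start()

    poses = []
    for i, frame in enumerate(frames):
        # Tracking (front-end): runs at frame rate, estimating a pose
        # for every incoming frame against the current map.
        poses.append(("pose_for_frame", frame))
        if i % keyframe_stride == 0:
            keyframes.put(frame)        # promote this frame to a keyframe

    keyframes.put(None)
    mapper.join()
    return poses, map_points

poses, landmarks = run_slam(list(range(10)))
print(len(poses), len(landmarks))  # 10 frames tracked, 4 keyframes mapped
```

The point of the split is that expensive map refinement never blocks the per-frame tracking loop, which is what allows real-time operation on multi-core hardware.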
Appearance-based navigation using topological maps
As an alternative to SLAM, the navigation problem can be posed in different ways. Most techniques are based on a metric SLAM approach, which seeks to estimate the position of the robot and build a map of the environment that are both precise and globally consistent. In certain cases, however, navigation can be solved using a non-metric, appearance-based approach. This type of approach represents the environment through visual features that describe its appearance from the viewpoint of the robot during its motion. It is thus possible to navigate autonomously by localizing the robot qualitatively over a topological map, which is not required to be globally consistent. The method currently under development has been successfully tested with ground and aerial robots in indoor and outdoor environments.
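Qualitative localization over a topological map can be sketched minimally as a nearest-appearance lookup. The node names, the tiny descriptor vectors and the L1 distance below are illustrative assumptions; a real system would compare image descriptors recorded during a teaching run:

```python
def localize(topo_map, query_descriptor):
    """Return the id of the map node whose stored appearance best matches the query."""
    def distance(a, b):
        # L1 distance between appearance descriptors (a stand-in for a
        # proper image-descriptor similarity measure).
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(topo_map,
               key=lambda node: distance(node["descriptor"], query_descriptor))["id"]

# Topological map from a teaching run: nodes along the route, each with
# the appearance observed there. No global metric consistency is needed.
topo_map = [
    {"id": "corridor", "descriptor": [0.9, 0.1, 0.2]},
    {"id": "doorway",  "descriptor": [0.2, 0.8, 0.1]},
    {"id": "lab",      "descriptor": [0.1, 0.2, 0.9]},
]

print(localize(topo_map, [0.25, 0.75, 0.15]))  # → doorway
```

Note that the answer is a place ("which node am I at?"), not a metric pose, which is exactly the qualitative localization the approach relies on.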
Vision-based, efficient and precise external localization system
When studying and proposing techniques for autonomous localization, it is usually necessary to perform experiments that measure the accuracy of the corresponding estimates. Various external localization systems exist that provide the pose of one or more robots inside a work area with great precision. However, these systems are usually very expensive and difficult to use. As an alternative, a localization method was developed based on conventional cameras that detect and track planar patterns (placed on robots or other items of interest). The system is highly efficient (it can process images at rates of thousands of frames per second) and very accurate (with errors below 1%). It also allows patterns to be located in 3D space using a single camera. With multiple cameras, the work area can be extended or the accuracy improved by integrating the individual results obtained by each camera.
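One common way to integrate per-camera results is inverse-variance weighting, where more confident cameras contribute more to the fused estimate. The source does not specify the fusion rule used, so the sketch below is an assumption; positions are reduced to 2D for brevity:

```python
def fuse(estimates):
    """Fuse per-camera estimates: list of ((x, y), variance) tuples."""
    # Weight each camera's estimate by the inverse of its variance.
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(estimates, weights)) / total
    y = sum(w * p[1] for (p, _), w in zip(estimates, weights)) / total
    fused_var = 1.0 / total   # fused variance is smaller than any single one
    return (x, y), fused_var

# Two cameras observing the same pattern with equal confidence:
pos, var = fuse([((1.0, 2.0), 0.04), ((1.2, 2.2), 0.04)])
print(pos, var)  # midpoint of the two estimates, with halved variance
```

This captures both benefits mentioned above: overlapping cameras tighten the estimate (smaller fused variance), while non-overlapping cameras simply extend coverage, each reporting alone in its own region.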
Stereo-vision navigation for arthropod robots
Robots using wheel-based locomotion have limitations when navigating unstructured environments, such as the collapsed-building scenarios found in search and rescue applications. For this reason, platforms that employ jointed-leg locomotion (arthropod robots) become attractive. On the other hand, these platforms present new challenges for solving the problems of navigation, localization and map building. This research line proposes the study and application of vision-based techniques and their fusion with other types of sensors, such as gyroscopes, accelerometers, etc. Path-planning techniques and gaits will also be developed for the purpose of overcoming obstacles and traversing rough terrain efficiently.
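As a minimal illustration of what a gait schedule looks like, the sketch below alternates the two leg tripods of a hexapod, a standard arthropod gait; the leg labels and the specific gait are assumptions, not the gaits this line will actually develop:

```python
# Tripod gait for a six-legged robot: legs are split into two tripods
# that alternate between swing (in the air) and stance (on the ground),
# so three feet always support the body.
TRIPOD_A = ["L1", "R2", "L3"]   # left-front, right-middle, left-rear
TRIPOD_B = ["R1", "L2", "R3"]   # right-front, left-middle, right-rear

def gait_phase(step):
    """Return (swing_legs, stance_legs) for a given gait step."""
    if step % 2 == 0:
        return TRIPOD_A, TRIPOD_B
    return TRIPOD_B, TRIPOD_A

for step in range(4):
    swing, stance = gait_phase(step)
    print(f"step {step}: swing {swing}, stance {stance}")
```

On rough terrain, richer schedules (wave or ripple gaits, or free gaits chosen per-step from terrain perception) trade speed for stability, which is where the planning work of this line comes in.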
Simultaneous planning and mapping for mobile manipulators
Robotic manipulators are currently used in numerous human activities (industry, medicine, precision agriculture, handling of waste or hazardous materials, etc.). In most applications, manipulators are fixed to their workspace. At present, however, new alternatives are being explored that involve robotic manipulators mounted on mobile platforms, called “mobile manipulators”. In these cases it is necessary to solve the path-planning problem for the platform and the manipulator together. In addition, the workspace of a mobile manipulator is dynamic, so appropriate planning techniques must be employed. The aim of this research line is to use a vision-based approach that builds a map and performs path planning simultaneously (SPAM: Simultaneous Planning and Mapping).
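The interplay between mapping and planning can be illustrated with a toy grid example: the robot plans with the obstacles known so far, discovers new ones as it moves, and replans when its path is invalidated. This stand-in uses breadth-first search on a small grid and is not the SPAM method itself; the grid, obstacle set and function names are assumptions for illustration:

```python
from collections import deque

def bfs(start, goal, blocked, size=5):
    """Shortest grid path avoiding the currently known blocked cells."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None

def navigate(start, goal, hidden_obstacles):
    """Interleave mapping and planning: replan whenever a new obstacle appears."""
    known, pos, trace = set(), start, [start]
    while pos != goal:
        path = bfs(pos, goal, known)
        nxt = path[1]
        if nxt in hidden_obstacles:     # sensing reveals an obstacle:
            known.add(nxt)              # add it to the map and replan
            continue
        pos = nxt
        trace.append(pos)
    return trace

route = navigate((0, 0), (2, 0), {(1, 0)})
print(route)  # detours around the obstacle discovered at (1, 0)
```

The same interleaving is what a mobile manipulator needs in a dynamic workspace, except that the "grid" becomes the combined configuration space of platform and arm, and the map comes from vision.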