
Autonomous landing system

  • Semillero de Investigación en Robótica UAO
  • Apr 16, 2019
  • 3 min read


This research aims to detect and track a landing platform so that a UAV (Unmanned Aerial Vehicle) can land on it. The system is part of a cooperation scheme between an unmanned aerial vehicle and a ground mobile robot.


Unmanned aerial vehicles are devices that can operate autonomously or be controlled remotely, either with a remote control or through an application. These devices, also called drones, are commonly used for aerial surveillance of large plots of land or flown as a hobby.


One of the most common drawbacks of drones is their short battery life, caused by the power drawn by onboard peripherals and the high cost of improving energy efficiency. To mitigate this problem, longer-lasting batteries have been developed and alternative energy-generation systems have even been implemented. Cooperation between robots as a way to increase the energy efficiency of drones remains a little-explored research area.


A cooperation system between a drone and a mobile robot consists of one subsystem responsible for teleoperating the ground robot and another responsible for the autonomous landing of the drone. Such a system lets the aerial vehicle take advantage of the energy autonomy of the ground vehicle and recharge its battery through inductive or other methods, increasing its energy efficiency and flight autonomy.


However, for the entire system to work properly, a robust module for detecting and tracking the landing platform is required: one that lets the system know the location of the platform at all times, even when it is not detected in the current frame. This point is emphasized because the platform-location module is responsible for sending the reference data to the control system that performs the landing of the aerial vehicle.


Phases of the project:



Results:


In the first phase of the detection and tracking module, different computer vision methods for image detection and template matching were investigated to identify the landing platform. Among the algorithms studied, including the feature extractors and descriptors SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test) and ORB (Oriented FAST and Rotated BRIEF), SURF was the most suitable for the detection task: methods such as SIFT and FAST fall short when detecting from heights of more than 2 meters, while SURF is robust to changes in the appearance of the object of interest, such as rotation, blur, lighting and scale.


Landing platform marking

For the implementation of detection and tracking, a Logitech C920 camera and a laptop running Ubuntu 16.04 with ROS Kinetic were used. The SURF extractor and feature descriptor was run through the ROS package find_object_2d, which detects an object based on a previously defined template. In addition, an algorithm was written in C++ that computes the position of the corners of the object and its centroid from the homography matrix and the template dimensions (in pixels).


Once the system was implemented, the detection-phase tests were carried out by filming the landing platform from a height of approximately 4 meters. To validate the footage, the camera was subjected to several movements (rotations, displacements and changes in height), so that relative errors and their averages could later be computed between the corner positions returned by the detector and their real values in pixels.


The errors obtained were high, largely because of frames in which the landing platform was not detected, so phase two of the work began. Different object-tracking methods were investigated, and it was found that the Kalman filter can estimate the dynamic behavior of an object even when its state is not completely observable. To implement the Kalman filter, the motion of the landing platform was assumed to follow linear dynamics. This implementation was also written in C++, and it produced better results, with smaller errors than those of the first implementation.



Some videos and photos of the project:


Vehicle equipped with the landing mark

Vehicle and drone MATRIX-600

Video of system operation


