University of Twente Student Theses


Robust localization and navigation for autonomous, military robots in GNSS-denied environments

Boersma, Y. (2022) Robust localization and navigation for autonomous, military robots in GNSS-denied environments.

PDF (7MB)
Abstract: The use of Robotic and Autonomous Systems (RAS) is becoming increasingly important for the Royal Netherlands Army (RNLA), as it allows for the effective deployment of scarce human resources. Additionally, the correct application of RAS can lead to increased personnel safety and aid in achieving battlespace supremacy. Consequently, the Robotics & Autonomous Systems Cell was founded with the goal of designing and experimenting with autonomous, military robots. Through concept development and experiments, the RAS Cell aims to define a list of requirements for the autonomous platform that can be used to outsource part of the development of RAS projects and products. Localization and navigation are important aspects of autonomous platforms, as robots can only navigate to a specified location when their own location is known. For military use cases, this functionality must be achieved in environments without a usable Global Navigation Satellite System (GNSS) signal, as this signal can be jammed or spoofed by hostiles. Additionally, the robot’s localization and navigation solution must be capable of dealing with multiple deployment environments, changing (weather) conditions and hardware or software failures. These requirements necessitate a robust localization and navigation solution that can be confidently deployed by the RNLA.
Multiple sensors and localization methods have been researched to determine suitable options for the proposed solution. Sensors and localization methods can each be divided into two main categories: active versus passive sensors, and absolute versus relative localization. Each category contains multiple technologies and algorithms, such as stereo vision cameras, 3D Light Detection and Ranging (LiDAR) sensors, Simultaneous Localization and Mapping (SLAM), celestial navigation and magnetic field-based solutions. Additionally, every sensor-localization method combination has its own distinct characteristics.
The aforementioned research indicates that there are multiple sensors and localization methods that are suitable for the autonomous platform. Many of these solutions allow for accurate localization and navigation in unknown environments. However, whether it is due to a lack of precipitation resistance, the use of active sensors, or a lack of obstacle avoidance capabilities, none of these solutions is robust enough as a standalone solution. This is where adaptive sensor fusion can be applied. By integrating multiple sensors and localization methods, the autonomous platform can use the most suitable technologies that are compatible with the current environment and conditions. There are multiple types of adaptive sensor fusion that can be implemented: hard sensor fusion, soft sensor fusion, and predetermined sensor fusion. Hard sensor fusion analyses the data from the active sensors and disables a sensor when its data falls outside a defined range for certain parameters. Soft sensor fusion corrects deviating sensor data instead of disabling the sensor; this provides a more accurate position estimate than hard sensor fusion, at the cost of higher implementation complexity. Finally, predetermined sensor fusion analyses the environment to determine which sensors and localization methods are usable; this is an efficient approach that is easier to implement than soft sensor fusion. Similar to the researched sensors and localization methods, none of the adaptive sensor fusion frameworks is suitable for standalone operation, as each has its own advantages and disadvantages. Hence, the proposed solution must combine multiple variants to fill in the functionality gaps and increase the solution’s robustness. A proposed solution is designed based on the results from the theoretical research and experiments.
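The hard sensor fusion idea described above, disabling a sensor when its data falls outside a defined range for certain parameters, can be sketched as follows. This is a minimal illustration only: the class names, the monitored parameter, and the brightness thresholds are assumptions for the example, not the thesis implementation.

```python
from dataclasses import dataclass

@dataclass
class SensorCheck:
    """One hard-fusion rule: a monitored parameter with its allowed range.
    (Names and thresholds here are illustrative, not from the thesis.)"""
    parameter: str
    low: float
    high: float

@dataclass
class Sensor:
    name: str
    checks: list
    enabled: bool = True

def hard_fusion_update(sensor: Sensor, reading: dict) -> bool:
    """Disable the sensor if any monitored parameter is missing or out of
    its defined range; return whether the sensor is still usable."""
    for check in sensor.checks:
        value = reading.get(check.parameter)
        if value is None or not (check.low <= value <= check.high):
            sensor.enabled = False
            break
    return sensor.enabled

# Hypothetical example: a stereo camera whose mean image brightness
# must stay within a usable range.
camera = Sensor("stereo_camera", [SensorCheck("mean_brightness", 20.0, 235.0)])
hard_fusion_update(camera, {"mean_brightness": 5.0})  # too dark: sensor disabled
```

Once a sensor is disabled this way, the fusion framework would fall back to the remaining usable sensors, which is where the behaviour tree's method-switching logic takes over.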
This solution combines a custom adaptive sensor fusion framework - which combines predetermined sensor fusion with hard sensor fusion - with a behaviour tree for the sensor fusion management and navigation control. The first step of the proposed solution is to select an initial localization method. This is done based on mission parameters defined by a commander, and the deployment environment variables. The autonomous platform then navigates through a list of defined waypoints using this localization method. While navigating, data from the active sensors is continuously analysed to ensure that each active sensor is still usable. If a persistent sensor error is detected, the behaviour tree will pause the navigation process and select a new, usable localization method. This selection is done based on an updated sensor list that takes into account the unusable sensor. After a successful localization method switch, the behaviour tree will continue the navigation process. The proposed solution was tested in multiple simulation runs with a TurtleBot3 as the autonomous platform, equipped with a 2D LiDAR and a stereo camera as the sensor configuration. The simulation results indicate that the adaptive sensor fusion framework is capable of consistently detecting unusable sensor data when it is configured correctly. Furthermore, the behaviour tree successfully pauses the navigation process when a persistent sensor data error is detected. After the localization method switch, the behaviour tree also consistently resumed the waypoint navigation process. As a result, the simulated robot successfully reached the final waypoint in all simulation runs. Based on these results, it can be concluded that the proposed solution provides a solid foundation for an autonomous, military platform.
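The pause/switch/resume flow described above can be sketched as a simple loop. Everything here is an assumption made for illustration: the method names, the sensor-to-method mapping, the priority ordering implied by dictionary order, and the way persistent errors are scripted per waypoint; the thesis implements this logic as a behaviour tree, not as the plain function shown.

```python
# Hypothetical mapping from localization method to the sensors it requires.
METHODS = {
    "lidar_slam": {"2d_lidar"},
    "visual_slam": {"stereo_camera"},
}

def select_method(usable_sensors):
    """Predetermined selection: pick the first method whose required
    sensors are all still usable; None if nothing fits."""
    for method, required in METHODS.items():
        if required <= usable_sensors:
            return method
    return None

def navigate(waypoints, usable_sensors, errors_at):
    """Walk the waypoint list; `errors_at` maps a waypoint index to a
    sensor that develops a persistent error there (scripted for the sketch).
    On an error the loop pauses, reselects a method, and resumes."""
    method = select_method(usable_sensors)
    log = []
    for i, wp in enumerate(waypoints):
        failed = errors_at.get(i)
        if failed and failed in usable_sensors:
            usable_sensors.discard(failed)          # pause: drop the bad sensor
            method = select_method(usable_sensors)  # switch method, then resume
            log.append(f"switched to {method}")
        log.append((method, wp))
    return log
```

For example, losing the LiDAR at the second waypoint would force a switch from LiDAR-based to camera-based localization while the remaining waypoints are still reached, mirroring the simulation behaviour reported above.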
The flexible and modular setup of the behaviour tree allows the design to grow with Project Sentinel as it progresses towards a more capable solution that can be integrated with a range of different autonomous platforms and sensor configurations. There are two main areas that can be explored to improve the proposed solution’s functionality in the short term. First, the behaviour tree’s implementation can be enhanced through code optimizations and the integration of live human interactions. Second, localization methods tailored towards difficult environments or challenging conditions can be researched and implemented for increased robustness, allowing the autonomous platform to be deployed in a wider range of scenarios.
Item Type:Essay (Master)
Faculty:ET: Engineering Technology
Programme:Industrial Design Engineering MSc (66955)
Link to this item:https://purl.utwente.nl/essays/90779

 
