NOTE: this is a work in progress
Introduction
This project is an individual effort that was started in the summer of 2021 and is still under active development. It was built in response to the difficulty of switching between simulation engines and their common APIs (such as MAVLink and simulator-specific interfaces). Another problem I noticed was that full flight controller simulations (SITL) with multiple vehicles were too computationally expensive for typical machines. To address this, the project uses interfaces that skip the flight controller internals, which allows faster simulation of larger vehicle teams.
This project's foundation is robot_ws. That repository is a typical ROS2 workspace containing the submodules that implement the algorithms and control infrastructure. Here's a list of all the submodules and a brief description of what each contains:
Submodule | Brief |
---|---|
robot-control | Common API for robot simulation across many simulators and interfaces (a sketch follows the table). |
ros2_utils | Common ROS2 classes and functions to easily manage launch files, logging, and ROS message conversions. |
robot-command | Algorithms focused on commanding positions or velocities to achieve a goal. Supports following and circling TFs. |
robot_behavior_tree | C++ package implementing behavior tree action nodes. Abstracts decision making logic to tree structure with Groot interface. |
robot_bringup | Package to tie together all other packages with a single launch file. |
cv-ros | Implementations of common CV algorithms. Currently contains algorithms for projecting detections from darknet to 3D space as a TF. |
airsim_utils | Python library for generating AirSim settings with multiple vehicles and starting AirSim from functions. |
robot_gazebo | Classic Gazebo models and worlds with representations of vehicles with various sensor arrays. |
robot_ignition | Ignition Gazebo models and worlds with representations of vehicles with various sensor arrays and control schemes. |
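To give a feel for what robot-control abstracts, here is a minimal sketch of the idea in Python. The names (`VehicleInterface`, `set_velocity`, the backend class) are hypothetical, not the repository's actual API: one simulator-agnostic interface, with a backend per simulator.

```python
from abc import ABC, abstractmethod


class VehicleInterface(ABC):
    """Hypothetical simulator-agnostic control surface for one vehicle."""

    @abstractmethod
    def takeoff(self, altitude_m: float) -> None:
        """Climb to the given altitude."""

    @abstractmethod
    def set_velocity(self, vx: float, vy: float, vz: float,
                     yaw_rate: float) -> None:
        """Command a velocity setpoint."""


class LogOnlyVehicle(VehicleInterface):
    """Stand-in backend; a real one would talk to AirSim, Gazebo, etc."""

    def takeoff(self, altitude_m: float) -> None:
        print(f"takeoff to {altitude_m} m")

    def set_velocity(self, vx, vy, vz, yaw_rate) -> None:
        print(f"velocity setpoint: ({vx}, {vy}, {vz}), yaw rate {yaw_rate}")
```

Commanding code written against the shared interface can then run unchanged whichever simulator backs it, which is what makes switching engines cheap.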
Demos
Below are some videos demonstrating the current capabilities of this software suite.
This video demonstrates a drone with a RealSense camera using Darknet's YOLOv4 to generate a bounding box around the object of interest (a fire hydrant). Using the bounding box and a depth image, the fire hydrant's position is projected into the 3D world, and the drone is commanded to circle it at 18 deg/s for 20 seconds.
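The projection step reduces to the pinhole camera model. Below is a minimal sketch of it, assuming a metric depth image aligned with the color image and intrinsics (`fx`, `fy`, `cx`, `cy`) taken from the camera's calibration; `project_detection` is a hypothetical name, not the actual function in cv-ros.

```python
import numpy as np


def project_detection(bbox, depth_image, fx, fy, cx, cy):
    """Back-project a bounding-box center into the camera's optical frame.

    bbox: (xmin, ymin, xmax, ymax) in pixels.
    depth_image: HxW array of metric depths aligned with the color image.
    """
    u = int((bbox[0] + bbox[2]) / 2)      # box center, pixel coordinates
    v = int((bbox[1] + bbox[3]) / 2)
    d = float(depth_image[v, u])          # depth at the center, in meters
    x = (u - cx) * d / fx                 # pinhole back-projection
    y = (v - cy) * d / fy
    return np.array([x, y, d])            # broadcast as a TF from here
```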
Here the path-following capabilities of a single drone are demonstrated inside Ignition Gazebo; a square path is sent to the drone.
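Expressed as a standard nav_msgs Path, the square is just five stamped corner poses (the first repeated to close the loop). A sketch, assuming the path lives in a `map` frame at a fixed altitude:

```python
from geometry_msgs.msg import PoseStamped
from nav_msgs.msg import Path


def square_path(side_m: float, altitude_m: float, frame: str = "map") -> Path:
    """Build a closed square path at a fixed altitude."""
    path = Path()
    path.header.frame_id = frame
    corners = [(0.0, 0.0), (side_m, 0.0), (side_m, side_m),
               (0.0, side_m), (0.0, 0.0)]  # repeat start to close the loop
    for x, y in corners:
        pose = PoseStamped()
        pose.header.frame_id = frame
        pose.pose.position.x = float(x)
        pose.pose.position.y = float(y)
        pose.pose.position.z = float(altitude_m)
        path.poses.append(pose)
    return path
```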
In the following video, multiple drones are spawned and one drone is commanded to follow another at a distance of 2 meters behind it. The drone being followed flies the same square path as above.
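The follow behavior boils down to placing the follower's goal a fixed distance behind the leader's TF, along the leader's heading. A sketch of that offset math (hypothetical function name, planar case):

```python
import math


def follow_goal(leader_x: float, leader_y: float, leader_yaw: float,
                distance_m: float = 2.0):
    """Goal pose sitting distance_m behind the leader, along its heading."""
    goal_x = leader_x - distance_m * math.cos(leader_yaw)
    goal_y = leader_y - distance_m * math.sin(leader_yaw)
    return goal_x, goal_y, leader_yaw  # match the leader's heading
```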
This shows one of the more recent capabilities: swarm path planning. In this instance, three drones are commanded to search an area defined by vertices entered through a custom shell prompt I wrote. The area is divided using a Voronoi partition, the routes are then generated with a sweeping algorithm, and all results are visualized in RViz.
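The sweep half of that pipeline is a classic boustrophedon (lawnmower) pattern. Here is a minimal sketch over one drone's cell, simplified to the cell's axis-aligned bounding box; the repository's actual route generation over Voronoi cells is more involved.

```python
def sweep_route(xmin: float, ymin: float, xmax: float, ymax: float,
                spacing_m: float):
    """Boustrophedon waypoints covering a rectangular region."""
    waypoints = []
    y, left_to_right = ymin, True
    while y <= ymax:
        row = [(xmin, y), (xmax, y)]
        waypoints += row if left_to_right else row[::-1]
        y += spacing_m                      # step over by the sensor footprint
        left_to_right = not left_to_right   # reverse direction each pass
    return waypoints
```

With the search area first split into cells (for example with scipy.spatial.Voronoi seeded at the drones' start positions), each drone sweeps only its own cell.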
Open Source Contributions
While developing this project, I ran into a few issues integrating with Ignition Gazebo. I wanted to retrieve 3D velocity and pose estimates (odometry) from an Ignition model, but at the time Ignition Gazebo only supported 2D odometry estimation. To resolve this, I extended the odometry plugin to allow 3D estimation and created a pull request on the ign-gazebo repository, since I thought it should be part of the official release. After working with the developers at OSRF, this PR was merged into the official Ignition Gazebo release.
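The heart of a 3D odometry estimate is differencing consecutive sampled poses to get a twist. The merged plugin is C++ inside ign-gazebo; the sketch below is only the underlying math in Python, using a small-angle approximation for the angular part.

```python
import numpy as np


def twist_from_poses(p0, p1, q0, q1, dt):
    """Finite-difference 3D twist between two timestamped poses.

    p0, p1: xyz positions (np.array); q0, q1: unit quaternions (w, x, y, z);
    dt: time between samples, in seconds.
    """
    linear = (p1 - p0) / dt
    # Relative rotation q_rel = q1 * conjugate(q0); for a small rotation its
    # vector part is approximately half the axis-angle rotation vector.
    w0, x0, y0, z0 = q0
    w1, x1, y1, z1 = q1
    rx = -w1 * x0 + x1 * w0 - y1 * z0 + z1 * y0
    ry = -w1 * y0 + x1 * z0 + y1 * w0 - z1 * x0
    rz = -w1 * z0 - x1 * y0 + y1 * x0 + z1 * w0
    angular = 2.0 * np.array([rx, ry, rz]) / dt
    return linear, angular
```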