Software-In-The-Loop Simulation

The RhinoHawk team at NovaLabs is preparing to enter the 2018 Medical Express Challenge. In this competition an Unmanned Aerial Vehicle (UAV) must find and render assistance to a person stranded in the outback. Teams construct the UAV and AI systems to complete the challenge. Not an easy task!

The scenario from the challenge website:

Outback Joe is at his property in remote Queensland, Australia and has been feeling unwell. He has had a remote consultation with his doctor using video conferencing, and his doctor has requested some blood tests be done as soon as possible. Joe is well prepared, has a home sample taking kit, and has taken his blood sample. The challenge is now to get the blood sample to the lab. Joe’s property is very remote and to make matters worse, it has been cut off by floodwaters.

Teams are invited to attempt to retrieve a blood sample from the remote Outback Joe and return it to base where medical staff will be waiting to analyse it. Teams are encouraged to develop systems that can carry out the mission in a fully autonomous manner using Type 2 Autonomy.

Finding Outback Joe

One part of solving the Challenge is to estimate Outback Joe’s location in GPS coordinates. These coordinates are used to land the UAV close to Joe. Outback Joe’s location is estimated by finding Joe in images from the UAV’s camera and translating from image position to GPS coordinates. The translation to GPS coordinates is accomplished by combining Joe’s location in the image, the attitude (pose) of the UAV, and the on-board GPS sensor’s location.

There are many pieces needed to locate Joe — AI software and a custom-built UAV. Software and hardware need to work together seamlessly. Early in any project that’s not going to happen. One tactic used to ease software development is to simulate the hardware. This makes software development easier since hardware issues are removed. The following describes the SITL (Software-In-The-Loop) simulation we used to facilitate developing our Challenge solution.

SITL

The RhinoHawk team’s configuration for the Challenge is a quadcopter with a camera attached. The quadcopter is controlled by a PX4 autopilot that can operate in joystick-controlled and autonomous modes. The SITL simulation compiles the PX4 firmware for a Linux laptop instead of the autopilot’s microprocessor and uses Gazebo to simulate the quadcopter and attached sensors. The image pipeline is implemented in ROS and interacts with the PX4 via Mavros. The entire SITL simulation runs on one laptop.

The PX4 SITL project has an Iris quadcopter model among many other vehicle models. The RhinoHawk simulation adds a camera to the PX4 Iris quadcopter. A large blue box is added as our search target, along with a gas station for context and scale:

gas station simulation

The blue box is on the ground plane at a fixed location of (3, 3). The goal of the simulation is to fly the quadcopter around and process images from the camera. If all goes well we will predict the box’s location from the images and the predicted location will match the known location.

The next image shows a close-up of the quadcopter in Gazebo’s rendering of the simulation.

quadcopter simulation detail

Computer Vision

The next image shows the quadcopter hovering at about 10 meters. The camera is shown as white sight lines and a screen hanging down from the quad. The camera images are projected on the hanging screen.

quadcopter hovering simulation

The next image is from the camera with the quadcopter hovering at 40 meters. The camera is looking straight down from the quadcopter.

birds-eye-view of simulation

The above image is shown in rqt, which is a ROS dashboard. From the RhinoHawk point of view there are two main integration points here:

  • The RhinoHawk image processing pipeline to the camera in the Gazebo simulation.
  • The RhinoHawk image processing pipeline to the PX4 autopilot via Mavros.

The other big integration point, the PX4 autopilot to the Gazebo simulator, comes with PX4’s SITL.
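As a rough sketch of what the RhinoHawk-side integration points look like in ROS, the node below subscribes to the simulated camera feed and to the quadcopter pose published by Mavros. The camera topic name here is an assumption and depends on how the Gazebo camera plugin is configured; /mavros/local_position/pose is the standard Mavros local pose topic.

```python
#!/usr/bin/env python
# Minimal sketch of the two RhinoHawk-side integration points: the Gazebo
# camera feed and the PX4 pose via Mavros. The camera topic is an assumption.
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped

def image_callback(msg):
    # Frames from the simulated camera enter the vision pipeline here.
    rospy.loginfo_throttle(5, "camera frame %dx%d" % (msg.width, msg.height))

def pose_callback(msg):
    # Local position/attitude of the quad, published by Mavros from PX4.
    p = msg.pose.position
    rospy.loginfo_throttle(5, "pose x=%.1f y=%.1f z=%.1f" % (p.x, p.y, p.z))

if __name__ == "__main__":
    rospy.init_node("rhinohawk_integration_sketch")
    rospy.Subscriber("/iris/camera/image_raw", Image, image_callback)  # assumed topic
    rospy.Subscriber("/mavros/local_position/pose", PoseStamped, pose_callback)
    rospy.spin()
```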

The image pipeline is a simple blob detector, which is good enough for a simulated environment. The images are transformed into the HSV color space and a mask is then applied which admits only the blue target:

mask
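A minimal sketch of that masking step with OpenCV is shown below. The HSV bounds are assumptions and would be tuned against the simulated target’s colour.

```python
import cv2
import numpy as np

def blue_mask(bgr_frame):
    # Convert the camera frame to HSV and keep only pixels in a blue band.
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower_blue = np.array([100, 150, 50])    # assumed lower HSV bound
    upper_blue = np.array([130, 255, 255])   # assumed upper HSV bound
    return cv2.inRange(hsv, lower_blue, upper_blue)
```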

The target is estimated to be at the center of an enclosing contour of the mask:

target within the mask
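One way to get that centre, sketched below, is to take the largest contour in the mask and use its centroid (via image moments) as the target’s pixel coordinates; the slicing trick handles the differing return values of findContours across OpenCV versions.

```python
import cv2

def target_pixel(mask):
    # Find the largest contour in the mask and return its centroid in pixels.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)[-2:]
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    m = cv2.moments(biggest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```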

Where is Joe?

From our target’s coordinates in the image we have to use the position and orientation of the quadcopter to calculate the target’s position on the ground. Here we are trying to reproduce the local coordinates (3, 3), which are relative to the quadcopter’s takeoff position.

The ROS transform library, tf, is used to sort out the coordinates. The autopilot publishes its attitude, or pose, relative to the takeoff or home position. The pose is combined with the known geometry between the autopilot and the camera to calculate a direction towards the ground. Joe’s coordinates are the intersection of this direction vector and the simulation’s flat ground plane.
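The intersection itself reduces to a one-line ray/plane calculation. The sketch below assumes a flat ground plane at z = 0 in the takeoff-relative frame and that the direction through the target pixel has already been rotated into that frame (the job tf does in the real pipeline).

```python
import numpy as np

def project_to_ground(cam_pos, ray_dir):
    # cam_pos: camera position (x, y, z) in the local, takeoff-relative frame.
    # ray_dir: direction vector through the target pixel, in the same frame.
    cam_pos = np.asarray(cam_pos, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    if ray_dir[2] >= 0:
        return None  # ray does not point toward the ground
    t = -cam_pos[2] / ray_dir[2]     # scale factor to reach z = 0
    return cam_pos + t * ray_dir     # (x, y, 0) of the target on the ground

# e.g. hovering at 40 m with a slightly off-nadir ray:
# project_to_ground([0, 0, 40], [0.075, 0.075, -1.0]) -> approx (3, 3, 0)
```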

Here he is!

To test this system, fire up qgroundcontrol and create a small waypoint mission. The mission is then loaded onto the autopilot and off we go:

quadcopter ground control

Joe’s image coordinates and ground coordinates are reported on the ROS topics /nikon/joe_location and /nikon/position.

coordinate locations in ROS

The estimated position of (1.9, 3.2) seems okay. The above qgroundcontrol image shows the quadcopter flying at 50 meters and a little way off from the target. The target is near the “H” icon in the qgroundcontrol image.