TelemeThing is an early-stage startup. We are building a set of components for an autonomous drone robotics platform, with applications focused on autonomous navigation, searching, tracking, and visualization.
There are three platforms involved: Onboard, Ground Station, and HoloLens.
We will be speaking at #PX4DevSummit hosted by @Dronecode. Looking forward to talking about Getting Started as a Contributor on PX4 and m...
The onboard computer is carried on the UAV, running various AI and CV applications on ROS. This platform interfaces with the flight controller (PX4 or DJI) as well as RTK, Lidar, visible, and IR sensors to provide autonomous flight and visualization functions.
The ground station provides control and telemetry functions, running on iOS, Android, and Windows. The ground station can either replace or run alongside conventional ground stations and RC controllers.
The HoloLens app integrates with the Ground Station to provide extensive 3D AR visualization as well as specific control and telemetry. A set of 3D geospatial features provides real-world immersion.
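For a concrete picture of the onboard role, here is a rough sketch of a node's shape in ROS1 Python. The topic names (livox_ros_driver's /livox/lidar, a MAVROS setpoint topic, a generic IR camera topic) are illustrative assumptions, not our actual node:

```python
# Sketch of an onboard node's general shape, assuming ROS1 (rospy).
# Topic names are assumptions for illustration only.
import rospy
from sensor_msgs.msg import PointCloud2, Image
from geometry_msgs.msg import PoseStamped

class OnboardNode:
    def __init__(self):
        # Sensor inputs: Lidar and thermal/visible cameras.
        rospy.Subscriber("/livox/lidar", PointCloud2, self.on_cloud, queue_size=1)
        rospy.Subscriber("/camera/ir/image_raw", Image, self.on_ir, queue_size=1)
        # Flight-controller output via MAVROS (PX4); DJI goes through a translator node.
        self.setpoint_pub = rospy.Publisher("/mavros/setpoint_position/local",
                                            PoseStamped, queue_size=1)

    def on_cloud(self, msg):
        # CV / navigation logic would run here and drive self.setpoint_pub.
        pass

    def on_ir(self, msg):
        pass

if __name__ == "__main__":
    rospy.init_node("onboard_autonomy")
    OnboardNode()
    rospy.spin()
```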
Here we demonstrate the display of a live Lidar point cloud on a HoloLens.
The data is taken from a Livox Mid40 sensor, processed on a ROS node running on an Nvidia Xavier NX, and transmitted to a HoloLens. The sensor and processor are configured as a drone payload; however, in this video they are mounted on a ground-based tripod.
In this video we show a simple live stream of a sparse cloud, colorized by distance from viewer. Not shown in this video are visible/thermal fusion or a generated map cloud. For an example of the Lidar data fused with visible and thermal see https://www.youtube.com/watch?v=AML5q....
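The colorize-by-distance step is conceptually simple. Below is a minimal sketch under the assumption of ROS1, the livox_ros_driver topic name, and range published as an intensity field for the viewer to colormap; the real node also handles fusion and streaming to the HoloLens:

```python
# Minimal distance-colorizing node sketch (assumed topic names, ROS1).
import rospy
import numpy as np
from sensor_msgs.msg import PointCloud2, PointField
from sensor_msgs import point_cloud2

pub = None

def on_cloud(msg):
    # Pull xyz, compute range from the sensor origin, republish range as intensity
    # so the viewer (RViz / HoloLens client) can colormap it.
    pts = np.array(list(point_cloud2.read_points(msg, field_names=("x", "y", "z"),
                                                 skip_nans=True)), dtype=np.float32)
    if pts.size == 0:
        return
    dist = np.linalg.norm(pts, axis=1)
    fields = [PointField("x", 0, PointField.FLOAT32, 1),
              PointField("y", 4, PointField.FLOAT32, 1),
              PointField("z", 8, PointField.FLOAT32, 1),
              PointField("intensity", 12, PointField.FLOAT32, 1)]
    colored = point_cloud2.create_cloud(msg.header, fields,
                                        np.column_stack((pts, dist)))
    pub.publish(colored)

if __name__ == "__main__":
    rospy.init_node("cloud_colorizer")
    pub = rospy.Publisher("/cloud/colorized", PointCloud2, queue_size=1)
    rospy.Subscriber("/livox/lidar", PointCloud2, on_cloud, queue_size=1)
    rospy.spin()
```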
Here we see a simulated Airsim drone flying a mission from both a mobile device (Xamarin on iOS + Android) and a HoloLens.
The simulation is run in HITL and works in concert with physical GPS devices. The relevant behavior is pretty much identical to a physical drone flying around. The mobile device and the HoloLens are connected and work together: the mobile device communicates with a ROS node running on the drone, relaying telemetry and control to the HoloLens.
The ROS node on the drone can control both PX4 and DJI. This allows for full control of the drone through ROS, or shared control along with conventional RC and ground station channels, as preferred.
The mobile device is a 3D tile server for the HoloLens, allowing pre-fetching and working around the limited storage on the HoloLens; the HoloLens will still cache tiles within limits.
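The actual tile server and cache live in the .NET client; the following is only a language-agnostic sketch of the "cache within limits" idea, with a hypothetical fetch callback standing in for the request to the mobile device:

```python
# Toy LRU tile cache sketch; keys and the fetch callback are illustrative assumptions.
from collections import OrderedDict

class TileCache:
    def __init__(self, fetch_from_mobile, max_tiles=256):
        self._fetch = fetch_from_mobile      # callable: (zoom, x, y) -> tile bytes
        self._max = max_tiles                # storage limit on the HoloLens side
        self._tiles = OrderedDict()          # LRU order: oldest first

    def get(self, zoom, x, y):
        key = (zoom, x, y)
        if key in self._tiles:
            self._tiles.move_to_end(key)     # mark as recently used
            return self._tiles[key]
        tile = self._fetch(zoom, x, y)       # ask the mobile tile server
        self._tiles[key] = tile
        if len(self._tiles) > self._max:     # evict least recently used
            self._tiles.popitem(last=False)
        return tile
```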
Here we see two windows. On the left is the fused image, in this case visible imagery over the projected LIDAR point cloud. On the right is the source data panel, showing thermal (FLIR), visible, and the LIDAR point cloud projection.
We start with a static pose and a full cloud mix. We clip the range (distance) at upper and lower limits, and slide a limit window up and down; this can help us locate and isolate objects at a given distance of interest.
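The range-window clip amounts to a simple mask on point distance. A minimal sketch, assuming the cloud is already an (N, 3) array in the sensor frame:

```python
import numpy as np

def clip_range_window(points, near, far):
    """Keep only points whose distance from the viewer lies in [near, far]."""
    dist = np.linalg.norm(points, axis=1)
    return points[(dist >= near) & (dist <= far)]

# Sliding a fixed-width window up and down in range to isolate an object of interest:
# window = clip_range_window(cloud, center - 2.0, center + 2.0)
```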
Then we phase in the visible-spectrum image, rectified to the plane of the projected cloud, going to full visible and then back to a mix. We can see the coincidence of points from both data sets.
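The mix itself is a plain alpha blend once both layers are rendered to the same image plane. A sketch, assuming cloud_img and visible_img are same-sized renders (not our actual shader path):

```python
import cv2

def mix_layers(cloud_img, visible_img, alpha):
    """alpha=0 -> cloud only, alpha=1 -> full visible, in between -> the mix."""
    return cv2.addWeighted(visible_img, alpha, cloud_img, 1.0 - alpha, 0.0)
```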
Then we start moving the sensor array around in various directions to show the live fusion. A great deal of persistence is intentionally given to the point cloud data to demonstrate the fusion process. This is for visualization only and is not a feature of the final product.
Note how the mountain in the background is reported by both the thermal and visible but not by the lidar. The lidar simply doesn't have the range to see it. A stereo visible camera could be added to the sensor and fused if that kind of point cloud range is required, but it will not function well in low light or weather. A stereo thermal could be added for $$$ but still will not function well in weather. Pre-collected data could be fused in with geo location to get around weather issues if such range data is a hard requirement.
This video demonstrates intermediate products; it is not the end goal. We do not demonstrate thermal fusion, temporal fusion, or voxel painting here.
This is a mobile client talking Mavlink and ROS to a ROS node, which in turn controls the flight controller, which can be PX4 (or DJI via the PX4/DJI Translator).
In this scene we see Airsim acting as the simulator. We created this because we didn't want to get tied to one flight platform, one radio system, or anything proprietary. This is not an attempt to create yet another ground station; we wrote this specifically to be a really flexible interface to ROS platforms running on UAVs, after we got tired of an endless array of partial and one-off solutions.
If the UAV has standard RC radio control, then that still works. You can use this platform to control just the ROS payloads and leave the standard RC for classic control, or you can use it as a full replacement for RC, perhaps over LoRa, packet radio on an amateur band, or a satellite link. The same goes for the ground station: if you wish, you can keep using your existing ground station and use this just for control of the ROS payloads.
The client is implemented as a .NET Standard library, so it runs anywhere, including Linux and Azure services. In this video is a client UI (seen on the right), which we wrote in Xamarin, which means it runs on iOS, Android, and Windows. That client is a ROS node, which connects to a ROS node running on the UAV. That client is also a Mavlink flight controller. The Mavlink signals travel between the nodes as ROS messages. The UAV ROS node can run on Windows, but it's probably best run on Ubuntu. We have ROS running on various UAV platforms from medium to large, from RPi to Up to Nvidia, including the awesome Xavier for the serious AI and CV payloads. The UAV ROS node connects to the flight controller via a serial link.
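To give a feel for the serial leg on the UAV side, here is a stripped-down sketch using pymavlink and a raw-bytes ROS topic; the device path, topic name, and message type are assumptions, and the real node uses its own message definitions and handles both directions:

```python
# Sketch: read MAVLink frames from the flight controller's serial port and
# forward them over ROS as raw bytes (assumed names, one direction only).
import rospy
from std_msgs.msg import UInt8MultiArray
from pymavlink import mavutil

def main():
    rospy.init_node("fc_serial_bridge")
    pub = rospy.Publisher("/uav/mavlink_rx", UInt8MultiArray, queue_size=100)
    # Serial link to the flight controller (PX4/Ardupilot speak MAVLink natively).
    fc = mavutil.mavlink_connection("/dev/ttyTHS0", baud=921600)
    rate = rospy.Rate(200)
    while not rospy.is_shutdown():
        msg = fc.recv_match(blocking=False)
        if msg is not None and msg.get_type() != "BAD_DATA":
            # Forward the raw MAVLink frame over ROS toward the ground-station client.
            pub.publish(UInt8MultiArray(data=list(msg.get_msgbuf())))
        rate.sleep()

if __name__ == "__main__":
    main()
```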
If the flight controller is PX4 or Ardupilot, then the protocol from node to FC is Mavlink. If the flight controller is DJI, then the protocol from node to FC is the DJI SDK. In any case (DJI, PX4, or Ardupilot) the ground station talks to the UAV only in ROS and Mavlink; this is possible because we wrote a full two-way Mavlink/DJI translator as a ROS node. There is no need to bother any other part of the system with DJI/PX4-specific handling. We did this by porting the full PX4 interface to ROS and creating bridges between controls and telemetry. The solution is so complete that you can control a DJI directly from QGC.
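One tiny slice of what such a translator does, sketched with heavy assumptions: the /dji_sdk/gps_position topic name follows the DJI Onboard SDK ROS wrapper, the udpout endpoint is arbitrary, and the real translator covers the full two-way control and telemetry set:

```python
# Sketch: repackage a DJI-side position report as a MAVLink GLOBAL_POSITION_INT.
import rospy
from sensor_msgs.msg import NavSatFix
from pymavlink import mavutil

gcs = mavutil.mavlink_connection("udpout:127.0.0.1:14550",
                                 source_system=1, source_component=1)

def on_gps(fix):
    t_ms = int(rospy.get_time() * 1000) & 0xFFFFFFFF
    gcs.mav.global_position_int_send(
        t_ms,
        int(fix.latitude * 1e7),       # degE7
        int(fix.longitude * 1e7),      # degE7
        int(fix.altitude * 1000),      # mm, AMSL
        0,                             # relative altitude not translated in this slice
        0, 0, 0,                       # velocities not translated in this slice
        65535)                         # heading unknown

if __name__ == "__main__":
    rospy.init_node("dji_mavlink_translator")
    rospy.Subscriber("/dji_sdk/gps_position", NavSatFix, on_gps, queue_size=1)
    rospy.spin()
```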
In this video we see the process of setting up and running a waypoint mission. That's fairly basic. The real value of this platform is the support of ROS running AI, full autonomy and navigation, computer vision, Lidar, thermal, searching, tracking, etc. Not seen in this video is the invocation of a ROS payload, or the built-in streaming video (including 3D). Also not shown is the HoloLens platform, which is really good for telecasting a remote 'situation'. We'll feature those in upcoming videos.
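For reference, at the MAVLink level a waypoint mission upload boils down to the count/request/item handshake. A sketch with pymavlink and made-up sample coordinates (our client does this through its ROS/.NET layers, not this script):

```python
from pymavlink import mavutil

def upload_mission(master, waypoints):
    """waypoints: list of (lat, lon, alt_m) tuples, uploaded as NAV_WAYPOINT items."""
    n = len(waypoints)
    master.mav.mission_count_send(master.target_system, master.target_component, n)
    for _ in range(n):
        # The autopilot asks for each item in turn.
        req = master.recv_match(type=["MISSION_REQUEST", "MISSION_REQUEST_INT"],
                                blocking=True, timeout=5)
        if req is None:
            raise TimeoutError("no MISSION_REQUEST from vehicle")
        lat, lon, alt = waypoints[req.seq]
        master.mav.mission_item_int_send(
            master.target_system, master.target_component, req.seq,
            mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
            mavutil.mavlink.MAV_CMD_NAV_WAYPOINT,
            0, 1,                      # current, autocontinue
            0, 0, 0, 0,                # hold, accept radius, pass radius, yaw
            int(lat * 1e7), int(lon * 1e7), alt)
    return master.recv_match(type="MISSION_ACK", blocking=True, timeout=5)

if __name__ == "__main__":
    master = mavutil.mavlink_connection("udpin:0.0.0.0:14550")
    master.wait_heartbeat()
    upload_mission(master, [(47.641468, -122.140165, 30.0),   # sample coordinates
                            (47.642000, -122.139000, 30.0)])
```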
Here we see the adaptive tracker engaging on-line tuning while searching, finding, and tracking an object of a given class.
Note that the gimbal is disengaged to force full obscuring of the target, in order to test recovery as it moves in and out of view. Notice the four numbers at the bottom left of the screen, which indicate tuning meta-info for the four participating tracker algorithms.
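The adaptive tracker itself is not shown here; as a toy illustration of fusing several tracker algorithms on one target, here is a sketch using stock OpenCV contrib trackers (three stand in for the four in the video, and there is no on-line tuning or recovery in this sketch):

```python
# Toy tracker-ensemble sketch, assuming opencv-contrib-python 4.x tracker factories.
import cv2
import numpy as np

def make_trackers():
    return [cv2.TrackerKCF_create(), cv2.TrackerCSRT_create(), cv2.TrackerMIL_create()]

def track(video_path, init_bbox):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    trackers = make_trackers()
    for t in trackers:
        t.init(frame, init_bbox)          # init_bbox: (x, y, w, h) integers
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = [box for good, box in (t.update(frame) for t in trackers) if good]
        if boxes:
            # Fuse by taking the per-coordinate median of the successful trackers.
            x, y, w, h = [int(v) for v in np.median(np.array(boxes), axis=0)]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("fused", frame)
        if cv2.waitKey(1) == 27:          # Esc to quit
            break
```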
A view of the alignment tool, which is used at setup time to create the extrinsic transforms that are applied in real time in the fusion node.
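What the fusion node does with that output at run time is essentially: apply the camera-from-LIDAR extrinsic to the cloud, then project with the camera intrinsics. A sketch with generic matrix names (the actual calibration format is not shown):

```python
import numpy as np

def project_cloud(points_lidar, T_cam_lidar, K):
    """points_lidar: (N, 3) in the LIDAR frame; T_cam_lidar: 4x4 extrinsic; K: 3x3 intrinsics.
    Returns (M, 2) pixel coordinates and the depth of each kept point in the camera frame."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])   # homogeneous
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                           # into camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                                 # keep points in front of camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                          # perspective divide
    return uv, pts_cam[:, 2]
```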
Here we see: 1) IR from the FLIR, 2) RGB from the visible camera, and 3) the point cloud from the LIDAR. All three look in the same direction, but they are not aligned and do not share the same FOV.
In the LIDAR view, the temporal fusion span is low enough and the points are small enough to reveal the interesting scan pattern of the LIDAR when viewed from the sensor POV. As the viewpoint is manipulated, the 3D structure of the scene becomes apparent. Don't be misled by the blocky appearance; this view has been downsampled to only 1% of collected points.
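One simple way to get such a 1% view is a uniform random subsample; a one-function sketch:

```python
import numpy as np

def downsample(points, keep_fraction=0.01):
    """Return a uniform random subsample of an (N, 3) point array."""
    rng = np.random.default_rng()
    keep = max(1, int(len(points) * keep_fraction))
    return points[rng.choice(len(points), size=keep, replace=False)]
```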
The final product (not shown here) is the fused cloud displayed in real time on iOS, Android, and HoloLens. Detection and segmentation are applied per application, most notably for augmented reality. Despite running on ROS, most of this is done with lower-level toolkits or direct point manipulation in order to make it fast enough for the small hardware on a small drone/UAV.