Slung Load Controller

Multirotor Unmanned Aerial Vehicles (MRUAV) have become an increasingly interesting area of study in the past decade, turning into tools that enable positive changes in today's world. Not having an on-board pilot means that the MRUAV must contain advanced on-board autonomous capabilities and operate with varying degrees of autonomy. One of the most common applications for this type of aircraft is the transport of goods. Such applications require low-altitude flights with hovering and vertical take-off and landing (VTOL) capabilities.

As before, in this project we use the AltaX Flight Stack, which comprises a Raspberry Pi 3 as companion computer and a naze32 as flight controller.

The slung load controller and the machine learning estimator run on the RPI3, although of course the training of the recurrent neural network was done offline on a big desktop computer. The RPI calculates the next vehicle position based on the estimate of the slung load position, everything running in our framework DronePilot, and guess what? It's open source ;). Keep reading for more details.

If the transported load is outside the MRUAV fuselage, it is usually carried beneath the vehicle, attached with cables or ropes; this is commonly referred to as an under-slung load. Flying with a suspended load can be a very challenging and sometimes hazardous task, because the suspended load significantly alters the flight characteristics of the MRUAV. Its prominent pendulous oscillatory movement affects the response in the frequency range of the attitude control of the vehicle. Therefore, a fundamental understanding of the dynamics of slung loads as they relate to vehicle handling is essential to develop safer automatic pilots and make the use of MRUAVs for load transport feasible. In this work, the dynamics of a slung load coupled to a MRUAV are investigated by applying Machine Learning techniques.

The learning algorithm selected in this work is the Artificial Neural Network (ANN), a ML algorithm inspired by the structure and functional aspects of biological neural networks. The Recurrent Neural Network (RNN) is a class of ANN that represents a very powerful generic system identification tool, integrating both large dynamic memory and highly adaptable computational capabilities.

Recurrent neural network diagram
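To make the idea of "dynamic memory" concrete, here is a minimal, purely illustrative sketch of an Elman-style recurrent cell in Python/NumPy; the sizes and names are assumptions for illustration, not the network used in this work:

import numpy as np

class SimpleRNNCell:
    """Minimal Elman-style recurrent cell: h_t = tanh(Wx x_t + Wh h_{t-1} + b)."""
    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
        self.Wh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)
        self.h = np.zeros(n_hidden)  # internal state = the network's "memory"

    def step(self, x):
        # The new hidden state depends on the current input AND the previous state,
        # which is what lets the network model the dynamics of a time series.
        self.h = np.tanh(self.Wx @ x + self.Wh @ self.h + self.b)
        return self.h

# Example: feed a sequence of 4-dimensional measurements through the cell
cell = SimpleRNNCell(n_inputs=4, n_hidden=8)
sequence = np.random.default_rng(1).normal(size=(20, 4))
hidden_states = np.array([cell.step(x) for x in sequence])
print(hidden_states.shape)  # (20, 8)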

In this post the problem of a MRUAV flying with a slung load (SL) is addressed. Real flight data from the MRUAV/SL system is used as the experience that allows a computer program to understand the dynamics of the slung load, in order to propose a swing-free controller that dampens the oscillations of the slung load while the MRUAV follows a desired flight trajectory.

This is achieved through a two-step approach. First, a slung load estimator capable of estimating the relative position of the suspension system is designed, using a machine learning recurrent neural network approach. Second, a feedback cascade control system is developed that can be added to an existing autonomous multirotor and makes it capable of performing manoeuvres with a slung load without inducing residual oscillations.

Proposed control strategy

The machine learning estimator was designed using a recurrent neural network structure which was then trained in a supervised learning approach using real flight data of the MRUAV/SL system. This data was gathered using a motion capture facility and a software framework (DronePilot) which was created during the development of this work.

Estimator inputs-outputs
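As a rough illustration of the supervised-learning step, the sketch below fits a small recurrent network to sequences of vehicle states (inputs) against slung load positions (targets). The use of Keras, the layer sizes and the window length are assumptions for illustration; the post does not specify the actual network structure:

import numpy as np
from tensorflow import keras

# Hypothetical training data shapes: sequences of vehicle states -> slung load position
# X: (n_samples, timesteps, n_state_vars), y: (n_samples, 2) e.g. relative load x/y
X = np.random.rand(1000, 20, 6).astype("float32")
y = np.random.rand(1000, 2).astype("float32")

model = keras.Sequential([
    keras.layers.SimpleRNN(32, input_shape=(20, 6)),  # recurrent layer holds the dynamic memory
    keras.layers.Dense(2),                            # regress the estimated load position
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2)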

After the slung load estimator was trained, it was verified in subsequent flights to ensure its adequate performance. The machine learning slung load position estimator shows good performance and robustness when non-linearity is significant and varying tasks are given in the flight regime.

Estimator verification

Consequently, a control system was created and tested with the objective of removing the oscillations (swing-free) generated by the slung load during or at the end of transport. The control technique was verified and tested experimentally.

The overall control concept is a classical tri-cascaded scheme in which the slung load controller generates a position reference based on the current vehicle position and the estimated slung load position. The outer loop controller generates references (attitude pseudo-commands) for the inner loop controller (the flight controller).

Control scheme
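The outline below is a purely illustrative Python sketch of that cascade; the gains, signal names and the simple proportional laws are assumptions, not the actual DronePilot controllers:

def slung_load_controller(vehicle_pos, load_pos_est, target_pos, k_swing=0.5):
    """Outer-most loop: bias the position reference against the estimated load swing."""
    swing = [l - v for l, v in zip(load_pos_est, vehicle_pos)]   # horizontal displacement of the load
    return [t - k_swing * s for t, s in zip(target_pos, swing)]  # damped position reference

def position_controller(vehicle_pos, pos_reference, k_p=0.8):
    """Middle loop: turn position error into attitude pseudo-commands (roll, pitch)."""
    error = [r - p for r, p in zip(pos_reference, vehicle_pos)]
    roll_cmd, pitch_cmd = k_p * error[1], k_p * error[0]
    return roll_cmd, pitch_cmd  # handed to the inner loop (the flight controller)

# One iteration of the cascade:
pos_ref = slung_load_controller(vehicle_pos=[0.0, 0.0], load_pos_est=[0.2, -0.1],
                                target_pos=[1.0, 0.0])
attitude_cmd = position_controller(vehicle_pos=[0.0, 0.0], pos_reference=pos_ref)
print(pos_ref, attitude_cmd)

In the real system the inner attitude loop runs on the naze32, so only the two outer loops live on the companion computer.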

The performance of the control scheme was evaluated through flight testing and it was found that the control scheme is capable of yielding a significant reduction in slung load swing over the equivalent flight without the controller scheme.

The next figures show the performance when the vehicle is tracking a figure-of-eight trajectory without control and with control.

The control scheme is able to reduce the control effort of the position control due to efficient damping of the slung load. Hence, less energy is consumed and the available flight time increases.

Regarding power management, flying a MRUAV with a load reduces the flight time because of two main factors. The first is the extra weight added to the vehicle: the rotors must generate more thrust to keep the height demanded by the trajectory controller, hence reducing the flight time. The second is the aggressive oscillation of the load: the position controller demands faster adjustments from the attitude controller, which accordingly increases the thrust generated by the rotors. The proposed swing-free controller increases the flight time of the MRUAV when carrying a load by 38% in comparison with the same flight without swing-free control. This is achieved by reducing the aggressive oscillations created by the load.

The proposed approach is an important step towards developing the next generation of unmanned autonomous multirotor vehicles. The methods presented in this post enable a quadrotor to perform flight manoeuvres while achieving swing-free trajectory tracking.

Don’t forget to watch the video, it is super fun:

UoG 360 Spherical

Glasgow University area 360-degree spherical panoramic photo*.

DJI FC220
ƒ/2.2 4.7 mm
1/120 ISO 151


* Complying w/ UK Air Navigation Order (CAP393). Always remember to fly safe!

 

Computer vision using GoPro and Raspberry Pi

In this post I’m going to demonstrate how to test some computer vision techniques using the video feed from a GoPro Hero 3 fed directly into a Raspberry Pi 3.

I’m using a special bridge that takes an HDMI input and outputs to the CSI camera port of the Raspberry Pi, so, basically, it is as easy as using an RPI camera…

This is actually not a very common technique; the bridge is made by the company Auvidea, and the model is the B101.

And the best part is that it’s plug and play. I just installed it on my RPI, connected the CSI cable to the camera port, turned the RPI on, turned the GoPro on, ran “raspivid -t 0” and voilà, the video shows up on the screen!!!

Different angle of the rpi + HDMI bridge
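Since the bridge shows up like a regular CSI camera, you can also pull the GoPro frames straight into Python with OpenCV. A minimal sketch, assuming the bcm2835-v4l2 driver is loaded so the camera appears as /dev/video0 (this is not one of the scripts from the repository):

import cv2  # requires OpenCV with V4L2 support on the RPI

# The B101 bridge behaves like the standard RPI camera, so after
# `sudo modprobe bcm2835-v4l2` it should be available as /dev/video0 (assumption).
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)   # quick sanity check: edge detection on the GoPro feed
    cv2.imshow("gopro-edges", edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()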

After that it’s just a question of using my computer vision repository: https://github.com/alduxvm/rpi-opencv, and testing the different scripts… As usual, I created a video for you guys to see it working, take a look here:

BigX

Carbon fibre foldable quadcopter

BigX is a bespoke vehicle designed and built to support research projects. It’s big, with a 900mm wheelbase which means it can hold up to 21in propellers.

BigX

The frame is manufactured by SoliDrone; the model is the FR4X 900F. I got this prototype frame in order to build and test it. The company will start selling this great frame soon, so check their website for updates. They have a beautiful render that you can see here:

SoliDrone render

 

Specifications of the vehicle:

Frame: FR4X 900F
Motors: Foxtech S5010 288kv
Propellers: 18×6.5in CF
Wheelbase: 900mm
FC: Pixhack
ESCs: Hobbywing XRotor Pro 40A
Weight(no batt): 3kg

 

The carbon fibre plates are really thick, 3mm, which makes the frame very hard and solid… SoliDrone, hehe… But that is one of the reasons why it is a bit heavy. Considering the type of applications this one is going to be used in, though, it is just right. Because of its 900mm wheelbase it can be equipped with very large propellers, yet being foldable makes it very easy to transport. This is a great feature.

 

Building process:

 

So, how big is it??

Weight with 16,000 mAh battery

At the moment I’m putting 18in props on this one, and it’s performing quite well; maybe I will put bigger props on in the future.

Hovering time:

I did several tests using two different batteries. Both batteries are Multistar LiHV from HobbyKing. The longest flight was almost 32 minutes.

Battery       Flight time
10,000 mAh    25 min
16,000 mAh    32 min

Then I added a Raspberry Pi and, using DronePilot, I made it fly autonomously in very different ways, and it performs great!! You can see in the video how well it flies. As usual, the Scottish weather does not help, but this vehicle was able to fly in rainy conditions and strong wind gusts with ease.

 

Video:

Trajectory Controller

A while ago we reviewed how a hover controller works; in this post we are going to go a bit further and create a trajectory controller.

In that previous blog post we discussed how to control a drone so that it holds a specified position. This is the Control part of GNC (Guidance, Navigation and Control): the manipulation of the forces, by way of steering controls, thrusters, etc., needed to track guidance commands while maintaining vehicle stability.

In this part we focus on the Guidance. This refers to the determination of the desired path of travel (the “trajectory”) from the vehicle’s current location to a designated target, as well as the desired changes in velocity, rotation and acceleration for following that path.

There are several steps before we can achieve this, mainly the following:

  1. Fly the vehicle using the flight stack
  2. Design a controller that will track/hold a specified position
  3. Create a trajectory, based on time and other factors

For the first part, in this blog we will use the AltaX Flight Stack, which comprises a companion computer and a flight controller. In this particular case I’m using a naze32 as flight controller, and two companion computers: a Raspberry Pi 2 and an oDroid U3.

The naze32 is connected to the oDroid U3 via a USB cable (a very short one). The vehicle is a 330mm rotor-to-rotor fibreglass frame, with 7×3.8in propellers, 1130kv motors, 15A ESCs and a 3000mAh 10C battery. It will fly for 11-13 minutes.

The oDroid U3 is running Ubuntu 14.04.1 on an eMMC module, which makes it boot and run generally faster. It is being powered by a BEC that is connected to the main battery.

The companion computer will “talk” a special language in order to send commands to the vehicle; this language is described here. The most important part is that it will run the DronePilot framework. This framework is the one that will pilot the vehicle.

For the second part (position controller), you can refer to this page to see how it works.

And now the trajectory part…

We need to generate X and Y coordinates that will then be “fed” to the position controller at a specific time. We are going to create two types of trajectories: a circle and an infinity symbol. Why these? Because they are easy to generate and perfect for exciting all the multirotor modes.

How to generate a circle trajectory??

circle

This one is very simple… there are basically two parameters needed: radius and angle. In our case, the angle is computed by combining the step time of the main loop of the controller with pi… basically the angle will go from 0 to 360 degrees (in radians, of course). The code looks like this:

circle

So, if we declare “w” like this: (2*pi)/12, it means that the trajectory will take 12 seconds to complete a full revolution and then start over. This is the parameter we will change if we want the vehicle to travel faster. It’s better to start with a slow value and then progress to faster trajectories.
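As a rough Python sketch of that idea (variable names are illustrative, not the exact DronePilot code):

from math import pi, cos, sin

radius = 0.8          # metres
period = 12.0         # seconds per full revolution
w = (2 * pi) / period # angular rate, as described above
centre = (0.0, 0.0)   # where the circle is centred in the room

def circle_reference(t):
    """X/Y position reference on the circle at elapsed time t (seconds)."""
    x = centre[0] + radius * cos(w * t)
    y = centre[1] + radius * sin(w * t)
    return x, y

# Inside the control loop this would be called every step and the result
# fed to the position controller as its target:
for step in range(5):
    t = step * 0.02   # e.g. a 50 Hz loop
    print(circle_reference(t))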

The next step is to “feed” these coordinates to the position controller inside the control loop. That is done in this script.

The infinity trajectory is a special one! It is called by several names: infinity trajectory, figure of eight… And there are several ways to calculate the coordinates; in the next gif you can see the possibilities for how to create a figure of eight:

infinity

The one I like is the red dot one! Why is that? That one is called the Lemniscate of Bernoulli, which is constructed as a plane curve defined from two given points F1 and F2, known as foci, at distance 2a from each other, as the locus of points P so that PF1·PF2 = a².

Lemniscate of Bernoulli properties diagram

This lemniscate was first described in 1694 by Jakob Bernoulli as a modification of an ellipse, which is the locus of points for which the sum of the distances to each of two fixed focal points is a constant. We can calculate it as a parametric equation:

infinity
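In its usual parametric form the lemniscate is x = a·cos(t)/(1 + sin²(t)), y = a·sin(t)·cos(t)/(1 + sin²(t)). A small Python sketch of how those reference points might be generated (illustrative only, not the exact DronePilot code):

from math import pi, cos, sin

a = 1.0               # sets the size of the figure of eight (metres)
period = 16.0         # seconds per full figure of eight
w = (2 * pi) / period

def lemniscate_reference(t):
    """X/Y position reference on a Bernoulli lemniscate at elapsed time t."""
    s, c = sin(w * t), cos(w * t)
    x = (a * c) / (1 + s * s)
    y = (a * s * c) / (1 + s * s)
    return x, y

# Same pattern as the circle: call this every control step and feed the
# result to the position controller as its target.
print([lemniscate_reference(k * period / 8) for k in range(8)])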

And then the rest is feeding that information to the position controller, which will try to follow the trajectory, shown as the dots on the plots. Magic.

fastest

 

The cool video can be seen here:

pyIRCamera

In this post, I’m going to describe how to read an I2C sensor using a Raspberry Pi. The sensor I’m interested in reading/using is actually an infrared camera.

This camera comes (originally) from a Wiimote controller.

wiimote

I spent this weekend developing a tiny python module that interfaces with this wee camera.

What is a PixArt?

 

This device is a 128×96 monochrome camera with built-in image processing. The camera looks through an infrared pass filter in the remote’s plastic casing. The camera’s built-in image processing is capable of tracking up to 4 moving objects, and these data are the only data available to the host. Raw pixel data is not available to the host, so the camera cannot be used to take a conventional picture. The built-in processor uses 8x subpixel analysis to provide 1024×768 resolution for the tracked points.

There is lots of extra technical information about the Wiimote here: http://wiibrew.org/wiki/Wiimote#IR_Camera

The sensor used for this library is a very good package made by DFRobot.

The how-to for using the sensor and the python module are on my github page, click here to go.

In the next image you can see the sensor picking the IR light coming from a Zippo:

The python module reports the X and Y of the centre of the IR source, and it can actually read up to 4 IR sources at the same time.
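For the curious, the sketch below shows roughly what talking to the camera over I2C looks like from Python. The 0x58 address and the register/init byte sequence come from the commonly published Wiimote camera examples and are assumptions here, not necessarily what my module does:

import time
from smbus import SMBus  # python-smbus, available in Raspbian

ADDR = 0x58     # 7-bit I2C address commonly reported for the standalone Wiimote camera
bus = SMBus(1)  # I2C bus 1 on modern Raspberry Pi boards

# Initialisation sequence used in the usual Wiimote camera examples (assumption)
for reg, val in [(0x30, 0x01), (0x30, 0x08), (0x06, 0x90),
                 (0x08, 0xC0), (0x1A, 0x40), (0x33, 0x33)]:
    bus.write_byte_data(ADDR, reg, val)
    time.sleep(0.05)

def read_blobs():
    """Read one frame of tracking data and return up to 4 (x, y) points."""
    data = bus.read_i2c_block_data(ADDR, 0x36, 16)  # write 0x36, read 16 bytes back
    points = []
    for i in range(1, 13, 3):           # 4 blobs, 3 bytes each, after a header byte
        x, y, s = data[i], data[i + 1], data[i + 2]
        x += (s & 0x30) << 4            # top bits of the 10-bit coordinates
        y += (s & 0xC0) << 2
        if x != 1023 or y != 1023:      # 1023/1023 means "no blob seen"
            points.append((x, y))
    return points

while True:
    print(read_blobs())
    time.sleep(0.01)                    # the camera can track at up to ~100 Hz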

If you are looking to build a “light tracker” robot, or perhaps precision landing for a multirotor, this sensor is worth considering! Why?? Just because the computer vision is already done inside the camera and it can track IR objects at up to 100 Hz… so it is a super, super fast sensor!

A video of this sensor in action can be seen here:

 

Low Latency Raspberry Pi video transmission

In this post I’m going to explain how to make a very low latency video transmission using a RPI and a RPI camera. I did a demonstration of this technique and posted a video a while ago, but I never published how to actually do it; you can see the video here:

What will you need?

 

  • Raspberry Pi: No matter which one, I’m using an RPI A+ (because of its small size, it can be used on small drones, like racers 😉 ). The RPI must be running Raspbian, just that.
  • Raspberry Pi camera module: There is nothing as fast as the CSI port…
  • Wifi dongle: To connect the RPI to your network. You can actually make the RPI create an ad-hoc network, or your ground computer can do it.
  • Ground computer: In this case I’m using a Mac to receive the data. The commands might be transferable to Windows, but I don’t own one (thanks to all the gods out there :P), so this will be “terminal” oriented.

How to?

 

  1. The RPI and your computer must be in the same network, and you must know the IP addresses of both devices.
  2. The Mac must open an ssh connection to the RPI in order to activate the command.
  3. On a terminal window of the Mac, execute this line:

netcat -l -p 5000 | mplayer -fps 60 -cache 1024 -

  4. Create a fifo file on the RPI (must be done only once…), by doing: mkfifo video
  5. On the ssh terminal window of the RPI, execute this command (replace the IP address with the one of your ground computer):

cat video | nc.traditional 192.168.1.3 5000 & raspivid -o video -t 0 -w 640 -h 480

 

Then wait for 20 seconds and you will start seeing video:

fpv-screenshot

 

You might notice that at the beginning the video is not “synced”, but if you wait a few more seconds the video will catch up and stay that way!! And then you can actually appreciate the low, low latency.

Tuneable?

 

Of course!!! You can change the width and height of the source on the RPI; if you want super extra low latency, then go for a lower resolution. You can also change the bitrate, but… you need to experiment to get the proper value.

Important to note is that I have not tried it flying yet… but if someone manages to do it and reports back to me, I could make improvements to the code to make it easier to use.

In here you can see a video screen-shot of the entire methodology:

 

Another method?

Of course… There are tons of other methods to achieve this… The extra one I’m going to show uses GStreamer.

GStreamer is a pipeline-based multimedia framework that links together a wide variety of media processing systems to complete complex workflows.

To make it work, you need to install it first; on the Raspberry Pi this is done this way:

sudo apt-get install gstreamer1.0

Also, it needs to be installed on the computer receiving the stream. I’m using a Mac, so you need to install it using brew, with something like this:

brew install gstreamer gst-libav gst-plugins-ugly gst-plugins-base gst-plugins-bad gst-plugins-good
brew install homebrew/versions/gst-ffmpeg010

When this is done, we can actually activate the streams… this is done by executing this line on the RPI:

raspivid -n -w 640 -h 480 -t 0 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=10 pt=96 ! udpsink host=192.168.1.3 port=9000

This will activate a stream server and send it via UDP to a host (you need to change the IP address and port to the ones you use). Also you can change the settings on raspivid, like width, height, bitrate and lots of other stuff. I’m using VGA resolution, just to make it super fast.

Now, to receive and display the stream on the host you need to execute this command:

gst-launch-1.0 -v udpsrc port=9000 caps='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264' ! rtph264depay ! video/x-h264,width=640,height=480,framerate=30/1 ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false

This will open a wee window and start displaying video… something like this:

gstreamer

 

The good thing about this method is that we can actually get that stream into apps such as Tower from 3DRobotics, and fly a drone using the telemetry and the video at the same time, something similar to DJI Lightbridge, but without paying the thousands of dollars that system costs.

On the RPI, you need to execute this line:

raspivid -n -w 640 -h 480 -t 0 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=10 pt=96 ! udpsink host=192.168.1.9 port=9000

And on the tablet side… you need Tower Beta, and then just configure the stream port to fit your own…