Drone following a roomba

Oh!! It's been a while since I wrote a blog post!

I'm sorry for that! I've been very busy lately!

In this post I will explain a very cool robotics project I did last year. It is as cool as it sounds: making a drone follow a Roomba using computer vision!

The idea can be broken down into sections: the vehicle, the communications, the control, the computer vision, and the results…

The objective of this project is to make a drone “see” a Roomba and follow it from the air.

On the vehicle side I used the great BigX, a big vehicle with very nice performance!! Here is a pic of it:

BigX

On board I used the same Flight Stack concept: the combination of a flight controller (a Pixhack in this case) and a Raspberry Pi with extra features.

The RPi was a model 3, and it is the one in charge of “piloting” the aircraft when I'm not doing it. The RPi also runs the computer vision algorithm that lets the vehicle “see” the Roomba. With that information (the target position in pixels, X and Y), the RPi computes the velocity commands needed to steer the vehicle towards the centre of the target.
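To make the steering idea concrete, here is a minimal sketch (not the actual DronePilot code) of how pixel coordinates can be mapped to velocity commands; the frame size, gain and velocity limit are illustrative assumptions.

```python
# Minimal sketch: map the target's pixel position to horizontal velocity
# commands that push the vehicle towards the image centre.
# Frame size, gain and limit are assumptions, not the tuned values.
FRAME_W, FRAME_H = 640, 480   # assumed camera resolution
KP = 0.005                    # proportional gain, m/s per pixel (assumed)
MAX_VEL = 1.0                 # velocity saturation in m/s (assumed)

def clamp(value, limit):
    return max(-limit, min(limit, value))

def velocity_command(target_x, target_y):
    """Return (vx, vy) that steers the vehicle over the target."""
    error_x = target_x - FRAME_W / 2.0  # positive: target right of centre
    error_y = target_y - FRAME_H / 2.0  # positive: target below centre
    vx = clamp(-KP * error_y, MAX_VEL)  # forward if target is above centre
    vy = clamp(KP * error_x, MAX_VEL)   # right if target is right of centre
    return vx, vy
```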

The RPi was also in charge of the communications: it created a network, and on the ground I used a Ubiquiti AP connected via Ethernet to my MacBook Pro. I chose this configuration because that AP gave me a range of 10 km line of sight (I have only tried it up to 3 km…).

Also on board, connected to the RPi, a B101 HDMI bridge was used to capture the frames from the GoPro camera so they could be analysed.

On the ground side, as I mentioned before, I had the Ubiquiti AP with my laptop connected to it via Ethernet. From my computer I logged in to the RPi via SSH to start the scripts that run the main routines. I also had QGroundControl open to see the vehicle telemetry in a nice way; I was using MAVProxy with UDP casts. This is how my computer screen looked:
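For reference, a telemetry setup like this usually boils down to a single MAVProxy invocation on the RPi; the serial port, baud rate and ground station IP below are assumptions, not the exact values used here.

```python
# Hedged sketch: forward flight controller telemetry to the ground station
# over UDP with MAVProxy. Serial device, baud rate and IP are assumptions.
import subprocess

subprocess.run([
    "mavproxy.py",
    "--master=/dev/ttyAMA0",          # serial link to the flight controller
    "--baudrate=57600",
    "--out=udp:192.168.1.10:14550",   # QGroundControl on the laptop
])
```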

Ground Station Computer

In the image above you can see the position teleoperation tool from the AltaX ground station program. This module changes the position of the robot by reading the keyboard of the ground station computer, pretty neat…

On the computer vision side, I added blue tape to the top of the Roomba to make it easy to distinguish from the environment. I also tuned my different colour tracker algorithms as much as possible; you can find the code here and a demonstration video here.
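A hedged sketch of such a colour tracker follows (the real scripts live in the repository linked above; the HSV thresholds and capture device are illustrative, not the tuned values):

```python
# Hedged sketch of a blue-tape colour tracker: threshold in HSV space and
# take the centroid of the resulting mask as the target position.
import cv2
import numpy as np

LOWER_BLUE = np.array([100, 150, 50])    # assumed HSV range for blue tape
UPPER_BLUE = np.array([130, 255, 255])

cap = cv2.VideoCapture(0)  # the B101 bridge appears as a normal capture device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
    moments = cv2.moments(mask)
    if moments["m00"] > 0:  # blue blob found: compute its centroid
        cx = int(moments["m10"] / moments["m00"])
        cy = int(moments["m01"] / moments["m00"])
        # (cx, cy) feeds the velocity controller sketched earlier
        cv2.circle(frame, (cx, cy), 5, (0, 0, 255), -1)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```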

When you combine all the ingredients, plus a velocity vector position controller, you get a nice result, like the one shown in this video:

 

Long autonomous mission

It's been a while since I had time to post on the blog. But with great power comes great responsibility, hehe…

In this post I'm going to describe the process behind the mission shown in the next video:

That was one of the longest autonomous missions I have done with a quadrotor. The vehicle travelled approximately 6.2 km, and the furthest it went from the operator was 3 km. I decided to attempt this mission just to see whether the vehicle, and the extra systems that accompany it, were up to it.

The vehicle was also somewhat new; I had built it and flown it 5-6 times before attempting this mission. The motors were great (4114 pro), and I was using Tarot carbon fibre foldable propellers and a modified carbon fibre frame.

Foldable quadrotor

This combination of components made a very efficient yet powerful vehicle; with a 5.2 Ah battery, it flew for around 24 minutes.

And as usual, this vehicle carries the AltaX flight stack, which is just the combo of a companion computer and a flight controller. The companion computer is a Raspberry Pi 3, while for the flight controller I chose a Pixhack. The Pixhack is a version of the standard Pixhawk whose IMU has internal dampening, which means not much vibration isolation is needed; this was especially key in this frame, as it does not have much spare space…

On the communications side of things, I added a high-gain dual-band WiFi dongle. The dongle is powered by a BEC because the RPi by itself cannot supply enough power for a decent range.

For the camera I was using a GoPro Hero 3, so the HDMI-CSI bridge is needed. The GoPro was mounted on a 3-axis gimbal, the Tarot one… an excellent gimbal for the GoPro.

On the ground I had a directional AP that was always pointed towards the vehicle; that AP was connected to my computer with an Ethernet cable, completing the network. The AP was a Ubiquiti Loco M5, which has great range!

So, the RPi was in charge of talking to the Pixhack (via MAVLink using DroneKit), running the AltaX Ground Station Console, and finally transmitting the video back to the ground (video stream server).
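The DroneKit side of that is essentially a one-liner; a minimal sketch, with the serial device and baud rate as assumptions:

```python
# Hedged sketch: open the MAVLink link from the RPi to the Pixhack with
# DroneKit and print some basic state. Device path and baud are assumptions.
from dronekit import connect

vehicle = connect("/dev/ttyAMA0", baud=921600, wait_ready=True)
print("Mode: %s" % vehicle.mode.name)
print("Battery: %s" % vehicle.battery)
print("GPS: %s" % vehicle.gps_0)
vehicle.close()
```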

For the video transmission I used very similar techniques as in here, i.e. GStreamer. The HDMI-CSI bridge basically turns any HDMI device into a v4l2 device… so it was very easy to pipe it through GStreamer.
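A hedged sketch of the kind of GStreamer pipeline this enables, launched from Python; the device path, resolution, bitrate and ground station IP are assumptions, not the exact pipeline used on this flight:

```python
# Hedged sketch: ship the v4l2 device (the GoPro through the B101 bridge)
# to the ground station as an RTP/H.264 stream over UDP.
import subprocess

PIPELINE = (
    "gst-launch-1.0 v4l2src device=/dev/video0 "
    "! video/x-raw,width=640,height=480 "
    "! videoconvert "
    "! x264enc tune=zerolatency bitrate=1000 "
    "! rtph264pay "
    "! udpsink host=192.168.1.10 port=5600"   # ground station (assumed IP)
)
subprocess.run(PIPELINE.split())
```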

My ground station console is basically like a pilot's remote with extra power 😉 It can send any kind of command to the Pixhawk: take off, change altitude, circle around, send a mission, manual reposition… you name it!
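As an illustration of what such commands look like through DroneKit (a sketch, not the AltaX console code; the target altitude is illustrative):

```python
# Hedged sketch: a take-off command of the kind a console tool would wrap
# behind a keyboard shortcut.
import time
from dronekit import VehicleMode

def takeoff(vehicle, target_alt=10):
    """Arm and climb to target_alt metres in GUIDED mode."""
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:
        time.sleep(0.5)                 # wait for arming to complete
    vehicle.simple_takeoff(target_alt)  # non-blocking climb command
```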

I chose the Scottish Highlands as the place to test this mission. Given the remoteness of the area I had no problems with regulations, since there is nothing around… even so, I followed the regulations in place in the UK.

The mission started with me taking the vehicle off and leaving it in loiter mode, then switching to the console tool, uploading the mission and starting it with an Enter. That sweet. Then, after 17 stressful minutes, the vehicle was in sight… it was coming back; in any case, the telemetry was telling me where it was the whole time… let me say something: this Loco M5 is quite a nice product!! I never even reached half of the connection strength!
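For reference, here is a hedged sketch of what uploading and starting a mission looks like in DroneKit; the waypoints are placeholders, not the actual Highlands coordinates:

```python
# Hedged sketch: build, upload and start a simple waypoint mission.
from dronekit import Command, VehicleMode
from pymavlink import mavutil

def upload_mission(vehicle, waypoints, alt=50):
    """Upload a list of (lat, lon) waypoints at a fixed relative altitude."""
    cmds = vehicle.commands
    cmds.clear()
    for lat, lon in waypoints:
        cmds.add(Command(0, 0, 0,
                         mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT,
                         mavutil.mavlink.MAV_CMD_NAV_WAYPOINT,
                         0, 0, 0, 0, 0, 0, lat, lon, alt))
    cmds.upload()

# upload_mission(vehicle, [(57.10, -4.68), (57.12, -4.66)])  # placeholder points
# vehicle.mode = VehicleMode("AUTO")                         # start the mission
```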

The main problem came at the end: the vehicle was flying into a headwind, so the motors required more juice for the Pixhawk to maintain 5 m/s… but in the end, with the alarms going beep beep beep beep… I was able to perform a somewhat hard landing. Everything was OK and the mission was successful!!

I would like to apologise for the low resolution of the screen recording… I lost the original file and only have this less-than-HD version… yuck.

In any case, I hope you enjoy the video. Extra thumbs up to anyone who recognises the soundtrack!

Computer Vision Talk

What the funk?

 

On my most recent visit to Mexico, a very dear friend of mine (Rolis) invited me (Aldux) to give a talk about computer vision and the applications I usually use it for, which are of course drones!

Needless to say, I had a blast!! I talked a lot about the computer vision slung load technique that I used in my PhD thesis, as well as the cool project I developed as a postdoc at the University of Oxford, the Kingbee project!

Also, a week before the talk, I added some new scripts to my popular computer vision repository; these relate to Haar cascades.

Slung load recreation with microphone

One of the most interesting new items is a script that detects cars from a webcam… the webcam in question is an open traffic IP cam somewhere in the USA… One of the most complex parts of writing this script was opening the stream of images correctly; after that, the Haar cascade is very easy to implement. I took the trained XML files from other repositories similar to mine (proper source crediting is given, of course).

You can see it in action here:

https://github.com/alduxvm/rpi-opencv/blob/master/car-detection-stream.py
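For reference, a hedged sketch of the same idea (the stream URL and cascade filename are placeholders; the repository script above is the working version):

```python
# Hedged sketch: open an IP camera stream and run a Haar cascade per frame.
import cv2

cascade = cv2.CascadeClassifier("cars.xml")   # pre-trained cascade (assumed name)
cap = cv2.VideoCapture("http://example.com/traffic.mjpg")  # placeholder URL

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cars = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in cars:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("cars", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```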

 

Another cool script is one that detects people (the full body), again using Haar cascades. This one works on an IP camera in Spain; you can see it in action here:

https://github.com/alduxvm/rpi-opencv/blob/master/haar-detection-stream.py

 

The talk was given at my friend's company, called Funktionell, which is a pretty cool place, full of gadgets, electronics, 3D printers, engineers, programmers and designers!! Their statement is:

We are a technology company dedicated to creating unforgettable digital experiences. Driven by innovation and a curiosity to break paradigms, we blend technology with imagination to achieve incredible, high-quality results.

Finally, the video of the entire talk was posted on Funktionell's Facebook page; it can be seen here:

Computer vision

Welcome to #CodeMeetsFunk
We talk about computer vision applied to face detection, colour detection and drone control.
Speakers: Gustavo Heras and Dr. Aldo Vargas

Posted by Funktionell on Saturday, April 1, 2017

 

Slung Load Controller

Multirotor Unmanned Aerial Vehicles (MRUAV) have become an increasingly interesting area of study in the past decade, becoming tools that allow for positive changes in today's world. Not having an on-board pilot means that the MRUAV must contain advanced on-board autonomous capabilities and operate with varying degrees of autonomy. One of the most common applications for this type of aircraft is the transport of goods. Such applications require low-altitude flights with hovering and vertical take-off and landing (VTOL) capabilities.

As before, in this project we use the AltaX Flight Stack, which is composed of a Raspberry Pi 3 as the companion computer and a Naze32 as the flight controller.

The slung load controller and the machine learning estimator run on the RPi 3, although of course the training of the recurrent neural network was done offline on a big desktop computer. The RPi calculates the next vehicle position based on the estimate of the slung load position; everything runs on our framework DronePilot, and guess what? It's open source ;). Keep reading for more details.

If the transported load is outside the MRUAV fuselage, it is usually carried beneath the vehicle, attached with cables or ropes; this is commonly referred to as an under-slung load. Flying with a suspended load can be a very challenging and sometimes hazardous task, because the suspended load significantly alters the flight characteristics of the MRUAV. Its prominent pendulous oscillatory movement affects the response in the frequency range of the attitude control of the vehicle. Therefore, a fundamental understanding of the dynamics of slung loads as they relate to vehicle handling is essential to develop safer automatic pilots and to make transporting loads with MRUAVs feasible. Here, the dynamics of a slung load coupled to a MRUAV are investigated by applying machine learning (ML) techniques.

The learning algorithm selected in this work is the Artificial Neural Network (ANN), an ML algorithm inspired by the structure and functional aspects of biological neural networks. The Recurrent Neural Network (RNN) is a class of ANN that represents a very powerful generic system identification tool, integrating both a large dynamic memory and highly adaptable computational capabilities.

Recurrent neural network diagram

In this post the problem of a MRUAV flying with a slung load (SL) is addressed. Real flight data from the MRUAV/SL system is used as the experience that allows a computer program to understand the dynamics of the slung load, in order to propose a swing-free controller that dampens the oscillations of the slung load when the MRUAV is following a desired flight trajectory.

This is achieved through a two-step approach. First, a slung load estimator capable of estimating the relative position of the suspension system was designed, using a machine learning recurrent neural network approach. The final step is the development of a feedback cascade control system that can be put on an existing unmanned autonomous multirotor, making it capable of performing manoeuvres with a slung load without inducing residual oscillations.

Proposed control strategy

The machine learning estimator was designed using a recurrent neural network structure, which was then trained in a supervised learning approach using real flight data of the MRUAV/SL system. This data was gathered using a motion capture facility and a software framework (DronePilot) created during the development of this work.
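To make the setup concrete, here is a hedged Keras sketch of an estimator with this shape; the layer sizes, history window and input features are assumptions, and the thesis documents the real architecture and training data:

```python
# Hedged sketch: a recurrent network that maps a short history of vehicle
# states to the slung load position, trained on mocap flight logs.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

WINDOW = 20      # timesteps of history fed to the network (assumed)
N_FEATURES = 6   # e.g. vehicle position + velocity (assumed)

model = Sequential([
    LSTM(32, input_shape=(WINDOW, N_FEATURES)),
    Dense(3),    # estimated slung load position (x, y, z)
])
model.compile(optimizer="adam", loss="mse")

# X: (samples, WINDOW, N_FEATURES) windows of vehicle states from mocap logs
# y: (samples, 3) measured slung load positions (supervised targets)
# model.fit(X, y, epochs=50, batch_size=64, validation_split=0.2)
```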

Estimator inputs-outputs

After the slung load estimator was trained, it was verified in subsequent flights to ensure adequate performance. The machine learning slung load position estimator shows good performance and robustness when non-linearity is significant and varying tasks are given in the flight regime.

Estimator verification

Consequently, a control system was created and tested with the objective of removing the oscillations (swing-free) generated by the slung load during and at the end of transport. The control technique was verified and tested experimentally.

The overall control concept is a classical tri-cascaded scheme in which the slung load controller generates a position reference based on the current vehicle position and the estimated slung load position. The outer loop controller generates references (attitude pseudo-commands) for the inner loop controller (the flight controller).

Control scheme
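A hedged sketch of the outer slung-load loop described above; the gain is illustrative, not the tuned value from the thesis:

```python
# Hedged sketch: shift the position reference against the estimated load
# swing so the position controller damps the oscillation.
import numpy as np

K_SWING = 0.5  # swing-damping gain (assumed)

def swing_free_reference(vehicle_pos, load_pos_est, waypoint):
    """Generate the position reference fed to the outer position loop."""
    swing = np.asarray(load_pos_est) - np.asarray(vehicle_pos)  # load offset
    swing[2] = 0.0                        # only damp the horizontal swing
    return np.asarray(waypoint) - K_SWING * swing
```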

The performance of the control scheme was evaluated through flight testing, and it was found that the scheme yields a significant reduction in slung load swing compared with the equivalent flight without it.

The next figures show the performance when the vehicle is tracking a figure-of-eight trajectory without control and with control.

The control scheme is able to reduce the control effort of the position control due to efficient damping of the slung load. Hence, less energy is consumed and the available flight time increases.

Regarding power management, flying a MRUAV with a load reduces the flight time because of two main factors. The first is the extra weight added to the vehicle: the rotors must generate more thrust to keep the height demanded by the trajectory controller, reducing the flight time. The second factor is the aggressive oscillation of the load: the position controller demands faster adjustments from the attitude controller, which accordingly increases the thrust generated by the rotors. The proposed swing-free controller increases the flight time of the MRUAV when carrying a load by 38% in comparison with the same flight without swing-free control, by reducing the aggressive oscillations created by the load.

The proposed approach is an important step towards developing the next generation of unmanned autonomous multirotor vehicles. The methods presented in this post enable a quadrotor to perform flight manoeuvres while achieving swing-free trajectory tracking.

Don’t forget to watch the video, it is super fun:

UoG 360 Spherical

Glasgow University area 360-degree spherical panoramic photo*.

DJI FC220
ƒ/2.2, 4.7 mm
1/120 s, ISO 151


* Complying w/ UK Air Navigation Order (CAP393). Always remember to fly safe!

 

Computer vision using GoPro and Raspberry Pi

In this post I'm going to demonstrate how to test some computer vision techniques using the video feed from a GoPro Hero 3 piped directly into a Raspberry Pi 3.

I'm using a special bridge that takes an HDMI input and outputs to the CSI camera port of the Raspberry Pi, so it is basically as easy as using an RPi camera…


This is actually not a very common technique. The bridge is from the company Auvidea, and the model is the B101.

And the best part is that it's plug and play. I just installed it on my RPi, connected the CSI cable to the camera port, turned the RPi on, turned the GoPro on, ran “raspivid -t 0” and voilà, the video shows up on the screen!!!

Different angle of the RPi + HDMI bridge

After that, it's just a question of using my computer vision repository, https://github.com/alduxvm/rpi-opencv, and testing the different scripts… As usual, I made a video for you guys to see it working; take a look here:
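Since the bridge makes the GoPro look like the CSI camera, the usual picamera capture loop works and each frame can be handed straight to OpenCV. A minimal sketch (resolution and framerate are assumptions):

```python
# Hedged sketch: grab frames from the CSI port (the GoPro via the B101
# bridge) with picamera and process them with OpenCV.
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera(resolution=(640, 480), framerate=30)
raw = PiRGBArray(camera, size=(640, 480))

for frame in camera.capture_continuous(raw, format="bgr", use_video_port=True):
    image = frame.array               # numpy array ready for OpenCV
    cv2.imshow("gopro", image)
    raw.truncate(0)                   # reset the buffer for the next frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```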

BigX

Carbon fibre foldable quadcopter

BigX is a bespoke vehicle designed and built to support research projects. It's big, with a 900 mm wheelbase, which means it can hold up to 21 in propellers.

BigX

The frame is manufactured by SoliDrone; the model is the FR4X 900F. I got this prototype frame to build and test. The company will start selling this great frame soon, so check their website for updates. They have a beautiful render that you can see here:

SoliDrone render

 

Specifications of the vehicle:

Frame: FR4X 900F
Motors: Foxtech S5010 288 KV
Propellers: 18×6.5 in CF
Wheelbase: 900 mm
FC: Pixhack
ESCs: Hobbywing XRotor Pro 40A
Weight (no batt): 3 kg

 

The carbon fibre plates are really thick, 3 mm, which makes the frame very hard and solid… SoliDrone, hehe… That is also one of the reasons why it is a bit heavy, but considering the type of applications it is going to be used in, it is just right. Its 900 mm wheelbase lets it carry very large propellers, yet being foldable makes it very easy to transport. This is a great feature.

 

Building process:

 

So, how big is it??


Weight with 16,000 mAh battery

At the moment I'm running 18 in props on it, and it's performing quite well; maybe I will fit bigger props in the future.

Hovering time:

I did several tests using two different batteries, both Multistar LiHV from HobbyKing. The longest flight was almost 32 minutes.

Battery       Flight time
10,000 mAh    25 min
16,000 mAh    32 min

Then I added a Raspberry Pi and, using DronePilot, made it fly autonomously in very different ways, and it performs great!! You can see in the video how well it flies. As usual the Scottish weather did not help, but this vehicle was able to fly in rainy conditions and strong wind gusts with ease.

 

Video: