Computer Vision Talk

What the funk?

 

On my most recent visit to Mexico, a very dear friend of mine, Rolis, invited me (Aldux) to give a talk about computer vision and the applications I usually use it for, which are, of course, drones!

Needless to say, I had a blast!! I talked a lot about the computer vision slung-load technique that I used in my PhD thesis, as well as the cool project I developed as a Postdoc at the University of Oxford, the Kingbee project!

Also, a week before the talk, I added some new scripts to my popular computer vision repository; these relate to Haar cascades.

Slung load recreation with microphone

One of the most interesting new items is a script that detects cars from a webcam; the webcam in question is an open traffic IP camera somewhere in the USA. The most complex part of writing this script was opening the image stream correctly; after that, the Haar cascade is very easy to implement. I took the trained XML files from other repositories similar to mine (proper source credit is given, of course).

You can see it in action here:

https://github.com/alduxvm/rpi-opencv/blob/master/car-detection-stream.py
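In essence, the script boils down to opening the stream with OpenCV and running a Haar cascade on every frame. Here is a minimal sketch of the idea, not the exact code in the repo; the stream URL and the cars.xml cascade file are placeholders you would swap for real ones:

import cv2

# Placeholder stream URL and cascade file; the real ones live in the
# car-detection-stream.py script linked above.
STREAM_URL = "http://example.com/traffic-cam/video.mjpg"
CASCADE_FILE = "cars.xml"  # trained Haar cascade for cars

car_cascade = cv2.CascadeClassifier(CASCADE_FILE)
cap = cv2.VideoCapture(STREAM_URL)  # OpenCV can open MJPEG/RTSP streams directly

while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped or ended

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns one (x, y, w, h) bounding box per detection
    cars = car_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

    for (x, y, w, h) in cars:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("car detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()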

 

Another cool script is one that detects people (the full body), again using Haar cascades. This one reads from an IP camera in Spain; you can see it in action here:

https://github.com/alduxvm/rpi-opencv/blob/master/haar-detection-stream.py
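The structure is the same as the car detector; only the cascade changes. A minimal sketch, assuming your OpenCV install exposes cv2.data with the stock haarcascade_fullbody.xml (the camera URL is again a placeholder):

import cv2

# haarcascade_fullbody.xml ships with OpenCV; cv2.data.haarcascades points at it
# (assuming a recent opencv-python build).
body_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

cap = cv2.VideoCapture("http://example.com/spain-cam/video.mjpg")  # placeholder URL

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # full bodies are tall and thin, so a minimum size helps reject noise
    bodies = body_cascade.detectMultiScale(gray, 1.1, 3, minSize=(40, 80))
    for (x, y, w, h) in bodies:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow("people detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()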

 

The talk was given at my friend's company, called Funktionell, which is a pretty cool place, full of gadgets, electronics, 3D printers, engineers, programmers and designers!! Their statement is:

We are a technology company dedicated to creating unforgettable digital experiences. Driven by innovation and the curiosity to break paradigms, we mix technology with imagination to reach results of incredible quality.

Finally, the video of the entire talk was posted on Funktionell's Facebook page; it can be seen here:

Computer Vision

Welcome to #CodeMeetsFunk
We talk about computer vision applied to face detection, colours and drone control.
Speakers: Gustavo Heras and Dr. Aldo Vargas

Posted by Funktionell on Saturday, April 1, 2017

 

Aftermath
Pixy tracking an object

Quick video showing how I teach an object to the Pixy cam…

Pixy is the current CMUcam, in its version 5… I bought one a long time ago, maybe version 3, but I never had the time to use it. This new one is easy to use (once you understand it) and has very nice computer vision features already embedded…

This version of the CMUcam has an NXP LPC4330 dual-core processor running at 204 MHz, actually very meaty 😛, and an Omnivision OV9715 1/4″, 1280×800 image sensor.

The most important part is that they say it processes an image every 20 milliseconds… so Pixy processes an entire 640×400 image frame every 1/50th of a second, i.e. 50 frames per second. This could work perfectly onboard one of my vehicles, right?? Keep posted 😉
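For what it's worth, the arithmetic behind that claim is easy to sanity-check with a few lines of Python:

# Back-of-the-envelope check of the quoted Pixy numbers
frame_time_s = 0.020            # 20 ms per processed frame, as quoted
fps = 1 / frame_time_s          # -> 50 frames per second
pixels_per_frame = 640 * 400    # processed resolution
pixel_rate = fps * pixels_per_frame
print(fps, pixel_rate)          # 50.0 frames/s, 12,800,000 pixels/s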

This was the object being taught to the Pixy:

iPhone plastic speaker

And this was the result:

 

Actually very, very fast… it is important to notice how it keeps tracking the object regardless of its orientation.