Archive for the Vision Category

Stereo slam drunkity dunk

Posted in Python RULES!, The downside of Opensource, Vision on June 11, 2017 by asteriondaedalus

Ah well, so some tricks when using the disparity functions for generating stereo depth.  The frame from each camera needs to be converted from color to grayscale.


I have a suspicion the output is nobbled because I may need to do something about calibrating this stereo pair first.  There are also processing block sizes I can play with.  However, this is an “interesting” way of getting stereo depth, but I am just buzzing out the setup using Python, OpenCV and the new camera.

The other quirk: I seem to need to use matplotlib to draw the disparity map (because it’s grayscale?).  I tried reversing the cvtColor function to go back to BGR – since the plot function of matplotlib holds up the processing loop (I have to hit the window close button to loop).  Something to sort out if I want motion displayed.

Work to go then, sort out stereo camera calibration, then try the disparity code again.

Stereo slam dunk

Posted in Python RULES!, Sensing, The Downside of software development, Vision on June 11, 2017 by asteriondaedalus

With some pain I got the stereo camera that turned up the other day from AliExpress to work (provisionally).


This is on my windoze PC using 64bit Stackless Python and OpenCV 3.2.

The trick that stopped me for two days was working out the problem where one camera or the other would work, but both together hung.  I would swap the order and get the same thing.

Turned out to be USB 2.0 choking.  So the fix was to work out how to set the image size small enough for the two camera streams to cooperate on the one USB port.

Camera is this one:


Which has specs of: 1280*720 MJPEG 30fps 120 degree dual lens USB camera module, HD CMOS OV9712.  Which is, as it turns out, a lie in this configuration.  The device is USB 2.0 so it will choke when trying to pump both streams through at the same time.  Some work will be needed to sort out the maximum resolution that the cameras can be set to – there is likely some black magic math somewhere (or trial and error).
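
Some of that black magic math, roughly.  This assumes worst-case uncompressed frames and a ballpark 40 MB/s usable budget out of USB 2.0’s 480 Mbit/s wire speed; MJPEG compression muddies the real numbers, but the shape of the problem holds:

```python
# Back-of-envelope bandwidth check, worst case (uncompressed frames).
USB2_BUDGET = 40e6  # ~40 MB/s usable, a rough assumption

def stream_bytes_per_s(width, height, fps, bytes_per_pixel=2):  # YUYV = 2 B/px
    return width * height * bytes_per_pixel * fps

both_720p = 2 * stream_bytes_per_s(1280, 720, 30)  # two full-res streams
both_qvga = 2 * stream_bytes_per_s(320, 240, 30)   # two shrunken streams

print(both_720p / 1e6, "MB/s")  # way over budget
print(both_qvga / 1e6, "MB/s")  # fits comfortably
```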

I haven’t used much science in the selection (I waited until prices dropped and grabbed the lowest-priced one at the time).  I opted for the wider field of view because I suspect it creates greater disparity between points to help localisation – however, don’t quote me, as that is not backed up by any reading at the moment.

The hangup, at the moment, is that while the two cameras are working, OpenCV has its various matrix types and, rotten thing that it is (as usual), “thin” or sporadic documentation.

If you find any “help” it will be using deprecated functions (from previous versions of OpenCV), or be in C++, etc.

Even just a disparity map, which uses the stereo images to show depth planes, needs matrix conversions.
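
For what it’s worth, one conversion that matters: the block matchers hand back fixed-point int16 with the disparities scaled up by 16, so you divide down before using them.

```python
import numpy as np

# Toy values standing in for StereoBM/StereoSGBM output, which is
# fixed-point int16 with disparities scaled up by 16.
disparity = np.array([[160, 320], [-16, 64]], dtype=np.int16)

# Convert before doing any depth maths with it.
true_disp = disparity.astype(np.float32) / 16.0
```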

Still, once these are worked out I can buzz out a design on the PC before migrating to an embedded form factor (C.H.I.P., ODROID-C0 or Orange Pi Zero, perhaps even old Android phone).

I am after something to pump out a point cloud.  Using mono-SLAM is fun, but I am not sure that having to get the camera video processing and platform pose working together is the happiest medium – especially since people are helping out with stereo cameras like this one.

BOOF working on Processing for Android!

Posted in Android, Processing, Vision on July 31, 2015 by asteriondaedalus

A little work, a little sweat, and the help of Peter Abeles (the author of BOOFCV), and BOOFCV can compile in Processing for Android.

The fix is to break into the boofcv_dependencies.jar and delete the xmlpull entry under: boofcv_dependencies/org

The reason: the library is already being pulled in from somewhere else in the build, and the build is not smart enough to ignore a replicated library and simply cracks up.
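
The same surgery, sketched in Python with the standard zipfile module (the entry names below are illustrative stand-ins, not the real jar contents):

```python
import zipfile

# Build a stand-in jar with a duplicated xmlpull entry; in real life the
# target is boofcv_dependencies.jar and the layout will differ.
with zipfile.ZipFile("deps.jar", "w") as jar:
    jar.writestr("org/xmlpull/v1/XmlPullParser.class", b"duplicate")
    jar.writestr("org/other/Keep.class", b"keep")

# The fix: rewrite the jar, skipping everything under org/xmlpull.
with zipfile.ZipFile("deps.jar") as jin, \
        zipfile.ZipFile("deps_fixed.jar", "w") as jout:
    for item in jin.infolist():
        if not item.filename.startswith("org/xmlpull/"):
            jout.writestr(item, jin.read(item.filename))
```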

Simple fix.

Too early to know whether there will be any side effects – perhaps now in the Java mode of the Processing IDE, since this fix appears to correct a problem in the Android mode.

In any event, we can move on with the experiments. I have quite a few android based toys with cameras after all.

JXD S7800B

water proof phone

android webcam

Posted in Android, Processing, Vision on July 20, 2015 by asteriondaedalus

Okay, so a little fiddling and changing a couple of set parameters and I have code running on my Samsung S2 that will unwrap a Bloggie lens.

Based on code by Flong, but running against the camera and not a saved image.

Now a bit of work to port ideas from RoboRealm to turn this into an obstacle avoidance sensor.  Will have to find or code the related image processing in Processing or Java.  Although, I might have found the best library in BOOF!  BOOF has a Processing library (just now installed on my machine), camera calibration, structure from motion (OMG!), and fiducials (read: markers) (OMG!).

So, ready, steady GO!

Try, try again.

Posted in Chase That Dog!!, Doodling, Vision on July 19, 2015 by asteriondaedalus

After breaking my Processing for Android installation, I opted to rip out the Android SDK, rip out Processing, and delete all remaining files, folders and environment variables.

I then re-installed Processing and let the fresh Android Mode install the Android SDK (rather than hooking into my own installation).

A little wrestling with the permissions on the sketch for the camera and I was (finally) able to get both my front and my back camera running on my Samsung S2.

Now we are away with modifying a Bloggie lens to sit on the phone and using the Processing unwrapping of the Bloggie image to knock up a collision sensor.


One snazzy sensor

Raspberry Pirates are GO!

Posted in Raspberry Pirate, Vision on November 10, 2014 by asteriondaedalus
Battery charged … check.  Click on the picture.  Go on.  Look at it close up.  I am spun out at the SD card size in comparison with the board.

With my USB mouse and keyboard from the Linux setup downstairs.  A brand spanking new black Ethernet cable plugged into my wireless extender.  Lights a-flickering and:

A pretend Pi! (aka Raspberry Pirate Arrrgh!)

So, I was curious, having been lazy and buying an SD card from Hardkernel, but it was Raspbian…phew.

So I downloaded bottlepy (the web framework) and:

Hello Nephew!

No biggy but now I wait with bated breath for him to catch up.

In the meantime, I am writing up Part 1 of LAB2 of the FPGA laboratories … slowly, as I am deep in my Masters dissertation at the moment.

Also wik, I am looking at Processing on Android to build my 3D sensors (both line laser and structured light).  I have (almost) sorted the line laser driver, which will run off a Seeeduino Film with Bluetooth.

I decided to switch because there is already a structured light program running in Processing and I have dibs on a LED pico projector from China for around $100.

Not to mention the Bloggie omni-lens demo code is in Processing (and you’ll recall I have three of those suckers now 😉) – yes Yes YES!

Now that’s an idea!

Posted in Android, Arduino, Sensing, Vision on May 2, 2014 by asteriondaedalus


Feast your eyes!

The beautiful accidents, right?

The GripGo base, once I ripped the suction cup off (it broke off without any actual resistance), was a couple of millimetres larger in diameter than a circular “ledge” on the back of the indoor rover, so a liberal smear of epoxy and voila!!  An adjustable mount for my old Samsung SII.

Do notice the omnidirectional lens on the top.  That is a “spare” that turned up.  Cost $60 but came with an old knock about bloggie which I will find some other use for.

Unfortunately the second Bloggie fell through; the poor dear at the other end of the interweb was too dull to know how to mail something interstate – oddly, this one comes from twice as far away.  Can’t fathom it, but there are still Luddites around.

Anyway, I have two of these lenses as well as the “periscope” version (for which I am waiting on the USB catheter camera).  So, plenty if you ask me.

I will need to get a student copy of Matlab to use a fancy calibration library which looks like it lifts any distortion out of the image once it is unwrapped.  Even if that falls through, a simple wall detector will allow this beast to avoid bumping into walls – we hope … mwahahahahaha!

Recall the phone runs S4A and Python, so while the application will be in C++ or Android, it can ping a socket with steering directions – or that is the plan.

Beauty is that the adjustable holder allows other configurations when other algorithms and vision approaches are being played with.

Nope, same Rover.