Archive for the Python RULES! Category

So far so good

Posted in MQTT, node-red, Orange Pi, Python RULES! on August 2, 2017 by asteriondaedalus

OPiZ is still up.  So I have avoided the hiccup of the ethernet dropping out, somehow.  I am always queasy when a problem just “disappears”, but small mercies, right?

The only thing I noted was that, whenever the system dropped its ethernet, I did not have a console connected.  So I have shut down the TeraTerm session that was watching the memory usage, and left the device running with the node-red and python test harnesses going.

We may be good now.   I hope I have properly documented the steps I went through.


Stereo slam drunkity dunk

Posted in Python RULES!, The downside of Opensource, Vision on June 11, 2017 by asteriondaedalus

Ah well, so some tricks when using the disparity functions for generating stereo depth.  The frame from each camera needs to be converted from color to grayscale.

[image: stereo_slam3 – the disparity map output]

I have a suspicion the output is knobbled because I may need to do something about calibrating this stereo pair first.  There are also processing bin sizes I can play with.  However, this is an “interesting” way of getting stereo depth, but I am just buzzing out the setup using Python, OpenCV and the new camera.
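
For the record, the guts of the buzz-out harness is only a handful of lines.  A minimal sketch – the camera indexes are whatever the OS hands out, and the numDisparities/blockSize values are just my first stab at the “bin sizes”, not tuned numbers:

    import cv2

    # open both halves of the stereo pair (device indexes are guesses)
    left_cam = cv2.VideoCapture(0)
    right_cam = cv2.VideoCapture(1)

    got_l, left = left_cam.read()
    got_r, right = right_cam.read()

    # the disparity functions want single-channel input, hence the
    # colour-to-grayscale conversion mentioned above
    gray_l = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gray_r = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

    # numDisparities must be a multiple of 16 and blockSize odd --
    # these are the processing bin sizes to fiddle with
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(gray_l, gray_r)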

The other quirk: I seem to need to use matplotlib to draw the disparity map (it being grayscale?).  I tried reversing the cvtColor function to go back to BGR – since the plot function of matplotlib holds the processing loop (I have to hit the window close button to loop).  Something to sort if I wanted motion displayed.
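
One likely way around matplotlib holding the loop (a sketch, assuming the 16-bit output from the StereoBM snippet above): squash the disparity down to 8-bit and let cv2.imshow do the drawing, since waitKey(1) returns straight away:

    import cv2

    # 'disparity' is the StereoBM output from the snippet above;
    # it comes back as 16-bit signed, so normalise to 0..255 for display
    disp8 = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX, cv2.CV_8U)

    cv2.imshow("disparity", disp8)
    cv2.waitKey(1)  # returns immediately, so the capture loop keeps spinning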

Work to go then, sort out stereo camera calibration, then try the disparity code again.

Stereo slam dunk

Posted in Python RULES!, Sensing, The Downside of software development, Vision on June 11, 2017 by asteriondaedalus

With some pain I got the stereo camera that turned up the other day from aliexpress to work (provisionally).

[image: stereo_slam1 – both camera feeds running]

This is on my windoze PC using 64-bit Stackless Python and OpenCV 3.2.

The trick that stopped me for two days was working out the problem where one or the other camera would work, but both together hung.  I would swap the order and get the same thing.

Turned out to be USB 2.0 choking.  So the fix was to work out how to set the image size small enough for the two camera streams to cooperate on the one USB port.
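
In OpenCV terms, the fix is just asking each capture for a smaller frame before you start reading.  A sketch – 320×240 is my guess at a size two MJPEG streams can share the bus at, your mileage may vary:

    import cv2

    left_cam = cv2.VideoCapture(0)
    right_cam = cv2.VideoCapture(1)

    # drop the resolution so both streams fit down the one USB 2.0 port
    for cam in (left_cam, right_cam):
        cam.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
        cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)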

Camera is this one:

[image: stereo_slam2 – the camera module]

Which has specs of: 1280*720 MJPEG 30fps 120 degree dual lens USB camera module, HD CMOS OV9712.  Which is, as it turns out, a lie in this configuration.  The device is USB 2.0 so it will choke when trying to pump both streams through at the same time.  Some work will be needed to sort out the maximum resolution that the cameras can be set to – there is likely some black magic math somewhere (or trial and error).
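
The back-of-envelope version of the black magic math, assuming the cameras fall back to uncompressed YUYV (2 bytes per pixel) when both streams are open – an assumption on my part, not something in the spec sheet:

    # rough bandwidth estimate, assuming uncompressed YUYV at 2 bytes/pixel
    width, height, fps, cameras = 1280, 720, 30, 2
    mbits = width * height * 2 * 8 * fps * cameras / 1e6
    print(mbits, "Mbit/s")  # ~885 Mbit/s

    # USB 2.0 tops out at 480 Mbit/s (real throughput is lower still),
    # so full resolution from both cameras at once was never going to fly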

I haven’t used much science in the selection (I waited until prices dropped and grabbed the lowest priced one at the time).  I opted for the wider field of view because I suspect it creates greater disparity between points to help localisation – however, don’t quote me, as that is not backed up by any reading at the moment.

The hangup, at the moment, is that while the two cameras are working, OpenCV does rather have various matrix types, and so the rotten thing has (as usual) “thin” or sporadic documentation.

If you find any “help” it will be using deprecated functions (from previous versions of OpenCV) or be in C++ etc.

Even just a disparity map, which uses the stereo image to show depth planes, needs matrix conversions.

Still, once these are worked out I can buzz out a design on the PC before migrating to an embedded form factor (C.H.I.P., ODROID-C0 or Orange Pi Zero, perhaps even an old Android phone).

I am after something to pump a point cloud out.  Using mono-slam is fun, but I am not sure that having to get the camera video processing and platform pose working together is the happiest medium – especially since people are helping out with stereo cameras like this one.

Talk about Serendipity

Posted in Python RULES!, Rant on December 8, 2016 by asteriondaedalus

So, I am writing another paper to see if I can get to a conference, next year, in Italy.

I was chasing a comment made by Jeff Sutherland around software quality stats kept at Palm Inc. that (apparently) showed how, if you fixed a bug in your code 3 weeks after you found it, instead of in the first hour after you found it, you were more or less guaranteed to take 24 times longer to fix it.

Now, I was interested in tracking down any likely papers from Palm Inc on the subject.

Go figure: the guy who invented the Palm Pilot (I had one, I saw their potential … at the time at least) has gone into AI with numenta.org, a python brain thingy.

So there is some solace now with the Curie “defunctionality” – note I claim authorship of this word!

It can mean delivery of dead functionality.

We can muse over whether that is deliberate or unintentional.

But I digress.

I wonder how NuPIC compares with TensorFlow?

NuPIC is interesting because of the claim it is embeddable.

Hmmm.

 

Winding down

Posted in Open Source can be professional, Python RULES! on November 22, 2016 by asteriondaedalus

[image: ipython]

Now this is what I am talking about.

See Claus for SLAM videos using Python whohoooo!

Get yourself some ipython so you can get access to the web-based notebook.

Easy enough to embed Claus’ videos in the notebook page and then doodle away learning SLAM in Python in ipython notebooks, because you can.
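
By way of example – the video id below is a placeholder, not Claus’ actual one – embedding a YouTube video in a notebook cell is a one-liner:

    # in an ipython notebook cell: embeds the player right in the page
    from IPython.display import YouTubeVideo
    YouTubeVideo("VIDEO_ID")  # swap in the id of the SLAM lecture video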

Finally!

Posted in Agent, Erlang, Python RULES!, RabbitMQ, XMPP on June 8, 2015 by asteriondaedalus

So, sorted the BlackWidow and likely the YellowJacket connectivity.  The socket approach seems to be the better option, and that is TCP-based, so porting of mqtt code might still be possible – although the application is simple enough that a Python adaptor on the host, between the boards and the mqtt server, would do just as well.
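
Something like this is all the adaptor would need to be.  A sketch only – the port, topic name and paho-mqtt as the client library are my assumptions, and there is no framing or error handling:

    import socket
    import paho.mqtt.client as mqtt

    # hypothetical settings -- adjust for the real broker and boards
    BROKER, TOPIC, LISTEN_PORT = "localhost", "widow/telemetry", 9000

    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.loop_start()

    # accept a TCP connection from the board and relay each chunk as-is
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", LISTEN_PORT))
    server.listen(1)
    conn, addr = server.accept()

    while True:
        data = conn.recv(1024)
        if not data:
            break
        client.publish(TOPIC, data)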

Now, the other side is that I opted to move from emqttd to RabbitMQ as the mqtt broker.  This is because there is likely a way to get RabbitMQ talking to Spade agents via XMPP (with some work).  This might be a way of integrating Profeta agents into the picture without the deep integration into Spade that I have currently been investigating.

Not to mention likely being able to use the same inter-agent communication for eJason-based experiments.

Essentially using mqtt as a ‘blackboard’.
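
The blackboard part falls out of MQTT’s retained messages – publish with retain set and any agent that subscribes later still sees the last value, like reading a slot off a blackboard.  A sketch, with a made-up topic:

    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("localhost", 1883)

    # retain=True keeps the last value on the broker, so an agent
    # subscribing to blackboard/pose later gets it immediately
    client.publish("blackboard/pose", "x=1.0,y=2.0", retain=True)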

And they’re OFF!!!!!

Posted in Python RULES!, Uncategorized on November 13, 2014 by asteriondaedalus
[image: muybridge_GallopingHorse – Muybridge’s galloping horse sequence]

This is a famous photograph.  More or less the invention of the moving pictures that we all know and love.  Look at it, scroll your mouse wheel up and down quickly – just a quarter of a turn to go back in history two turns of a century.

So, FINALLY got the call from the nephew.

Project has changed somewhat.

The original thought was a touch screen monitor based kiosk for students to pick up daily messages held in a csv file somewhere on the network.

The change was to use Android tablets (must grate, as me nephew is a true blue Appleyte), and to parse data out of an html dump!

Parse data out of an html dump?

Turns out the skool system can turn out a webpage report (so I am a little dim as to why they need to do much more than set up a browser looking at that webpage).  Anyway, my nephew is excited to be working with his uncle (and I needed an excuse to buy four odroid-w, the wife you know).

So, his teacher pointed him at BeautifulSoup, which is a python library for sucking data out of html files.

Beautiful is right.

Here are our first experiments (the whoopses excluded):

[image: screenshot of our first experiments]

Basically, if there are tables within the html, the code can pick them out.  He had the idea himself of just reusing the html for the table in our file – brilliant.  So, the only problem is that other data is buried in nondescript <p> tags.
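
The table-picking bit is only a few lines.  A sketch – the file name is made up, and whether those <p> tags have any usable structure is the open question:

    from bs4 import BeautifulSoup

    # parse the school system's html dump (file name is hypothetical)
    with open("daily_messages.html") as f:
        soup = BeautifulSoup(f, "html.parser")

    # pick out every table and reuse its html verbatim -- the nephew's idea
    for table in soup.find_all("table"):
        print(table)

    # the rest of the data hides in nondescript <p> tags; this just dumps them
    for p in soup.find_all("p"):
        print(p.get_text())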

Will need to sleep on that.