Archive for the Processing Category

OMG!

Posted in Processing, Prototyping on August 23, 2015 by asteriondaedalus

I have IntelliJ IDEA on my desktop; I sort of flick between it, Eclipse JDT, and so on.  I had actually pulled it up to use it for my Elixir and Erlang programming.

Turns out BoofCV has a Gradle build and is IDEA friendly so I thought I would poke around and make a few changes.

One thing I wanted to do was look at integrating sarxos webcam-capture with BoofCV, and I was poking around trying to work out how to do that.  Well, turns out there is a folder in the BoofCV project in IDEA that already does this, so I was able to run a couple of feature and object tracking examples off my Bloggie.

I did note they were using the 0.3.10 version of webcam-capture, so the “get camera by name” method was missing, and I had a look at how to add it in IntelliJ IDEA.  I will have to get a Gradle book, but it looks like an xml file pulls jar-based libraries into a cache.  I did the change the dirty, sneaky way: I copied 0.3.11 over 0.3.10 in the cache to see if that would let me play.  It did, and I modified a BoofCV class to allow for named webcams.

I will still need to read up on Gradle to work out WTFO on the URL and the URI that seem embedded in it.  Might not be straightforward, as 0.3.11 does not appear in the main or tagged branches yet (you can, however, download a jar with that version from the sarxos site).
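The cleaner route than overwriting jars in the cache would be to declare the newer version as a Gradle dependency.  A sketch of what that might look like — the artifact coordinates are the published sarxos ones, but the snapshot repository URL and version string are assumptions, and BoofCV's actual build files may organise this differently:

```groovy
repositories {
    mavenCentral()
    // development (snapshot) builds are assumed to live here
    maven { url 'https://oss.sonatype.org/content/repositories/snapshots/' }
}

dependencies {
    compile 'com.github.sarxos:webcam-capture:0.3.11-SNAPSHOT'
}
```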

So, will now look at prototyping the omnivision collision detection and fuzzy steering on my PC before porting to Processing for Android.

I am feeling it is all doable.

Getting webcams working in Processing…

Posted in Processing, The downside of Opensource on August 22, 2015 by asteriondaedalus

… when processing video library does not help.

Some webcams don’t appear to work with Processing video library.

When you use the default example, you will get the camera listed, but it will never become available, nor will it raise a capture event.
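For reference, the failing pattern is the stock video-library idiom, something like this (a sketch based on the standard example; nothing camera-specific):

```java
import processing.video.*;

Capture cam;

void setup() {
  size(320, 240);
  printArray(Capture.list());        // the camera shows up in this list...
  cam = new Capture(this, 320, 240);
  cam.start();
}

void draw() {
  if (cam.available()) {             // ...but available() never turns true
    cam.read();
  }
  image(cam, 0, 0);
}
```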

[image: processingvideo]

Having poked around, the solution I found was to go to:

http://webcam-capture.sarxos.pl/

Download: webcam-capture-0.3.10-dist.zip

From the zip file above, unzip: bridj-0.6.2.jar and slf4j-api-1.7.2.jar

Grab the latest development jar for webcam-capture from here, as the one in the zip is missing the get-camera-by-name function.  The jar at the link (when I looked last) was webcam-capture-0.3.11-20150713.101234-10.jar

Create a sketch and add all three jars to the sketch using the Sketch → Add File menu:

[image: addfile]

Enter the following code:

[image: helpt]
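The code screenshot above hasn't survived, so here is a minimal reconstruction of the idea: open a webcam by name with webcam-capture and draw its frames.  It assumes Webcam.getWebcamByName() from the 0.3.11 development jar, and the conversion to PImage is my own guess, not the original code:

```java
import com.github.sarxos.webcam.Webcam;
import java.awt.image.BufferedImage;

Webcam webcam;

void setup() {
  size(320, 240);
  // getWebcamByName() is only in the 0.3.11 development jar
  webcam = Webcam.getWebcamByName("MHS-FS2 0");  // replace with your camera name
  webcam.open();
}

void draw() {
  BufferedImage frame = webcam.getImage();
  if (frame == null) return;
  // copy the frame's packed ARGB pixels straight into a PImage
  PImage img = new PImage(frame.getWidth(), frame.getHeight(), RGB);
  frame.getRGB(0, 0, img.width, img.height, img.pixels, 0, img.width);
  img.updatePixels();
  image(img, 0, 0);
}
```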

When you run it you should get a small window, or at least I did.

Replace my camera name with yours.  Mine is “MHS-FS2 0”.

To find the name and index for your camera, use the following:

[image: helpt2]
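That listing screenshot is also gone; the equivalent sketch is just a loop over Webcam.getWebcams() (a hedged reconstruction, not the original code):

```java
import com.github.sarxos.webcam.Webcam;

void setup() {
  int i = 0;
  for (Webcam w : Webcam.getWebcams()) {
    // getName() typically already ends in the index, e.g. "MHS-FS2 0"
    println(i + ": " + w.getName());
    i++;
  }
  noLoop();
}
```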

The camera name is your “camera name” + ” ” + “index”.

Note the index will not necessarily be the same each time (if you have multiple cameras), as they might be found by the system in a different order each time you start up.

There are multiple Java examples at the author’s websites, at either:

http://webcam-capture.sarxos.pl/

or

https://github.com/sarxos

Note you can get the image with:

BufferedImage image = webcam.getImage();

But you will have to convert it to PImage if using Processing window thingies.

Suspicion is currently that one uses the BufferedImage.getRGB() method with the PImage.pixels[] array. This assumes PImage.pixels[] is an RGB array.  There is some example Processing code which seems to confirm this, so fingers crossed.
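That suspicion can be sketched in plain Java (leaving PImage out so it runs outside Processing; in a sketch you would copy the array into PImage.pixels and call updatePixels()).  The helper name is mine:

```java
import java.awt.image.BufferedImage;

public class Convert {
    // Copy a BufferedImage into a flat int array, the same packed
    // 0xAARRGGBB layout Processing uses for PImage.pixels.
    static int[] toPixels(BufferedImage img) {
        int w = img.getWidth();
        int h = img.getHeight();
        int[] pixels = new int[w * h];
        // getRGB packs each pixel as 0xAARRGGBB, row by row
        img.getRGB(0, 0, w, h, pixels, 0, w);
        return pixels;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_RGB);
        img.setRGB(1, 0, 0x00FF00);  // pure green at (1, 0)
        int[] px = toPixels(img);
        // pixel (1, 0) is index y * width + x = 1
        System.out.println(Integer.toHexString(px[1] & 0xFFFFFF)); // ff00
    }
}
```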

In the meantime …

Posted in Doodling, Processing on August 14, 2015 by asteriondaedalus

… it occurred that fuzzy controllers might be possible in Processing if a fuzzy library was imported.  I opted to try out jFuzzylite as it appeared more compact than jFuzzyLogic.

I built the jar using ant and then imported it into a Processing sketch.  Just use “Add file” menu thus:

[image: addfile]

The modified example is coded below in Processing.  The quirk is that it opens a small window (as expected) but fills the console with window-event logging, as well as the intended printout at the bottom.  So it needs some investigating, but the fuzzy logic works.

jFuzzylite is also supposed to work with Android so I will switch the Android mode in and see what happens.  The fuzzy logic will be useful for the omni sensor app as the steering may need to be fuzzy to take account of the possible “noise” in the visual field.

Looking good.  Especially as it has a visual tool to design the fuzzywuzzyness that will export to various formats.

[image: qtfuzzy]

Very neat.

 __________________________________________

import com.fuzzylite.Engine;
import com.fuzzylite.FuzzyLite;
import com.fuzzylite.Op;
import com.fuzzylite.defuzzifier.Centroid;
import com.fuzzylite.imex.FldExporter;
import com.fuzzylite.norm.s.Maximum;
import com.fuzzylite.norm.t.Minimum;
import com.fuzzylite.rule.Rule;
import com.fuzzylite.rule.RuleBlock;
import com.fuzzylite.term.Triangle;
import com.fuzzylite.variable.InputVariable;
import com.fuzzylite.variable.OutputVariable;

Engine engine = null; 
OutputVariable power = null;
RuleBlock ruleBlock = null;
InputVariable ambient = null;

void setup() {
   engine = new Engine();
   engine.setName("simple-dimmer");
   
   ambient = new InputVariable();
   ambient.setName("Ambient");
   ambient.setRange(0.000, 1.000);
   ambient.addTerm(new Triangle("DARK", 0.000, 0.250, 0.500));
   ambient.addTerm(new Triangle("MEDIUM", 0.250, 0.500, 0.750));
   ambient.addTerm(new Triangle("BRIGHT", 0.500, 0.750, 1.000));
   
   engine.addInputVariable(ambient);
 
   power = new OutputVariable();
   power.setName("Power");
   power.setRange(0.000, 1.000);
   power.setDefaultValue(Double.NaN);
   power.addTerm(new Triangle("LOW", 0.000, 0.250, 0.500));
   power.addTerm(new Triangle("MEDIUM", 0.250, 0.500, 0.750));
   power.addTerm(new Triangle("HIGH", 0.500, 0.750, 1.000));
 
   engine.addOutputVariable(power);

   ruleBlock = new RuleBlock();
   ruleBlock.addRule(Rule.parse("if Ambient is DARK then Power is HIGH", engine));
   ruleBlock.addRule(Rule.parse("if Ambient is MEDIUM then Power is MEDIUM", engine));
   ruleBlock.addRule(Rule.parse("if Ambient is BRIGHT then Power is LOW", engine));
   
   engine.addRuleBlock(ruleBlock);
 
   engine.configure("", "", "Minimum", "Maximum", "Centroid");
 
   noLoop();
}
void draw() {
   StringBuilder status = new StringBuilder();
   
   if (!engine.isReady(status)) {
      throw new RuntimeException("Engine not ready. "
         + "The following errors were encountered:\n" + status.toString());
   }

   for (int i = 0; i < 50; ++i) {
      double light = ambient.getMinimum() + i * (ambient.range() / 50);
      ambient.setInputValue(light);
      engine.process();
      println(String.format( "Ambient.input = %s -> Power.output = %s",
      Op.str(light), Op.str(power.getOutputValue())));
   }
}

BOOF working on Processing for Android!

Posted in Android, Processing, Vision on July 31, 2015 by asteriondaedalus

A little work, a little sweat, and the help of Peter Abeles (the author of BoofCV), and BoofCV can now compile in Processing for Android.

The fix is to break into boofcv_dependencies.jar and delete the xmlpull entry under boofcv_dependencies/org.

The reason: the library is already being pulled in from somewhere else in the build, and the build is not smart enough to ignore a duplicated library; it simply cracks up.

Simple fix.

Too early to know whether there will be any side effects, perhaps now in the Java mode of the Processing IDE, since this fix targets a problem in the Android mode.

In any event, we can move on with the experiments. I have quite a few Android-based toys with cameras, after all.

[images: JXD S7800B, waterproof phone, Android webcam, Cyclops!, Measy]

Bonus!

Posted in IOT, MQTT, Processing, Robotics, ROS on July 23, 2015 by asteriondaedalus

I found that Processing has a library for talking to MQTT so …

Will make sense when I get the 360-degree obstacle thingy going, as I should be able to send “left a bit” … “right a bit” messages to the MQTT server etc.
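The Processing MQTT library in question isn't named above, so as a stand-in here is the same idea with the Eclipse Paho Java client; the broker URL and topic are made up for illustration:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class Steer {
    public static void main(String[] args) throws Exception {
        // hypothetical local broker and topic name
        MqttClient client = new MqttClient("tcp://localhost:1883", "omni-sensor");
        client.connect();
        // publish a steering hint, e.g. from the collision detector
        client.publish("robot/steer", new MqttMessage("left a bit".getBytes()));
        client.disconnect();
    }
}
```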

MQTT has much the same topic-based thingy as ROS, yes?  Well, sort of.  But golly, to be free of ROS, if only in so far as you can work off the IOT as well; and there is likely a ROS-MQTT bridge in any event.

Better yet, it works on Processing for Android! Whooo hooo!

Now, I am in comms with the author of BoofCV as we type here, to sort out why the BoofCV examples are not compiling in Android Mode.  Hmmm.

Voila!

Posted in Android, Processing, Vision on July 20, 2015 by asteriondaedalus

Okay, so a little fiddling and changing a couple of set parameters, and I have code running on my Samsung S2 that will unwrap a Bloggie lens.

Based on code by Flong but running against camera and not a saved image.
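The unwrap itself is just a polar-to-rectangular remap: each output column is an angle around the donut-shaped lens image, each output row a radius.  A stripped-down plain-Java version of the idea (parameter names and the nearest-neighbour sampling are mine, not Flong's code):

```java
import java.awt.image.BufferedImage;

public class Unwrap {
    // Remap the donut-shaped Bloggie image to a rectangular panorama:
    // column x is an angle, row y a radius between rInner and rOuter
    // around the donut centre (cx, cy).
    static BufferedImage unwrap(BufferedImage src, double cx, double cy,
                                double rInner, double rOuter, int outW, int outH) {
        BufferedImage out = new BufferedImage(outW, outH, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < outW; x++) {
            double theta = 2 * Math.PI * x / outW;
            for (int y = 0; y < outH; y++) {
                // top row samples the outer ring, bottom row the inner ring
                double r = rOuter - (rOuter - rInner) * y / (outH - 1);
                int sx = (int) Math.round(cx + r * Math.cos(theta));
                int sy = (int) Math.round(cy + r * Math.sin(theta));
                if (sx >= 0 && sx < src.getWidth() && sy >= 0 && sy < src.getHeight()) {
                    out.setRGB(x, y, src.getRGB(sx, sy));  // nearest-neighbour sample
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage donut = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        BufferedImage pano = unwrap(donut, 50, 50, 20, 45, 360, 40);
        System.out.println(pano.getWidth() + "x" + pano.getHeight()); // 360x40
    }
}
```

In the sketch the same loop runs per frame against the camera image rather than a saved one.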

Now a bit of work to port ideas from RoboRealm to turn this into an obstacle-avoidance sensor.  I will have to find or code the related image processing in Processing or Java.  Although, I might have found the best library in BoofCV!  BoofCV has a Processing library (already just now installed on my machine), camera calibration, structure from motion (OMG!), and fiducials (read: markers) (OMG!).

So, ready, steady GO!