newspaint

Documenting Problems That Were Difficult To Find The Answer To

Monthly Archives: September 2015

JMeter – User Parameters or User Defined Variables?

You almost always want User Parameters, because these are recomputed for every thread and, optionally, for every iteration, whereas User Defined Variables are processed once at the start of the test and the same values are copied to every thread.

Getting Canon MF8540Cdn Colour Laser to Work with Ubuntu 14.04.3 Trusty Tahr 64-Bit

I downloaded Canon UFRII/UFRII LT Printer Driver for Linux v3.00 from Canon Europe’s website.

After unzipping the zip file I installed the Debian packages:

# sudo dpkg -i cndrvcups-common_3.10-1_amd64.deb cndrvcups-ufr2-uk_3.00-1_amd64.deb

But after adding my printer I got the following message every time I tried to print:

Idle - src = libcanon_pdlwrapper.c, line = 514, err = 0¥nDEBUG: PID 5537 (gs) exited with no errors.

I wondered what might be missing. So I ran:

ldd libcanonufr2.so.1.0.0
	linux-gate.so.1 =>  (0xf775b000)
	libEnoJBIG.so.1.0.0 => /usr/lib/libEnoJBIG.so.1.0.0 (0xf76ef000)
	libcanon_slim.so.1.0.0 => /usr/lib/libcanon_slim.so.1.0.0 (0xf76e8000)
	libxml2.so.2 => not found
	libEnoJPEG.so.1.0.0 => /usr/lib/libEnoJPEG.so.1.0.0 (0xf76b9000)
	libcaepcm.so.1 => /usr/lib/libcaepcm.so.1 (0xf7642000)
	libc.so.6 => /lib32/libc.so.6 (0xf7496000)
	libm.so.6 => /lib32/libm.so.6 (0xf7450000)
	libdl.so.2 => /lib32/libdl.so.2 (0xf744b000)
	libpthread.so.0 => /lib32/libpthread.so.0 (0xf742f000)
	librt.so.1 => /lib32/librt.so.1 (0xf7426000)
	/lib/ld-linux.so.2 (0xf775c000)

I already had libxml2.so.2 installed on the system at /usr/lib/x86_64-linux-gnu/libxml2.so.2 so why wasn’t it being found?

# file libcanonufr2.so.1.0.0
libcanonufr2.so.1.0.0: ELF 32-bit LSB  shared object, Intel 80386, version 1 (SYSV), dynamically linked, stripped

Ah, even though I installed the 64-bit printer driver, the libraries it installed were 32-bit. Now to install a 32-bit libxml2:

# sudo dpkg --add-architecture i386
# apt-get install libxml2:i386
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  gcc-4.9-base:i386 libc6:i386 libgcc1:i386 liblzma5:i386 zlib1g:i386
Suggested packages:
  glibc-doc:i386 locales:i386
Recommended packages:
  xml-core:i386
The following NEW packages will be installed:
  gcc-4.9-base:i386 libc6:i386 libgcc1:i386 liblzma5:i386 libxml2:i386
  zlib1g:i386
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,753 kB of archives.
After this operation, 12.1 MB of additional disk space will be used.

# service cups restart

The next problem was that my Printer State would say:

Processing - Connected to printer.

… but never print. So I navigated to http://localhost:631/jobs/ and found the following message:

processing since Sun 20 Sep 2015 05:21:40 PM BST "src = bidiCommon.c, line = 1088, err = 0¥nDEBUG: Connecting to x.x.x.x:9100

According to this post the bjnp back end was needed.

# sudo apt-get install cups-backend-bjnp

But then, when I tried to print a test page, a program called cnpkmoduleufr2 spun at 100% CPU.

So I installed libstdc++6:i386 as recommended at this page.

# sudo apt-get install libstdc\+\+6:i386
# sudo service cups restart

And now I can finally print. Let me just say: spending 2 hours of my weekend trying to print to my expensive colour laser printer has been a most unsatisfying waste of time. Canon pretends to make 64-bit drivers available but, in truth, they are old 32-bit relics from an age long since past. This was painful.

How Do I Get X11 Applications Running in a LXC Container?

This demonstration assumes Ubuntu 14.04.3 Trusty Tahr running LXC.

Firstly you will want to build your LXC container.

$ sudo lxc-create -n my-x-container -t ubuntu -- -r trusty
$ sudo lxc-start -d -n my-x-container
$ sudo lxc-ls -f
NAME            STATE    IPV4        IPV6  AUTOSTART  
----------------------------------------------------
my-x-container  RUNNING  10.0.3.101  -     NO         
$ ssh -X ubuntu@10.0.3.101 # will fail to forward X because no xauth

Before you can connect using ssh with X11 forwarding you will need the xauth package installed. Run the following command from within the container after attaching to the container's console:

$ sudo lxc-console -n my-x-container
# sudo apt-get install xauth

Note that you will need to press ctrl-A, Q to exit the console.

This is enough if you merely want to run xclock:

# sudo apt-get install x11-apps
$ ssh -X ubuntu@10.0.3.101
# /usr/bin/xclock

But you will also need dbus and dbus-x11 if you want to run Firefox:

# sudo apt-get install dbus dbus-x11
# sudo apt-get install firefox

Now you can run Firefox:

$ ssh -X ubuntu@10.0.3.101
# /usr/bin/firefox

How About Sound Over SSH/X11?

First, as your ordinary user (not root) on your desktop run the X application paprefs (from the paprefs package).

Navigate to the “Network Server” tab and select the “Enable network access to local sound devices” checkbox. Then close the window.

To confirm this is working, type xprop -root PULSE_SERVER on your desktop; you should see tcp:localhost:4713 listed, e.g.:

$ xprop -root PULSE_SERVER
PULSE_SERVER(STRING) = "{0e1ca025b25b2cccd20066e2c5f303c0}unix:/run/user/1000/pulse/native tcp:localhost:4713 tcp6:localhost:4713"

Next you will need to install the pulseaudio package in your container.

# sudo apt-get install pulseaudio

Then, for every connection you make, you will have to define the PULSE_SERVER environment variable to tell the applications you run where to connect in order to send audio.

The SSH connection you make will not only have to do X forwarding (-X) but also forward connections made to port 14713 inside your container/SSH session back to port 4713 on your desktop (-R 14713:localhost:4713).

$ ssh -X ubuntu@10.0.3.101 -R 14713:localhost:4713
# export PULSE_SERVER="tcp:localhost:14713"
# /usr/bin/firefox # now can watch YouTube videos with sound from container

Weather Panel Plugin in Trusty Tahr Shows No Data

In Xubuntu 14.04.3 Trusty Tahr a fresh install results in “no data” being displayed in the XFCE panel Weather Update plugin.

I looked up the version of the plugin I had installed:

$ dpkg -l |grep xfce4-weather-plugin
ii  xfce4-weather-plugin    0.8.3-1    amd64    weather information plugin for the Xfce4 panel

The solution I found (for my amd64 platform) was to visit https://launchpad.net/ubuntu/trusty/amd64/xfce4-weather-plugin/0.8.3-1ubuntu0.1 and download the Debian package.

Then run:

$ sudo apt-get purge xfce4-weather-plugin
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  xfce4-weather-plugin*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 3,465 kB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 185289 files and directories currently installed.)
Removing xfce4-weather-plugin (0.8.3-1) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...
$ sudo dpkg -i ~/Downloads/xfce4-weather-plugin_0.8.3-1ubuntu0.1_amd64.deb
Selecting previously unselected package xfce4-weather-plugin.
(Reading database ... 184935 files and directories currently installed.)
Preparing to unpack .../xfce4-weather-plugin_0.8.3-1ubuntu0.1_amd64.deb ...
Unpacking xfce4-weather-plugin (0.8.3-1ubuntu0.1) ...
Setting up xfce4-weather-plugin (0.8.3-1ubuntu0.1) ...
Processing triggers for hicolor-icon-theme (0.13-1) ...

Now I have the plugin reporting weather information.

Thread Pool Pattern for Node.JS using Q.js Promises

I wanted to emulate the Thread Pool Pattern in Node.JS using Q.js promises.

Of course Node.JS does not have multiple threads for the user to use, which is why Q.js is used in the first place: to make concurrent programming easier using the event/callback model that Node.JS implements.

Given an array of functions which return promises, I wanted a pattern function that provided a customisable number, n, of workers. These workers would run asynchronously, each processing one function from the list at a time; when a worker's returned promise resolved it would fetch the next function from the list, until no functions were left to process. When the last worker finished processing it would resolve the promise returned by the pattern function to the caller. Usually there would be many tasks and a limited number of workers.

Why would I want to do this? Let's say I have a hundred or a thousand URLs I want to fetch. If I put them all into a list and called Q.allSettled then Node.JS would attempt to fetch them all at the same time; this could result in excessive delays, a peak in network traffic, or might be plain impolite to the target webserver. Instead it may be better to download a maximum of 4 web pages at any one time.
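As a sketch of that use case (assuming a hypothetical fetchUrl() helper that returns a promise for a page body, and using the workermodel() function developed later in this post), the task list might be built and processed like this:

// hypothetical sketch: fetch many URLs with at most 4 in flight at once
var urls = [ "http://example.com/page1", "http://example.com/page2" /* ...many more... */ ];

// wrap each fetch in a function so nothing starts until a worker picks it up
var tasks = urls.map( function ( u ) {
  return function () { return fetchUrl( u ); }; // fetchUrl() is an assumed promise-returning helper
} );

workermodel( 4, tasks ).then( function ( results ) {
  console.log( results.length + " URLs settled" ); // one result object per URL, in order
} );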

For simplicity’s sake here is an example. Let’s create a function that returns a promise:

// delayms() - returns a promise that resolves approximately n milliseconds into the future
//   - ms - milliseconds to wait before resolving promise
function delayms( ms ) {
  var deferred = Q.defer();

  setTimeout( function () { deferred.resolve( "Finished after " + ms + "ms" ); }, ms);
  return deferred.promise;
}

Now let’s create a list of functions that each return a promise (we could call these functions “tasks”). It’s important that the array contains the functions themselves, not the promises that result from calling them (a short contrast sketch follows the example below).

For example:

// create array of functions to process
var tasklist = [ ];
for ( var item = 0; item < 12; item++ ) {
  tasklist.push(
    function () {
      console.log( "Going to wait 1 second..." );
      return delayms( 1000 ); // return promise as last statement
    }
  );
}

// let's throw in an error, too
tasklist.push(
  function () {
    throw new Error( "Deliberate error!" );
  }
);
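
To make the earlier point concrete (that the array must hold functions, not promises), here is a minimal contrast using the delayms() function above:

// a promise, not a task: calling delayms() here starts the timer immediately
var notATask = delayms( 1000 );

// a task: the timer starts only when a worker later calls the function
var aTask = function () { return delayms( 1000 ); };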

Now we have an array of tasks to process. How about we process them all – but a maximum of 3 running at any one time? We would want to do something like:

var Q = require('./q.js'); // load Q.js library

Q.fcall(
  // call a function, this function resolves immediately
  function () {
    console.log( "Starting" );
  }
).then(
  // do the list of tasks, max concurrency of 3
  function () {
    // return the promise
    return workermodel( 3, tasklist );
  }
).then(
  // output the list of results
  function ( results ) {
    console.log( "End: " + JSON.stringify( results, null, 2 ) );
  }
).fail(
  function ( reason ) {
    console.error( "Error: " + reason );
  }
).done();

Well all we need now is the workermodel() function to run through our list of tasks:

// workermodel() - returns a promise that resolves after task list processes with max concurrency
//   - workers - maximum concurrency or number of workers
//   - tasklist - array of functions that return a promise
var debug = true;
function workermodel( workers, tasklist ) {
  var taskidx = 0;
  var workersfree = 0;

  // if no tasks to perform return a completed promise with empty array
  if ( tasklist.length == 0 ) {
    return Q.fcall( function () { return []; } );
  }

  var deferred = Q.defer();

  // if less tasks than workers, limit workers to task size
  if ( tasklist.length < workers ) {
    workers = tasklist.length;
  }

  // results will go into this array
  var resultsarray = new Array( tasklist.length ); // one entry per task

  var getNextTaskGenerator = function () { }; // place holder

  var startNextTask = function ( innerworkeridx, innertaskidx ) {
    if ( debug )
      console.error( new Date() + " task[" + innertaskidx + "/" + tasklist.length + "] assigned to worker[" + innerworkeridx + "]" );
    var nextTask = Q.fcall(
      function () {
        return tasklist[innertaskidx]();
      }
    );
    nextTask.then(
      function ( value ) {
        if ( debug )
          console.error( new Date() + " task[" + innertaskidx + "] resolved on worker[" + innerworkeridx + "]" );
        resultsarray[innertaskidx] = { state: "fulfilled", value: value };
        return getNextTaskGenerator( innerworkeridx )();
      },
      function ( reason ) {
        if ( debug )
          console.log( new Date() + " task[" + innertaskidx + "] rejected on worker[" + innerworkeridx + "]" );
        resultsarray[innertaskidx] = { state: "rejected", reason: reason };
        return getNextTaskGenerator( innerworkeridx )();
      }
    );

    return nextTask;
  };

  getNextTaskGenerator = function ( workeridx ) {
    return function () {
      if ( debug )
        console.error( new Date() + " getnext task[" + taskidx + "] for worker[" + workeridx + "]" );
      if ( taskidx < tasklist.length ) {
        var nextTask = startNextTask( workeridx, taskidx );
        taskidx++;
        return nextTask;
      } else {
        workersfree++;
        if ( workersfree == workers ) {
          if ( debug )
            console.error( new Date() + " workermodel RESOLVE" );
          deferred.resolve( resultsarray );
        } else {
          if ( debug )
            console.error( new Date() + " no more work but " + ( workers - workersfree ) + " workers busy" );
        }
      }
    };
  };

  // start workers
  for ( var workeridx = 0; workeridx < workers; workeridx++ ) {
    startNextTask( workeridx, taskidx );
    taskidx++;
  }

  if ( debug )
    console.error( new Date() + " RETURNING PROMISE" );
  return deferred.promise;
}

Like Q.allSettled(), this function always resolves, and when it does the value is an array of objects, one per function in the tasklist provided. Each object contains a state field of “fulfilled” or “rejected”; if “fulfilled” the value field contains the result, otherwise the reason field contains the error.
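
For example, a caller could walk the settled results like this (a minimal sketch, assuming the format described above):

workermodel( 3, tasklist ).then(
  function ( results ) {
    results.forEach(
      function ( result, idx ) {
        if ( result.state === "fulfilled" ) {
          console.log( "task " + idx + " succeeded: " + result.value );
        } else {
          console.error( "task " + idx + " failed: " + result.reason );
        }
      }
    );
  }
);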

What is the output of this script (with debugging turned on)?

Starting
Sat Sep 12 2015 19:48:29 GMT+0100 (BST) task[0/13] assigned to worker[0]
Sat Sep 12 2015 19:48:29 GMT+0100 (BST) task[1/13] assigned to worker[1]
Sat Sep 12 2015 19:48:29 GMT+0100 (BST) task[2/13] assigned to worker[2]
Sat Sep 12 2015 19:48:29 GMT+0100 (BST) RETURNING PROMISE
Going to wait 1 second...
Going to wait 1 second...
Going to wait 1 second...
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) task[0] resolved on worker[0]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) getnext task[3] for worker[0]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) task[3/13] assigned to worker[0]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) task[1] resolved on worker[1]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) getnext task[4] for worker[1]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) task[4/13] assigned to worker[1]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) task[2] resolved on worker[2]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) getnext task[5] for worker[2]
Sat Sep 12 2015 19:48:30 GMT+0100 (BST) task[5/13] assigned to worker[2]
Going to wait 1 second...
Going to wait 1 second...
Going to wait 1 second...
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) task[3] resolved on worker[0]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) getnext task[6] for worker[0]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) task[6/13] assigned to worker[0]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) task[4] resolved on worker[1]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) getnext task[7] for worker[1]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) task[7/13] assigned to worker[1]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) task[5] resolved on worker[2]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) getnext task[8] for worker[2]
Sat Sep 12 2015 19:48:31 GMT+0100 (BST) task[8/13] assigned to worker[2]
Going to wait 1 second...
Going to wait 1 second...
Going to wait 1 second...
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) task[6] resolved on worker[0]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) getnext task[9] for worker[0]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) task[9/13] assigned to worker[0]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) task[7] resolved on worker[1]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) getnext task[10] for worker[1]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) task[10/13] assigned to worker[1]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) task[8] resolved on worker[2]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) getnext task[11] for worker[2]
Sat Sep 12 2015 19:48:32 GMT+0100 (BST) task[11/13] assigned to worker[2]
Going to wait 1 second...
Going to wait 1 second...
Going to wait 1 second...
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) task[9] resolved on worker[0]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) getnext task[12] for worker[0]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) task[12/13] assigned to worker[0]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) task[10] resolved on worker[1]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) getnext task[13] for worker[1]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) no more work but 2 workers busy
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) task[11] resolved on worker[2]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) getnext task[13] for worker[2]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) no more work but 1 workers busy
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) task[12] rejected on worker[0]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) getnext task[13] for worker[0]
Sat Sep 12 2015 19:48:33 GMT+0100 (BST) workermodel RESOLVE
End: [
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "fulfilled",
    "value": "Finished after 1000ms"
  },
  {
    "state": "rejected",
    "reason": {}
  }
]

All of the above is my own work. It is free for others to use.

See Also

  • the bluebird JavaScript promises library has a concurrency option which appears to offer similar functionality (a minimal sketch follows)
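
A minimal sketch of the bluebird equivalent (assuming bluebird has been installed with npm and the same hypothetical promise-returning fetchUrl() helper) might look like:

var Promise = require( "bluebird" );

var urls = [ /* ... URLs to fetch ... */ ];

// Promise.map() limits how many mapper calls are in flight at once via the concurrency option
Promise.map(
  urls,
  function ( u ) { return fetchUrl( u ); },
  { concurrency: 3 }
).then(
  function ( results ) {
    console.log( results.length + " pages fetched" );
  }
);

Note that, unlike the workermodel() function above, Promise.map() rejects as soon as any single task rejects rather than collecting all results.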

Accessing Lenovo ThinkServer TS140 BIOS

On boot press:

Key     Description
F1      BIOS menu
F10     Diagnostics
F12     Boot device selection
Ctrl-P  Management Engine BIOS Extension (MEBx)
Enter   List of above functions (menu)

How To Get Cookies From Node.JS HTTP Response

So you make an HTTP request using the http library and you want the cookies returned by the remote webserver (a session cookie to reuse in a later request, for example)?

The approach I ended up taking in Node.JS was to inspect the response.headers["set-cookie"] field, which is set to an array of cookie strings if any were given in the server response.

e.g.

var http = require( "http" );
var url = require( "url" );

var urlstring = "http://www.voa.com/";
var parsedurl = url.parse( urlstring );
var options = {
  hostname: parsedurl.hostname,
  port: ( parsedurl.port || 80 ), // 80 by default
  method: 'GET',
  path: parsedurl.path,
  headers: { },
};

var request = http.request(
  options,
  function ( response ) {
    // display returned cookies in header
    var setcookie = response.headers["set-cookie"];
    if ( setcookie ) {
      setcookie.forEach(
        function ( cookiestr ) {
          console.log( "COOKIE:" + cookiestr );
        }
      );
    }

    var data = "";
    response.on(
      "data",
      function ( chunk ) { data += chunk; }
    );

    response.on(
      "end",
      function () {
        console.log( "STATUS:" + response.statusCode );
        console.log( "  DATA:" + data );
      }
    );
  }
);

request.on(
  "error",
  function( err ) {
    console.error( "ERROR:" + err );
  }
);

request.end(); // let request know it is finished sending

This will output cookie information like:

COOKIE:JSESSIONID=7bcc3a128; Path=/

If you want to send cookies with your request, add a Cookie header to the options you pass to the request, e.g.:

var cookies = [
  "JSESSIONID=c2aa6fa21",
  "site_ip=10.18.32.12",
];

options.headers["Cookie"] = cookies.join( "; " );
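
As a minimal sketch of where that fits into the earlier example, the Cookie header must be set on options before http.request() is called:

options.headers["Cookie"] = cookies.join( "; " ); // e.g. "JSESSIONID=c2aa6fa21; site_ip=10.18.32.12"

var request = http.request(
  options,
  function ( response ) {
    // handle the response as shown earlier
  }
);
request.end();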